Thursday, January 24, 2019



 The stubborn myth of intelligent machines

 The notion that computers will become more intelligent than man and even take command over us is neither technically nor philosophically plausible, so why are so many of us frightened by these science-fiction speculations? If there is something to worry about, it is rather the moral consequences of this superstition about technology.

 In 1936, the 24-year-old mathematician Alan Turing published the paper "On Computable Numbers, with an Application to the Entscheidungsproblem". It addressed the question of whether there is a logical procedure for deciding whether an arbitrary mathematical statement is true or false. Turing's conclusion was that no such universal procedure exists. To prove this, he designed, on paper, a "universal machine", later known as the "Turing machine". It is today regarded as the theoretical foundation of computer science.

 Over the following decade, development advanced at a tremendous pace. Turing himself was recruited to the secret Bletchley Park project that built the machines that broke the Germans' Enigma code. As early as 1948, the mathematician John von Neumann proclaimed that computers would soon surpass human intelligence. Two years later, Turing published an article in which he claimed that machines would soon think like humans. To determine when this happens, he imagined an "imitation game" - later known as the "Turing test" - in which a person communicates with both a machine and a human being without knowing which is which. When he can no longer tell the difference, the machine is to be regarded as intelligent. A few years later John McCarthy coined the term "artificial intelligence". The course was now set for a project of cosmic dimensions: man was about to recreate the faculty that philosophy and theology had for millennia regarded as his most distinctive feature, even his bond to God: the human intellect.
 
This also raised the question: what does this discovery mean for the future of man? The literary and artistic answers arrived instantly. In fact, the answers had been in preparation for a long time. In modern literature it was Mary Shelley's "Frankenstein" (1818) that captured the fascination and horror of the technological era when confronted with artificial life. The Victorian author Samuel Butler's book "Erewhon" (1872) describes how, in Darwin's spirit, a new breed of machines achieves consciousness and takes over the world. But it was the Czech writer Karel Čapek who, in his play "R.U.R." from 1920, came to coin the term "robot" (from a Slavic word for serf labour) as the name of a machine creature that revolts against its masters.

 The technical breakthroughs from the 1940s onwards gave the fantasies new momentum, also among the researchers themselves. A fascinating figure is Irving John Good, a colleague of Turing's at Bletchley Park. In 1965 he published an article in The New Scientist in which he imagined the emergence of an "ultra-intelligent machine" capable of designing new machines. With each machine generation it would expand its abilities, which would quickly result in an "intelligence explosion", an event that von Neumann had already referred to as the "singularity".

 When Stanley Kubrick and Arthur C. Clarke together created the film "2001: A Space Odyssey" (1968), they used Good as a scientific advisor. The film's (and the book's) supercomputer HAL 9000 became the paradigmatic image of a machine intelligence that, at a crucial moment, refuses to obey man and turns against him.

 Among philosophers, however, there were voices that questioned not only the concrete predictions but also the very idea of intelligence underlying AI. One pioneer was Martin Heidegger, who took an early interest in cybernetics. In an influential essay from 1953, "The Question Concerning Technology", he argued that our understanding of technology leads to a technical understanding of ourselves. When everything is transformed into "information", we no longer see what it means to exist in the world as finite historical beings. His line of reasoning was partly the basis of the American philosopher Hubert Dreyfus's "What Computers Can't Do" from 1972, which shows how AI research rests on a narrow image of intelligence as mere abstract symbol manipulation, without connection to the body and the context of lived life.

 However, it was not Dreyfus but his colleague at Berkeley, John Searle, who above all came to be associated with the philosophical criticism of AI. In explicit polemic against Turing's criterion, he argued that there is no reason to attribute human characteristics to machines other than in metaphorical terms. A computer may make brilliant chess moves, produce accurate translations or give sensible responses in a conversation, but ultimately there is no one there, no one who knows what is being done in any of these activities.
 
 Searle's criticism concerned the basis on which we can define someone or something as intelligent. For the majority of practically minded engineers and computer scientists, these questions were irrelevant. When it comes to designing a machine that can do one thing or another, translate, diagnose, drive or play chess, what matters is delivering results. That the criticism nevertheless provoked AI researchers shows that many of them wanted to feel that they were close to understanding human intelligence, sometimes for genuine intellectual reasons but probably also for economic ones. To claim that AI has philosophical relevance for our self-understanding, or that it is about to produce a new and higher silicon-based form of life, creates attention and attracts money. That Ray Kurzweil, director of engineering at Google, devotes several pages of his acclaimed book "The Singularity Is Near" (2005) to trying to refute Searle's arguments testifies to what is at stake.

 During the 1980s new models were tried in order to improve the machines' performance, especially so-called neural networks, which were thought to imitate the workings of the human brain more closely. But even though performance increased, the major breakthroughs and commercial successes failed to materialize. In retrospect, the 80s and 90s appear as two decades of stagnation for AI. The process is described in detail in the excellent overview compiled last year by New Scientist magazine, entitled "Machines That Think: Everything You Need to Know About the Coming Age of Artificial Intelligence".
 When IBM's Deep Blue defeated Kasparov in 1997, it was certainly a shock to the chess world and it helped draw attention to the research again. But no one could claim that the program was "intelligent" in any significant sense. It lacked strategy and the ability to learn or draw conclusions, and it was useless for anything other than calculating chess moves.

 Today everyone is again talking about AI. It has its prophets and evangelists as well as its doomsday preachers. Depictions of sometimes benevolent, sometimes threatening intelligent robots recur in literature and film. Commercial interest is running hot.

 What happened? The explanation contains a philosophical twist. As long as AI tried to recreate or explain human intelligence, it got nowhere. The real breakthrough came when the idea of building models of human thinking was abandoned in favour of machines that "teach themselves" by statistically processing huge and ever-growing amounts of data with comparatively simple algorithms.
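
 As a loose illustration of that shift, here is a minimal sketch in Python of a purely statistical learner. Nothing in it comes from the research described above: the tiny training set is invented, and the method (a simple word-count classifier with add-one smoothing) is just one elementary example of a program that picks up a pattern from data without any hand-coded model of thinking.

# A minimal sketch of "learning from data": no rules about what the words
# mean are written in; the program only counts how often each word occurs
# with each label and picks the statistically more likely label.
# The toy training set is invented for illustration.
from collections import Counter
from math import log

training = [
    ("win money now", "spam"),
    ("cheap money offer", "spam"),
    ("meeting tomorrow morning", "ham"),
    ("lunch with the team tomorrow", "ham"),
]

counts = {"spam": Counter(), "ham": Counter()}   # word counts per label
totals = Counter()                               # total words per label
for text, label in training:
    for word in text.split():
        counts[label][word] += 1
        totals[label] += 1

def classify(text):
    """Return the label whose word statistics best match the text."""
    vocab = len({w for c in counts.values() for w in c})
    scores = {}
    for label in counts:
        score = 0.0
        for word in text.split():
            # Log-probability with add-one smoothing for unseen words.
            score += log((counts[label][word] + 1) / (totals[label] + vocab))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("cheap offer tomorrow"))   # statistically closer to 'spam'

 The program gets better simply by being given more counted examples; at no point does it understand what the words mean.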

 The idea is perhaps best illustrated by translation programs. They were promised as early as the 1950s, and great efforts were made to create algorithms capturing how an ideal human translator works, but in vain. The breakthrough came instead in the 21st century with programs that, drawing on a huge and ever-growing database of texts, calculate the statistical probability that a given construction in the target language corresponds to one in the source language. These programs testify to the impressive intelligence of their creators. But no one can reasonably claim that the machines themselves understand the languages they handle, or that they are intelligent.
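
 To make the statistical idea concrete, here is a deliberately crude Python sketch built on an invented three-sentence "parallel corpus" (Swedish to English). It only counts which target words co-occur with a given source word and reports the most frequent one as a relative frequency. Real translation systems are vastly more refined, but the sense in which the machine calculates probabilities rather than understands language is the same.

# A crude sketch of the statistical idea behind data-driven translation:
# estimate, from aligned sentence pairs, how often each English word
# co-occurs with a given Swedish word, and report the most frequent one.
# The three sentence pairs are invented; nothing here 'understands' either language.
from collections import Counter, defaultdict

parallel = [
    ("huset är rött", "the house is red"),
    ("huset är stort", "the house is big"),
    ("bilen är röd", "the car is red"),
]

cooc = defaultdict(Counter)   # source word -> counts of target words seen with it
for src, tgt in parallel:
    for s in src.split():
        for t in tgt.split():
            cooc[s][t] += 1

def most_likely_translation(source_word):
    """Return the most frequently co-occurring target word and its relative frequency."""
    seen = cooc[source_word]
    total = sum(seen.values())
    if total == 0:
        return None
    word, count = seen.most_common(1)[0]
    return word, count / total

print(most_likely_translation("huset"))
# e.g. ('the', 0.25): raw co-occurrence favours frequent words; real systems refine this.

 Crude as it is, the sketch shows where the "intelligence" lies: in the statistics of the corpus and in the ingenuity of the people who designed the counting, not in any understanding on the machine's part.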

 Nevertheless, dramatic scenarios of how intelligent machines are about to take over the world keep recurring. Among the most renowned researchers in the genre are two Swedes, the philosopher Nick Bostrom and the physicist Max Tegmark. Their best-selling "Superintelligence" (2014) and "Life 3.0" (2017) both assume that today's self-learning machines can generate an "intelligence explosion" in the near future. By referring to the brain's extremely complex structure of neurons and synapses, they argue that superintelligence is only a matter of time, of when the machines approach a comparable complexity in computational capacity. But how and why today's highly specialized programs would, according to some kind of evolutionary logic, generate a super-brain with intentions of its own remains to be shown. Given the technology we have today, it is neither technically nor philosophically plausible. It is one of many "esoteric possibilities", to quote New Scientist, which believes that the whole field is in great need of a "reality check". That does not prevent the threat scenarios from having a deep aesthetic and religious pull, which probably explains their impact.
 
 The apocalyptic visions of the 1950s are now being dusted off in the light of new technological successes, which ultimately risks diverting attention from more urgent issues. One of them is the political and ethical consequences of the imminent robotisation of both working life and private life. Another is the question of who controls the technology.
 
 The country that is currently focusing most heavily on AI is China, a communist dictatorship that obviously also sees it as an instrument of economic and political control.
 
 Finally, it is also important to keep alive the question that Heidegger once posed, namely what the emergence of machine intelligence does to man's self-understanding. An expanding culture of experts and consultants is already contributing to a weakening of the individual's ethical and professional responsibility. If man believes that he is about to be replaced by machines more intelligent than himself, he will find it all the easier to renounce responsibility and judgement.
 
 Rather than worrying about "superintelligence", we may have more reason to worry about the "superstupidity" that threatens when man no longer thinks he needs to think, because he believes his tools do it for him.
 
 
Todde
