Artificial Intelligence

Spanning discourses about mechanics and mind that traditionally stood in a relationship of tension.

Series: HCDM

Author: Christof Ohm

Published: April 2022


A Ukrainian student interacts with a robot at the 4th World Voice Expo in Hefei, capital of the east Chinese province of Anhui, in October 2021. Photo: picture alliance / Xinhua News Agency | Huang Bohan

The Historical-Critical Dictionary of Marxism (HCDM) is a comprehensive Marxist lexicon which, upon completion, will span 15 volumes and over 1,500 entries. Of the nine volumes published so far in the original German, two have also appeared in Chinese since 2017. In 2019, the Rosa-Luxemburg-Stiftung teamed up with the HCDM team to advance its "globalization" into English and Spanish, with the ultimate aim of recruiting a new generation of Marxist scholars from around the globe to the project and expanding its readership and reach. The entry below is one of a selection of these translations made available on our website.

For more information about the project and other translated entries, check out our HCDM dossier.

A: ḏakā’ iṣṭināʽī. – F: intelligence artificielle. – G: Künstliche Intelligenz. – R: iskusstvennyj intellekt. – S: inteligencia artificial. – C: rengong zhineng 人工智能

The term AI, coined in the USA in 1955 and initially also referred to as “machine intelligence”, spans discourses about mechanics and mind which traditionally stood in a relationship of tension or mutual exclusion. The age of bourgeois revolutions and the Enlightenment brought about the mechanistic-materialist worldview. It was the epoch not only of clockwork, but also of the first musical and weaving machines controlled by punched tape, as well as of mechanical calculating machines, such as those by Wilhelm Schickard (1623), Blaise Pascal (1641), and Gottfried Wilhelm Leibniz (1673). The classical elaboration of this worldview came from Thomas Hobbes in Leviathan (1651); almost a hundred years later, the worldview culminated in Julien Offray de La Mettrie's L’homme machine (1747), where the “neurocybernetic model” of the 21st century is found in preliminary form (Tetens 1999). The apparent antithesis, which prevailed in the epoch, was realised in 1641 in the form of Descartes's dualism of “thinking” substance and “extended” substance (Meditationes de prima philosophia). This is a compromise that – while preserving the proviso of mind, and thus under the dominance of the idealistic and theologically compatible – leaves the entire world of nature and the body to the mechanical paradigm.

An impetus for the convergence and interpenetration of both discourses emerged with cybernetics, a new “vision of the world” emerging from the “secret confluence of war sciences” (Galison 1994, 248). It abolishes the separation last asserted by the philosopher and biologist Hans Driesch in his text Die Maschine und der Organismus (1935), before Alan M. Turing, a technician and researcher into the foundations of mathematics, was able, in his celebrated essay “Computing Machinery and Intelligence” (1950), to permanently fuse both discourses, thus enforcing their repositioning with an eye to rendering them complementary.

What is the state of development achieved by AI at the beginning of the 21st century? Is the term still useful after the passionate AI debates conducted for about three decades from 1960 on? Computer scientists who work on “databases, compiler construction, computer graphics, and operating systems”, that is to say, in areas “frequently described as ʽcore computer scienceʼ” typically answer, at the beginning of the 2010s: “AI? That's never worked!” or “AI? Does that still exist?” (Schmid 2012, 1). The general aim of AI research, according to Ute Schmid, is nevertheless “to develop algorithms for problem areas in which human beings are still superior to computer systems” (ibid.), which at first sounds like a program of the withdrawal of competence. “However, as a part of cognitive science, AI is also an empirical science and attempts, by means of formalisation and the implementation of human information processing”, to contribute to the knowledge of “the foundations of human thought and activity” (ibid.).

Frank Rieger of the Chaos Computer Club explains that to want to incorporate AI into the now-common ʽcore computer scienceʼ is to be dangerously blind to reality. While “automation in industrial manufacturing has up to now progressed over the timespan of years and decades, there are no obstacles to revolutionary change in the automation of intellectual activity”, which makes obsolete not only “simple intellectual activity”, but also the labour performed by “accountants, lawyers, personnel developers, marketing staff, and even journalists and purveyors of knowledge” (2012). Frederick Kile (2012) likewise predicts the mass disappearance of jobs and coins the term “AMAT” (Automation and Machine Aided Thinking) to describe the interaction and coalescence of the classic and the new forms of automation (2). The paths leading to (intellectual) upheavals and the outlines of those upheavals need to be reconstructed, as do the new qualification requirements and conflicts that arise for those working with AI-based means of production.

1. The Equiprimordiality of the Computer Age and AI. – With the target concept “Artificial Intelligence”, John McCarthy and his colleagues sought in 1955 to initiate a research program to show “that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it” (qtd. in Nilsson 2010, 77). The historiography of AI (such as Russell/Norvig 2010, 16 et sq.) usually overlooks how the Second World War amalgamated three of Turing’s achievements and thereby joined the emergence of the computer age and the age of AI.

1.1 As far as the mathematical foundations are concerned, the “year of AI's emergence” can be “dated back to the year 1936” (Heintz 1995, 37). In 1937, Turing, as evidence for a theory of computability – it later acquired the name Church-Turing Thesis, since Alonzo Church achieved identical insights in 1936 (Copeland 2008) – published the construct of a “universal computing machine”: “It is possible to invent a single machine which can be used to compute any computable sequence” (Turing 1936/2008). “The Turing machine, although it was never built, was as revolutionary technically as it was mathematically.” (Dotzler/Kittler 1987, 213) By means of the “exact mathematical elaboration of the machine of all machines” (Dotzler 1996, 19), a historically new real dialectic between the fixed and the modifiable nature of machines is set in motion. It is structurally analogous to the “relationship of dialectical contradiction” between “phylogenetic determination and the phylogenetically programmed, individually modifiable character of behaviour” that Critical Psychology understands to be “the decisive agent of phylogenetic development” and by which it reconstructs anthropogenesis as an interaction between genetic determination and learning or developmental abilities specific to humans (Holzkamp-Osterkamp 1975, 141), this also being the foundation of Noam Chomsky’s model of language acquisition. In 1948, Turing succinctly defines the two poles of this dialectic, which will later come to be known by the non-conceptual terms hardware and software: “We do not need to have an infinity of different machines doing different jobs. A single one will suffice. The engineering problem of producing various machines for various jobs is replaced by the office work of ʽprogrammingʼ the universal machine to do these jobs.” (Turing 1948/1992, 111) The possibility of AI lies in the fact that the acts of writing that modify the machine can be shifted into it, so that the machine can modify itself, depending upon results that cannot be exactly predicted (ʽexperiencesʼ). Werner Rammert (1995) summarises Turing’s approach as a shift “from kinematics to computer science”, something that goes unrecognised when “mechanical calculating machines are presented as precursors of electronic computers”, thus suggesting “a continuity” between the two (92). Without a doubt, Turing’s “universal computing machine” initiated a profound revolution in the history of work, technology, and culture. One of its aspects can be grasped as the transition from the “Gutenberg galaxy of static print media” to the “Turing galaxy of dynamic, programmable media” (Coy 1994, 7 et sq.).
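
Turing's construction can be indicated with a short sketch: a fixed interpreter plays the part of the universal machine (ʽhardwareʼ), while the machine table it executes is supplied as data – the “office work of ʽprogrammingʼ”. The following Python fragment is a constructed illustration, not Turing's 1936 notation; the sample table, which writes the alternating sequence 0 1 0 1 …, merely echoes the kind of computable sequence discussed in his paper.

```python
# A minimal sketch (not Turing's 1936 notation): a fixed interpreter
# ("hardware") executes any machine table supplied as data ("software").
# The table below is a constructed example that writes 0 1 0 1 ...

def run(table, steps, state="b", head=0):
    """Execute a Turing machine table for a fixed number of steps."""
    tape = {}
    for _ in range(steps):
        symbol = tape.get(head, " ")            # blank cells read as " "
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += {"R": 1, "L": -1, "N": 0}[move]
    return "".join(tape[i] for i in sorted(tape))

# Machine table: (state, symbol read) -> (symbol to write, move, next state)
alternator = {
    ("b", " "): ("0", "R", "c"),   # print 0, move right
    ("c", " "): ("1", "R", "b"),   # print 1, move right
}

print(run(alternator, steps=8))    # -> 01010101
```

Exchanging the table while leaving the interpreter untouched is precisely the dialectic of the fixed and the modifiable described above: one machine, many jobs.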

1.2 The “instructions to develop computers” did not come “from the private economy and the pressure of competition, but rather from central planning staffs of the war of wizards, as the Second World War was called by its engineers” (Dotzler/Kittler 1987, 213). In Great Britain, from 1940 onward, Turing was able to build upon the work of Polish cryptographers and decipher radio messages of the German Wehrmacht encrypted by means of ENIGMA machines (a development thought to have shortened the Second World War by about two years; see Pröse 2006, 11). Starting in 1943, COLOSSUS, the world’s first electronic mainframe computer, built on Turing’s initiative and with his help and running on 1,500 commercially available vacuum tubes, was in operation (Pröse 2006, 207). The fact that AI has brought about “a new type of scientist-engineer”, “comparable to the ʽartist-engineerʼ in the Renaissance” (Rammert 1995, 14), does not just apply to the engineering elite. Turing’s revolutionary achievement is rooted in the generalisable aspect of his historically novel integration of manual and intellectual labour, of theory and experimental intervention, into machinery. In that respect, his mode of working appears to be an anticipation of the “highly technological intellectual worker” (Haug 2003, 62). – Turing and his associates became victims of the Cold War. Until 1974, the British state “denied the mere existence of COLOSSUS” and its ten replicas (Pröse 2006, 209). Great Britain had convinced other states “to use the officially secure ENIGMA machines for their secret communication” (190), and then deciphered them using COLOSSUS. British secrecy paved the way for the falsification of history: it “allowed the Americans to attribute the birth of the information age to the official launching of the large computer ENIAC, which did not occur until 1946” (209).

Since Turing lived his homosexuality openly, “McCarthy’s campaign against homosexual risk profiles” (Dotzler/Kittler 1987, 232) was waged against him starting in 1950, by which time he was already “an unperson, the Trotsky of the computer revolution” (Hodges 1983, 514). Due to “homosexual acts”, he was sentenced in 1952 to “chemical castration” by means of hormone treatment – as an alternative to prison; he took his own life in 1954, at the age of 41. He is said to have used an apple poisoned in the manner of Snow White; it was found half-eaten next to his corpse but was never examined for poison. The Apple Computer Company denies that its name and logo allude to Turing’s death.

1.3 Turing’s essay “Computing Machinery and Intelligence” (1950) ignited the AI discourse by proposing an imitation game, which was later modified and became known as the Turing Test: a human being conducts dialogues by means of a teletype machine and is supposed to decide whether his or her counterpart is a human being or a computer. In “about fifty years’ time”, according to Turing, computers would “play the imitation game so well” that “one will be able to speak of machines thinking without expecting to be contradicted” (Turing 1950/1992, 442). – In 2011, 30 volunteers each conducted four-minute written dialogues with an unknown counterpart, half with human beings and half with the Internet-based AI program Cleverbot (a ʽchatterbotʼ or dialogue robot); in the concluding survey of 1334 observers, 59 percent regarded Cleverbot as a human being, while only 63 percent judged the actual human beings to be human (Aron 2011).

In Turing’s original version, however, the test is formulated as a “gender game” (Heintz 1995, 44). For the participants are “a man (A), a woman (B), and an interrogator (C) who may be of either sex” (1950/1992, 433): C is supposed to find out which of his or her counterparts is a man and which is a woman; the woman (B) helps the interrogator. Turing’s question: “ʽWhat will happen when a machine takes the part of A in this game?ʼ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?” (434). Although Turing’s essay became “one of the most frequently cited in modern philosophical literature” (Hodges 2011), the dimension of gender play, in which the male player “has to undergo a training program ʽwomanʼ” (Heintz 1995, 46), is “mostly ignored or dismissed as an eccentric prelude” (44).

1.4 Influenced to a large extent by his interest in brain research, Turing makes experiential learning the centre of AI. A machine is postulated which demonstrates intelligent behaviour, which therefore does not “give itself away by making the same sort of mistake over and over again, and being quite unable to correct itself, or to be corrected by argument from outside. If the machine were able in some way to ʽlearn by experienceʼ it would be much more impressive.” (1951/1996, 257) Turing therefore proposes “that the education of the machine should be entrusted to some highly competent schoolmaster who is interested in the project but who is forbidden any detailed knowledge of the inner workings of the machine” (ibid.). The knowledge engineer who sets up expert systems is anticipated here.

Turing is inspired by the experience that the “danger of the mathematician making mistakes is an unavoidable corollary of his power of sometimes hitting upon an entirely new method.” (256) This also applies to machines, “which will simulate the behaviour of the human mind very closely” (257). With AI technologies, forms of movement of this dialectic of error and invention develop, revolutionising human labour. A certain counterweight to the dangerous creativity of AI systems is the public accessibility and discussion of errors or potentials for error, which clashes with the interest in secrecy on the part of capital and the state.

Turing embarks upon the path of many AI pioneers and declares thinking machines to be irrepressible: “There would be great opposition from the intellectuals who were afraid of being put out of a job. It is probable though that the intellectuals would be mistaken about this. There would be plenty to do in trying, say, to keep one's intelligence up to the standard set by the machines” (259). Since, however, as he predicts, the machines will become able “to converse with each other to sharpen their wits”, “[a]t some stage therefore we should have to expect the machines to take control” (260).

1.5 But why does one speak of AI, and not of cybernetic intelligence? After Arturo Rosenblueth, Norbert Wiener, and Julian Bigelow had coined the term cybernetics in 1943, Wiener introduced it in his influential 1948 book Cybernetics or Control and Communication in the Animal and the Machine. By contrast, the AI publications of Turing and others led a shadowy existence. Wiener attempted to make “almost every department of human endeavour a branch of cybernetics” (Hodges 1983, 508). In the 1940s, cybernetic research led to the brain being understood, with an eye to its replication, as a network of electrical circuits. This approach had the greatest priority for Wiener and Turing. The fact that in the 20th century, AI asserted itself as the leading concept worldwide, with cybernetics being subsumed under it (e.g. via robotics), is partly due to the fact that the wing of the ʽcybernetics movementʼ led by Wiener exaggerated its successes in modelling brain functions. Wiener failed to recognise the paramount importance that digital universal computers have for machine learning, brain research, and the automation of intellectual labour. Thus in 1949 his “emphasis was still placed on the similarity of the nerve-cells of the brain to the components of a computer” (Hodges 1983, 509), which Turing regarded as the wrong track: “We could produce fairly accurate electrical models to copy the behaviour of nerves, but there seems very little point in doing so. It would be rather like putting a lot of work into cars which walked on legs instead of continuing to use wheels” (Turing 1948/1992, 116).

The military competition of political systems met competition among scientists when John McCarthy coined the term AI in 1955. He chose it in order to “escape association with ʽcyberneticsʼ”, since “its concentration on analog feedback seemed misguided” (qtd. in Nilsson 2010, 78). Furthermore, he wished to “avoid having either to accept Norbert Wiener as a guru or having to argue with him” (ibid.). Starting in 1946, Wiener strictly refused to work on arms projects; furthermore, he formulated a structural critique of capitalism rooted in cybernetics: according to him, it is a “simple-minded theory” that “free competition itself is a homeostatic process”; this is a “belief” that in the USA “has been elevated to the rank of an official article of faith” (1948/1985, 158).

Bernhard Hassenstein predicted in 1976 that the associated technical-scientific disciplines – as was already often the case in the USA – “would presumably discard the name ʽcyberneticsʼ as too burdensome” if “ideological, epistemological, and philosophical-anthropological trains of thought were to be regarded (as by Wiener himself) as their legitimate components” (1468). That is what occurred. Although cybernetics made its way into numerous scientific fields in the 1960s, as Jutta Weber explains, “today only a few institutes for bio-cybernetics or medical cybernetics are the visible remnants” of this cybernetic boom (2004/5, 240 et sq.).

1.6 In her Cyborg Manifesto (1985), Donna Haraway takes up Wiener’s impetus of ruthlessly thinking through cybernetic functional and structural analysis, thus transcending the boundaries between natural, technical, social, and political science. The automatic-electronic mode of production, Haraway shows, brings metaphysical dualisms into crisis and allows for projects intended to demythologise them: “Microelectronics mediates the translations of labour into robotics and word processing, sex into genetic engineering and reproductive technologies, and mind into artificial intelligence and decision procedures.” (1985/1991, 165) She ascribes to feminist cyborg narratives the task “of recoding communication and intelligence to subvert command and control” (175). This is a counter-project to the appropriation and driving forward of AI through “C3I, command-control-communication-intelligence” (150), that is, through the military dispositif linking masculinity, warfare, and high technology. It does not aim at the negation of AI technology in the name of mind, or in order to rescue the old mind-machine dualism, but rather at tearing AI away from military command and embedding it in alternative labour practices that disrupt the dual order.

2. The founding conference of AI, held at Dartmouth College (New Hampshire) in 1956, has gone down in the annals of labour and technology because it was there that Allen Newell (psychologist), Herbert A. Simon (social scientist), and Cliff Shaw (systems programmer) introduced the first AI program in history. The program, which ran on JOHNNIAC, a computer of the RAND Corporation, was given the anthropomorphic name Logic Theorist. It proved that it is possible to automate work of logico-mathematical deduction that is oriented towards a given target of proof, i.e. an entire field of intellectual labour. On the basis of five axioms and rules of inference that Alfred N. Whitehead and Bertrand Russell had worked out in the Principia Mathematica (1910-13), Logic Theorist was able to transform chains of logical symbols until it encountered the theorems it was looking for. According to the robotics pioneer Hans Moravec, however, the program proved theorems “with the proficiency of an average college freshman” (1988, 8). – As Klaus Mainzer states, “automatic proofs” were not only interesting “at the beginning of AI research”, but gradually became a high-tech means of production, deployed for example “in program verification and to test hardware configurations in industrial computer manufacturing” (1994, 119).
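
The principle behind Logic Theorist – transforming known formulas until a sought-after theorem is reached – can be suggested by a deliberately simplified sketch. The real program applied substitution, detachment, and chaining to the axiom schemata of the Principia Mathematica; the fragment below reduces this to forward chaining with modus ponens over atomic propositions, and its ʽaxiomsʼ and ʽrulesʼ are hypothetical placeholders.

```python
# A drastically simplified sketch of goal-directed deduction: derive new
# formulas from known ones until the target theorem appears. Only modus
# ponens over propositional atoms is applied here; Logic Theorist itself
# searched heuristically over Principia Mathematica's axiom schemata.

from collections import deque

def prove(axioms, implications, goal):
    """Breadth-first forward chaining with modus ponens."""
    known, queue = set(axioms), deque(axioms)
    while queue:
        fact = queue.popleft()
        if fact == goal:
            return True                      # target of proof reached
        for premise, conclusion in implications:
            if premise == fact and conclusion not in known:
                known.add(conclusion)        # modus ponens: p, p->q |- q
                queue.append(conclusion)
    return False

rules = [("p", "q"), ("q", "r"), ("r", "s")]             # p->q, q->r, r->s
print(prove(axioms={"p"}, implications=rules, goal="s"))  # -> True
```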

Cognitive psychology participated in this development. By means of the call to “think aloud” and by recording statements, Newell and Simon examined how people proceed when solving problems. They broke new ground in two ways: first, they no longer understood programmable digital computers merely as “number crunchers” for numerical purposes, but rather as symbol-processing systems; second, by postulating that “each new program that is built is an experiment” (1976, 114), they defined computer science as an empirical science that observes not only programs but also people solving problems, in order then to simulate their actions with machines. The genesis of this techno-science of human beings gave impetus to a cognitive science that assumes that by observing AI programs, one can grasp how human thinking functions, in order ultimately to understand human beings as AI machines.

Paul N. Edwards characterises the historiography of AI, “even more than that of cognitive psychology”, as “a pure history of ideas” (1996, 239). The breeding ground for adventurous armaments projects within the framework of AI research, however, was the “military-intellectual complex” that emerged in the USA during the Cold War (Robin 2001). For example, the psychologist J.C.R. Licklider participated, as did Newell and Simon, in SAGE, an air-defence system connected by a computer network, encompassing the USA and Canada and employing 200,000 workers (Edwards 1996, 97). Licklider presciently conceived of “man-computer symbiosis” as a new human-technology relationship, although he was primarily concerned with rapid reaction in the service of strike power. “At that time, some of the more impressionable ones of us were expecting there would be 50,000 Soviet bombers coming in over here” (qtd. in Edwards 1996, 262). In a “ten-minute war”, there would be so little time to “make critical decisions” that “automatic speech production and recognition in computers” were indispensable (Licklider 1960/2003, 81).

In 1976, Newell and Simon took stock of their work and coined the influential term “physical symbol systems” (PSS). This term was to replace that of the Turing machine and make possible statements in which AI and human intelligence are equated: a PSS is granted ʽintelligenceʼ in the form of “necessary and sufficient means for general intelligent action” (1976, 116). It becomes manifest in the fact “that in any real situation behavior appropriate to the ends of the system and adaptive to the demands of the environment can occur” (ibid.).

The enormous appeal of Newell and Simon's work is not least due to the fact that they sought and created an “alliance” (119) between computer science and psychology. The alliance becomes unproductive when it erodes the difference between humans and machines: “the symbolic behavior of man arises because he has the characteristics of a physical symbol system” (119). In the cognitive sciences and psychology, there have been various explicit demands that psychological statements should be such that they can be formulated in the terms of a Turing machine (for example Boden 1988, 259). Hans-Peter Michels uses the protocols drawn up by Newell and Simon to analyse how they behaved in relation to the statements made by their test subjects when solving problems: “reflections, emotions, motivations” are viewed “as something secondary” (Michels 1998, 78). Thus the specificity of human intelligence is missed.

Connectionism. – The “conditio sine qua non” of classic cognitive science as shaped by Newell and Simon is the cognitive “equivalence of human beings and symbol-processing machines” (D’Avis 1998, 39). This has been criticised as ignoring that machine symbol processing occurs, temporally, by means of “consecutively applied rules for individual sequences”, while spatially, it remains “localised” within a narrowly defined area of the system (ibid.).

The “connectionist network model” responds to both criticisms: it consists “of multiple components operating in parallel” that are active simultaneously and distributed spatially, and that allow “global properties” to emerge (40). Central concepts of this new research current are borrowed from the language of neurobiology: emergence, self-organisation, complexity and non-linear network dynamics (ibid.). Neural networks have rapidly found broad application and make it possible, for example, to identify a specific face in a crowd, something traditional cognitive technology was not able to do.
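
What “multiple components operating in parallel” means in practice can be indicated with a minimal sketch: a small two-layer network whose ʽknowledgeʼ lies not in explicit rules but distributed across connection weights, adjusted by a learning rule. The example below is illustrative only and modelled on no historical system; it learns the XOR function, which no single-layer network can represent.

```python
# A minimal connectionist sketch: simple units, behaviour distributed over
# weighted connections, adjusted by backpropagation. Illustrative only.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):                          # gradient-descent epochs
    h = sigmoid(X @ W1 + b1)                     # hidden activations
    out = sigmoid(h @ W2 + b2)                   # network output
    d_out = (out - y) * out * (1 - out)          # output-layer error signal
    d_h = (d_out @ W2.T) * h * (1 - h)           # error propagated backwards
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3).ravel())   # approaches [0, 1, 1, 0] as training proceeds
```

No single weight ʽcontainsʼ the XOR rule; the mapping emerges from the ensemble – the spatially distributed, parallel operation described above.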

Building on the work of Klaus Holzkamp, the computer scientists Anita Lenz and Stefan Meretz (1995) have compared basic concepts of connectionism with those of Critical Psychology. Although the concept of learning “became a universal concept in connectionism and a fashionable leading concept by which to advertise oneself in the competition for research funds, not unlike the concept of intelligence in classic AI research”, the connectionist “notion of ʽlearningʼ as an approximation process by a network” attributes “the character of a subject to the network” and thereby obscures “that one can in no way speak of human learning – which presupposes a learning subject” (145). In this computer science version of “animistic thought”, the “computer” appears as “the self-acting ʽsubjectʼ of its operations”, thus “eliminating the active subject from the language of science” (128). Critical-psychological labour research must examine how this way of thinking acts as an obstacle to be overcome from the perspective of the “self-organised convergence of various operational competences, those of the automation workers and engineers, to form a novel ensemble of production intellectuals” (PAQ 1987, 58 et sq.).

3. Expert Systems. – From the standpoint of profit as well as the direct changes to the labour process, AI research was irrelevant up to the 1970s, even when it attempted to address “real-world problems of commercial importance” (Nilsson 2010, 71), such as language translation by machines or the automatic scanning of bank checks. It dealt mainly with “game problems” in the literal and metaphorical sense: “Solving puzzles, playing games such as chess and checkers, proving theorems, answering simple questions, classifying visual images”, etc. (ibid.) This space free of profit considerations was created by ARPA (later DARPA), the research agency of the US Department of Defense, which up to the 1970s was the “primary patron” for funding AI research (Edwards 1996, 270).

That AI flourished outside of university laboratories and military think tanks was due to expert systems (ES): “computer programs that can simulate the special knowledge and judgment of a human expert in a limited field of activity” (Mainzer 1994, 150). This coincides with the beginning of a “second computer age”, namely the “transition from the traditional computer as a numerical calculator and data storage device to knowledge-processing systems” (151). A number of AI firms emerged from ES. “Excitement, especially about expert systems, peaked during the mid-1980s” (Nilsson 2010, 343). ES are by no means aimed at enclaves of mental labour: “Most of the intellectual labour that is done today is based on symbolic knowledge processing, and not on calculation.” (Mainzer 1994, 152)
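
The architecture behind classic ES can be suggested by a minimal sketch: IF–THEN rules elicited from an expert are chained over a set of known facts until no further rule fires. The diagnostic rules below are hypothetical placeholders, not drawn from any real system.

```python
# A minimal sketch of a rule-based expert system: forward chaining over
# IF-THEN rules until a fixed point is reached. Rules are hypothetical.

RULES = [
    ({"motor hot", "vibration high"}, "bearing wear suspected"),
    ({"bearing wear suspected"}, "schedule inspection"),
    ({"oil pressure low"}, "stop machine"),
]

def infer(facts):
    """Apply rules to the fact base until no new conclusion is produced."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)    # rule fires; record its conclusion
                changed = True
    return facts

print(infer({"motor hot", "vibration high"}))
# -> includes "bearing wear suspected" and "schedule inspection"
```

Real ES added explanation components that could display the chain of fired rules – the documentation of decision-making processes discussed below.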

AI researchers Roger C. Schank and Peter G. Childers criticise ES as “horribly misnamed, since there is very little about them that is expert” (1984, 34). Stephan Zelewski objects that nobody has “so far been able to convincingly demonstrate […] what level of expertise a skilled worker must have in order to be qualified as an expert” (2000, 238). The PAQ analyses how ʽexpertʼ functions as a divisive category: when workers in a “productivist culture of production” struggle over high-tech jobs, the winners know that they are “competent automation workers who have mastered a technology that is mythical to everyone else. These experts look down on the many laypeople as well as the unemployed who ʽdidn’t make itʼ.” (1987, 99) ES now seemed to render this elite of experts disposable, too. Unrest took hold of “IG Metall, the churches, the German Agricultural Society, and the Bundestag”, who organised “symposiums and hearings on the topic of ES technology” (Coy/Bonsiepen 1989, 188). “Technology assessment in the field of ES will”, according to the appeasement-oriented objective of the VDI/VDE Technology Center for Information Technology, “initially have the task of demythologising” (qtd. in ibid., 39).

In general, the debate about ES oscillated fruitlessly between negation, which qualifies their novelty by comparison to previous programming, and affirmation of ES as systems of superhuman intelligence presumed to render subject matter experts superfluous. In their astute and well-documented critique of ES technology, Wolfgang Coy and Lena Bonsiepen (1989) describe AI as the “most extreme ideological bastion” of computer science (152). The German Federal Ministry of Research and Technology staked out the delusional and catastrophic counterposition. It had an ES developed “for situation assessment in ʽfast breederʼ nuclear power plants” and announced in 1988 that the advantage of this ES lay in “the simplification of error diagnosis (instead of highly qualified specialists, only professionally trained people are required)” (qtd. in Coy/Bonsiepen 1989, 161). This articulates what the PAQ defines as an “engineers’ ideology”, “according to which all problems have a technical solution”, and behind which “stands capital’s distrust of workers and its attempt to become as independent from them as possible” (1987, 27). It was distrust that promises of AI’s use value built upon; in 1984, at the most important international conference of AI researchers, AI’s salesmen acted “like the prophets of a new age” and announced, in the manner of “pitchmen”: “We’ve built a better brain […]. ES don’t get sick, resign, or take early retirement!” (Nilsson 2010, 344 et sq.) “Almost everywhere, ES increase labour productivity by at least a factor of ten. Growth rates of twenty, thirty, or forty are common”, Edward Feigenbaum, a pioneer and founder of AI enterprises, declared in 1988 (qtd. in Coy/Bonsiepen 1989, 151). At the same time, the US government elevated ES to the status of a key technology “for the insane development of weapons technology such as nuclear ʽEarly Warning and Response Systemsʼ and SDI fantasies” (160). In 1989, “a 10,000-rule, real-time ES” was supposed to become operational and advise pilots in air combat or make decisions for them (Nilsson 2010, 363). It is in the form of drones that projects of this sort begin to take shape at the beginning of the 21st century.

Critical computer scientists, by contrast, cautioned that ES could replace “individual aspects of qualified skilled labor”, but “not a qualified skilled worker’s entire field of activity” (Coy/Bonsiepen 1989, 34). What remained unclear was the strategic weight such “individual aspects” have and whether AI-specific educational upheavals are imminent. The basic shortcoming of all such reassurances was and remains that the ʽteething troublesʼ of the new technology, whether due to technological immaturity or capitalist obstacles to development, are interpreted theoretically as symptoms of structural deficits that will not be overcome until the distant future, if at all.

The PAQ's analysis of ES. – The PAQ worked out the counterpart to the ʽreplacement logicʼ developed by Marx (1987, 20). As is well known, this logic had enabled Marx to make technological forecasts of astonishing historical reach. With regard to a “stocking-loom” that knits “with many thousands of needles at once”, Marx explained in C I: “The number of tools […] is from the outset independent of the organic limitations that confine the tools of the handicraftsman.” (Marx 1976, 495) The “organic limitations” that ES are emancipated from are, for example, the amount of all kinds of index cards and pieces of paper, as well as the recollective and deductive tasks based thereon and performed by workers. With the ES, there has developed, for the first time in the history of technology and labour, a means of production that reflects upon itself: it documents decision-making processes of machines and human beings in a manner that is reproducible and can be evaluated, as in a scientific experiment. On the one hand, ES create new control dispositifs for capital; on the other hand, they enable a ʽdevelopmentalʼ labour and technology research ʽfrom belowʼ.

Although the ʽreplacement logicʼ sharpens the focus on capitalist-induced catastrophic ʽredundancy potentialsʼ, it leaves open the PAQ’s question as to what new qualification requirements arise, and what opportunities these open up for workers – under capitalist conditions of production. What is decisive for forecasts about AI and ES is the PAQ’s response to the argument of replacement logic that “machine solutions or far-reaching machine support are conceivable” even with regard to the necessary task of addressing malfunctions (1987, 28). The PAQ asserts the complementary dimension of the new: “However, there is always a higher level at which human intervention becomes necessary. With sophisticated equipment for addressing malfunctions, there arises the possibility that this equipment may itself be malfunctioning.” (ibid.)

The PAQ’s direct critique of AI and ES refers to this blinkered view of replacement logic: “ʽAI systemsʼ or ʽESʼ” claim, “by their very names”, to be “able to automate specifically human abilities” (ibid.). These are operationalised as “the capacity to criticise and reshape existing reality, i.e., to process ʽcontradictory findingsʼ in a way different from any automatic information processing, and to undertake appropriate changes to objectives” (28 et sq.). The basic qualification for working with AI systems and ES is therefore the ability to analyse contradictions.

Realistic mistrust of the system on the part of ES users is necessary due to the ʽimperfectionʼ of ES as it results from technological and developmental factors: “An ES is unfinished, incomplete, and never completely error-free in terms of the definition of its task. The abrupt break between correct and erroneous work cannot be fixed technologically. This is where ES […] differ radically from traditional software systems based upon algorithmic specifications as opposed to heuristics.” (Coy/Bonsiepen 1989, 30) A central question is therefore: “How can catastrophic consequences be prevented?” (31)

The Form of Individuality of the Knowledge Engineer. – The method of producing ES cannot be generalised; they are “in almost every case” the result of “long and intensive effort by a particularly qualified practitioner” (Winograd/Flores 1987, 175). The separation of the subject matter expert and the ES designer gives birth to the historical form of individuality of the knowledge engineer, who “learns the expert rules of the human experts, represents them in programming language and implements them in a functional work program” (Mainzer 1994, 154). It is the defining contradiction of this figure that it must “pursue the mental expropriation of human experts” (Zelewski 2000, 241) with their consent. The goal is ʽdevelopmentalʼ expropriation. On the one hand, the technical experts are “mostly […] not conscious” of the “heuristic knowledge” that they deploy (Mainzer 1994, 154). On the other hand, communication with knowledge engineers also leads to a “renewed formation of knowledge on the part of the expert” (Puppe et al. 2003, 607). A culture clash needs to be shaped constructively: “Knowledge engineers will practically never achieve the mastery of technical language that the expert has at his disposal. […] For this reason alone, every arrogance […] towards the often poorly reflected knowledge of the expert is out of the question.” (608) Coy and Bonsiepen therefore already ask in 1989: “Now what is the knowledge engineer: a programmer, psychologist, expert, scientist, artist?” (59) According to their study, many users “prefer psychologically or sociologically trained specialists with additional training in AI techniques over knowledge engineers” (ibid.).

In the 1990s, the feminist anthropologist Diana E. Forsythe examined how experts in particular fields and knowledge engineers react to one another. Her findings suggest that the latter often function as instances for the smoothing over of contradictions. Thus Forsythe’s findings challenge critical-psychological labour research to begin paving the way to alternative practices. Inconsistencies in ES often result from the rationalist blindness of knowledge engineers, who, as computer scientists, generally assume “that knowledge is conscious and that experts are able to tell them what they know if they only want to”, which is why “knowledge extraction” occurs primarily through interviews, and the observation of labour practices is omitted (2001, 52); for knowledge engineers, knowledge is “what can be programmed into the knowledge base of an expert system” (53). Thus a project group that was working on an ES for the counselling and educating of headache patients was assigned two doctors, but no nurses, even though the latter are the ones who “have considerable contact with patients and regard educating them as one of their most important tasks […]. This reflects the usual practice in medical computer science of silencing the voices of nurses as opposed to those of doctors” (101).

4. AI and the Internet. – Sybille Krämer’s observation concerning the history of the fascination with AI, namely that “the myth of ʽAIʼ” has faded due to the Internet (1997, 83), may be accurate. However, the Internet is the de facto foundation for a new era of AI: billions of users render AI programs profitable that are suitable for everyday use (for example, the simultaneous translation of text or facial recognition); the evaluation of user data (data mining) is essentially based upon AI and drives its further development.

The Internet of Things (IoT). – After the World Wide Web in the 1990s and the mobile Internet of the first decade of the 21st century, “the IoT is ready to take off as the third phase in the rapid history of the Internet” (Horvath 2012, 1). The ʽthingsʼ – an estimated 50 billion of them by the year 2020 (2), called “everyware” from the perspective of capital (Greenfield 2006) – are the “tiniest of microprocessors, communicating with each other via radio”; they will in the future be embedded, “frequently in invisible form”, in everyday objects such as automobiles, consumer goods, electricity meters or pieces of clothing, and they can “be controlled and can independently communicate with each other over the Internet” (Horvath 2012, 1). Scenarios with considerable potential for domination revolve around computerised implants that monitor patients at home and when traveling, detect critical situations in combination with other microcomputers in the patient environment, automatically adopt countermeasures, trigger emergency calls etc. (see Friedewald et al. 2010, 158 et sq.).
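
The patient-monitoring scenario can be made concrete with a small in-process simulation: sensor ʽthingsʼ publish readings onto a shared bus, and a monitor subscribing to them detects a critical combination and triggers an emergency call. All names and thresholds below are hypothetical, and the in-memory bus merely stands in for the radio protocols and brokers of a real deployment.

```python
# An in-process stand-in for the implant scenario sketched above: sensor
# 'things' publish readings to a bus; a subscribing monitor detects a
# critical combination and triggers an alarm. Names, topics and thresholds
# are hypothetical; real devices would communicate via radio and a broker.

from collections import defaultdict

class Bus:
    """Minimal publish/subscribe bus standing in for a network protocol."""
    def __init__(self):
        self.subscribers = defaultdict(list)
    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)
    def publish(self, topic, value):
        for handler in self.subscribers[topic]:
            handler(value)

bus = Bus()
state = {}   # latest reading per sensor, as seen by the monitor

def monitor(reading_name):
    def handler(value):
        state[reading_name] = value
        # hypothetical critical combination of two implant readings
        if state.get("pulse", 0) > 140 and state.get("blood_oxygen", 100) < 90:
            print("emergency call triggered:", state)
    return handler

bus.subscribe("pulse", monitor("pulse"))
bus.subscribe("blood_oxygen", monitor("blood_oxygen"))

bus.publish("pulse", 150)          # elevated, but not critical on its own
bus.publish("blood_oxygen", 85)    # the combination now triggers the alarm
```

The point of the sketch is the object-object recognition discussed below: no human observes the readings; the ʽthingsʼ react to one another.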

It was as “ubiquitous computing” that the IoT vision was first worked out, by Mark Weiser in 1991; Weiser built upon Licklider’s “man-computer symbiosis”. In the 1990s, a “number of almost identical concepts” emerged (see Friedewald et al. 2010, 9), such as “Smart Dust, Nomadic Computing, Pervasive Computing, Ambient Intelligence and IoT” (39), this last term being coined by Kevin Ashton in 1999 (2009). In 2012, the Central Association of the German Electrical Industry identified this “development towards cyber-physical systems” as the “driver of a fourth industrial revolution” (8).

A problem for the IoT's social perception and for supervision of its design is its inconspicuousness; the IoT “increasingly” hides “computers in everyday objects” (Mattern 2004, 9). A “world of paradoxes” thus arises, in which “the computer apparently disappears, but at the same time is everywhere” (ibid.). As “direct descendants of the classic Internet”, ubiquitous computing and the IoT “inherit the classic problems of insufficient IT security”, that is, inadequate protection from espionage and falsified data, “combined however with masses of data of unknown quantity and quality that must be protected” (ULD/HU 2006, 301). “Not only can or should keys, pets, suitcases, mail, containers, weapons, toll vehicles, theft-prone objects, environmentally harmful substances and unfaithful spouses be localised; parents would also greatly appreciate it if children’s clothing revealed their whereabouts” (Mattern 2005, 52). Jerome E. Dobson and Peter F. Fisher (2003) call the downside of the localisation of IoT objects “geoslavery”. “Technological paternalism” (Spiekermann/Pallas 2007), the “ability to systematically and automatically sanction the slightest misstep” by means of the object-object recognition inherent to the technology, e.g. “the waste paper bin” that recognises “when a battery wrongly lands in it” (ULD/HU 2006, 187), can take subtle forms.

Passivating animism – subordination to technological paternalism – is implicit in the IoT dystopia of Mike Kuniavsky (2003): “when enough things around us recognize us, remember us, and react to our presence, I suspect we’ll start to anthropomorphize all objects. […] We will see the world as animist […]”, believing “that all objects have will, intelligence, and memory and that they interact with and affect our lives in a deliberate, intelligent, and (in a sense) conscious way.”

5. In the event that the social and tax systems do not undergo an “incremental but fundamental transformation towards indirect taxation of non-human labour and thus a socialisation of the automation dividend”, Rieger (2012) predicts a new wave of mass unemployment with incalculable social crisis consequences. The wealth that could potentially arise from AI applications would then fall back, as a curse, upon a society unable to adapt its institutions and conditions to the augmented forces of production. If society is able to adapt, however, and uses the social labour that has become available for “activities in the social sphere, in art and culture, in the revitalisation of landscapes and cities that are not adequately rewarded on the market”, it may succeed in “halting and healing the loss of meaning felt by individuals from their own defeat in the race against the machines. This not only includes financial security, but also the offer of meaningful employment. […] The socialisation of the automation dividend is therefore a project of historical dimensions. However, unlike practically all other scenarios, it offers a positive utopia that guarantees long-term social, societal and economic stability and preserves human dignity.” (ibid.)

Bibliography: J.Aron, “Software tricks people into thinking it is human”, in: New Scientist, 6.9.2011 (www); K.Ashton, “That ʽInternet of Thingsʼ Thing”, RFID Journal, 22.6.2009 (www); M.A.Boden, Computer models of mind. Computational approaches in theoretical psychology, Cambridge a.o. 1988; B.J.Copeland, “The Church-Turing Thesis” (2008), in: The Stanford Encyclopedia of Philosophy (www); W.Coy, “Computer als Medien. Drei Aufsätze”, Informatik-Bericht der Uni Bremen, 3/1994; id., L.Bonsiepen, Erfahrung und Berechnung. Kritik der Expertensystemtechnik, Berlin/W a.o. 1989; W.D’Avis, “Theoretische Lücken der Cognitive Science”, in: Journal for General Philosophy of Science, vol. 29, 1998, no. 1, 37-57; J.E.Dobson, P.F.Fisher, “Geoslavery”, in: IEEE Technology and Society Magazine, vol. 22, 2003, no. 1, 47-52; B.Dotzler, “Operateur des Wissens: Charles Babbage (1791-1871). Einleitung”, in: id. (ed.), Babbages Rechen-Automate: Ausgewählte Schriften, Wien a.o. 1996, 1-29; id., F.Kittler, “Alan M. Turing – Nachwort”, in: Turing 1987, 209-33; P.N.Edwards, The Closed World: Computers and the Politics of Discourse in Cold War America, Cambridge/MA 1996; D.Forsythe, Studying Those Who Study Us. An Anthropologist in the World of Artificial Intelligence, ed. by D.J.Hess, Stanford 2001; M.Friedewald et al., Ubiquitäres Computing. Das “Internet der Dinge” – Grundlagen, Anwendungen, Folgen, Studien des Büros für Technikfolgen-Abschätzung beim Deutschen Bundestag, vol. 31, Berlin 2010; P.Galison, “Die Ontologie des Feindes. Norbert Wiener und die Vision der Kybernetik”, in: H.-J.Rheinberger, M.Hagner, B.Wahrig-Schmidt (eds.), Räume des Wissens: Repräsentation, Codierung, Spur, Berlin 1997, 281-324; A.Greenfield, Everyware. The Dawning Age of Ubiquitous Computing, Berkeley 2006; D.Haraway, “A Cyborg Manifesto” (1985), in: ead., Simians, Cyborgs, and Women: The Reinvention of Nature, New York 1991, 149-82; B.Hassenstein, “Kybernetik”, HWPh 4, 1976, 1467 et sq.; W.F.Haug, High-Tech-Kapitalismus. Analysen zu Produktionsweise, Arbeit, Sexualität, Krieg und Hegemonie, Hamburg 2003; B.Heintz, “ʽPapiermaschinenʼ: Die sozialen Voraussetzungen maschineller Intelligenz”, in: Rammert 1995, 37-64; A.Hodges, Alan Turing: The Enigma (1983), Princeton 2010; id., “Alan Turing” (2011), in: The Stanford Encyclopedia of Philosophy (www); U.Holzkamp-Osterkamp, Grundlagen der psychologischen Motivationsforschung 1, Frankfurt/M-New York 1975; S.Horvath, “Internet der Dinge”, Wissenschaftliche Dienste des Deutschen Bundestags, “Aktueller Begriff”, no. 19/2012 (www); F.Kile, “Artificial intelligence and society: a furtive transformation”, in: Artificial Intelligence & Society, vol. 27, 2012, no. 1, 1-9; S.Krämer, “Vom Mythos ʽKünstliche Intelligenzʼ zum Mythos ʽKünstliche Kommunikationʼ oder: Ist eine nicht-anthropomorphe Beschreibung von Internet-Interaktionen möglich?”, in: S.Münker, A.Roesler (eds.), Mythos Internet, Frankfurt/M 1997, 83-107; M.Kuniavsky, “User Expectations in a World of Smart Devices”, posting dated 17.10.2003 (www); A.Lenz, S.Meretz, Neuronale Netze und Subjektivität: Lernen, Bedeutung und die Grenzen der Neuro-Informatik, Braunschweig-Wiesbaden 1995; J.C.R.Licklider, “Man-Computer Symbiosis” (1960), in: The New Media Reader, ed. by N.Wardrip-Fruin, N.Montfort, Cambridge (MA) 2003, 74-82; K.Mainzer, Computer – Neue Flügel des Geistes? Die Evolution computergestützter Technik, Wissenschaft, Kultur und Philosophie, Berlin-New York 1994; F.Mattern, “Ubiquitous Computing: Schlaue Alltagsgegenstände. Die Vision von der Informatisierung des Alltags”, in: Bulletin VSE/AES, vol. 95, 2004, no. 19, 9-13; id., “Die technische Basis für das Internet der Dinge”, in: E.Fleisch, id. (eds.), Das Internet der Dinge – Ubiquitous Computing und RFID in der Praxis, Berlin a.o. 2005, 39-66; H.-P.Michels, “Moderne psychologische Theorien über Denken und Gedächtnis – kann der Computer das Vorbild sein?”, in: FKP 39, 1998, 73-88; H.Moravec, Mind Children: The Future of Robot and Human Intelligence, Cambridge (MA) 1988; A.Newell, H.A.Simon, “Computer Science as Empirical Inquiry: Symbols and Search” (1976), in: Communications of the ACM, vol. 19, no. 3, March 1976; N.J.Nilsson, The Quest for Artificial Intelligence. A History of Ideas and Achievements, www 2010; PAQ (Projektgruppe Automation und Qualifikation), Widersprüche der Automationsarbeit. Ein Handbuch, Berlin/W 1987; M.Pröse, Chiffriermaschinen und Entzifferungsgeräte im Zweiten Weltkrieg: Technikgeschichte und informatikhistorische Aspekte, Munich 2006; F.Puppe, H.Stoyan, R.Studer, “Knowledge Engineering”, in: G.Görz et al., Handbuch der Künstlichen Intelligenz, 4th, corrected edn., Munich-Vienna 2003, 599-641; W.Rammert (ed.), Soziologie und künstliche Intelligenz. Produkte und Probleme einer Hochtechnologie, Frankfurt/M-New York 1995; F.Rieger, “Bald wird alles anders sein. Doch wir können die Folgen steuern: Manifest für eine Sozialisierung der Automatisierungsdividende”, in: Frankfurter Allgemeine Zeitung, 18.5.2012, 29; R.Robin, The Making of the Cold War Enemy. Culture and Politics in the Military-Intellectual Complex, Princeton 2001; S.J.Russell, P.Norvig, Artificial Intelligence: A Modern Approach, 3rd revised edn., Boston a.o. 2010; R.C.Schank, P.G.Childers, The Cognitive Computer: On Language, Learning, and Artificial Intelligence, Boston 1984; U.Schmid, “KI und Informatik. Editorial”, in: Künstliche Intelligenz, vol. 16, 2012, no. 1, 1 et sq.; S.Spiekermann, F.Pallas, “Technologiepaternalismus – Soziale Auswirkungen des Ubiquitous Computing jenseits von Privatsphäre”, in: F.Mattern (ed.), Die Informatisierung des Alltags – Leben in smarten Umgebungen, Berlin a.o. 2007, 311-25; H.Tetens, “Die erleuchtete Maschine”, in: Die Zeit, 10.6.1999, no. 24, 51; A.M.Turing, “On Computable Numbers, with an Application to the Entscheidungsproblem” (1937), in: C.Petzold (ed.), The Annotated Turing, Indianapolis 2008; id., “Intelligent Machinery” (1948), in: D.C.Ince (ed.), Collected Works of A.M. Turing: Mechanical Intelligence, Amsterdam 1992, 107-29; id., “Computing Machinery and Intelligence” (1950), in: Ince 1992, 133-60; id., “Intelligent Machinery, A Heretical Theory” (1951), Philosophia Mathematica, vol. 4, 1996, no. 3, 256-60; ULD (Unabhängiges Landeszentrum für Datenschutz Schleswig-Holstein) and HU (Inst. f. Wirtschaftsinformatik der Humboldt-Universität zu Berlin), TAUCIS – Technikfolgenabschätzung Ubiquitäres Computing und Informationelle Selbstbestimmung, Berlin 2006; J.Weber, “Vom ʽTeufel der Unordnungʼ zum Engel des Rauschens. Kontroll- und Rationalitätsformen in Mensch-Maschine Systemen”, in: Blätter für Technikgeschichte, vol. 66/67, 2004/05, 239-61; M.Weiser, “Computer im nächsten Jahrhundert”, in: Spektrum der Wissenschaft, 1991, no. 11, 92-101; N.Wiener, Cybernetics or Control and Communication in the Animal and the Machine (1948), Cambridge (MA) 1985; T.Winograd, F.Flores, Understanding Computers and Cognition: A New Foundation for Design, Reading a.o. 1987; S.Zelewski, “Expertensysteme”, in: H.Corsten (ed.), Lexikon der Betriebswirtschaftslehre, 4th edn., Munich-Vienna 2000, 237-42; Zentralverband der Deutschen Elektroindustrie, Netz. Werk. Zukunft. Visionen schaffen – Impulse geben, Jahresbericht 2011/12, www 2012.

Christof Ohm

Translated by Alexander Locascio

→ automation, cloning, control, critical psychology, critique of science, cybertariat, development of productive forces, general labour/universal labour, genetic engineering, high-technological mode of production, historical forms of individuality, information, information war/informational warfare, information worker, informational revolution, intelligence, learning, machine-breakers/luddite, machinery, military-industrial complex, robot, scientific-technological revolution, spirit/mind, technical intelligence, technical progress, technical/technological revolution/s, technique/technics, technology, thought

→ allgemeine Arbeit, Automation, Denken, Geist, Gentechnologie, historische Individualitätsformen, hochtechnologische Produktionsweise, Information, informationelle Revolution, Informationsarbeiter, Informationskrieg/ informationelle Kriegsführung, Intelligenz, Klonen, Kontrolle, Kritische Psychologie, Kybertariat, Lernen, Maschinensturm, Maschinerie, militärisch-industrieller Komplex, Produktivkraftentwicklung, Roboter, Technik, Technikentwicklung/technologische Revolutionen, technische Intelligenz, technischer Fortschritt, Technologie, wissenschaftlich-technische Revolution, Wissenschaftskritik

Originally published as Künstliche Intelligenz in: Historisch-kritisches Wörterbuch des Marxismus, vol. 8/I: Krisentheorien bis Linie Luxemburg-Gramsci, edited by Wolfgang Fritz Haug, Frigga Haug, Peter Jehle, Wolfgang Küttler, Argument-Verlag, Hamburg 2012, col. 483-501.