Craik's Automaton

//Walter’s tortoise has a remarkable similarity to Craik’s automaton, so much so that I had initially assumed they were in fact the same vehicle. Years ago I remember thinking: wouldn’t it be a leap forward if there were a model of intelligence that could take us from Walter’s tortoise to something much more complex, like Craik’s modelling automaton? You would think that a model of intelligence would exist with just enough complexity, but not too much, to accomplish this. Of course a control-model diagram would be ideal, if there were such a thing, although it may be better to sit back and wait for the Singularity Institute to get round to doing that for us.//

The British physiologist William Grey Walter (1910–1977) was an early member of the interdisciplinary Ratio Club. This was a small dining club that met several times a year from 1949 to 1955… …The founder-secretary was the neurosurgeon John Bates, who had worked (alongside the psychologist Kenneth Craik) on servomechanisms for gun turrets during the war. The club was a pioneering source of ideas in what Norbert Wiener had recently dubbed ‘cybernetics’. [|1] Indeed, Bates’ archive shows that the letter inviting membership spoke of ‘people who had Wiener’s ideas before Wiener’s book appeared’. [|2] In fact, its founders had considered calling it the Craik Club, in memory of Craik’s work—not least, his stress on ‘synthetic’ models of psychological theories. [|3] In short, the club was the nucleus of a thriving British tradition of cybernetics, started independently of the transatlantic version. The aim was to discuss novel ideas: their own, and those of guests—such as Warren McCulloch. Indeed, McCulloch—the prime author, a few years earlier, of what became the seminal paper in cognitive science (McCulloch and Pitts 1943)—was their very first speaker in December 1949… …Turing himself gave a guest talk on //Educating a Digital Computer// exactly a year later, and soon became a member. (His other talk to the club was on morphogenesis.) Professors were barred, to protect the openness of speculative discussion. …Our specific interest here, however, is in the machines built by one member of the Ratio Club: Grey Walter’s tortoise… Grey Walter was the first Director of Physiology at the Burden Neurological Institute (from 1939)… he discovered the delta and theta rhythms, and designed several pioneering EEG-measuring instruments… [//another reference says he helped Hans Berger fix his EEG machine to detect delta and theta waves//] …From 1949 onwards, Grey Walter built several intriguing cybernetic machines. These were intended to throw light on the behaviour of biological organisms—although he did point out that they could be adapted for use as ‘a better “self-directing missile”’…

…He’d been inspired, in part, by a wartime conversation with Craik. Craik was then working on scanning and gun aiming, and visited Grey Walter at the Burden Institute to use some of his state-of-the-art electronic equipment. During his visit he suggested that the EEG might be a cortical scanner, affected by sensory stimuli. This idea became influential in neuroscientific circles. [|15] And it was later modelled by Grey Walter as a rotating photoelectric cell, whose ‘scanning’ stopped when its robot carrier locked onto a light source… One of his tortoises, the //Machina speculatrix//, showed surprisingly lifelike behaviour. ‘Lifelike’ rather than (human) mindlike—the Latin word meant exploration, not speculation. But Grey Walter clearly had his sights on psychology as well as physiology. This was the first step in a research programme aimed at building a model having these or some measure of these attributes: exploration, curiosity, free-will in the sense of unpredictability, goal-seeking, self-regulation, avoidance of dilemmas, foresight, memory, learning, forgetting, association of ideas, form recognition, and the elements of social accommodation… During Grey Walter’s lifetime, his tortoises—like Vaucanson’s flute player, which in fact had also been theoretically motivated [|41]—were commonly dismissed by professional scientists as mere robotic ‘toys’. That remained true for nearly thirty years. During all that time, the general verdict was that they were superficially engaging gizmos of little scientific interest. This largely negative reception was due partly to the vulgar publicity they’d attracted in the mass media around 1950. The //brouhaha// surrounding the tortoises put off even some of Grey Walter’s fellow Ratio members, who were better placed than anyone to appreciate their significance…
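The scan-then-lock behaviour just described can be made concrete with a toy simulation. The sketch below is only an illustration of the idea (the intensity model, threshold, and step size are invented); Grey Walter's actual tortoise was analogue electronics, not a program:

```python
import math

# A toy simulation, not Grey Walter's circuitry: a single rotating
# photocell "scans" until it registers enough light, then the scan
# stops and the machine can steer toward the source.

def light_intensity(sensor_angle, source_angle):
    # Crude optics: intensity peaks when the cell points at the source.
    return max(0.0, math.cos(sensor_angle - source_angle))

def step(sensor_angle, source_angle, threshold=0.9):
    if light_intensity(sensor_angle, source_angle) < threshold:
        # Scanning mode: the photocell keeps rotating.
        return sensor_angle + 0.1, "scan"
    # Locked on: scanning stops; the carrier drives toward the light.
    return sensor_angle, "approach"

angle, mode, source = 0.0, "scan", 2.0
while mode == "scan":
    angle, mode = step(angle, source)
print(f"locked on at {angle:.2f} rad (source at {source:.2f} rad)")
```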


In 1943, the psychologist Kenneth Craik gave the name “synthetic method” to the process of testing behavioral theories through machine models. The “synthetic method”, envisaged by Loeb in his reflections on heliotropic machines, has enjoyed increasing popularity in the modelling and explanation of animal and human behavior from the era of Cybernetics up to the present day.

Miessner described the behavior of the electric dog in some detail in //Radiodynamics: The Wireless Control of Torpedoes and Other Mechanisms// [Miessner, 1916]. The electric dog’s orientation mechanism included two selenium cells which, when struck by light, operated two sensitive relays.

The self-directing capacity of the electric dog attracted the attention of Jacques Loeb, described by Miessner as “the famed Rockefeller Institute biologist, who had proposed various theories explaining many kinds of tropism.” The explanation of the orientation mechanism, Miessner emphasized, was “very similar to that given by Jacques Loeb, the biologist, of reasons responsible for the flight of moths into a flame.” In particular, the electric dog’s lenses corresponded to “the two sensitive organs of the moth” (p. 196).

In particular, Loeb carefully documented the ways in which the orientation of bilaterally symmetrical lower animals, like the moth, depends on light. These are in fact “natural heliotropic machines”. Now he claimed to have found an instance of an “artificial heliotropic machine”, as he called it, in the orientation mechanism of the electric dog.
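The cross-coupling Miessner and Loeb describe, two light-sensitive cells each closing a relay that steers the machine toward the brighter side, can be sketched in a few lines. This is a hedged illustration only: the trigger level and the relay-to-steering mapping are invented, not taken from Miessner's published design.

```python
# A toy version of the electric dog's orientation mechanism: two
# selenium cells, each closing a relay when its side is lit enough.

def sense(left_lux, right_lux, trigger=0.5):
    # Each cell closes its relay above a trigger illumination level.
    return left_lux > trigger, right_lux > trigger

def steer(left_relay, right_relay):
    if left_relay and right_relay:
        return "straight ahead"   # source roughly dead ahead
    if left_relay:
        return "turn left"        # brighter on the left: turn toward it
    if right_relay:
        return "turn right"
    return "keep searching"       # no strong light on either cell

print(steer(*sense(0.8, 0.2)))  # light to the left -> 'turn left'
```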

A material model taking the form of a machine may enable suitable tests to be carried out on the theoretical or formal model, because the latter “served as the basis in the construction of the machine”, as Loeb put it. This epistemological and methodological standpoint is at the core of the cybernetic programme and motivates much AI and robotics research up to the present time. [|2] The flow of knowledge between machine-based investigations of adaptive biological behavior and applied warfare research is evident in the work of British scientists from the early 1940s. The work of the Cambridge psychologist Kenneth Craik is a significant case in point. His investigations of scanning mechanisms and control systems were a major source of inspiration for the epistemological claims made in his book //The Nature of Explanation// [Craik, 1943].

The scientific activity of Craik and other pioneers of automatic computing and control was given a strong impetus by military research projects carried out during World War II. Grey Walter’s recollections vividly convey the interconnection of scientific and defence goals in the control-mechanism research community in general, and in Craik’s work in particular: “The first notion of constructing a free goal-seeking mechanism goes back to a wartime talk with the psychologist Kenneth Craik, whose untimely death was one of the greatest losses Cambridge has suffered in years [Craik died in 1945, aged 31]. When he was engaged on a war job for the Government, he came to get the help of our automatic analyser with some very complicated curves he had obtained, curves relating to the aiming errors of air gunners. Goal-seeking missiles were literally much in the air in those days; so, in our minds, were scanning mechanisms” [Walter, 1953, p. 53].

In his 1943 book, Craik stated that the function of thought is “prediction”, which in turn involves three steps: “translating” processes of the external world, perceived by means of a sensory apparatus, into an internal, simplified or small-scale model; drawing from this model possible inferences about the world by appropriate machinery; and “retranslating” this model into external processes, i.e. acting by means of a motor system (pp. 50-51). According to Craik, both organisms and the newly conceived feedback machines are predictive systems, even though the latter are still quite rudimentary in the way of prediction. As an example of such machines, Craik mentioned the anti-aircraft gun with a predictor, so familiar to Wiener and other pioneers of Cybernetics. And he described the human control system as a “chain” that includes a sensory device, a computing and amplifying system, and a response device.

This is what Craik called “the engineering statement of man”, whose abstract functional organization was a source of inspiration for his military investigations as well. The concept of man as a computing and control system was admittedly a radical simplification, neglecting many dimensions of human psychology that Craik mentioned in //The Nature of Explanation//. But this simplification served to unveil deep connections across academic subjects: psychology, in Craik’s words, was to bridge “the gaps between physiology, medicine and engineering” by appeal to the shared functional architecture of computing and control systems.
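Craik's translate-infer-retranslate cycle maps naturally onto a sense-model-act program. The sketch below is a minimal illustration of that three-step loop, assuming a toy constant-velocity target; the function names and numbers are invented for the example, not drawn from Craik:

```python
# A minimal sketch of Craik's translate -> infer -> retranslate loop.

def translate(observation):
    # Step 1: 'translation' of an external process into an internal,
    # simplified, small-scale model (here, a position/velocity estimate).
    position, velocity = observation
    return {"position": position, "velocity": velocity}

def infer(model, dt=1.0):
    # Step 2: run the internal model forward to draw an inference,
    # here a constant-velocity prediction of where the target will be.
    return model["position"] + model["velocity"] * dt

def retranslate(predicted_position):
    # Step 3: 'retranslation' into an external process, i.e. action,
    # like a gun predictor aiming ahead of its target.
    return {"aim_at": predicted_position}

# Observe a target at position 10.0 moving at 2.0 units per step,
# then act on the model's one-step-ahead prediction.
print(retranslate(infer(translate((10.0, 2.0)))))  # {'aim_at': 12.0}
```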


Models are abstract simplifications of a complex reality: the more concrete (‘mechanical’) the abstraction, the simpler the model, the more likely we are to accept it according to Occam’s principle of parsimony—always provided it passes the test of satisfactorily explaining the observations. Yet analogical reasoning is far from a sound or secure route to scientific knowledge (Knorr 1981). Its appeal seems to transcend scientific theorizing and to reflect a strong operational principle of human thought itself. This is the essential argument of Craik (1943) who writes [pp. 120–1]: “I have outlined a symbolic theory of thought, in which the nervous system is viewed as a calculating machine capable of modelling or paralleling external events, and have suggested that this process of paralleling is a basic feature of thought and explanation”.

These considerations raise the question of the exact relation between a model, a theory and an analogy. We will not pursue this in detail here, other than to point out that the question is vexed. For instance, Leatherdale (1974, p. 41) writes: “. . . the literature on models displays a bewildering lack of agreement on what exactly is meant by the word ‘model’ in relation to science”. (See also Nagel 1961, Moor 1978 and Wartofsky 1979.) Our main purpose is different.

If analogy is, as we have argued above, a strong operational principle of human thought, is there something to be gained by considering the brain as a machine for making analogies—an //analog// computer? This is effectively the thesis of Craik, and it is salutary to reflect that his book predates by some years the equally seminal paper of Turing (1950), which so firmly established the //digital// computer as the accepted metaphor/model of the brain in the fields of artificial intelligence (AI) and cognitive science. While Turing readily conceded that “Everything really moves continuously”, he nevertheless argued that “there are many kinds of machines that can profitably be thought of as being discrete-state machines.” Turing’s typically bold assertions led the way for other thinkers to develop (often in equally assertive fashion) //computationalism//—“the hypothesis that cognition is the computation of functions” (Dietrich 1990, p. 135). Influential work in this tradition, post-Turing, includes Newell (1980), Pylyshyn (1984), Pagels (1988) and Dietrich (1990). Only relatively rarely has this foundational assumption been questioned, e.g. by Searle (1980, 1990), Johnson-Laird (1983), Rubel (1985), McGinn (1989) and Penrose (1989).
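Turing's remark about discrete-state machines can be made concrete with a toy example: a thermostat's temperature varies continuously, yet for most purposes it is profitably modelled as a machine with two discrete states. The sketch below is an invented illustration (the thresholds and hysteresis band are arbitrary), not anything from Turing's paper:

```python
# A toy discrete-state machine: temperature varies continuously,
# but two states suffice to model the device's behaviour.

def next_state(state, temperature):
    if state == "heating" and temperature >= 21.0:
        return "idle"     # warm enough: stop heating
    if state == "idle" and temperature <= 19.0:
        return "heating"  # too cold: start heating
    return state          # otherwise hold the current state

state = "idle"
for t in [20.0, 19.5, 18.9, 19.4, 20.5, 21.2]:
    state = next_state(state, t)
    print(f"{t:5.1f} C -> {state}")
```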


Someone who deserves much of the credit for the current—artificial intelligence and cognitive science—way of dealing with this problem is Kenneth Craik. Craik stated that one of the most fundamental properties of thought is its power to predict events (Craik, 1943, p. 50). He mentioned three essential processes: 1. ‘translation’ of external process into words, numbers or other symbols; 2. arrival at other symbols by a process of ‘reasoning’, deduction, inference, etc.; and 3. ‘retranslation’ of these symbols into external processes (as in building a bridge to a design), or at least recognition of the correspondence between these symbols and external events (as in realizing that a prediction is fulfilled).


= The Future =

…Although technological progress has been accelerating, it has been limited by the basic intelligence of the human brain, which has not changed significantly for millennia. However, with the increasing power of computers and other technologies, it might soon be possible to build a machine that is fundamentally more intelligent than man…

…In 1958, Stanisław Ulam wrote in reference to a conversation with John von Neumann: One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.

In 1965, I. J. Good first wrote of an "intelligence explosion", suggesting that if machines could even slightly surpass human intellect, they could improve their own designs in ways unforeseen by their designers, and thus recursively augment themselves into far greater intelligences. The first such improvements might be small, but as the machine became more intelligent it would become better at becoming more intelligent, which could lead to a cascade of self-improvements and a sudden surge to superintelligence (or a singularity).
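Good’s cascade is often illustrated with a toy growth model; the following is such an illustration (not anything Good wrote). Let I(t) denote machine intelligence, and suppose the rate of improvement itself grows with intelligence, with constants c > 0 and exponent α ≥ 0:

```latex
\frac{dI}{dt} = c\,I^{\alpha}
\quad\Longrightarrow\quad
I(t) =
\begin{cases}
I_0\, e^{c t}, & \alpha = 1,\\[4pt]
\left(I_0^{\,1-\alpha} + c\,(1-\alpha)\,t\right)^{\frac{1}{1-\alpha}}, & \alpha \neq 1.
\end{cases}
```

For α ≤ 1 growth is at most exponential; for α > 1 the solution diverges at the finite time t* = I₀^(1−α) / (c(α−1)), the mathematical analogue of a sudden surge to superintelligence.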

In 1982, Vernor Vinge proposed that the creation of smarter-than-human intelligence represented a breakdown in humans' ability to model their future. The argument was that authors cannot write realistic characters who are smarter than humans: if humans could visualize smarter-than-human intelligence, we would be that smart ourselves. Vinge named this event "the Singularity". He compared it to the breakdown of the then-current model of physics when it was used to model the gravitational singularity beyond the event horizon of a black hole. In 1993, Vinge associated the Singularity more explicitly with I. J. Good's intelligence explosion, and tried to project the arrival time of artificial intelligence (AI) using Moore's law, a law which thereafter came to be associated with the "Singularity" concept.

Good (1965) speculated on the effects of machines smarter than humans: Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

Hawkins (2008) responded to this speculation in the //IEEE Spectrum// special report on the singularity: The term 'singularity' applied to intelligent machines refers to the idea that when intelligent machines can design intelligent machines smarter than themselves, it will cause an exponential growth in machine intelligence leading to a singularity of infinite (or at least extremely large) intelligence. Belief in this idea is based on a naive understanding of what intelligence is. As an analogy, imagine we had a computer that could design new computers (chips, systems, and software) faster than itself. Would such a computer lead to infinitely fast computers or even computers that were faster than anything humans could ever build? No. It might accelerate the rate of improvements for a while, but in the end there are limits to how big and fast computers can run. We would end up in the same place; we'd just get there a bit faster. There would be no singularity.
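Hawkins' objection corresponds to putting a hard ceiling on the toy model above: adding a carrying capacity K (again an illustration, not Hawkins' own mathematics) turns runaway growth into saturation:

```latex
\frac{dI}{dt} = c\,I\left(1 - \frac{I}{K}\right)
\quad\Longrightarrow\quad
I(t) = \frac{K}{1 + \left(\frac{K}{I_0} - 1\right) e^{-c t}}
\;\xrightarrow[t \to \infty]{}\; K.
```

Improvement accelerates for a while and then levels off at the physical limit K: in Hawkins' phrase, "we'd just get there a bit faster", with no singularity.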

Vinge predicts that this feedback loop of self-improving intelligence will cause large amounts of technological progress within a short period. Most proposed methods for creating smarter-than-human or transhuman minds fall into one of two categories: intelligence amplification of human brains, and artificial intelligence. The means speculated to produce intelligence augmentation are numerous, and include bio- and genetic engineering, nootropic drugs, AI assistants, direct brain-computer interfaces, and mind uploading. Hanson (1998) is also skeptical of human intelligence augmentation, writing that once one has exhausted the "low-hanging fruit" of easy methods for increasing human intelligence, further improvements will become increasingly difficult to find.

Our Phase I research divides into three categories:

 * Theoretical Research
 * Tools and Technologies
 * System Design and Implementation (primarily Phase II)

** Theoretical Research **
Our research in this area will focus on using algorithmic information theory and probability theory to formalize the notion of general intelligence, and specifically of ethical general intelligence. Important work in this area has been done by Marcus Hutter, Jürgen Schmidhuber, Shane Legg, and others, as well as by our team; but this work has not yet been connected with pragmatic AGI designs. Meeting this challenge is one of our major goals going forward. One of our objectives in this area is to create a systematic framework for the description and comparison of AGI designs, concepts, and theories. We will also make selective contributions relevant to the practicalities of creating, engineering, and understanding real-world AGI systems. A central view of our research team is that ethical issues must be placed at the center of AGI research, rather than tacked on peripherally to AGI designs created without attention to ethical considerations. Several of our focus areas have direct implications for AGI ethics (particularly the investigation of goal-system stability), and we also intend to investigate several other issues related to AGI and ethics in depth. Specific focus areas within this domain include the following (a sample formalization is sketched after the list):
 * Research Area 1: Mathematical Theory of General Intelligence
 * Research Area 2: Interdisciplinary Theory of AGI
 * Research Area 3: AGI Ethical Issues
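As a concrete example of the kind of formalization Research Area 1 refers to, Legg and Hutter define a "universal intelligence" measure that scores an agent π by its expected performance across all computable environments μ, weighted by their simplicity. This is offered here as an illustrative starting point, not as the Institute's adopted definition:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\; V_{\mu}^{\pi}
```

Here E is the class of computable reward-generating environments, K(μ) is the Kolmogorov complexity of environment μ, and V_μ^π is the expected cumulative reward agent π obtains in μ: simple environments dominate the score, but every computable environment contributes.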

** Tools and Technologies **
This is a broad but critical area. One thing that has delayed AGI research is the scarcity of useful software tools, including tools for measuring ethicalness. Like any complex engineering challenge, building an AGI involves a large number of tools, some of which are quite complex and specialized; at present each team must develop its own, which is time-consuming and distracts attention from the actual creation of AGI designs and systems. In order to serve our R&D and the R&D of external researchers, the creation of a suite of relevant software tools will be invaluable, and this is one of the key roles SIAI can play going forward: creating robust tools for AGI development, to be utilized in-house and by the AGI research community at large.

Our initial work in this area will focus on customizing and further developing existing open-source software projects. There are valuable, preexisting projects moving slowly for lack of funding which can be morphed into specific tools for aiding the creation of safe, beneficial AGI. Three main examples are the AGISim simulation-world project, the Lojban language for human-machine communication, and the Mizar mathematics database.

Some key areas of tool development are not adequately addressed by any current open-source project, for example the creation of programming languages and operating systems possessing safety as a built-in property. SIAI researchers would not be able to complete such large, complex projects on their own, but SIAI can potentially play a leadership role by articulating detailed designs, solving key conceptual problems, and recruiting external partners to assist with engineering and testing. Finally, the creation of safe, beneficial AGI would be hastened if there were well-defined, widely accepted means of assessing general intelligence, safety, and beneficialness; providing such means of assessment is a tractable task that fits squarely within the core mission of the Institute.
 * Research Area 4: Customization of Existing Open-Source Projects
 * Research Area 5: Design and Creation of Safe Software Infrastructure
 * Research Area 6: AGI Evaluation Mechanisms

** System Design and Implementation **
This is arguably the most critical component of the path to AGI. As noted earlier, AGI design and engineering will be our central focus in Phase II. In Phase I, however, our work in this area will focus on the comparison and formalization of existing AGI designs. This is crucial, as it will lead to a better understanding of the strong and weak points in our present understanding of AGI, and form the foundation for creating new AGI designs, as well as for analyzing and modifying existing ones. Our in-house R&D is founded, in part, on the premise that appropriate use of probability theory is likely to play an important role in the development of safe, beneficial AGI. With this in mind, the "cognitive technologies" aspect of our Phase I centers on the creation of several cognitive components utilizing probability theory to carry out operations important to any AGI. Our research in this area will differ from most work on probabilistic AI in its focus on generality of scope rather than highly specialized problem-solving. In order to reason probabilistically about real-world situations, including situations where ethical decisions must be made, powerful probabilistic reasoning tools will be needed: tools different in kind from those currently popular for narrow-AI applications (a toy illustration of such probabilistic updating follows the list below).
 * Research Area 7: AGI Design
 * Research Area 8: Cognitive Technologies
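As a minimal illustration of the flavor of probabilistic component described above, the sketch below applies Bayes' rule to a toy safety judgment. All names and numbers are invented for the example; this is not SIAI's design:

```python
# A toy Bayesian update, illustrating elementary probabilistic
# inference of the kind such cognitive components would build on.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    # Bayes' rule: P(H|E) = P(E|H)P(H) / P(E).
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1.0 - prior))
    return p_evidence_given_h * prior / p_evidence

# Prior belief of 0.5 that a proposed action is safe, updated on a
# passed safety check that is 90% sensitive with a 20% false-positive rate.
p = posterior(prior=0.5, p_evidence_given_h=0.9, p_evidence_given_not_h=0.2)
print(f"P(safe | check passed) = {p:.3f}")  # 0.818
```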

= Other =

 * About the intelligence singularity
 * Ray Kurzweil and the singularity
 * Monitoring of the intelligence/technological singularity
 * The Singularity Summit – 2009 and 2010
