STANFORD, CALIFORNIA – Early Thursday morning, the “One Hundred Year Study on Artificial Intelligence” (AI100), a century-long study panel hosted by Stanford University, published its second report on the state of AI.
More than a scientific assessment, the paper is an AI policy-shaping document prepared by parties with a vested interest in the technology. Launched and funded by Microsoft’s chief scientific officer, Eric Horvitz, AI100 emanates from the university’s Human-Centered Artificial Intelligence (HAI) program, whose advisory board is stacked with high-powered individuals like LinkedIn co-founder Reid Hoffman, former National Security Adviser Condoleezza Rice, Blackstone CEO Stephen Schwarzman and Alphabet’s Eric Schmidt, who, along with fellow HAI board member Horvitz, sits on the U.S. National Security Commission on AI (NSCAI).
The NSCAI issued its final report in May, putting forth concrete policy recommendations on the role of AI in national defense, civil liberties and other aspects of governance. The AI100 report reframes those recommendations to advance the same goals within the private sector, specifically among “experts in the social sciences, the legal system and law enforcement”.
“AI has made the leap from something that mostly happens in research labs or other highly controlled settings to something that’s out in society affecting people’s lives”, AI100 panel chair Michael Littman told GeekWire. Indeed, the proliferation of AI technology seems to be approaching warp speed as the ubiquity of computerized gadgets and digital platforms has created a level of dependency – whether through workplace directives or self-imposed, dopamine-inducing behaviors – that has allowed algorithms to replace personal agency to a disturbing degree.
AI100’s hubristic timeline mirrors this dispossession of agency by presenting artificial intelligence in the illusory light of inevitability and permanence. Bombarded with tales of its benefits and subjected to interminable discourses by the technology’s evangelizers, the public has watched AI be surreptitiously deployed in every nook and cranny of our society without a single vote cast for or against such a radically invasive change in how people organize their communities and interactions.
Technological imperatives only go so far in accounting for this reality. With AI, in particular, the implications foreseen by the forefathers of this technology make it inseparable from the most deranged and pathological ambitions of power. The field of artificial intelligence has never left the confines of the military research and development efforts that gave birth to it through pioneers like J.C.R. Licklider, who was among the first to embark on the quest to reproduce the brain by mapping its “circuitry”, framing human consciousness and behavior as mechanical expressions of a soulless entity in the service of the Pentagon.
His paper, Man-Computer Symbiosis, published in 1960, is considered a seminal document for AI, which only began to appear as a course in academic curricula in the years after its publication. Nevertheless, he and others had been exploring the subject decades prior, and by the time Licklider’s paper was being passed around the offices of the fledgling Advanced Research Projects Agency (ARPA) on the eve of ARPANET, the prototype of the Internet, classical AI had developed the solid principles that would carry the discipline for the next three decades.
In the late 1980s, a new paradigm emerged that would take AI to the next level and clear many of the hurdles that had prevented the fraternity of computer scientists, neuroscientists, behavioral psychologists and biologists in the field from making the jump from stationary interfaces like mainframe computers running a clever piece of software to the holy grail of artificial intelligence: autonomous robots.
Bombs, Beer and the Nouvelle Robot Engineers
Thirteen of the world’s leading AI researchers and biologists had originally been scheduled to meet in Dubrovnik, Croatia, in the spring of 1991 for yet another workshop to discuss the “new AI paradigm” that had been gestating for several years among the “vanguard group”. The outbreak of the Yugoslav civil war forced a change of venue to a 14th-century priory north of Brussels called the Priory of Corsendonk.
Since 1988, this multidisciplinary group of scientists had gathered at various NATO-sponsored events to hash out the “essential ingredients of this new paradigm” and the strong shift in research methodologies brought about by the rise of concepts like “connectionism”, a movement in cognitive science that sought to explain intellectual abilities using artificial neural networks, and by the biological grounding of much of their explorations so far.
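For readers unfamiliar with the term, the connectionist claim is that capability emerges from the tuned weights of many simple, neuron-like units rather than from hand-written symbolic rules. The sketch below is a minimal illustration of our own (not code from the period): a tiny feedforward network, trained by gradient descent with NumPy, learns XOR, a function famously beyond any single-layer perceptron.

```python
# A minimal connectionist sketch (illustrative only): "knowledge" of XOR
# ends up distributed across the network's weights, not in explicit rules.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward pass through two layers of neuron-like units.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: squared-error gradients via the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(3))  # should approach [0, 1, 1, 0]
```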
This particular meeting had been organized by two of the top people in the field of AI: Luc Steels from the Free University of Brussels’ AI laboratory and Rodney Brooks from the MIT Artificial Intelligence Laboratory. Intended as a follow-up to a 1988 workshop called “Representation and Learning in an Autonomous Agent”, the workshop in Belgium would define the key ideas underlying the new paradigm, as posited by celebrated Chilean neuroscientist and attendee Francisco Varela.
Up until then, classical AI had been grounded in a strong engineering tradition that analyzed all possible situations and designed solutions for each of them. Varela and his colleagues were beginning to propose a different approach, in which “other mechanisms” allowed “the agent itself to handle the continuous novelty of the real world”. In AI parlance, ‘agent’ here means robot.
This new approach revolved around changing the focus of AI from so-called ‘higher-level’ cognitive activities such as logic or problem-solving to ‘lower-level’ skills “associated with sensorimotor intelligence”. The idea was to design “autonomous agents” (robots) by relying on the principles of ‘Embodiedness’ (physical form) and ‘Situatedness’ (environmental context) and limiting their functionality to concrete experiences, which Varela referred to as “microworlds”.
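To make the contrast concrete, here is a deliberately small sketch of our own devising (the microworld, sensor names and rules are hypothetical, not drawn from Varela or Brooks): a situated agent that keeps no map or plan, and whose behavior at each step is a direct mapping from what its body can sense locally to an action.

```python
# Hedged illustration of "embodiedness" and "situatedness": the agent has
# no internal world model; it reads only its local surroundings and reacts.
WORLD = ["wall", "open", "open", "open", "light"]  # a toy 1-D microworld

def sense(position):
    """Embodiment stand-in: the body exposes only the current cell and
    its immediate neighbors -- nothing else about the world is knowable."""
    left = WORLD[position - 1] if position > 0 else "wall"
    right = WORLD[position + 1] if position < len(WORLD) - 1 else "wall"
    return left, WORLD[position], right

def step(position):
    """Situatedness stand-in: action is a fixed reaction to local sensing,
    with no planning and no stored state."""
    left, here, right = sense(position)
    if here == "light":      # a concrete goal within the microworld
        return position
    if right != "wall":      # keep exploring in one direction while possible
        return position + 1
    if left != "wall":
        return position - 1
    return position

pos = 1
for _ in range(10):
    pos = step(pos)
print("agent settled at cell", pos, "->", WORLD[pos])  # -> light
```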
The construction of robots was, therefore, a tool for AI research; the machines were not “built with the prime goal of automating parts of sensory processing or action control”, but were conceived, rather, “as a first step toward the study of full cognitive agents”. The distinction here is subtle yet crucial, as the robotics industry as we know it today would grow from these very precepts and from the minds of these very men and women.
Rodney Brooks and his group at MIT were credited with doing “more than any other group to develop the technological basis for carrying out concrete experiments in the spirit of the new AI paradigm.” Brooks’ Mobile Robotics (Mobot) Lab within MIT’s Artificial Intelligence Lab, which he also directed, was the breeding ground for the next generation of AI researchers and robotics engineers. One of his students, Maja Matarić, had accompanied Brooks to the workshop and made one of the two technical contributions produced there.
The paper, titled “Integration of Representation Into Goal-Driven Behavior-Based Robots”, was one of the earliest academic treatises on Brooks’ “subsumption architecture” robot-building technique, which Matarić had already spent years working on with her teacher and which she would carry into a robotics career in the 21st century.
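In rough terms, subsumption architecture stacks simple sensing-to-action behaviors in layers, with higher-priority layers able to suppress the output of those beneath them rather than routing everything through a central planner. The sketch below is a heavily simplified, priority-ordered rendering of that layering idea (our own construction; the behavior names and sensor fields are hypothetical, not Brooks’ original wiring).

```python
# Hedged sketch of subsumption-style layering: each behavior maps raw
# sensing directly to an action; arbitration is by fixed priority, with
# the first firing layer suppressing everything below it.

def avoid(sensors):
    """Highest priority: a survival reflex that fires near obstacles."""
    if sensors["obstacle_distance"] < 0.3:
        return "turn_away"
    return None  # no opinion; defer to lower layers

def seek_light(sensors):
    """Task layer: fires only when the goal stimulus is sensed."""
    if sensors["light_bearing"] is not None:
        return "steer_" + sensors["light_bearing"]
    return None

def wander(sensors):
    """Default layer: always has an opinion, so the robot never stalls."""
    return "move_forward"

# Priority order stands in for Brooks' suppression wiring; there is no
# central world model or planner anywhere in the loop.
LAYERS = [avoid, seek_light, wander]

def arbitrate(sensors):
    for behavior in LAYERS:
        action = behavior(sensors)
        if action is not None:
            return action

print(arbitrate({"obstacle_distance": 0.1, "light_bearing": None}))    # turn_away
print(arbitrate({"obstacle_distance": 2.0, "light_bearing": "left"}))  # steer_left
print(arbitrate({"obstacle_distance": 2.0, "light_bearing": None}))    # move_forward
```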
But she would not be the only one. A few years before the workshop, Brooks took the work he and another of his students had done on a six-legged robot called Genghis and presented it to NASA’s Jet Propulsion Lab (JPL) in California as a cheap alternative approach to space exploration – an idea he had cooked up with yet another Mobot alum, Anita Flynn. Though some accounts portray the meeting as a failure, the student who had worked on the robot, Colin Angle, would be brought on at JPL in 1989 to work under David Miller, who was then supervising JPL’s Robotic Information Systems Group just as NASA was in the planning stages of a Mars rover mission called Mars Rover Sample Return (MRSR).
After building a small rover prototype called Tooth for JPL, Angle continued to work with Brooks at Mobot on his master’s thesis, a more complex version of Genghis called Attila. Soon thereafter, JPL, Brooks and his student spun Mobot off into IS Robotics, Inc. to capitalize on the entertainment value of the machines and generate extra funding for the broader project demanded by the new AI paradigm: building fully autonomous agents.
Bolting to the Top
A smooth, female voice narrates the Discovery Channel’s early-1990s profile of IS Robotics and Rodney Brooks: “Brooks is not an engineer,” she calmly states, pausing for effect before informing viewers that “his specialty is building brains.”
Media coverage of Brooks’ new company and its star engineer, Colin Angle, was considerable, and the robot project “IT” even landed on the cover of National Geographic. The “internally funded research project” was billed as an effort to create a “robot with a human looking face and simple sensors to respond to its environment and communicate an internal emotional state”.
Built in 1994, IT was a collaborative effort between Angle and newcomer Helen Greiner, who had also studied under Brooks in his Computer Science and Engineering course at MIT. Greiner would be made the company’s Vice President, technically under Angle, who was listed as President. Both would eventually supplant Brooks as the faces of the company as the professor slowly moved into the background and continued to pursue the advancement of the broader AI agenda.
Angle and Greiner would also get credit for certain products they really had no role in making, credit that served to maintain the company’s mystique and to obscure some less-than-flattering alliances that might have marred the carefully crafted image being built around the company.
In 1998, IS Robotics became iRobot after merging with a company called Real World Interfaces (RWI), which was among the first to pioneer the “ready-made autonomous robots” market. Run by entrepreneur Grinnell More, RWI counted MIT, IBM, NASA and the Army and Navy Research Labs among its clients. More introduced the “PackBot”, which was later deployed to Afghanistan and which Greiner claimed, in a 2018 LinkedIn post, to have introduced to the Pentagon. But it was More who was running iRobot’s Military Systems Division at the time.
Most of iRobot’s popular consumer products, like the Roomba, were developed by Joseph Lee Jones, who began working for the company in 1992, while it was still IS Robotics. Greiner and Angle have nevertheless reaped the rewards of being Rodney Brooks’ students and remain committed to advancing the new AI paradigm. But perhaps no other pupil of Brooks has done more for the cause than Maja Matarić, whose company Embodied, Inc. recently launched Moxie, “an in-home socially assistive robot for supporting child development”.
That’s Some Moxie
Matarić’s career after Mobot took her to California, where iRobot was also reincorporated after its merger with More’s outfit. She joined the faculty at the University of Southern California (USC) in 1997 after a brief stint at Brandeis University. She now heads USC’s Viterbi K-12 STEM Center and is co-director of the USC Robotics Research Lab.
The Belgrade native co-founded the company, which makes “artificial-intelligence-enabled robotic companions” for children, with venture capitalist Paolo Pirjanian, who also spent three years as iRobot’s Chief Technology Officer. Matarić remains on the board of Embodied, Inc., though she claims to have nothing to do with day-to-day operations.
She is, nonetheless, an extension of the Rodney Brooks school of artificial intelligence and the new AI paradigm. She takes full credit for inventing the concept of “Socially Assistive Robots” (SAR), which she breaks down in her paper Embodiment in Socially Interactive Robots and which has become an AI sub-field in itself.
SARs are at the base of Moxie, which is a direct manifestation of the new AI paradigm principles delineated back in Corsendonk. “Physical embodiment does not only mean physically interacting with the environment to perform tasks; embodiment also has to do with non-verbal communication”, Matarić told Robotics Business Review in a 2019 interview. Displaying a concerning level of cognitive dissonance, Matarić points out that “we are in an age of disembodied on-line communication” that “is stripping away human empathy and our sense of real connectedness”, then follows this broadly agreeable statement by suggesting that robots can bring “those properties back naturally”.
Pirjanian is optimistic that Moxie can fool children into believing that the robot is “a feeling, thinking being”, and says the company’s goal is to “extend that to pretty much everyone in the next few years”. In a recent interview with The Robot Report, the Embodied CEO stated that Moxie is “creating an entirely new category that, for now, looks believable to a 5-year-old and in the foreseeable future will be believable for anyone and everyone”, adding that this “goal” was “more relevant than the Turing test to have social impact”.
Launched at the start of the pandemic in April 2020, Moxie was perfectly positioned to exploit the resulting stress placed on families, and Embodied has moved quickly to create a SAR market through Big Tech partnerships and acquisitions of conversational-AI, machine-learning and natural-language-generation startups like Kami Computing.
Embodied’s Chief Creative Officer Craig Allen minces no words: “Moxie is really in the unique position to basically be a direct eyewitness into a child’s abilities, needs, and behaviors. These interactions can inform parents, or perhaps help therapists, understand how a child is progressing and make informed assessments by utilizing data analytics.”
Embodied’s privacy policy virtually guarantees that your children won’t be the only ones under surveillance, as “anyone in range of the video or audio recording capabilities of Moxie may be recorded, including your child, members of your family or others in the home at the time the robot is recording”. In addition, Moxie’s sensors will be “utilized to identify if and where other objects or persons may be located in a room”. This data will be “collected and processed” and stored for three years.
“The robot encourages children to go out and practice things in the real world and report back,” says Pirjanian. Why a child would need to “report back” to anyone other than their parents, family or community is a question that only a robot would never think to ask.