Op-Ed Contributor
The First Church of Robotics
Illustration by Ji Lee
By JARON LANIER
Published: August 9, 2010
THE news of the day often includes an item about some development in artificial intelligence: a machine that smiles, a program that can predict human tastes in mates or music, a robot that teaches foreign languages to children.
This constant stream of stories suggests that machines are becoming smart and autonomous, a new form of life, and that we should think of them as fellow creatures instead of as tools. But such conclusions aren’t just changing how we think about computers — they are reshaping the basic assumptions of our lives in misguided and ultimately damaging ways.
I myself have worked on projects like machine vision algorithms that can detect human facial expressions in order to animate avatars or recognize individuals. Some would say these, too, are examples of A.I., but I would say this is research on specific software problems that shouldn’t be confused with the deeper issues of intelligence or the nature of personhood. Equally important, my philosophical position has not prevented me from making progress in my work. (This is no small distinction: someone who refused to believe in, say, general relativity would not be able to make a GPS navigation system.)
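To keep that distinction concrete, here is a toy sketch of the kind of narrow software problem I mean, not my actual research code, written in Python against OpenCV's stock face detector (the cascade file ships with the library; the input image is invented). The output is a rectangle, nothing more.

    # A toy illustration, not an actual avatar system: locating a face
    # region with OpenCV's bundled Haar cascade. Expression analysis and
    # avatar animation would be further layers atop a detector like this.
    import cv2

    # The frontal-face cascade file ships with the opencv-python package.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    frame = cv2.imread("frame.jpg")  # hypothetical input image
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Each detection is just a box (x, y, width, height) -- a narrow
    # engineering result, not a judgment about minds or personhood.
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                 minNeighbors=5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)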
In fact, the nuts and bolts of A.I. research can often be more usefully interpreted without the concept of A.I. at all. For example, I.B.M. scientists recently unveiled a “question answering” machine that is designed to play the TV quiz show “Jeopardy.” Suppose I.B.M. had dispensed with the theatrics, declared it had done Google one better and come up with a new phrase-based search engine. This framing of exactly the same technology would have gained I.B.M.’s team as much (deserved) recognition as the claim of an artificial intelligence, but would also have educated the public about how such a technology might actually be used most effectively.
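To see how little changes under the search-engine framing, consider a deliberately crude sketch in Python (the corpus and question are invented, and real systems are vastly more sophisticated): a “question answering” routine that simply retrieves the stored sentence sharing the most words with the question. The machinery is matching and ranking, not comprehension.

    # A deliberately crude sketch: "question answering" as phrase-based
    # retrieval over a tiny invented corpus. The mechanism is word overlap
    # and ranking, not understanding.
    import re

    corpus = [
        "Mount Everest is the highest mountain on Earth.",
        "The Nile is among the longest rivers in the world.",
    ]

    def tokens(text):
        # Lowercase words only; punctuation is discarded.
        return set(re.findall(r"[a-z]+", text.lower()))

    def answer(question):
        # Return the stored sentence with the greatest word overlap.
        q = tokens(question)
        return max(corpus, key=lambda sentence: len(q & tokens(sentence)))

    print(answer("What is the highest mountain on Earth?"))
    # -> "Mount Everest is the highest mountain on Earth."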
Another example is the way in which robot teachers are portrayed. For starters, these robots aren’t all that sophisticated — the miniature robotic devices used in endoscopic surgery are far more advanced, but they don’t get the same attention because they aren’t presented with the A.I. spin.
Furthermore, these robots are just a form of high-tech puppetry. The children are the ones making the transaction take place — having conversations and interacting with these machines, but essentially teaching themselves. This just shows that humans are social creatures, so if a machine is presented in a social way, people will adapt to it.
What bothers me most about this trend, however, is that by allowing artificial intelligence to reshape our concept of personhood, we are leaving ourselves open to the flip side: we think of people more and more as computers, just as we think of computers as people.
In one recent example, Clay Shirky, a professor at New York University’s Interactive Telecommunications Program, has suggested that when people engage in seemingly trivial activities like “re-Tweeting,” relaying on Twitter a short message from someone else, something non-trivial — real thought and creativity — takes place on a grand scale, within a global brain. That is, people perform machine-like activity, copying and relaying information; the Internet, as a whole, is claimed to perform the creative thinking, the problem solving, the connection making. This is a devaluation of human thought.
Consider too the act of scanning a book into digital form. The historian George Dyson has written that a Google engineer once said to him: “We are not scanning all those books to be read by people. We are scanning them to be read by an A.I.”
While we have yet to see how Google’s book scanning will play out, a machine-centric vision of the project might encourage software that treats books as grist for the mill, decontextualized snippets in one big database, rather than separate expressions from individual writers. In this approach, the contents of books would be atomized into bits of information to be aggregated, and the authors themselves, the feeling of their voices, their differing perspectives, would be lost.
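The difference can be put in the plainest data terms. Here is a hypothetical contrast, with invented examples in Python: one structure pools text as anonymous fragments, while the other keeps each passage tied to its author and work.

    # A hypothetical contrast between two ways of storing scanned books.
    # The machine-centric pile keeps only decontextualized fragments:
    atomized = [
        "call me ishmael",
        "it was the best of times",
    ]

    # The author-centric store keeps each passage as someone's expression:
    attributed = [
        {"author": "Herman Melville", "work": "Moby-Dick",
         "text": "Call me Ishmael."},
        {"author": "Charles Dickens", "work": "A Tale of Two Cities",
         "text": "It was the best of times."},
    ]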
Jaron Lanier, a partner architect at Microsoft Research and an innovator in residence at the Annenberg School of the University of Southern California, is the author, most recently, of “You Are Not a Gadget.”