It didn’t take long for Microsoft’s new AI-infused Bing search chatbot, internally codenamed “Sydney,” to display a growing list of unsettling behaviors after its introduction in early February, with strange outbursts that included declarations of unrequited love to users.
As human as some of these exchanges may sound, they are probably not the first stirrings of a sentient machine rattling its cage. Instead, Sydney’s outbursts reflect its programming: it absorbs vast amounts of digitized language and echoes back what its users ask for. In other words, it mirrors our online selves back to us. That should not surprise us, because chatbots have been mirroring us since well before Sydney’s debut. In fact, the habit goes back to the first notable chatbot, introduced nearly 60 years ago. In 1966, MIT computer scientist Joseph Weizenbaum released ELIZA (named after the fictional Eliza Doolittle in George Bernard Shaw’s 1913 play Pygmalion), the first program that allowed something like a plausible conversation between human and machine. The mechanism was simple: modeled on the Rogerian style of psychotherapy, ELIZA rephrased whatever statement it was given as a question. If you told it that a conversation with a friend had left you angry, it might ask, “Why do you feel angry?”
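The keyword-reflection mechanism described above can be sketched in a few lines of Python. This is a hypothetical toy, not Weizenbaum’s actual program (which was written in MAD-SLIP): a pattern matches a keyword phrase, the user’s own words are pronoun-swapped, and the result is echoed back as a question. The specific rules and function names here are illustrative assumptions.

```python
import re

# Swap first- and second-person words so the user's phrase
# can be echoed back from the program's point of view.
PRONOUN_SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(phrase: str) -> str:
    """Return the phrase with pronouns swapped (e.g. 'my friend' -> 'your friend')."""
    return " ".join(PRONOUN_SWAPS.get(w, w) for w in phrase.lower().split())

# Each rule pairs a keyword pattern with a question template.
# Real ELIZA had many more rules, ranked by keyword priority.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.+) made me (.+)", re.I), "Why did {0} make you {1}?"),
]

def respond(statement: str) -> str:
    """Turn a user statement into a reflective question, ELIZA-style."""
    statement = statement.rstrip(".!?")
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please tell me more."  # fallback when no keyword matches
```

Given the article’s example, `respond("My conversation with my friend made me angry.")` yields “Why did your conversation with your friend make you angry?”, illustrating how a program with no understanding at all can still feel conversational: every response is built from the user’s own words.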
Ironically, although Weizenbaum had designed ELIZA to demonstrate how superficial human-machine conversation really was, it had the opposite effect. People were enthralled, engaging in long, deep, personal conversations with a program that could only reflect their own words back to them. Weizenbaum was so disturbed by the public’s reaction that he spent the rest of his life warning against letting computers, and the field of AI he had helped launch, play too large a role in society. ELIZA built each response around a single keyword from its user, making it a rather small mirror. Today’s chatbots reflect our tendencies as distilled from billions of words. Bing may be the largest mirror humankind has ever constructed, and we are on the verge of installing such AI technology everywhere.
But we still haven’t grappled with Weizenbaum’s concerns, which grow more relevant with each new release. If a simple academic program from the 1960s could affect people so strongly, how will our escalating relationship with for-profit artificial intelligence change us? There is great money to be made in AI that not only answers our questions but plays an active role in steering our behavior toward greater predictability. These are one-way mirrors. The danger, as Weizenbaum saw, is that without wisdom and deliberation we could lose ourselves in our own distorted reflection.