When the arrow appeared next to the birdcage, I finally understood what my partner was trying to say.
The game was a clone of Pictionary—I had to guess the phrase based on a drawing. My partner had initially depicted a duck next to a cage, plus a hand, and a pond. Only after I asked for another drawing and the arrow was added did I realize the hand was “releasing” the duck, not feeding it. “You win!!!” I was told, after typing in the full answer.
There was no prize, but it felt good. My partner felt nothing—because it was a bot. Despite our mutually incompatible hardware and wetware, we’d found shared meaning, of a kind, in a tangle of pixels and characters.
The game, named Iconary, is now available for anyone to play online. If you play, you’ll be contributing to a research project trying to make computers better coworkers and collaborators. You don’t have to play long to see that the bot needs the help. How showing me a cat superimposed with a crucifix was supposed to signify “laughing in the yard” is unclear.
Getting computers and humans to play games together isn’t new. It’s been an obsession of some artificial intelligence researchers since the field’s early days in the 1950s. Yet that history is largely one of conquest by machines. Over the decades, computers have progressively beaten human champions at checkers, chess, Go, and in December, the popular strategy videogame StarCraft II.
“You might think AI stands for antagonistic intelligence,” says Oren Etzioni, CEO of the Allen Institute for AI, the independent research lab in Seattle that created Iconary, and the bot that plays it, named AllenAI. The project aims to explore what can be learned when people and software play a game that involves collaboration, not a zero-sum fight for dominance.
Iconary works like Pictionary: A player must get a teammate to guess a word or phrase solely by drawing. The name differs partly because Mattel holds a trademark on Pictionary, and partly because the “draw-er” builds an image from a series of icons. Human players generate those icons piecemeal, drawing items freehand with a mouse and then selecting from the icons the computer suggests. To depict “fruit salad,” you might draw and then select icons for bananas, a lemon, an apple, and a knife.
“It’s kind of fun,” says David Forsyth, a professor who researches AI at the University of Illinois at Urbana-Champaign, after receiving a preview of Iconary this week. Like WIRED, he found that AllenAI is in general a better guesser than drawer. But that a bot can play a game involving remixing visual concepts and converting them into language is notable, he says.
Recent advances in machine learning have made computers pretty good at recognizing specific objects, if they’ve been trained to look for them. That’s good for searching your phone’s camera roll for cats, or a crowd for a specific face. Reading—or creating—higher-level meaning from a combination of visual concepts is much more challenging, Forsyth says.
The bot you’ll play Iconary with was created by applying machine learning algorithms to records of more than 100,000 Iconary games played by humans, drawing and guessing around 75,000 different phrases. The researchers drew on a recent leap in the capability of software that extracts meaning from text, brought about in part by research at the Allen Institute for AI.
The phrases used in the online game were not part of the software’s prior training. Etzioni says his team believes gathering new data from the bot’s encounters with humans will help researchers improve the software’s ability to understand images, text, and the ways people use them.
Eventually, they hope to use Iconary to stage a version of the Turing Test, a way to probe a computer’s intelligence proposed by British mathematician Alan Turing in 1950. In Turing’s version, a person converses with an unseen party via text and must guess whether or not that interlocutor is human. Etzioni’s version would see people play Iconary with an unknown teammate, and then guess if it was a person, or the AllenAI bot.
A bot that could pass that test would mark a nice moment of cross-species collaboration to place alongside earlier AI milestones in which successive human champions were roundly defeated at a game they love by software. Algorithms capable of that might also have potential in more practical situations.
Etzioni believes that algorithms able to work with humans to remix verbal and graphical concepts could help people compose business documents or collaborate on creative projects. Forsyth says that software able to understand novel combinations of imagery could help computers venture out into the messiness of the real world. For example, complex domestic robots will need to extract meaning from endless novel combinations of household items in order to function reliably in homes, he says.