In the new movie Her, a lonely introvert played by Joaquin Phoenix begins a relationship with the talking operating system of his computer. Set in the familiar future, it’s been described as a “science fiction romantic-comedy drama,” another way of saying it’s a film by Spike Jonze, who likes to mess with genres, not to mention reality. But is the notion of highly responsive artificial intelligence—of tech so adapted to the human experience that it’s practically human itself—really such a stretch?
Last fall, the MIA reopened its African galleries with an initial foray into new, engaging technologies, an initiative called TDX (not TDK). There’s a touchscreen (pictured above) that allows multiple users to scroll through stories and photographs of African art, and there are iPads loaded with more stories, videos, and photographs. It appears high-tech, and it is. But it’s all intended to engage visitors in narrative: storytelling and communication as old as art itself.
“What we’re trying to do,” says Douglas Hegley, the MIA’s director of technology, “is craft narratives that are meaningful, personal, interesting, and engaging for a wide spectrum of potential audiences, and then deliver those narratives through channels that are familiar to them. And right now that’s not wall labels or scholarly journals or exhibition catalogs. It’s social media, it’s web, it’s mobile—it’s digital media.”
Recently, Hegley and others at the MIA have brought in some prototypes of emerging technologies for the museum to experiment with—Google Glass, a 3-D printer, a virtual-reality headset—none of which are ready for gallery use. But who knows. Hegley can imagine a future in which artificial intelligence can sense museum visitors’ interests and take them deeper into the experience, perhaps noticing that you’ve looked at several African masks and some Picasso sculptures—how about some other Cubist art?
But which technologies end up in the galleries will depend on how people are communicating in the future. “I’m not interested in technology for its own sake,” Hegley says. “I’m interested in people. And I think museums are vitally important in helping people discover what’s great about humankind. It’s a message that’s more valuable than it’s ever been.”
Some of this technology has made its debut, albeit in a science-fair, party-trick kind of way. At the January Third Thursday event, staff photographer Charles Walbridge showed off the 3-D printer, making tiny models of artworks at the MIA, one thin layer of (corn-based) plastic at a time. (“It smells like syrup,” he says.) Using 3-D scanning technology to break down objects into printable layers, he’s programmed the machine to make all manner of things, even the tools he needs to work on it. But the materials are still limited, and at a reasonable price point the setup isn’t yet good enough to make something that comes anywhere close to capturing the detail of great art. Holding up a 7-inch replica of the MIA’s Doryphoros statue, he says, “No one’s going to mistake one of these for the real thing.”
Elsewhere, however, Jay Leno has used 3-D scanners to replicate extinct car parts for his auto collection. And the Smithsonian has scanned the entire Wright Brothers Flyer. “With a sophisticated enough printer,” Walbridge says, “you could make your own.”