Artificial intelligence (AI) may be a nebulous concept, but it is proving to be a definite draw for museum and gallery audiences. Perhaps less obvious is AI’s increasing influence on both back-office processes and the visitor experience.
There has recently been a flurry of museum exhibitions using the technology. The Barbican’s AI: More than Human exhibition ran from 16 May to 26 August in London. With strong media coverage, positive attendance figures and widespread engagement on social media, it showed how AI can be extremely fertile ground for museums.
It is the latest touring show from Barbican International Enterprises, which produces digitally focused exhibitions that launch biennially at the Barbican before travelling the world (the show’s next stop is the Groninger Forum in the Netherlands, December 2019 to May 2020).
The exhibition presents a sweeping history of AI as a concept, starting with the historical context through the centuries-old idea of the Golem, an anthropomorphic being created from clay. It then introduces several ethically dubious uses of the technology (such as autonomous weapons, or “deepfake” videos), and ends with crowd-pleasingly brilliant examples of cutting-edge cybernetics and digital art.
The Barbican worked with two external curators for the exhibition – London-based Suzanne Livingston, who has a background in cybernetics and philosophy; and Maholo Uchida, a senior curator at Miraikan, Japan’s National Museum of Emerging Science and Innovation, in Tokyo.
“We saw AI in a very broad context as the human endeavour to recreate intelligence, whatever that might mean, at a certain period in time,” says Anna Holsgrove, an assistant curator at the Barbican. “We looked at the philosophy, the history, and how that spans a lot of cultures and gives AI a global perspective. The exhibition has resonated with a lot of people from different backgrounds.”
Elsewhere, in the publicity for an exhibition held at Lady Margaret Hall, Oxford, in June and July, it was claimed that the humanoid robot Ai-Da is “an artist in her own right” because of her ability to draw and paint from sight.
The robot was created by Aidan Meller, a gallery director specialising in modern and contemporary art. The solo show presented a selection of Ai-Da’s artwork, including drawing, painting, sculpture and video art. Meller’s creation whetted the appetite of buyers for AI-influenced art – even before the exhibition opened, sales of Ai-Da’s artwork had covered the project’s significant costs.
Another reflection of this hunger for AI art was the sale of Mario Klingemann’s Memories of Passersby I at Sotheby’s auction house in March. Klingemann’s work uses a complex system of neural networks to generate a never-ending stream of portraits – uncanny, eerie representations of male and female faces created by a machine.
Sold for £40,000, the Klingemann work fetched a significant price tag, but it was dwarfed by the 2018 sale of a work by the French art collective Obvious for $432,500 (£347,000). Created by a trio of 25-year-old French students, Portrait of Edmond Belamy was made with a type of machine-learning algorithm known as a generative adversarial network, in which two neural networks are trained in opposition: a generator produces images while a discriminator tries to tell them apart from real examples. Here, the system was trained on a dataset of historical portraits and then attempted to create one of its own.
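The adversarial set-up behind such works can be sketched in miniature. The toy below is purely illustrative (it bears no relation to Obvious’s actual code): a one-dimensional “generator” maps random noise through an affine function, a logistic-regression “discriminator” tries to separate its output from samples of a real distribution, and the two are trained against each other with hand-derived gradient steps.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-x))

# Illustrative parameters: the generator is x = a*z + b,
# the discriminator is d(x) = sigmoid(w*x + c).
w, c = random.gauss(0, 1), random.gauss(0, 1)  # discriminator
a, b = random.gauss(0, 1), random.gauss(0, 1)  # generator

lr = 0.01
for _ in range(20000):
    x_real = random.gauss(4.0, 1.0)  # one sample of "real" data
    z = random.gauss(0.0, 1.0)       # generator noise
    x_fake = a * z + b

    # Discriminator step: push d(real) toward 1, d(fake) toward 0.
    d_r = sigmoid(w * x_real + c)
    d_f = sigmoid(w * x_fake + c)
    w += lr * ((1 - d_r) * x_real - d_f * x_fake)
    c += lr * ((1 - d_r) - d_f)

    # Generator step: push d(fake) toward 1.
    d_f = sigmoid(w * x_fake + c)
    grad_x = (1 - d_f) * w           # d/dx_fake of log d(x_fake)
    a += lr * grad_x * z
    b += lr * grad_x

# The generator's mean output b drifts toward the real mean of 4.0
# (convergence is noisy rather than exact).
print(f"generator mean after training: {b:.2f}")
```

Real GANs such as the one behind Portrait of Edmond Belamy replace these two scalar models with deep networks trained on images, but the adversarial dynamic – forger versus detective – is the same.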
A project that takes the ethics of AI seriously is the new Museums and Artificial Intelligence Network, the result of a partnership between London’s National Gallery and two New York institutions, the Metropolitan Museum of Art and the American Museum of Natural History. The network will connect academics and professionals to discuss “the key parameters, methods and paradigms of AI in a museum context”.
The network is led by Oonagh Murphy, a lecturer at Goldsmiths, University of London. She says AI use in museums could range from employing machine vision to analyse large digital collections to using facial recognition to tailor interpretation to different visitors.
These uses open up a range of ethical questions, and most museums already depend on tech multinationals whose use of data intersects with AI. The network aims to produce an ethically robust professional framework for dealing with such issues. As Murphy says: “Just because it’s possible, doesn’t mean it’s good.”