
3D Modeling of Mia Collections

Project: 3D Modeling of Mia Collections

Mia Staff: Charles Walbridge, Lead Collections Photographer; Daniel Dennehy, Senior Photographer, Head of Visual Resources

Division/Department: Media and Technology

Project date(s): ongoing – began summer 2014

Audience/user: Art documentarians, conservators, curators and anyone interested in the study or enjoyment of cultural heritage

Project goals: To create virtual 3D models of Mia collection objects that accurately represent the form, surface textures, color appearance and reflective properties of the artwork.

A photograph of an ancient Chinese you wine vessel illuminated by an on-camera-axis flash (left), next to two synthetic images of the same vessel under different illumination conditions, based on 116 such flash photographs (right).

Project description:

The robot used for 3D photography at Mia

The Media and Technology department at Mia is always striving to improve – in terms of both quality and efficiency – the imaging techniques we use to document the wide variety of objects in our collection. By employing new imaging techniques, the staff of the Visual Resources department can now create 3D models that are virtual representations of works of art. Our preferred scanning method is photogrammetry, a process of deriving measurement data from photographs. This involves taking pictures of an object from many angles of view and then using software to find the common features on the surface of an object within hundreds of photos. By calculating this data along with the camera positions and a reference scale placed next to the object, it is possible to reliably map the geometry of a form down to a fraction of a millimeter. The result, a cloud of thousands, sometimes millions of points representing the common pixels gathered from those photos, is then wrapped with color and tone derived from the photographs to make a 3D facsimile of the artwork that can be viewed in virtual space. The ability to freely rotate the object and zoom in for close examination of fine detail provides new opportunities for art historians, conservators, researchers and public audiences worldwide. 
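The underlying geometry can be illustrated with a toy example. In practice, photogrammetry software matches thousands of features across hundreds of images and solves for the camera positions as well, but the core step of recovering a 3D point from its pixel coordinates in two known views reduces to a small linear problem. The following is a minimal sketch in Python with NumPy; the cameras, point, and function names are invented for illustration and are not part of any Mia workflow:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its pixel positions in two calibrated
    views using the direct linear transform (least-squares) method."""
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solve A @ X = 0 via SVD; the solution is the smallest singular vector.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two synthetic pinhole cameras: one at the origin, one shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def project(P, X):
    """Project a 3D point to pixel coordinates with camera matrix P."""
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

point = np.array([0.5, 0.2, 4.0])  # the "true" surface point
recovered = triangulate(P1, P2, project(P1, point), project(P2, point))
# recovered ≈ [0.5, 0.2, 4.0], the original point
```

Real pipelines repeat this for millions of matched features and refine everything jointly (bundle adjustment), which is where the heavy computation comes from.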

There are some limitations to the types of things that can be recorded with photogrammetry or any other 3D scanning technology. Transparent or highly reflective objects, such as glass or silver, are problematic since they challenge the software’s ability to distinguish reflection or transparency from opacity. Also, standard commercially available software cannot capture the level of color accuracy or surface detail that museums expect to see when reproducing works of art.

Sample photograph of the Celestial Horse taken with a camera-mounted light source at Mia (left), next to a synthetic image of the Celestial Horse using 149 such photographs (right).

In an attempt to address some of these shortcomings, we began a collaboration with a research team at the University of Minnesota’s Department of Computer Science and Engineering that has proven to be of great value to Mia and to the entire museum community. Professor Gary Meyer and PhD candidate Michael Tetzlaff have developed a novel, image-based rendering system that preserves the realistic qualities of the original photographs used with photogrammetry. It offers enhanced color accuracy and convincing specular reflections that allow the viewer to better interpret the tonal qualities, surface textures and material properties of the object depicted.

The capture method employs a simple flash-on-camera setup that can be achieved with conventional photographic equipment. Knowing the relative position of the camera and the mounted flash allows the algorithm to factor in the angle of the light illuminating the object. With this information, it is possible to achieve dynamic “relighting” of the object after the fact. In other words, the viewer has the option to move a set of virtual lights around the model independently. This is a remarkable advantage to anyone wishing to imagine how an object will appear under various lighting scenarios. If a curator or conservator would like to see the object illuminated with a raking light to accentuate the shadows falling across a textured surface, for example, this is as simple as navigating the virtual lights within the software interface. And rather than making abstract assumptions about how the light reflects off the surface of the object, the program dynamically applies the JPEG photographs. The result is a highly realistic and accurate rendering of the original work of art.
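A rough flavor of the relighting idea can be given with a simple weighted blend: photos whose flash direction is close to the requested virtual light direction get more weight. This is only a hedged sketch of the principle; the University of Minnesota system is far more sophisticated, accounting for surface geometry and specular reflection, and all names and data below are invented for the example:

```python
import numpy as np

def relight(photos, light_dirs, new_dir, sharpness=8.0):
    """Blend flash photographs into a synthetic image under a new light
    direction, weighting each photo by how closely its flash direction
    matches the requested one (a toy stand-in for true image-based
    relighting)."""
    new_dir = new_dir / np.linalg.norm(new_dir)
    dirs = light_dirs / np.linalg.norm(light_dirs, axis=1, keepdims=True)
    # Cosine similarity between each flash direction and the new light,
    # raised to a power so the closest photos dominate the blend.
    weights = np.clip(dirs @ new_dir, 0.0, None) ** sharpness
    weights /= weights.sum()
    # Weighted sum over the image stack: (N, H, W) -> (H, W).
    return np.tensordot(weights, photos, axes=1)

# Four tiny 2x2 "photos", each captured with the flash at a different angle.
rng = np.random.default_rng(0)
photos = rng.random((4, 2, 2))
light_dirs = np.array([[1, 0, 1], [-1, 0, 1], [0, 1, 1], [0, -1, 1]], float)

# Ask for light from the same direction as the first photo's flash:
# the blend is then dominated by that photo.
image = relight(photos, light_dirs, new_dir=np.array([1.0, 0.0, 1.0]))
```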

Photograph of the Chinese ding food vessel, illuminated by three small lights, one of which is the on-axis light used to capture the object, alongside a synthetic image of the ding under the same viewpoint and similar illumination conditions, using 536 views.

Mia took advantage of this new approach to begin documenting a large collection of ancient Chinese vessels for a forthcoming touring exhibition and catalogue, Eternal Offerings: Chinese Ritual Bronzes from the Minneapolis Institute of Art. Significant research and study have been underway for the project over the last several years. A team of experts was brought in from China to assist Dr. Yang Liu, Curator of Chinese Art and Head of Chinese, South and Southeast Asian Art at Mia, in gathering detailed information about this renowned collection. Along with the scholars who closely studied the objects, specialized technical artists made precise measurements, detailed line drawings, and graphite rubbings of the intricately decorated vessels. Photogrammetry is the perfect companion to supplement the other documentation techniques used to gather knowledge on this subject.

Evaluation tools: In addition to photogrammetry, we have tested a number of other 3D scanning technologies in order to evaluate their accuracy and efficiency on various object types. For a project involving Mia’s collection of Japanese netsuke (small carved ivory figures), we experimented with both time-of-flight and structured light scanners. The results of our testing were mixed, but in general photogrammetry produced finer visual detail and greater color accuracy than either of the light range imaging methods. We have also tested longer range LIDAR devices for modeling galleries, period rooms and other architectural forms. While the results were good, we have not invested in any dedicated equipment for these purposes.

Resources Used

Michael Tetzlaff was contracted for several months at Mia to help develop a studio workflow for the capture of photogrammetry data using a robotic arm and turntable donated to the museum by Mia Trustee Mr. John Huss.

A number of hardware resources were necessary to fulfill the work of processing the 3D model data. Thomson Reuters provided a 16-core processor and a pair of enterprise GPUs to power the intense calculations required to make photogrammetric models.

Mia’s Information Systems team of Mike Tibbets and Ryan Jensen helped to set up and maintain all the computer equipment and network infrastructure. In the Visual Resources department, Josh Lynn, Digital Imaging Specialist, and Heidi Raatz, Visual Resources Librarian, worked to develop a viable metadata model for this new content. Josh Lynn also researched and implemented a backup strategy to secure the immense amounts of data.

Environment used for environment-mapping on a bronze statue of Kuan Yu.

Image-based rendering of the object using the environment map.

Reflection

What worked? Collaboration has proven to be the key to navigating this new arena of cultural heritage documentation. The variety of skill sets required to make this work at scale cannot be found in one person alone. We have performed considerable experimentation and learned a tremendous amount over the last several years. The support of various colleagues has allowed us to take a leading role among museums in exploring this new technology.

What were the challenges? Time and resources are the two greatest challenges. The steps involved in photogrammetry are very labor- and processing-intensive. We are fortunate to have the assistance of robotics to capture the 300-600 images often needed to fully document all sides of an object. But this is only where the work begins. The large volume of content makes all subsequent post-production stages exponentially greater than typically encountered with conventional still photography. This necessitates more file handling, more computer power, more storage, and more support in general – all possible with the right equipment, proper training, solid workflows, and sufficient time. However, the computer power required to crunch the data into a sparse cloud, then a dense cloud and ultimately a finished 3D textured model is more than one can expect from a typical imaging studio workstation. Our early attempts at high resolution models were taking upwards of a week of solid around-the-clock processing. Eventually, we were able to build an array of servers with the proper balance of CPUs and GPUs to accommodate the particular demands of the various processing stages of the PhotoScan software. Now we can create a high resolution model in a matter of hours.
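To give a flavor of the kind of number crunching involved, here is a hedged sketch of one common point-cloud operation, voxel-grid downsampling, which thins a dense cloud by averaging the points that fall in each cell of a 3D grid. This is illustrative only; PhotoScan performs its own, far more elaborate processing internally:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Thin a point cloud by keeping one averaged point per occupied voxel."""
    # Assign each point to the voxel that contains it.
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel and average each group.
    _, inverse, counts = np.unique(
        keys, axis=0, return_inverse=True, return_counts=True
    )
    inverse = inverse.ravel()
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

rng = np.random.default_rng(1)
cloud = rng.random((100_000, 3))        # a synthetic "dense" cloud in a unit cube
thinned = voxel_downsample(cloud, 0.1)  # 10x10x10 grid -> at most 1,000 points
```

Even this toy operation touches every point in the cloud; real dense clouds of tens of millions of points go through many such passes, which is why dedicated CPU/GPU hardware makes such a difference.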

What was surprising? The rate at which the technology is developing is certainly noteworthy. New tools are being developed to build models and enhance the color, texture and lighting. This will certainly continue as the technology becomes more broadly adopted and more channels are available to share 3D content. But while 3D imaging is quickly becoming standard practice for documenting heritage sites, archaeological artifacts and natural science specimens, it is not yet fully embraced by the art museum community as a viable replacement for conventional still photography.

Relevance

At Mia: The purpose in documenting our collection has always been to provide as much visual information about an artwork as possible. Less than twenty years ago, we were still making black and white reference prints for curatorial files. Eventually, digital photography made it practical to share full-color still images across multiple electronic channels. Without question, the information about form, size, volume, surface and general appearance one can extrapolate from a 3D model is far greater than is available in any other reproductive medium. For this reason, we believe it is worth the effort in time and resources to further develop this program and make it part of Mia’s standard practice for documenting important works of art.

In the museum field: Photogrammetry of cultural heritage is being adopted quickly in an effort to study and preserve important historical sites and artifacts. What once required hand measuring, exacting map building and detailed notation, both written and photographic, can now be achieved with greater accuracy and less intrusion using 3D scanning. The ease of universal sharing and the broader utility of photogrammetric data means that more research can be conducted remotely with less disruption to fragile sites and artifacts. It is arguably a more effective way of monitoring change and deterioration over time. These techniques stand to change the way institutions share knowledge about their collections.

Public: Rich media is supplanting written language as the preferred vehicle for conveying information, and there is a growing expectation among the public to communicate via multi-media channels. Virtual and augmented reality experiences hold enormous potential for engaging and educating the public about art. Rare and precious examples of human creativity must naturally be protected from unnecessary handling and transport, but 3D surrogate models can offer detailed and intimate relationships with art that might not otherwise be possible at all for most people. While we can never fully replicate the sublime feeling of standing before a masterpiece, creating dynamic visual content is an important way of supporting Mia’s mission to enrich the community by collecting, preserving, and making accessible outstanding works of art from the world’s diverse cultures.


Acknowledgements

We are grateful to our colleagues at the University of Minnesota, Professor Gary Meyer and Michael Tetzlaff, who shared their knowledge and used our collection as a test bed for their research. Thank you also to Professor Seth Berrier for his knowledge and insights and Professor Dave Anderson in the Archaeology Department at the University of Wisconsin-La Crosse for his unique perspective on documenting cultural heritage sites in Egypt and introducing us to Egyptology student Sofie Kinzer.

We received enthusiastic support and advice from Mia Trustee Rick King and his colleagues at Thomson Reuters, Betsy Lulfs and Mahadev Wudali, as well as Mia Trustee Mr. John Huss.

Our neighbors at the Minneapolis College of Art and Design have been a great resource for information and have allowed us to borrow their equipment for testing. Don Myhre, Director of MCAD’s 3D Studio, helped scan a number of Mia’s galleries using their long-range laser devices. Professor Dave Novak of the Department of Animation at MCAD placed three students – Layton Nosbush, Katheryn Peterson, and Jonny Enslow – in internship positions in the museum’s Visual Resources studio to investigate post-production processes.

Our friends, Carla Schroer and Mark Mudge, at Cultural Heritage Imaging in San Francisco have provided training and advice throughout the years.