(Art)ificial intelligence – Creation, destruction, restoration

From colourising Klimt’s black and white work to reconstructing Monet’s masterpiece, AI has played a huge role in art history.

Throughout the years, technology has infiltrated our lives, and art is one of the areas where it has had a significant impact. Unfortunately, unfavourable conditions have, over many years, caused permanent damage to some of the world's most important paintings. While the original pieces may never be recovered, constant efforts are being made to resurrect them. The aim is twofold: to restore fragments of historical paintings that have been lost over time, and to create new work in the digital era.

Even though museums and art galleries today are designed to protect and conserve artwork, much of it has already been damaged. Centuries of storage in less-than-ideal conditions have, unfortunately, greatly impacted some original works. However, engineers and researchers are now applying AI/ML to artwork restoration, with promising results.

While art restoration has attracted attention from big tech companies, machines are also getting better than ever at creating art. A classic example is DALL-E 2.

Recently, a group of Cosmopolitan editors and digital artist Karen X. Cheng created the first magazine cover designed by AI, within 20 seconds. The artwork was created using OpenAI's DALL-E 2. The AI takes a user's text prompt and generates new artwork pixel by pixel, drawing on the massive dataset of images it has been fed. It will render the output in any style one wants, be it Van Gogh-like or just an outline.

Ironically, the question now is how much "I" is left in AI-generated artwork. There is also a need for more clarity on the restrictions OpenAI imposes on commercial use of the images that humans generate using DALL-E 2.

AI restoring art – a timeline

In 2016, the Next Rembrandt project digitally crafted a new painting in the style of the Dutch master; its researchers analysed about 350 of Rembrandt's paintings in the process. 3D scanners allowed the network to capture the minutest details of each work and copy the artist's style. Later, his 1642 masterpiece 'The Night Watch' was digitally restored to its original size, some 300 years after it had been trimmed.

When a team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) came up with RePaint, a new system aimed at reproducing paintings, it became easier to produce an authentic-looking Monet or van Gogh for the home.

RePaint uses a combination of 3D printing and deep learning to recreate paintings that look authentic regardless of lighting conditions. It has several uses, including remaking artwork for a home, protecting original work in museums from wear and tear, and helping create prints and postcards of historical pieces.
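Fine print reproduction of this kind ultimately rests on halftoning: approximating continuous tones with patterns of discrete ink dots whose local density matches the original. As a hedged illustration only (classic Floyd–Steinberg dithering, not RePaint's actual pipeline, which combines stacked transparent inks with halftoning), a smooth gradient can be reduced to pure black-and-white dots:

```python
import numpy as np

def floyd_steinberg(gray):
    """Binarise a grayscale image (values in [0, 1]) by thresholding each
    pixel and diffusing the quantisation error to unvisited neighbours."""
    img = gray.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0   # snap to pure black/white
            img[y, x] = new
            err = old - new                     # push the error forward
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return img

# A smooth horizontal gradient becomes a dot pattern whose density
# tracks the original tone.
gradient = np.tile(np.linspace(0, 1, 32), (8, 1))
halftone = floyd_steinberg(gradient)
```

Because the error is diffused rather than discarded, the average tone of the dithered output stays close to that of the original image.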

In 2018, a Monet at the National Gallery of Canada was digitally restored using Arius' technology, which revealed signs of oxidation invisible to the human eye.

With pioneering 3D mapping and digitisation, degradation can be identified at an early stage without touching the surface of a painting. Conservators could inspect the fragile surfaces of masterpieces, recording data points finer than one-tenth the width of a human hair.

Image: Jean-Pierre Hoschedé and Michel Monet on the Banks of the Epte by Monet; MONET – DIGITAL RESTORATION by Arius

In 2019, researchers used AI to analyze the Van Eyck brothers' Ghent Altarpiece (1432), one of the world's most renowned paintings. Because many of the altarpiece's 12 panels are painted on both sides, X-ray images can be difficult to interpret. The research team, therefore, used a newly developed algorithm to deconstruct the information within the X-rays. This allowed them to uncover hitherto-unknown details about the double-sided panels of Adam and Eve.

‘Artificial Intelligence for Art Investigation: Meeting the Challenge of Separating X-Ray Images of the Ghent Altarpiece’ describes how the researchers used the newly developed algorithm to deconstruct mixed X-ray images, featuring the front and back of the two-sided panels, into clear individual images. They worked from a comprehensive set of high-resolution images obtained by the Royal Institute for Cultural Heritage (KIK-IRPA) using various imaging techniques as part of the ongoing conservation of the altarpiece, providing plenty of data for interrogation and interpretation.

In March 2019, Microsoft announced an artwork-based image generation project. The project used a deep neural network microservice architecture, Azure services, and Azure Blob storage. Visual Studio Code and Azure Kubernetes Service helped generate new images in real time for an interactive display on the website.

In September 2021, a paper posted to the physics arXiv by researchers at University College London described how machine learning (ML) was used to rebuild a full-colour image of an underpainting beneath a Picasso work. The technique used, neural style transfer, was initially developed a few years earlier at the University of Tübingen in Germany.
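At the core of neural style transfer, as introduced by the Tübingen group, is representing "style" by the correlations between a convolutional network's feature channels (Gram matrices) and minimising the mismatch between the generated image's Gram matrices and the style image's. A minimal NumPy sketch of that style loss follows; the feature extractor itself, normally a pretrained VGG network, is omitted here and random feature maps stand in for its output:

```python
import numpy as np

def gram_matrix(features):
    """Style representation: channel-by-channel correlations of a
    feature map of shape (channels, height, width)."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(gen_feats, style_feats):
    """Mean squared difference between Gram matrices; one term of the
    layer-wise sum minimised in Gatys et al.'s neural style transfer."""
    g_gen = gram_matrix(gen_feats)
    g_sty = gram_matrix(style_feats)
    return float(np.mean((g_gen - g_sty) ** 2))

rng = np.random.default_rng(0)
feats_a = rng.standard_normal((4, 8, 8))
feats_b = rng.standard_normal((4, 8, 8))
assert style_loss(feats_a, feats_a) == 0.0  # identical styles: zero loss
```

In the full algorithm this loss is summed over several network layers and minimised by gradient descent on the generated image's pixels, alongside a content loss that keeps its structure intact.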

To study the paintings of Gustav Klimt, most of which were destroyed in 1945 after being seized by the Nazis, historians have had to make do with black-and-white photographs. The war led to the destruction of the Faculty Paintings: three enormous allegorical scenes titled Philosophy, Medicine, and Jurisprudence. Thanks to machine learning, however, researchers have restored historical photographs of the Faculty trio to approximations of their original colours, offering viewers a sense of what Klimt's works looked like.

Image: The Stories Behind Klimt’s Faculty Paintings; Google Arts and Culture

In a statement, Franz Smola, curator at the Belvedere Museum who worked on the restoration with Wallner, said, “The result for me was surprising because we were able to color [Klimt’s works] even in the places where we did not know. We have good machine learning assumptions that Klimt used certain colours.”

Google engineer Emil Wallner spent nearly six months coding the AI algorithm that generates the colour predictions.

Art lovers can explore these recreations through Klimt vs Klimt: The Man of Contradictions, a page dedicated to the artist. Created by Google with over 30 partners, the page explores the painter’s personal life and legacy.

In 2018, Christie’s auctioned its first work of art created using an algorithm: a piece by Obvious, a Paris-based collective, which sold for a whopping $432,500. The collective had created a series of portraits of the Belamy family using a ‘generative adversarial network’ (GAN).

Image: Edmond de Belamy by Obvious (collective)

The algorithm consists of two parts: the Generator and the Discriminator. First, the system is fed a data set of 15,000 portraits painted between the 14th and 20th centuries. The Generator then composes a new image based on the data, and the Discriminator tries to tell the difference between a human-made image and a generated one. The Generator’s aim is to fool the Discriminator into accepting the new images as real-life portraits; once it can no longer tell them apart, training concludes.
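In the standard GAN formulation that underlies such work, the two parts optimise opposing objectives. The sketch below is illustrative only, not Obvious's actual training code: it assumes the Discriminator outputs a probability that an image is real, and shows the two losses that get alternately minimised.

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """The Discriminator wants real portraits scored near 1 and generated
    ones near 0 (binary cross-entropy over both batches)."""
    return float(-np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake)))

def generator_loss(d_fake):
    """The Generator wants its images scored as real (this is the
    non-saturating form of the adversarial loss)."""
    return float(-np.mean(np.log(d_fake)))

# A confident Discriminator: its own loss is low, the Generator's is high.
sharp_d = discriminator_loss(np.array([0.95]), np.array([0.05]))
g_unfooled = generator_loss(np.array([0.05]))
# A fooled Discriminator: the Generator's loss drops instead.
g_fooled = generator_loss(np.array([0.95]))
```

Training alternates gradient updates to the two networks; at equilibrium the Discriminator can no longer separate real from generated portraits, and its scores for both hover around 0.5.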

Remaining challenge(s)

Apart from restoration, AI is also being applied to challenges in art analysis and conservation. One example is reconstructing an underpainting in greater detail; another is making it easier to image double-sided wing panels.

With a large colour scope to work with, the question of which inks to use for which paintings remained. The team taught a deep-learning model to predict the stack of different inks. Once the model got the hang of it, it was fed pictures of artworks and used the approach to decide which colours should be used in particular areas of specific paintings.
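The idea of learning a colour-to-ink mapping can be sketched in miniature. The toy below assumes a hypothetical Beer–Lambert-style mixing model (`render`, with made-up absorption coefficients) and fits a simple linear model by gradient descent to invert it, colour in, ink amounts out. MIT's actual system is a far more sophisticated deep network; this only illustrates the learn-to-invert-the-printer idea.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical subtractive mixing: each of 4 inks absorbs R, G and B
# to a different degree (coefficients invented for illustration).
ABSORB = rng.uniform(0.1, 0.9, size=(4, 3))

def render(inks):
    """Map ink amounts in [0, 1] to an RGB colour (toy absorption model)."""
    return np.exp(-inks @ ABSORB)

# Synthetic training data: random ink stacks and the colours they produce.
X = rng.uniform(0, 1, size=(2000, 4))   # targets: ink amounts
Y = render(X)                           # inputs: resulting colours

# A single linear layer trained by gradient descent on mean squared error.
W, b = np.zeros((3, 4)), np.zeros(4)
mse_before = float(np.mean((Y @ W + b - X) ** 2))
for _ in range(500):
    resid = Y @ W + b - X               # prediction error per sample
    grad = 2.0 * resid / len(X)
    W -= 0.5 * (Y.T @ grad)             # step against the MSE gradient
    b -= 0.5 * grad.sum(axis=0)
mse_after = float(np.mean((Y @ W + b - X) ** 2))
```

Even this linear stand-in learns a rough inverse of the mixing model; the training error drops well below its starting value, which is the kind of signal that justifies scaling the approach up to a real network and real ink measurements.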
