Tuesday, October 18, Los Angeles. The Adobe Max keynote is about to begin. Several thousand spectators gather in the huge Microsoft Theater with a simple question in mind: are we going to talk about generative artificial intelligence (AI)? In recent months, the art world has had eyes only for this technology. Its pioneers are called Dall·E 2, Stable Diffusion and Midjourney. These services make it possible to create almost anything digitally, much as Adobe does through Photoshop, one of the company's most popular programs. With one difference: they need only a few words of description, a command known in English as a prompt, for works to be created free of charge. Like magic.
Their only limit seems to be that of our imagination. A fox painted in the manner of Claude Monet, legends such as Freddie Mercury or Michael Jackson aged as if they were still alive, or a simple field of poppies flooded with sunlight… These visions of the mind appear on screen in just a few seconds, versus hours of work at a drawing board or in Photoshop and many other programs.
In August, an American artist even won an art competition in Colorado using the Midjourney platform, without the jury suspecting a thing. The news went around the world and delivered a simple message: AI, until now used sporadically for technical enhancements, has not only tricked the human eye (deepfakes already do that) but has also shaped what is unanimously recognized as "beautiful".
In the large auditorium, the suspense is quickly lifted by an Adobe executive, David Wadhwani. The company will also offer "magical" alternatives to its creation tools, in particular for designing new typefaces. The announcement gets its share of hype and immediately raises new questions. Will art professionals be made obsolete by the technology and by a wave of neo-artists for whom every technical barrier has now been removed? Will creative suites like Adobe's become useless? "We see a time coming when not everyone has the time or the will to learn Photoshop," admits Maria Yap, head of AI at the company. Dall·E 2, for a time reserved for a small circle of insiders, has become accessible to all and now attracts more than 1.5 million users while generating 2 million images per day.
Looking for the spark
Among the designers, graphic artists and photographers encountered at Adobe Max, the vast majority have tried generative AI, often at first just for fun. "I put the words 'alien' and 'ice' into Dall·E 2 to see what came out," smiles Estella, an artist specializing in virtual reality, headset on. Lionel Koretzky, a French photographer, has also been drawn to the "unknown and exciting" side of these platforms. "When you're an artist, you're always looking for the artistic accident, that spark you didn't expect, and these tools can provoke it," he confesses. He then submitted a real brief to Midjourney, a commission from a client, to see what would come of it. The result, not without interest, nonetheless required adjustments to be presentable.
Because the tool is not as magical as it claims. Terrance Weinzierl, a designer at the creative company Monotype, also present in Los Angeles, believes that generative AI models mostly produce "empty and artificial" works. Nothing, in his view, to compete with human vision and sensitivity, or quite simply talent. Creations made with Dall·E and its cousins are far from always successful. With imprecise prompts, glaring defects appear in faces or textures: the computer does not always understand what it is told. And then there are the bad ideas. A Twitter account, Weird Dall-E Generations, followed by a million people, compiles these failed attempts: a cat in a coal mine, the actor The Rock hugging the dictator Kim Jong-un… For now, much of this software also refuses to depict nudity or violence, which can be part of a legitimate creative process. The internet is what it is.
This takes nothing away from the revolution these tools represent. For artists of all kinds: "Artificial intelligence is to creation what photographic prints were to painters at the beginning of the last century. A tool, like the pencil or the sketch," notes Olivier Ertzscheid, a researcher in information and communication science. But also for creative businesses. "We will quickly be able to spot those who master prompts, because these commands for creating illustrations, when well handled, save a great deal of time," says Frédéric Cavazza, a consultant specializing in marketing and advertising. These fragments of sentences are already the basis of a new business: count on 3 to 4 dollars for prompts guaranteed to generate striking results, heroic fantasy heroines or beautiful clothing.
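To give a sense of what these marketable prompts look like in practice, here is a minimal sketch of how one might assemble a reusable prompt from fragments. The template, its fields, and the example fragments are all invented for illustration; they do not come from any actual prompt marketplace.

```python
def build_prompt(subject, style=None, lighting=None, details=()):
    """Assemble a text-to-image prompt from reusable fragments.

    The comma-separated form mirrors how prompts are commonly written
    for tools like Midjourney; the specific fragments are hypothetical.
    """
    parts = [subject]
    if style:
        parts.append(f"in the style of {style}")
    if lighting:
        parts.append(f"{lighting} lighting")
    parts.extend(details)
    return ", ".join(parts)

# A caller fills in the blanks; the seller's value is in knowing which
# fragments reliably produce striking results.
prompt = build_prompt(
    "a field of poppies",
    style="an impressionist painting",
    lighting="golden hour",
    details=("highly detailed", "4k"),
)
print(prompt)
# -> a field of poppies, in the style of an impressionist painting, golden hour lighting, highly detailed, 4k
```

The point of such a template is exactly what Cavazza describes: once a combination of fragments is known to work, it can be reused and sold, saving the buyer the trial-and-error time.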
Keep your hand on the wheel
"Soon we'll be able to ask it to explore 30, 130, or 130,000 different variations of the same creative idea, before letting us pick a small number to work on personally, delving into them to find the best options to present to clients. It's a real superpower," enthuses Scott Belsky, chief product officer at Adobe, who cautiously speaks of a "co-pilot". Understand: an assistant, in the service of one's own creative means. A vision that suits Adobe, one of the world's largest sellers of creative software, a company valued at nearly $150 billion that does not want to see its business go up in smoke. The company also has reason to be cautious on the subject. Scott Belsky compares the rise of generative AI to that of the autonomous car. For safety, "it is still necessary to keep one hand on the wheel".
First, for ethical reasons. Dall·E 2 is proprietary. The tool is based on a language model that lets the machine recognize text and its meaning (through machine learning). It uses one of the most advanced in the world, OpenAI's GPT-3, released in May 2020, as well as a neural network called CLIP, also developed by OpenAI, which, roughly speaking, makes the connection between a text and an image within a large corpus. The bigger the corpus, the more varied the results. But this platform, like many others, keeps its database jealously guarded, which raises questions. "If we cannot audit it, we accept the idea that there will be serious excesses, such as racist or sexist biases," emphasizes Olivier Ertzscheid. It would not be a first in the history of AI. Building one's own training databases may be the future of generative AI, but that prospect still seems a long way off: the technology is not within everyone's reach, and the costs of operation and maintenance (the machine needs a huge amount of data to learn) can quickly become significant.
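The text-image connection that CLIP performs can be illustrated with a toy sketch: both a caption and each image are mapped to vectors (embeddings), and the cosine similarity between vectors scores how well they match. The vectors below are made up for illustration; in the real model they come from trained text and image encoders.

```python
import math

def cosine_similarity(a, b):
    """Score how well two embedding vectors match (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings: CLIP is trained so that a caption and its
# matching image land close together in the shared vector space.
text_embedding = [0.9, 0.1, 0.3]           # e.g. "a fox in a Monet painting"
image_embeddings = {
    "fox_painting.png": [0.8, 0.2, 0.35],  # should score high
    "poppy_field.png":  [0.1, 0.9, 0.5],   # should score low
}

# Rank candidate images by similarity to the caption, best match first.
ranked = sorted(
    image_embeddings,
    key=lambda name: cosine_similarity(text_embedding, image_embeddings[name]),
    reverse=True,
)
print(ranked[0])  # prints the filename whose embedding best fits the text
```

This is only the scoring step; generating an image and steering it toward a high CLIP score is what the diffusion models built on top of it do.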
Legal issues, especially around copyright, are the next big question mark for this technology, whose advances make it possible to generate not only images but also code, sound, video, even 3D. Should artists be paid when their work is used to inspire others? And how would anyone know? Lionel Koretzky says he tested a prompt containing the name of the Japanese artist Yayoi Kusama, famous for her polka-dot works. An image faithful to the artist's style came out. L'Express also tested styles, such as that of the famous Studio Ghibli, which the machine imitated without the slightest difficulty. Dall·E 2 has taken some precautions on this front, for example refusing prompts containing the name of the French designer Philippe Starck, to protect his works. A self-imposed restriction meant to head off legal trouble. Because pastiche, thanks to AI, seems to have bright days ahead.