Anton Grishin

AI tools in InDesign

Adobe has released a beta version of its generative neural network, which creates an image from text, and built it into InDesign. This is a good occasion to talk about what modern AI tools can do, how good the results are, and what I, as an InDesign user, really expect from the company.


What's new

InDesign version 19.4 introduces the ability to generate images from a selected piece of text. Not all users can try this directly in InDesign yet, but you can test the functionality in a separate Adobe web application, Firefly, which is available under the same subscription as the main software package.


There are already hundreds of neural networks that generate images from prompts, but what's interesting here is that the AI tool is embedded in the layout software, becomes part of the Adobe ecosystem, and requires no extra steps or costs (though there are still some limitations, which we'll discuss below).


In this article, I will show examples of how Firefly illustrated Arthur Conan Doyle's story "The Leather Funnel".


My friend, Lionel Dacre, lived in the Avenue de Wagram, Paris. His house was that small one, with the iron railings and grass plot in front of it, on the left-hand side as you pass down from the Arc de Triomphe. I fancy that it had been there long before the avenue was constructed, for the grey tiles were stained with lichens, and the walls were mildewed and discoloured with age. It looked a small house from the street, five windows in front, if I remember right, but it deepened into a single long chamber at the back.
Firefly: picture 1

Firefly capabilities

Firefly Image 3 is the latest version of Adobe's generative neural network, released in April 2024. The interface is as simple and clear as possible. You can specify whether the illustration should be a photo or art, or trust Firefly to decide.


There are plenty of settings for techniques, themes, and artistic styles, and you can experiment with them for a long time: first to understand how the neural network stylises the images produced by each template, and then to start mixing them. The surest way to tell Firefly what result you expect is to upload your own picture as a reference. Firefly warns that you may only upload images you have permission to use, but doesn't seem to check this in any way.


You can help train the neural network by rating the resulting pictures positively or negatively. And if you like one of the variants but it falls a little short of the ideal, you can generate several more variants based on it. Speaking of variants: Firefly returns four variants of the picture for each request at a given size.


When you're done with all the template concepts, materials, and effects, you can still play with colours, lighting, and even camera angle.


The bad news is that all these experiments can't last forever. Right now Firefly allows 1,000 requests per month. For experiments and dabbling that is more than enough, but for serious work with a large volume of options it may not suffice.


There was a room which bore the appearance of a vault. Four spandrels from the corners ran up to join a sharp, cup-shaped roof. The architecture was rough, but very strong. It was evidently part of a great building.
Firefly: picture 2

How well does Adobe's neural network understand text?

Yes, it does! It doesn't like prompts that are too short, just a couple of words, and in prompts that are too long, three or four sentences, it will ignore whatever it considers unimportant. The optimal request, as far as I've been able to establish from experience, is one or two sentences.


Firefly honestly tries to draw what the query says. The more abstract the piece of text you offer it, the more abstract the thing it will render. The basic scenario for using Firefly in InDesign is to select a piece of text directly in the layout and generate an image from that query via the context menu. But be prepared to be disappointed with the first results; you'll soon want to add clarifying words to the prompt that reveal the context.
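For those who would rather script this prompt-enrichment workflow than click through the UI, below is a minimal sketch in Python built on Adobe's Firefly Services REST API. Treat it as an illustration under stated assumptions: the endpoint URL, header names, and JSON fields follow Adobe's public API documentation at the time of writing and may change, and the enrich_prompt helper is my own hypothetical function, not part of any Adobe SDK.

```python
# Sketch: enrich a text fragment from the layout with clarifying context words,
# then send it to the Firefly text-to-image API.
# NOTE: the endpoint URL, headers, and JSON fields are assumptions drawn from
# Adobe's published Firefly Services docs; verify them before relying on this.
import requests

FIREFLY_URL = "https://firefly-api.adobe.io/v3/images/generate"  # assumed v3 endpoint

def enrich_prompt(fragment: str, context_words: list[str]) -> str:
    """Hypothetical helper: append clarifying context to a raw text fragment."""
    return f"{fragment.strip()} ({', '.join(context_words)})"

def generate(fragment: str, context_words: list[str], token: str, api_key: str) -> dict:
    prompt = enrich_prompt(fragment, context_words)
    response = requests.post(
        FIREFLY_URL,
        headers={
            "Authorization": f"Bearer {token}",  # OAuth token from Adobe IMS
            "x-api-key": api_key,                # Firefly Services client ID
            "Content-Type": "application/json",
        },
        json={
            "prompt": prompt,
            "numVariations": 4,                  # Firefly returns four variants per request
            "size": {"width": 2048, "height": 2048},
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()

# Example: a fragment from "The Leather Funnel" plus context the fragment alone lacks.
# generate("His house was that small one, with the iron railings and grass plot",
#          ["Paris, 19th century", "gothic book illustration"], token, api_key)
```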


If the query contains something specific, such as a description of an interior or of a character's appearance, the result will be good by formal criteria. But a particular fragment of a fiction text may not contain a detailed description of the mise-en-scène. In that case, you will have to edit the author's text.


He was seated as he spoke at one side of the open fire-place, and I at the other. His reading-table was on his right, and the strong lamp above it ringed it with a very vivid circle of golden light. A half-rolled palimpsest lay in the centre, and around it were many quaint articles of bric-a-brac.
Firefly: picture 3

Will Firefly replace a book illustrator?

Maybe it will replace a mediocre one. It seems that in the near future the criterion for choosing an artist will be not only the portfolio, but also whether the artist can bring something to the book that neural networks cannot.


But for now, it's hard for me to imagine a serious publisher illustrating its books with AI. Besides spending a lot of time preparing prompts, you also need to keep the style of the illustrations consistent throughout the book, and the neural network produces something new every time, even with identical settings.


Surprisingly, the developers of this feature did not look at all toward the auxiliary artistic elements that almost every book has: drop caps, tailpieces, headpieces, and other decorations. The neural network is set up to draw something never seen before, but it can't yet relieve a designer of the chore of digging through stock photo banks. The same decorations found on Shutterstock could be redrawn by a neural network with the genre and context of each particular book in mind, and our idea of beauty would not suffer much from it.


I sank my throbbing head upon my shaking hands. And then, suddenly, my heart seemed to stand still in my bosom, and I could not even scream, so great was my terror. Something was advancing toward me through the darkness of the room. It is a horror coming upon a horror which breaks a man's spirit. I could not reason, I could not pray; I could only sit like a frozen image, and glare at the dark figure which was coming down the great room.
Firefly: picture 4

3 proposals for InDesign

Neural-network image generation is great, but I doubt that even one in ten designers who create print or ebook layouts in InDesign will use this new feature. Here are 3 AI-powered features that really would come in handy for a lot of people.


  1. Generating layout design suggestions based on text analysis. The idea is very simple: evaluating millions of book layouts from all eras by formal features is not a difficult task for AI. And only a neural network could suggest fonts, layout style, design elements, and so on for a particular book based on an analysis of the work's text. If AI can offer dozens of templates that suit a particular book, half the job is already done.

  2. Adapting the print layout for the electronic version of the book. Like it or not, one-button export from InDesign to EPUB doesn't work. Beyond a set of technical problems, there is a creative element to adapting layouts for EPUB: reflowable layouts and small screens require very different approaches. And if Adobe already has such a creative AI, why not task it with adapting the electronic versions of books as well?

  3. (Designers won't like it, but...) it's about time to add automatic layout design. I don't even know whether this is a task for AI or for InDesign's usual algorithms, but spending precious working hours on the technical clean-up of source files and on placing text, tables, illustrations, and everything else on the pages is so twentieth century. We need to look to the future!
