Generative AI is increasingly making its way into game development. Writing for Game World Observer, Nexters art producer Alisa Roz has detailed how the company started using Midjourney in the art creation process for its titles.
Alisa Roz
Art holds tremendous significance in the gaming world. Our game Hero Wars relies heavily on art pieces that represent the various levels featuring engaging hidden object gameplay. In the past, each artwork took several weeks to create, and we needed around 200 pictures every three months. This was a daunting task for our designers, who already had other responsibilities.
As the game evolves, its style undergoes changes as well. Even if we try to adhere to the same guidelines, over time we realize that our older images no longer align with the desired atmosphere. Consequently, in addition to creating new art, we find ourselves burdened with the task of revising and updating the older ones.
Exploring AI generation for art
Ever since the concept of AI image generation emerged, I’ve been intrigued by its potential. However, until the arrival of Midjourney and Stable Diffusion, AI generation had no practical application in real production.
The release of Midjourney 3 marked a significant turning point, leading me to develop a hypothesis that I wanted to test. I generated a few images based on the given specifications, made manual adjustments to refine them, and shared them with my fellow designers. The quality of the output made it hard to believe that these images were actually generated with the help of AI.
Revising the Aurora character using Midjourney and InPaint
Following this revelation, we delved into researching Midjourney in particular and sought seamless ways to integrate it into our workflow. I can say right away that this allowed us to boost the creative process, improve efficiency, and free up hours of designers’ time. We can now create an entire pack of 10 images in 4-5 hours.
Not a replacement but an enhancement
Don’t expect Midjourney to replace designers any time soon. What it can do is raise the quality floor and speed up the process, especially for more junior designers. To generate a pack of 10 images, for example, one must go through some 400 prompts and their variations. The more experience you gain in manipulating Midjourney, the better you become at achieving the desired results. The AI won’t present you with a polished image ready for immediate use. However, it can provide an intriguing sketch that you can reference, adjust, and stylize, which is a much simpler task.
To achieve this, you need to understand how prompts work. Here are some tips:
- I suggest enabling Remix mode, which lets you adjust the prompt when you request variations of an image;
- One thing many people don’t know is that you can feed reference images to Midjourney, so you can use your existing artwork to produce something novel in the same style;
- Learn the parameters you can append to a prompt, for instance, to choose the right version of the model.
After that, it comes down to trial and error. Prompt engineering is a whole new area worth exploring, and the results are sometimes quite surprising. Midjourney understands most requests and phrases, but sometimes you ask for a green object and it makes the entire background green instead.
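To make these tips concrete, here is roughly what such a prompt can look like (the reference URL and subject are hypothetical; the leading image link supplies your own artwork as a reference, --v selects the model version, --ar sets the aspect ratio, and --no steers the AI away from unwanted concepts, handy when it keeps painting the whole background green):

```
/imagine https://example.com/existing-artwork.png overgrown temple ruins, hidden-object scene, painterly style --v 5 --ar 16:9 --no green background
```

Remix mode itself is not a prompt parameter: you toggle it in the /settings menu of the Midjourney bot, after which the variation buttons let you edit the prompt before regenerating.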
Another example of restyling a character model using AI
Learn how to use it and where to use it
We are also learning how best to adjust AI imagery:
- We almost never use the original background as it rarely fits the style and has too many artifacts;
- We add details to the foreground and finesse it to fit our pre-existing style.
With each new version of Midjourney, you can leave more and more of the original image in place. The fifth generation in particular saw a huge jump in quality.
Moreover, Midjourney has become an indispensable tool in our character-creation process. Some of our characters were designed long ago and no longer perfectly match the current style. We need them in different poses, and they don’t always look quite right in those poses. Drawing realistic human figures is a daunting task, and many designers struggle with it.
Since the release of version 5, Midjourney has been capable of generating impressive portraits that we can manipulate and reference from, resulting in a 40% reduction in effort spent on characters and significantly enhancing their appearance.
Also, learn where not to use it
However, there were some ideas that didn’t quite materialize as we had hoped. We thought one could make a rough sketch, feed it in as a reference, and get a near-final image.
Unfortunately, this approach didn’t work: Midjourney doesn’t merely trace the image but creates something entirely new. And while each subsequent version of the AI produced vastly improved images, certain features would occasionally be lost along the way, such as the Midjourney v3 setting that controlled how much of the original image to carry over.
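For reference, in v3 this was presumably the image weight parameter: you could attach a sketch and dial up how strongly it constrained the output, roughly like this (URL and subject hypothetical):

```
/imagine https://example.com/rough-character-sketch.png armored heroine, dynamic pose, painted game art --iw 2 --v 3
```

To my knowledge, this parameter disappeared in v4 and only resurfaced in later releases, which is exactly the kind of regression described above.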
Since Stable Diffusion is open source, you can train it on your own graphics to produce something eerily similar, though as image-generating AIs go, it is certainly not the most powerful. And there are areas where AI won’t help you at all. Interfaces, for one thing, especially their adaptation and slicing. Animation is extremely challenging.
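For those who do go the Stable Diffusion route, a minimal sketch of the generation step using the open-source diffusers library might look like the following, assuming you have already fine-tuned a checkpoint on your own art (the checkpoint path and prompt here are hypothetical):

```python
# Minimal sketch: sampling from a Stable Diffusion checkpoint
# fine-tuned on a studio's own art. Path and prompt are hypothetical.
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned weights; float16 keeps VRAM usage manageable.
pipe = StableDiffusionPipeline.from_pretrained(
    "./checkpoints/our-game-style",  # hypothetical fine-tuned model
    torch_dtype=torch.float16,
).to("cuda")

# Sample a few variations of the same concept in the trained style.
prompt = "overgrown temple ruins, hidden-object scene, painterly style"
for i in range(4):
    image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
    image.save(f"concept_{i}.png")
```

The fine-tuning itself (for example, with DreamBooth or textual inversion) is a separate, heavier step; the point is that, unlike with Midjourney, the whole pipeline runs under your control.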
Still, you can now perform most non-core tasks for generating game art with AI. But you need to learn how to work with these tools and how to process the results to make them indistinguishable from fully original images.
An artist can generate certain elements of the image, but manual work will still take most of the time
AI is extremely helpful at the prototyping stage, as you can try multiple directions without spending hours sketching them out. Concepts are often built from well-known archetypes that Midjourney can easily produce in different styles in minutes. Want a girl pilot in a leather jacket, drawn in a comic book style? The AI can give you that right away.
This empowers developers with limited budgets to construct a compelling game concept based solely on gameplay, enabling them to sell the idea to others and develop the art later on. In my experience, attempting to fully convey an idea using plain 2D and 3D shapes alone is almost impossible.
GUI icons and 2D trees were generated fully by AI, which is faster and easier than using ready-made asset libraries
As designers, our role is to oversee the process and maintain control over the final outcome. AI tools can undoubtedly save time and enhance quality, but they do not generate truly innovative ideas on their own. And as of right now, no tool can reliably produce convincing images you can use right away. This is where real artists step in.