[ZEPETO: October Release Items] Ecoplant Bag: Combining Geometric Nodes with AI Generation

Introduction

Hello, I'm Ushiyama from the OpenFashion CG team.

Recently, we released new items from accelerando on ZEPETO.

Stylish, eco-friendly botanical bags, and more! From "accelerando.Ai", a future fashion brand created by humans and AI, new wearable avatar items went on sale on October 10 (Tue). (*The linked announcement is in Japanese.)

Product page (The Zepeto app will open)

Bag [Ecoplant Bag]
Wear [Green Explorer]
Headpiece [GreenBreathe Helmet]

In this article, I'd like to explain how we produced the bag from this lineup. First, please take a look at the concept image generated by AI.

Concept

The bag, named "With plants - Coexistence with plants", is not only stylish in appearance, but also has plants inside that perform photosynthesis, absorbing carbon dioxide from the air and releasing oxygen.

A production flow combining AI and 3DCG is still a rare approach in today's CG industry. In this article, we'll also look at the strengths of each tool and the challenges that remain.

 

Production ① AI Modeling with Kaedim

Replacing the glass

We have used "Kaedim" several times for AI-based model production.

 

From these experiences, we predicted that reproducing the inside of the glass would be difficult if we used the concept image as is.
As a countermeasure, we used Stable Diffusion to replace only the inside of the glass with a different texture.

We first tried replacing only the glass portion using Soft Edge contour extraction, but the details changed too much, so we discarded that attempt.

Next, we masked the glass area and regenerated only that part.
Can a 3D model be generated from an image like this? Let's give it a try!
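As a rough illustration of that masked regeneration (inpainting) step, here is a minimal sketch using Hugging Face diffusers; the model ID, file names, and prompt are assumptions, not the exact setup we used.

```python
# Minimal sketch (diffusers; model ID, file names, and prompt are assumed):
# regenerate only the masked glass area of the concept image with SDXL
# inpainting, leaving the rest of the bag untouched.
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",
    torch_dtype=torch.float16,
).to("cuda")

concept = load_image("concept_bag.png")  # the AI-generated concept image
mask = load_image("glass_mask.png")      # white where the glass should change

result = pipe(
    prompt="plain smooth briefcase panel, product photo",  # hypothetical
    image=concept,
    mask_image=mask,
    strength=0.99,  # nearly full regeneration inside the masked area
).images[0]
result.save("concept_bag_replaced.png")
```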

AI-generated 3D model

Here's the model that was generated 3–4 hours later.
"Considering it was originally a glass bag, this turned out well!" was my first thought when I saw it.

However, as the image shows, the body part has to be completely remade.
If the image regenerated after replacing the glass had been a briefcase-like material rather than wrinkled fabric, I think a closer 3D model would have been generated.

The other parts also have too many polygons, making them difficult to adjust, so we'll remake them...

Was Kaedim useful?

In the end, we had to remake a significant portion.
Staying high-poly would have been fine, but for conversion to a low-poly asset, the model from Kaedim simply has too many polygons.
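If you do want to salvage such a mesh rather than remodel it, one quick (if lossy) option is Blender's Decimate modifier. Here is a minimal bpy sketch, where "KaedimModel" is a placeholder object name.

```python
# Minimal sketch (bpy): reduce the polygon count of a high-poly AI-generated
# mesh with a Decimate modifier. "KaedimModel" is a placeholder name; for
# clean game-ready topology, manual retopology is usually still needed.
import bpy

obj = bpy.data.objects["KaedimModel"]
bpy.context.view_layer.objects.active = obj

mod = obj.modifiers.new(name="Decimate", type="DECIMATE")
mod.ratio = 0.1  # keep roughly 10% of the faces
bpy.ops.object.modifier_apply(modifier=mod.name)
```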

We tried to be creative with the input images this time, but there are many aspects, such as component structure and perspective, that it cannot reproduce.

In the future, I'd like to see if we can improve accuracy by using things like pseudo three-view drawings.

CSM Test Results

Like Kaedim, CSM is a service that generates models with AI.
We have also tested it before.

2D to 3D: Creating Coordinated Items with CSM

However, this time the results were unusable. Perhaps a diagonal (three-quarter view) input image is difficult for it.

 

Production ② AI Generation of Textures

Now the exterior model is done.
Plants grow inside the bag, so we need leaf and flower textures. Let's try making them with image-generation AI.

Generating Leaves

We will use Stable Diffusion XL.

Prompt: photorealistic, full body shot of a leaf, top view, facing to camera, black background

The finished images are all over the place, so I'll specify the silhouette a bit more clearly.

I tried ControlNet's Scribble/Sketch.
It produced a leaf in the shape of a line I casually drew with the mouse on the spot.
However, it follows the line a bit too faithfully and looks unnatural.

I adjusted one of ControlNet's settings.
ending control step: 1.0 → 0.4

The guide is now applied only during the early steps of generation, so its influence on the final result is weaker.
As a result, leaves that branch like the reference image were generated without perfectly following the guide.
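For reference, here is a minimal sketch of this ControlNet setup using Hugging Face diffusers rather than a WebUI; the model IDs and file name are assumptions, and `control_guidance_end=0.4` corresponds to the ending control step tweak above.

```python
# Minimal sketch (diffusers; model IDs and file name are assumed):
# SDXL + a Scribble ControlNet, with the guide applied only for the first
# 40% of the denoising steps (the "ending control step" tweak above).
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "xinsir/controlnet-scribble-sdxl-1.0",  # assumed scribble checkpoint
    torch_dtype=torch.float16,
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

scribble = load_image("leaf_scribble.png")  # the rough mouse-drawn guide

image = pipe(
    prompt=("photorealistic, full body shot of a leaf, top view, "
            "facing to camera, black background"),
    image=scribble,
    control_guidance_start=0.0,
    control_guidance_end=0.4,  # release the guide after 40% of the steps
).images[0]
image.save("leaf.png")
```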

Generating Flowers

Prompt: photorealistic, flower, top view, facing to camera, black background

It's too big for the screen and gets cut off.

Prompt: photorealistic, full body shot of a flower, top view, facing to camera, black background,

By adding "full body shot", it now fits nicely on the screen.

Since I specified a black background, I can easily create a cutout in Photoshop.
After a little color adjustment, the texture preparation is complete.
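As a rough sketch of what that cutout step does, here is one way to derive an alpha channel from the black background in Python with Pillow; the file names and threshold are assumptions.

```python
# Minimal sketch (Pillow; file names and threshold are assumed): turn the
# black background into transparency, roughly what the Photoshop cutout does.
from PIL import Image

img = Image.open("flower.png").convert("RGB")

# Use luminance as a rough mask: near-black pixels become transparent.
alpha = img.convert("L").point(lambda v: 0 if v < 16 else 255)

rgba = img.copy()
rgba.putalpha(alpha)
rgba.save("flower_cutout.png")
```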

 

Production ③ Geometry Nodes

With the texture ready, it's time to model the plants. There are 10 or 20 plants growing inside the bag. Modeling them all and adjusting the direction of every stem and leaf by hand would be a daunting task.

So, I decided to try using Blender's Geometry Nodes.
Geometry Nodes let you build and adjust a model parametrically by connecting nodes.
This method is quite distinct from traditional modeling; it leans toward a no-code approach, resembling programming without actually writing code.

Here's what we're doing:

  1. Aligning along a specified curve,
  2. Copying the leaves of a plant,
  3. Increasing the size from the starting point to the endpoint of the curve, and
  4. Orienting them in the direction of the curve.

As a result, just by drawing a curve, leaves are attached, automatically generating something akin to plant vines!
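For anyone who prefers scripting, here is a minimal sketch of how a node tree like this could be assembled with Blender's Python API (node and socket names as in Blender 3.x; the object name "Leaf" and the leaf count are placeholders).

```python
# Minimal sketch (bpy, Blender 3.x): a Geometry Nodes tree that instances a
# leaf object along a curve, scaling the copies up from the start of the
# curve to its end and rotating them to follow the curve direction.
import bpy

tree = bpy.data.node_groups.new("LeafVine", "GeometryNodeTree")
tree.inputs.new("NodeSocketGeometry", "Geometry")   # the drawn curve
tree.outputs.new("NodeSocketGeometry", "Geometry")

nodes, links = tree.nodes, tree.links
n_in  = nodes.new("NodeGroupInput")
n_out = nodes.new("NodeGroupOutput")

# 1. Turn the curve into a fixed number of evenly spaced points.
COUNT = 20
n_pts = nodes.new("GeometryNodeCurveToPoints")
n_pts.mode = "COUNT"
n_pts.inputs["Count"].default_value = COUNT

# 2. The leaf object to copy onto each point ("Leaf" is a placeholder).
n_leaf = nodes.new("GeometryNodeObjectInfo")
n_leaf.inputs["Object"].default_value = bpy.data.objects["Leaf"]

# 3. Point index / (COUNT - 1) gives a 0..1 growth factor along the curve,
#    used as the per-instance scale (small at the start, large at the end).
n_idx = nodes.new("GeometryNodeInputIndex")
n_div = nodes.new("ShaderNodeMath")
n_div.operation = "DIVIDE"
n_div.inputs[1].default_value = COUNT - 1

# 4. Instance the leaf on every point, oriented along the curve.
n_inst = nodes.new("GeometryNodeInstanceOnPoints")

links.new(n_in.outputs["Geometry"],    n_pts.inputs["Curve"])
links.new(n_pts.outputs["Points"],     n_inst.inputs["Points"])
links.new(n_leaf.outputs["Geometry"],  n_inst.inputs["Instance"])
links.new(n_idx.outputs["Index"],      n_div.inputs[0])
links.new(n_div.outputs["Value"],      n_inst.inputs["Scale"])
links.new(n_pts.outputs["Rotation"],   n_inst.inputs["Rotation"])
links.new(n_inst.outputs["Instances"], n_out.inputs["Geometry"])
```

To use it, add a Geometry Nodes modifier to the curve object and assign the group, e.g. `curve_obj.modifiers.new("Vine", "NODES").node_group = tree`.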

Even after creating the curve, adjustments such as the direction, scale, and rotation of the leaves can be made. The curve of the stem can also be freely controlled, allowing customization up until the very end.

The texture generated earlier with AI can also be replaced as many times as desired. 

Conclusion

We introduced a production flow that combines various AI and 3D techniques. How did you find it?
With Geometry Nodes, users can customize the model through parameters. And for texture generation, the power of AI can be harnessed to quickly produce many variations.

However, when it comes to 2D-to-3D model generation, relying entirely on AI is not always best. Manual adjustment after the model is created and careful preparation of the input image are indispensable. In reality, when creating efficient low-polygon data for games, it is sometimes simply quicker to build the model by hand from scratch.

New AI services are emerging almost every month. Still, I believe it's quite challenging to find a service that completes everything perfectly with just AI.

Being open to combining existing 3D techniques with AI, and having the flexibility and inspiration to try new things, is essential.
