Blender can now use AI to create images and effects from text descriptions

Even 3D modeling software is getting AI art generators. Stability AI has introduced a Stability for Blender tool that, as the name implies, brings Stable Diffusion's image-generation tech to the open-source 3D tool. You can create AI-based textures, effects and animations, whether using source material from your renders or nothing more than a text description. You may not need to be (or hire) a skilled 2D artist to put the finishing touches on a project.

Stability for Blender requires an API (application programming interface) key and an internet connection, but it's free to use. It doesn't require any additional software dependencies or a dedicated GPU, since the image generation runs on Stability's servers rather than your machine. This might help if you need to complete some texture or video work on a laptop that isn't as robust as your main workstation.
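For context on what that API-key setup involves, here is a minimal sketch of how a client like the Blender add-on might assemble a text-to-image request against Stability's REST API. The endpoint path, engine id, and parameter names below are assumptions based on Stability AI's public developer API, not the add-on's actual code; treat them as illustrative only.

```python
import json
import os

# Assumed Stability REST API host and engine id (not taken from the
# Blender add-on itself; these are illustrative assumptions).
API_HOST = "https://api.stability.ai"
ENGINE_ID = "stable-diffusion-xl-1024-v1-0"  # hypothetical engine choice

def build_text_to_image_request(prompt: str, api_key: str) -> dict:
    """Assemble the URL, headers and JSON body for a text-to-image call.

    Only constructs the request; sending it (e.g. with an HTTP client)
    would require a valid key and a network connection.
    """
    return {
        "url": f"{API_HOST}/v1/generation/{ENGINE_ID}/text-to-image",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
            "Accept": "application/json",
        },
        "body": json.dumps({
            "text_prompts": [{"text": prompt}],
            "width": 1024,
            "height": 1024,
            "samples": 1,
        }),
    }

# The key would come from the user's Stability account; a placeholder
# is used here so the sketch runs without credentials.
request = build_text_to_image_request(
    "weathered stone wall texture, seamless",
    os.environ.get("STABILITY_API_KEY", "sk-placeholder"),
)
print(request["url"])
```

The point of the sketch is that all the heavy lifting happens server-side: the local machine only builds and sends a small JSON request, which is why no dedicated GPU is needed.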

The addition could save time and money and streamline your workflow. It can also help you make truly custom content, Stability says. If you were already planning to use AI-generated art, the integration could save you jumping between apps and services.

This isn't likely to give Stable Diffusion a major advantage over rivals like OpenAI's DALL-E. It also won't create 3D objects from scratch; you'll need a tool like OpenAI's Point-E for that. However, it does hint at a way AI image generation can help creatives with less risk of copyright issues. Because Stability for Blender can rely on your own renders as source material, you shouldn't have to worry as much about legal trouble.

This article originally appeared on Engadget at https://www.engadget.com/blender-can-now-use-ai-to-create-images-and-effects-from-text-descriptions-175001548.html?src=rss 
