Stable Diffusion is best known for turning text into images, but you can run the trick in reverse: img2txt, recovering a text prompt that describes an image. Stable Diffusion tooling uses OpenAI's CLIP for img2txt, and it works pretty well. Put your picture in and Stable Diffusion will happily start roasting you with tags.

The easiest local route is the AUTOMATIC1111 web UI. Open up your browser, enter 127.0.0.1:7860, and switch to the img2img tab. Under the Generate button there is an Interrogate CLIP button which, when clicked, downloads the CLIP model, reasons out a prompt for the image in the current image box, and fills it into the prompt field. From there, press Send to img2img to reuse the image and the recovered parameters, for example for outpainting.

If you would rather not install anything, the img2prompt model hosted on Replicate (over 1M runs) does the same job as an API. After installing the Node.js client, run the model:

```javascript
import Replicate from "replicate";
const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN });
```

then call replicate.run() with the model identifier and your input image.
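Conceptually, interrogation embeds the image and a bank of candidate phrases into one vector space and keeps the phrases that sit closest to the image. The ranking step can be sketched with plain cosine similarity; the three-dimensional vectors below are made-up stand-ins (real CLIP ViT-L/14 embeddings have 768 dimensions and come from the model's image and text encoders):

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Made-up stand-in embeddings; in reality these come from CLIP's encoders.
image_vec = [0.9, 0.1, 0.3]
candidates = {
    "photo of a green apple": [0.8, 0.2, 0.4],
    "oil painting of a city": [0.1, 0.9, 0.2],
}

# Keep the phrase whose embedding is closest to the image's.
best = max(candidates, key=lambda p: cosine(image_vec, candidates[p]))
print(best)  # photo of a green apple
```

Swap the toy vectors for real CLIP embeddings, run the same argmax over a few thousand artist and style phrases, and you have the heart of an interrogator.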
For background: Stable Diffusion is a latent diffusion model developed by the CompVis research group at LMU Munich. Its text encoder is CLIP ViT-L/14, which is exactly why CLIP-based interrogators produce prompts that transfer well back into the model. Given a (potentially crude) image and the right text prompt, latent diffusion can rework that image into something much better, which is what makes recovering the right prompt so useful. You are not limited to AI output, either: you can also upload and interrogate non-AI generated images.
Why recover prompts at all? First, captions are training data: with {image, caption} pairs in hand, you can fine-tune a Stable Diffusion model on a custom dataset. Second, prompts are a search key: sites that index millions of AI art images by model (Stable Diffusion, Midjourney, and others) let you hunt for similar images, and once you find a relevant image, you can click on it to see the prompt.
There are two main ways to train models on such pairs: (1) Dreambooth and (2) embeddings (textual inversion). Either way, step one is preparing the training data, which is exactly the part automatic captioning speeds up. All of this can also be driven through the API that the Stable Diffusion web UI exposes, rather than by clicking around in a browser.
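To make that preparation step concrete: many fine-tuning scripts look for each caption in a .txt file sharing its image's base name. Assuming that convention (check your trainer's documentation, since layouts vary), the files can be written like this:

```python
from pathlib import Path
import tempfile

def write_caption_files(pairs, folder):
    """Write each caption as a same-name .txt file next to its image,
    the layout many fine-tuning scripts expect."""
    folder = Path(folder)
    for image_name, caption in pairs:
        (folder / Path(image_name).with_suffix(".txt").name).write_text(caption)

with tempfile.TemporaryDirectory() as d:
    write_caption_files([("apple.png", "photo of a green apple, dramatic lighting")], d)
    caption_back = (Path(d) / "apple.txt").read_text()
print(caption_back)  # photo of a green apple, dramatic lighting
```

Feed interrogator output straight into `pairs` and a folder of images becomes a training set.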
Start with installation and basics, then explore advanced techniques. On Windows, double-click webui-user.bat (a Windows batch file) to start; on Mac, run ./webui.sh in a terminal. Two prompting notes once you are up and running. First, Stable Diffusion prompts read like terse English, so you can hand prompt-writing to ChatGPT just as easily as to an interrogator. Second, a negative prompt is a way to use Stable Diffusion that lets you specify what you don't want to see, without any extra input; it is common to reinforce it with negative embeddings, especially for anime models.
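When driving the web UI through its API instead of the browser, an interrogated prompt and a negative prompt travel together in one request body. A minimal sketch of assembling such a payload; the field names follow AUTOMATIC1111's /sdapi/v1/txt2img endpoint, so adjust them if your front end differs:

```python
import json

def build_payload(prompt, negative_prompt="", steps=20):
    """Assemble a txt2img request body pairing a (possibly interrogated)
    prompt with a negative prompt."""
    return {"prompt": prompt, "negative_prompt": negative_prompt, "steps": steps}

payload = build_payload(
    "photo of perfect green apple with stem, water droplets, dramatic lighting",
    negative_prompt="blurry, low quality",
)
print(json.dumps(payload, indent=2))
```

POST this as JSON to a running web UI instance and it generates with both prompts applied.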
Most front ends let you select interrogation types before running. Manage expectations, though: what comes back is an approximate text prompt, with style, matching the image, not a byte-for-byte recovery of the original. If you would rather stay in the browser entirely, free online Stable Diffusion generators support img2img, including sketching the initial image, and Stable Horde clients such as ArtBot or Stable UI are completely free and expose more advanced features. On Replicate, the img2prompt model runs on Nvidia T4 GPU hardware.
So how does the CLIP interrogator actually work? It has two parts. The first is BLIP (Bootstrapping Language-Image Pre-training), a captioning model that takes on the decoding: Caption mode attempts to generate a caption that best describes the image. The second part uses CLIP itself to score a long list of style, artist, and quality modifiers against the image and appends the best matches to that caption; this stage is significantly slower, but more powerful.
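Put together, the flow has this shape. Both model calls are stubbed out below, because the point is the pipeline rather than the weights; a real implementation would load BLIP for the caption and CLIP for the ranking:

```python
def blip_caption(image):
    # Stub standing in for the BLIP captioning model (stage one).
    return "a photo of a green apple"

def clip_rank(image, phrases, top_k=2):
    # Stub standing in for CLIP ranking (stage two); a real implementation
    # scores each phrase by cosine similarity to the image embedding.
    # Here we simply sort to keep the demo deterministic.
    return sorted(phrases)[:top_k]

def interrogate(image):
    """Two-stage CLIP-interrogator flow: BLIP caption, then CLIP-ranked modifiers."""
    caption = blip_caption(image)
    modifiers = clip_rank(image, ["watercolor", "dramatic lighting", "8k"])
    return ", ".join([caption] + modifiers)

print(interrogate(None))  # a photo of a green apple, 8k, dramatic lighting
```

The comma-joined result is exactly the shape of prompt the Interrogate CLIP button drops into the prompt box.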
Thanks to the passionate community, most new features come to this free Stable Diffusion GUI first. The wider ecosystem moves just as fast: there are open repos collecting Stable Diffusion experiments on the textual inversion and captioning (img2txt) tasks, and model cards give an overview of all available checkpoints for each release.
For anime-style images, tag-based interrogation is the better fit. To use this, first make sure you are on the latest commit with git pull and that the DeepBooru interrogator is enabled for your build. In the img2img tab, a new button will be available saying Interrogate DeepBooru; drop an image in and click the button, and a comma-separated list of booru-style tags is filled into the prompt. It is common to pair such prompts with the negative embeddings bad artist and bad prompt. If interrogation instead crashes with a traceback ending in ldm/models/blip.py raising RuntimeError: checkpoint url or path is invalid, the BLIP checkpoint most likely failed to download; check your connection and retry. Hardware is rarely the bottleneck: many consumer-grade GPUs do a fine job, since Stable Diffusion only needs about 5 seconds and 5 GB of VRAM to run.
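DeepBooru itself is a classifier that scores thousands of tags; the UI keeps the ones above a confidence threshold and joins them into a prompt. That post-processing step, sketched with hypothetical scores:

```python
def tags_to_prompt(scores, threshold=0.5):
    """Keep tags whose confidence clears the threshold, highest first,
    and join them the way the prompt box expects."""
    kept = sorted((t for t in scores if scores[t] >= threshold),
                  key=lambda t: scores[t], reverse=True)
    return ", ".join(kept)

# Hypothetical classifier output: tag -> confidence.
demo_scores = {"1girl": 0.99, "solo": 0.95, "outdoors": 0.61, "cat": 0.12}
print(tags_to_prompt(demo_scores))  # 1girl, solo, outdoors
```

Raising the threshold trades recall for precision, which is why front ends expose it as a setting.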
A cautionary tale: I was using one hosted interrogator, but it does not work anymore since yesterday, which is a good argument for tools you control. If you prefer code to buttons, there is a Keras / TensorFlow implementation of Stable Diffusion. On the prompting side, wildcard tooling lets you pull text from files, set up your own variables, and process text through conditional functions: wildcards on steroids. The research keeps moving too; unlike other subject-driven generation models, BLIP-Diffusion introduces a new multimodal encoder which is pre-trained to provide subject representation.
A recurring feature request asks: with current technology, would it be possible to ask the AI to generate a text from an image? It would, and sometimes no model is needed at all. There is a chance that the PNG Info function in Stable Diffusion will find the exact prompt that was used to generate your image: drag the file into the PNG Info tab and the generation parameters should appear on the right. For interrogation proper, all you need to do is provide the path or URL of the image you want to convert; Interrogation then attempts to generate a list of words and confidence levels that describe the image. There is more awesome work from Christian Cantrell in this space as well; check out his free Stable Diffusion Photoshop plugin.
The catch with PNG Info is that the metadata has to survive. If you are absolutely sure that the AI image you want to extract the prompt from was generated using Stable Diffusion, then this method is just for you; screenshots, re-encodes, and most social-media uploads strip it. For the rest of this guide, we'll either use the generic Stable Diffusion v1.5 model or the popular general-purpose model Deliberate. Two terminology near-misses are worth flagging. img2txt is also the name of a classic command-line tool that renders images as ASCII art, in the same family as the terminal image viewers chafa and catimg, which have only been part of a stable Debian release since Debian GNU/Linux 10. And image-to-text can also mean OCR: Optical Character Recognition has never been so easy, and a common preprocessing step there is morphological closing, defined simply as a dilation followed by an erosion using the same structuring element used in the opening operation, i.e. cv2.morphologyEx(image, cv2.MORPH_CLOSE, kernel).
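What PNG Info actually reads is a PNG tEXt chunk, conventionally keyed "parameters", that the web UI embeds when saving. A minimal pure-Python parser sketch; the demo constructs a tiny PNG in memory rather than loading a real generation:

```python
import struct, zlib

def png_text_chunks(data: bytes) -> dict:
    """Return {keyword: text} for every tEXt chunk in a PNG byte string."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    out, pos = {}, 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = body.partition(b"\x00")
            out[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return out

def chunk(ctype: bytes, payload: bytes) -> bytes:
    # Serialize one PNG chunk: length, type, data, CRC over type + data.
    return (struct.pack(">I", len(payload)) + ctype + payload
            + struct.pack(">I", zlib.crc32(ctype + payload)))

# Skeleton PNG carrying a "parameters" tEXt chunk, as the web UI writes it.
demo = (b"\x89PNG\r\n\x1a\n"
        + chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
        + chunk(b"tEXt", b"parameters\x00photo of a green apple\nSteps: 20")
        + chunk(b"IEND", b""))
params = png_text_chunks(demo)["parameters"]
print(params.splitlines()[0])  # photo of a green apple
```

Run the parser over a real generation and the first line of `params` is the prompt, with the sampler settings on the lines after it.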
Once you have a prompt back, close the loop with img2img: the Stable Diffusion model can also be applied to image-to-image generation by passing a text prompt and an initial image to condition the generation of new images (the StableDiffusionImg2ImgPipeline uses the diffusion-denoising mechanism proposed in SDEdit). In the Stable Diffusion checkpoint dropdown, select v1-5-pruned-emaonly, or whichever checkpoint you plan to generate with. Heavier users can install the extension that adds a dedicated tab for the CLIP Interrogator. Important: an Nvidia GPU with at least 10 GB is recommended for comfortable use.
That is img2txt in a nutshell: get prompts from Stable Diffusion generated images, image to text. It works with the new v2 checkpoints as well, and yes, you can mix two or more images with Stable Diffusion. When you outgrow prompts alone, ControlNet is a new neural network structure that allows, via the use of different special models, creating control maps from any image and using those maps to steer generation.