We continue exploring the different user interfaces for Stable Diffusion applied to Architectural Visualization. In this video we keep working with the node-based ComfyUI to improve the realism of people. This time we look at large-scale projects, where a single image can contain many people and fully automating the enhancement process becomes especially valuable.
We will walk step by step through the automatic segmentation of people, using different nodes to create precise selection masks over the people we want to enhance with AI. We will run several tests that let us adjust the results to our needs.
To get the most out of this tutorial, we recommend first watching the previous video on ComfyUI and ControlNet, which serves as a general introduction to these more specific applications.
WHAT ARE YOU GOING TO LEARN WITH THIS TUTORIAL?
How to enhance people in crowd images automatically with AI
- Automatic selection of people using the ComfyUI Impact Pack node set and BBOX Detector
- Automatic selection with ComfyUI Impact Pack and the SEGM Detector, which generates a more accurate mask around people
- Using Meta’s SAM model for automatic feature recognition and more accurate masks
- Using the Simple Detector node to combine the accuracy of the previous selections
- Processing the selections using a Detailer Debug node and the positive and negative prompts
- Applying ControlNet
- Tiling options to optimize resource consumption and increase detail.
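To illustrate the idea behind combining detections, here is a minimal NumPy sketch, not the actual ComfyUI nodes: a BBOX detector yields a coarse rectangular region, a SEGM/SAM model yields a precise per-pixel mask, and intersecting the two keeps only the precise pixels inside the detected box. The function names and the toy masks are our own illustration, not part of the Impact Pack API.

```python
import numpy as np

def bbox_mask(shape, box):
    """Coarse rectangular mask from a BBOX detector hit: box = (x0, y0, x1, y1)."""
    mask = np.zeros(shape, dtype=bool)
    x0, y0, x1, y1 = box
    mask[y0:y1, x0:x1] = True
    return mask

def combine_masks(bbox_m, segm_m):
    """Keep only the precise segmentation pixels that fall inside the
    detected box, mirroring how a combined detector can merge a coarse
    BBOX region with a SEGM/SAM mask."""
    return bbox_m & segm_m

# Toy example: a 10x10 image with one detected person box
coarse = bbox_mask((10, 10), (2, 2, 8, 8))
# Stand-in for a precise segmentation mask (in practice from SEGM/SAM)
precise = np.zeros((10, 10), dtype=bool)
precise[0:10, 4:6] = True
final = combine_masks(coarse, precise)
```

In the real workflow the Simple Detector node performs this kind of combination internally between its BBOX, SEGM, and SAM inputs; the sketch only shows the mask-intersection principle.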
*This tutorial has been recorded in Spanish. You can configure YouTube to translate it with automatic subtitles.