
Best ComfyUI workflows (Reddit)

- Play with post-process film grain, chromatic aberration, and glow.
- Play with the upscale models for 4x or 8x upscaling.
- Download one of the dozens of finished workflows from Sytan, Searge, or the official ComfyUI examples.

A group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images, using a central control panel.

Hello! I am beginning to work with ComfyUI, moving from A1111. I know there are so many workflows published to Civitai and other sites; I am hoping to find a way to dive in and start working with ComfyUI without wasting much time on mediocre or redundant workflows, and I hope someone can help me by pointing me toward a resource for some of the better-developed Comfy workflows.

Hi! I just made the move from A1111 to ComfyUI a few days ago. Hi, I am fairly new to ComfyUI and Stable Diffusion, and I must say that the whole AI image generation field really captivated me. I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, and the refiner? Yes, this is the way to go. Exactly this: don't try to learn ComfyUI by building a workflow from scratch.

People aren't afraid to help you out, share workflows, and give you advice. Just to add, the thing I love most about the ComfyUI community is that people share their workflows and rarely try to profit off of them. Thank you for taking the time to help others.

Best ComfyUI workflows, ideas, and nodes/settings: the nodes interface can be used to create complex workflows, like one for Hires fix or much more advanced ones. Txt-to-img, img-to-img, inpainting, outpainting, image upscale, latent upscale, multiple characters at once, LoRAs, ControlNet, IP-Adapter, but also video generation, pixelization, 360-image generation, and even live painting. Area composition; inpainting with both regular and inpainting models; ControlNet and T2I-Adapter.

THE LAB EVOLVED is an intuitive, all-in-one workflow. It includes literally everything possible with AI image generation. How to use: start with the GREEN NODES, write your prompt, and hit queue. It allows you to choose the resolution of all outputs in the starter groups and will output this resolution to the bus.

Release: AP Workflow 9.0 for ComfyUI, now with support for SD 1.5 and Hires fix, IPAdapter, a Prompt Enricher via local LLMs (and OpenAI), a new Object Swapper + Face Swapper, FreeU v2, XY Plot, ControlNet and ControlLoRAs, SDXL Base + Refiner, Hand Detailer, Face Detailer, Upscalers, and ReVision [x-post].

AP Workflow 6.0 for ComfyUI, now featuring the SUPIR next-gen upscaler, IPAdapter Plus v2 nodes, a brand-new Prompt Enricher, DALL-E 3 image generation, an advanced XYZ Plot, two types of automatic image selectors, and the capability to automatically generate captions for an image directory.

Comfy is good for set workflows but bad for iterating quickly; jumping from one thing to another takes reloading or re-doing everything. I guess once you have drawn all your workflows, it is faster.

If the term "workflow" is something that has only ever been used to describe ComfyUI's node graphs, I suggest just calling them "node graphs" or simply "nodes". If the term has been used to describe node graphs for a long time, then that's unfortunate, because now it has become entrenched. Just my two cents.

Bring back old backgrounds! I finally found a workflow that does good 3440x1440 generations in a single go, got it working with IP-Adapter, and realised I could recreate some of my favourite backgrounds from the past 20 years. I like to create images like that one: end result. I've been looking for a ComfyUI workflow that can do this, and I've tried creating one myself. Looking forward to seeing your workflow.

Breakdown of workflow content: before inpainting, the workflow will blow the masked area up to 1024x1024 to get a nice resolution, then resize it before pasting it back. The blurred latent mask does its best to prevent ugly seams. Forgot to mention: you will have to download this inpaint model (diffusers/stable-diffusion-xl-1.0-inpainting-0.1 at main, huggingface.co) and put it in your ComfyUI "unet" folder, which can be found in the models folder.
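For anyone who wants to see the crop-upscale-paste idea outside the node graph, here is a minimal Python/Pillow sketch. `inpaint_fn` is a placeholder for your own model call, and all names here are illustrative, not ComfyUI's actual node code:

```python
# Minimal sketch of "blow the masked area up to 1024x1024, inpaint, resize,
# paste back". Assumes `image` is an RGB PIL image and `mask` is a non-empty
# "L"-mode mask (white where to inpaint).
from PIL import Image

def inpaint_masked_region(image, mask, inpaint_fn, work_size=1024, pad=32):
    # Bounding box of the masked pixels, padded so the model sees context.
    left, top, right, bottom = mask.getbbox()
    left, top = max(left - pad, 0), max(top - pad, 0)
    right, bottom = min(right + pad, image.width), min(bottom + pad, image.height)

    # Crop the region and upscale it to the working resolution.
    region = image.crop((left, top, right, bottom))
    region_mask = mask.crop((left, top, right, bottom))
    big = region.resize((work_size, work_size), Image.LANCZOS)
    big_mask = region_mask.resize((work_size, work_size), Image.NEAREST)

    # Inpaint at high resolution, then resize back to the original crop size.
    inpainted = inpaint_fn(big, big_mask)
    small = inpainted.resize(region.size, Image.LANCZOS)

    # Paste the result back through the mask so untouched pixels survive.
    out = image.copy()
    out.paste(small, (left, top), region_mask)
    return out
```

The mask used at the paste step is where a little blur helps hide the seam, which is exactly what the blurred latent mask mentioned above is doing.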
I've tried downloading a number of workflows from ComfyWorkflows, and inevitably I get a "such-and-such node is missing" error. Go into the ComfyUI Manager and try to install the missing nodes (an important note if you have missing nodes like ComfyUI-post-processing-nodes and Crystools), or try to update the missing nodes from the Manager. Still broken. Still no fix. Go to the Custom Node Type Glossary and clone the specified node into the custom-node folder. I also had this problem in the beginning. Then I pressed Fetch Updates and Update ComfyUI, the graph loaded as it should, and those two items disappeared. Problem solved.

I played for a few days with ComfyUI and SDXL 1.0, did some experiments, and came up with a reasonably simple, yet pretty flexible and powerful, workflow I use myself: MoonRide workflow v1. My primary goal was to fully utilise the two-stage architecture of SDXL, so I have the base and refiner models working as stages in latent space. ComfyUI SDXL simple workflow released.

On my system with a 2070S (8 GB VRAM), a Ryzen 3600, and 32 GB of 3200 MHz RAM, the base generation for a single image took 28 seconds, and it then took an additional 2 minutes and 32 seconds to refine. The base generation is quite a bit faster than the refining. I'm finding the refining … Really nice results; will share this with my friends who also work in Comfy.

SDXL has two text encoders on its base and a specialty text encoder on its refiner. I recommend you do not use the same text encoders as 1.5. While the normal text encoders are not "bad", you can get better results using the special encoders. EDIT: For example, this workflow shows the use of the other prompt windows. This probably isn't the completely recommended workflow, though, as it has POS_G and NEG_G prompt windows but none for POS_L, POS_R, NEG_L, and NEG_R, which are part of SDXL's trained prompting format. But for a base to start at, it'll work.

Hi, I'm looking for input and suggestions on how I can improve my output image results, using tips and tricks as well as various workflow setups. With a higher CFG it seems to have decent results. This uses more steps, has less coherence, and also skips several important factors in between. Using the settings I got from the thread on the main SD sub. Thanks for sharing this setup.

It is about multi-prompting, multi-pass workflows, and basically how to set up a really good workflow for pushing your own projects to the next level. This is the concept: generate your usual 1024x1024 image. I set up a workflow for a first pass and a highres pass. Thanks for the video; here is a tip: at the start of the video, show an example of why we should watch it, for instance 1-pass vs 3-pass. If you want to stack LoRAs, you have to keep adding nodes.

My current workflow involves going back and forth between a regional sampler, an upscaler, and Krita (for inpainting to fix errors and fill in the details) to refine the image. The best result I have gotten so far is from the regional sampler in the Impact Pack, but it doesn't support SDE or UniPC samplers, unfortunately.

I notice you have a lot of math spaghetti at the top left. I used to find this quite distracting, and eventually switched to doing stuff like this (aspect-ratio calculations, value clamping, etc.) in the ASTERR Python evaluator node.
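As an illustration of the kind of math that tends to become node spaghetti, here is the sort of plain-Python snippet you could move into a Python evaluator node instead. The function name and defaults are made up for the example, not ASTERR's actual interface:

```python
# Aspect-ratio math that often ends up as a tangle of math nodes: pick a
# width/height for a target megapixel budget and aspect ratio, snapped to
# multiples of 8 and clamped to a sane range.
def latent_dimensions(aspect_w, aspect_h, megapixels=1.0, snap=8,
                      min_side=512, max_side=2048):
    target_pixels = megapixels * 1024 * 1024
    ratio = aspect_w / aspect_h
    height = (target_pixels / ratio) ** 0.5
    width = height * ratio

    def clamp_and_snap(v):
        v = max(min_side, min(max_side, v))
        return int(round(v / snap) * snap)

    return clamp_and_snap(width), clamp_and_snap(height)

print(latent_dimensions(16, 9))  # -> (1368, 768)
```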
This model is a T5 77M-parameter (small and fast) model custom-trained on a prompt-expansion dataset. 🌟 Features: seamlessly integrate the SuperPrompter node into your ComfyUI workflows, and generate text with various control parameters:

- `prompt`: Provide a starting prompt for the text generation.
- `max_new_tokens`: Set the maximum number of new tokens to generate.

IPAdapter for all. Step one: hook up IPAdapter x2. Step two: set one to compositional weight and one to style weight. Step three: feed your source into the compositional one and your style into the style one. Search for IPAdapter Plus Face, IPAdapter Full Face, and IPAdapter FaceID; they capture the whole aspect of the face, including head shape and hair.

Hey guys, I've generated a face with RunDiffusion, and I also have different images of girls posing (took them from IG). I want to use the face I generated and the different poses from the IG models to generate more images of my own model. Does anyone know of a "head swap" workflow, not just the face but the entire head? You can take a look at the HS-Diffusion paper. The idea is very reasonable and easy to reproduce.

Help me create two workflows! Hello, for my first workflow, I would like a workflow that allows me to feed in a portrait photo of this woman and get an output of a randomly generated body to match, with the identical face included. For my second workflow, I would like to combine the same portrait but with this body.

Help with a Facedetailer workflow (SDXL): following the first pass, I utilize Facedetailer to enhance faces (similar to Adetailer for A1111). It seems I may have made a mistake in my setup, as the results for the faces after the detailer are not … I've been messing with this for the last few days and cannot for the life of me get the Detailer panel to work.

This workflow allows you to change clothes or objects in an existing image. If you know the required style, you can work with the IP-Adapter and upload a reference image. And if you want to get new ideas or directions for design, you can create a large number of variations in a process that is mostly automatic. Press go 😉.

ComfyUI for a product-images workflow: I would like to use ComfyUI to make marketing images for my product, which is quite high-tech, and I have the images from the photo studio. I would like to include those images into …

Your best bet is to set up an external queue system and spin up ComfyUI instances in the cloud when requests are added to the external queue. Scaling and GPUs can get overwhelmingly expensive, so you'll have to add additional safeguards. Then you generate an accessible unique Comfy URL to connect a websocket to, and pass prompts via the API. The API workflows are not in the same format as an image workflow: you create the workflow in ComfyUI and use the "Save (API Format)" button under the Save button. The workflow in the example is passed to the script as an inline string, but it's better (and more flexible) to have your Python script load it from a file instead.
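To make the "load it from a file" suggestion concrete, here is a small script against ComfyUI's HTTP API, assuming a default local install on 127.0.0.1:8188 and a graph exported with "Save (API Format)" to workflow_api.json (the filename is just an example):

```python
# Queue an API-format workflow on a local ComfyUI server. The JSON must come
# from "Save (API Format)", not from a normally saved workflow.
import json
import uuid
import urllib.request

SERVER = "http://127.0.0.1:8188"

def queue_prompt(workflow_path):
    with open(workflow_path, "r", encoding="utf-8") as f:
        workflow = json.load(f)

    payload = {
        "prompt": workflow,             # the node graph in API format
        "client_id": str(uuid.uuid4()), # lets you match websocket events later
    }
    req = urllib.request.Request(
        f"{SERVER}/prompt",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # contains the prompt_id of the job

if __name__ == "__main__":
    print(queue_prompt("workflow_api.json"))
```

The returned prompt_id is what you would then track over the websocket endpoint (ws://127.0.0.1:8188/ws?clientId=...) mentioned above to know when the job finishes.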
ComfyUI txt2video with Stable Video Diffusion: if you want to use Stable Video Diffusion in ComfyUI, you should check out this txt2video workflow that lets you create a video from text. The workflow first generates an image from your given prompts and then uses that image to create a video. You can input one or more images for image-to-video (for 12 GB of VRAM, the max is about 720p resolution).

This is a normal SVD workflow, and my objective is to make animated short films, so I am learning ComfyUI. SVD anime workflow, need help. In a few weeks I have understood a little about images and videos and want to work on the quality of my generations; I want my workflows to be under 19 GB of session storage, so please guide me as to what …

AnimateLCM-I2V is also extremely useful for maintaining coherence at higher resolutions (with ControlNet and SD LoRAs active, I could easily upscale from a 512x512 source to 1024x1024 in a single pass). People want to find workflows that use AnimateDiff (and AnimateDiff Evolved!) to make animation, do txt2vid, vid2vid, animated ControlNet, IP-Adapter, etc. I have a very simple workflow that uses sparse-control RGB.

Txt/Img2Vid + Upscale/Interpolation: this is a very nicely refined workflow by Kaïros featuring upscaling, interpolation, etc. Vid2QR2Vid: you can see another powerful and creative use of ControlNet by Fictiverse here.

SV3D ComfyUI workflow: how to get it working; lots of pieces to combine with other workflows. I wonder what the point of SV3D is at the moment if it can't export to a 3D model.

How to use: 1/ Split your video into frames and reduce them to the desired FPS (I like going for a rate of about 12 FPS). 2/ Run the step-1 workflow ONCE; all you need to change is where the original frames are and the dimensions of the output that you wish to have.
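If you'd rather script step 1 than split frames by hand, here is one way to do it, a sketch using OpenCV; the paths and the 12 FPS target are just the example values from the tip above:

```python
# Extract frames from a video at a reduced rate (e.g. ~12 FPS) for a
# vid2vid workflow. Requires opencv-python: pip install opencv-python
import os
import cv2

def extract_frames(video_path, out_dir, target_fps=12):
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    src_fps = cap.get(cv2.CAP_PROP_FPS) or target_fps
    step = max(1, round(src_fps / target_fps))  # keep every Nth frame

    kept = index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{kept:05d}.png"), frame)
            kept += 1
        index += 1
    cap.release()
    return kept

print(extract_frames("input.mp4", "frames"), "frames written")
```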
Noticed everyone was getting on the ComfyUI train lately, but sharing the workflows was kind of a hassle; most posted them on Pastebin, and it was hard to get a quick view of a workflow to get a sense of what was used. Hi community, I wanted to finally share with you the results of a month of hard work: I created a platform that will enable you to share your ComfyUI workflows (for free) and run them directly on the cloud (for a tiny sum). I had some time to burn this weekend and the domain was available for $3, lol. The advantage this platform has is its built-in community, ease of use, and the ability to experiment with Stable Diffusion. Upload multiple output images/videos per workflow; upload and manage new versions of your workflows. We're also announcing a Creator program: we're dedicating 10% of our revenue from runnable workflows on our site to our best creators: https://tally.so/r/nPOQRd. Let us know what you think!

ComfyICU: imgur for sharing ComfyUI workflows. ComfyWorkflows. Will upload the workflow to OpenArt soon. People want to find workflows that are based on SDXL or SD 1.5 (or maybe SD 2.1, if people still use that?).

Awesome for replicating others' gens, though, or getting a repeatable process. Simply load or drag the PNG into ComfyUI and it will load the workflow. Saving/loading workflows as JSON files; loading full workflows (with seeds) from generated PNG files. If you really want the JSON, you can save it after loading the PNG into ComfyUI.

Yup, all images generated in the main ComfyUI frontend have the workflow embedded into the image like that (right now, anything that uses the ComfyUI API doesn't have that, though). It makes it really easy if you want to generate an image again with a small tweak, or just check how you generated something. Also, embedding the full workflow into images is so nice coming from A1111, where half the extensions either don't embed their params, or don't reuse those params when …
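For the curious, the embedded data lives in the PNG's text chunks; ComfyUI writes the UI-format graph under a "workflow" key and the API-format graph under "prompt". Here is a small sketch to pull the JSON out without opening ComfyUI, assuming a ComfyUI-generated PNG:

```python
# Read the workflow that ComfyUI embeds in a generated PNG's text chunks.
# Images made via the ComfyUI API may have neither key (see above).
import json
import sys
from PIL import Image

def extract_workflow(png_path):
    info = Image.open(png_path).info  # PNG tEXt/iTXt chunks land here
    for key in ("workflow", "prompt"):
        if key in info:
            return key, json.loads(info[key])
    return None, None

if __name__ == "__main__":
    key, graph = extract_workflow(sys.argv[1])
    if graph is None:
        print("No embedded workflow found.")
    else:
        print(f"Found {key!r} with {len(graph)} top-level entries")
        print(json.dumps(graph, indent=2)[:500])  # preview
```

This is also a quick way to get the JSON out of a shared PNG when you don't want to drag it into the UI first.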