Best upscale model for ComfyUI (a Reddit roundup).

Upscale by 0.5 after a 4x model to get a 1024x1024 final image (512 * 4 * 0.5 = 1024), with a denoise of roughly 0.15-0.25. After generating my images I usually do Hires fix. You can construct an image generation workflow by chaining different blocks (called nodes) together. I love to go with an SDXL model for the initial image and with a good 1.5 model afterwards.

After borrowing many ideas, and learning ComfyUI along the way: add your models, VAE, LoRAs etc. to the corresponding Comfy folders, as discussed in the ComfyUI manual installation notes. If you want a better grounding in making your own ComfyUI systems, consider checking out my tutorials.

I can understand that with Ultimate Upscale one could add more details through adding steps/noise or whatever else you'd like to tweak on the node. ComfyUI's "upscale with model" node doesn't have an output size option like other upscale nodes, so one has to manually downscale the image to the appropriate size. An alternative method is: make sure you are using the KSampler (Efficient) version, or another sampler node that has the "sampler state" setting, for the first pass (low resolution) sample.

Welcome to the unofficial ComfyUI subreddit.

The restore functionality that adds detail doesn't work well with lightning/turbo models.

New to ComfyUI, so not an expert. I am curious both which nodes are the best for this, and which models. I have used: Checkpoint: RevAnimated; upscaler: 4x_NMKD-Siax_200k.pth. "Latent upscale" is an operation in latent space, and I don't know any way to use the model mentioned above in latent space. Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard. I haven't been able to replicate this in Comfy. The resolution is okay, but if possible I would like to get something better. And you may need to do some fiddling to get certain models to work, but copying them over works if you are super duper lazy. I keep the denoise setting around 0.25. I would like to know or get some advice on how to do it properly to squeeze the maximum quality out of the model.
Best method to upscale faces after doing a faceswap with ReActor: it's a 128px model, so the output faces after faceswapping are blurry and low-res. I've so far achieved this with the Ultimate SD Upscale node and the 4x-Ultramix_restore upscale model.

So I'm happy to announce today: my tutorial and workflow are available.

"Upscaling with model" is an operation on normal images, and we can use a corresponding model such as 4x_NMKD-Siax_200k.safetensors, or the clip model (its name is simply model.safetensors). Upscale x1.5 ~ x2: no need for a model, this can be a cheap latent upscale. Sample again, denoise=0.5.

Jul 6, 2024 · What is ComfyUI? ComfyUI is a node-based GUI for Stable Diffusion. Now you can use the model also in ComfyUI! Workflow with an existing SDXL checkpoint patched on the fly to become an inpaint model. It takes only ~7.5GB VRAM, swapping the refiner too; use the --medvram-sdxl flag when starting.

Point the install path in the Automatic1111 settings to the ComfyUI folder inside your ComfyUI install folder, which is probably something like comfyui_portable\ComfyUI or something like that. A pixel upscale using a model like UltraSharp is a bit better (and slower), but it'll still be fake detail when examined closely. Adding in Iterative Mixing KSampler from the early work on DemoFusion produces far more spatially consistent results, as shown in the second image. There is an example file as part of the install (extra_model_paths.yaml.example).

DirectML (AMD cards on Windows): `pip install torch-directml`, then you can launch ComfyUI with `python main.py --directml`.

I find upscaling useful, but as I often upscale to 6144x6144, Gigapixel has the batch speed and capacity to make 100+ upscales worthwhile. And when purely upscaling, the best upscaler is called LDSR. Image upscale is less detailed, but more faithful to the image you upscale. In the saved workflow it's at 4, with 10 steps (Turbo model), which is like a 60% denoise.
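The model-sharing setup mentioned above can be sketched roughly like this. ComfyUI ships an `extra_model_paths.yaml.example` you rename by removing `.example`; the `base_path` and exact subfolder names below are illustrative and should be checked against your own A1111 install and the shipped example file:

```yaml
# Sketch of an extra_model_paths.yaml "a111" section (assumed layout;
# adjust base_path to your actual stable-diffusion-webui directory).
a111:
    base_path: /path/to/stable-diffusion-webui/
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: |
        models/Lora
        models/LyCORIS
    upscale_models: |
        models/ESRGAN
        models/RealESRGAN
        models/SwinIR
    embeddings: embeddings
    controlnet: models/ControlNet
```

With this in place, ComfyUI reads checkpoints, LoRAs and upscale models straight out of the A1111 folders, so nothing needs to be copied over.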
Please share your tips, tricks, and workflows for using this software to create your AI art.

Model: base SD v1.5. But I spend more time just using my layout than I do editing it. Using text has its limitations in conveying your intentions to the AI model. Appreciate you just looking into it. I want to upscale my image with a model, and then select the final size of it. For SD 1.5 I'd go for Photon, RealisticVision or epiCRealism. Within ComfyUI, use the extra_model_paths.yaml file. Where a 2x upscale at 30 steps took me ~2 minutes, a 4x upscale took 15, and this is with tiling, so my VRAM usage was moderate in all cases. Warning: the workflow does not save images generated by the SDXL Base model. These same models are working in A1111, but I prefer the workflow of ComfyUI. I share many results and many ask how. Does anyone have any suggestions? Would it be better to do an iterative upscale?

Tried the llite custom nodes with lllite models and was impressed. I'm getting these messages when trying to load some models in ComfyUI. As evident by the name, this workflow is intended for Stable Diffusion 1.5. The downside is that it takes a very long time.

Download this first and put it into the folder inside ComfyUI called custom_nodes, then restart ComfyUI. You should see a new button on the left tab (the last one); click that, then click Missing Custom Nodes, and install the one listed. After you have installed it, restart ComfyUI once more and it should work. If you'd like to load a LoRA, you need to connect "MODEL" and "CLIP" to the node, and after that all the nodes that require these two wires should be connected with the ones from Load LoRA; the workflow should then work without any problems.

Hello fellow ComfyUI users, this is my workflow for testing different methods to improve image resolution. If you want to use RealESRGAN_x4plus_anime_6B, you need to work in pixel space and forget any latent upscale.
At around 0.25 denoise I get a good blending of the face without changing the image too much. Ty, I will try this. I usually use 4x-UltraSharp for realistic videos and 4x-AnimeSharp for anime videos.

There is no tiling in the default A1111 hires fix. I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of work). There are also "face detailer" workflows for faces specifically. For illustration/anime models you will want something smoother that would tend to look "airbrushed" or overly smoothed out for more realistic images; there are many options. For photo upscales, I'm a sucker for 1:1 matches, so I'm using Topaz.

'Cause I run SDXL-based models from the start and through 3 Ultimate Upscale nodes. Although the consistency of the images (especially regarding the colors) is not great, I have made it work in ComfyUI, with a 1.5 model for the diffusion after scaling. This is a very short animation I have made testing ComfyUI.

Jan 13, 2024 · TLDR: Both seem to do better and worse in different parts of the image, so potentially combining the best of both (Photoshop, seg/masking) can improve your upscales. Florence2 (large, not FT, in more_detailed_captioning mode) beats MoonDream v1 and v2 in out-of-the-box captioning. You can use it on any picture; you will need ComfyUI_UltimateSDUpscale. Always wanted to integrate one myself.

Messing around with upscale-by-model is pointless for hires fix. In A1111 with the ControlNet, I ran some tests this morning: generate a SD1.5 image and upscale it to 4x the original resolution (512x512 to 2048x2048) using Upscale with Model, Tile ControlNet, Tiled KSampler, Tiled VAE Decode and colour matching.
For comparison, in A1111 I drop the ReActor output image in the img2img tab, keep the same latent size, use a tile ControlNet model, and choose the Ultimate SD Upscale script to scale it up. Superscale is the other general upscaler I use a lot. For the best results, diffuse again with a low denoise, tiled, or via Ultimate Upscale (without scaling!). The last one takes time, I must admit, but it runs well and allows me to generate good-quality images (I managed to find a seams-fix settings config that works well for the last one, hence the long processing).

Super late here, but is this still the case? I've got CCSR & TTPlanet. SD 1.5 is in a mature state where almost all the models and loras are based on it, so you get better quality and speed with it. See the workflow for more info.

Jul 28, 2024 · I've tried to work out a perfect match between checkpoint models and samplers. LOL, yeah, I push the denoising on Ultimate Upscale too, quite often, just saying "I'll fix it in Photoshop". Edit: you could try the workflow to see it for yourself. Hi! I've been experimenting and trying some workflows/tutorials, but I don't seem to be getting good results with hires fix. Specifically, the padded image is sent to the ControlNet as pixels (the "image" input), and the padded image is also sent, VAE-encoded, to the sampler as the latent image. AnimateDiff models, upscalers and VAEs for 9 hours non-stop straight.

Jan 8, 2024 · Learn how to upscale images using ComfyUI and the 4x-UltraSharp model for crystal-clear enhancements. I believe the problem comes from the interaction between the way Comfy's memory management loads checkpoint models (note that this issue still happens if smart memory is disabled) and Ultimate Upscale bypassing torch's garbage collection, because it's basically a janky wrapper for an A1111 extension.
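The tiling that Ultimate SD Upscale and the Tiled KSampler rely on can be illustrated with a minimal sketch: cover the image with overlapping tiles, process each one, then stitch them back together. This is a toy geometry-only version in plain Python; the tile size and overlap are arbitrary example values, and the real nodes also feather the seams between tiles:

```python
def tile_boxes(width, height, tile=512, overlap=64):
    """Return (x, y, w, h) boxes of overlapping tiles covering an image.

    Tiles advance by (tile - overlap) pixels; the last row/column is
    clamped so no tile runs past the image edge.
    """
    step = tile - overlap
    boxes = []
    for y in range(0, height, step):
        y0 = min(y, max(height - tile, 0))   # clamp bottom row
        for x in range(0, width, step):
            x0 = min(x, max(width - tile, 0))  # clamp right column
            boxes.append((x0, y0, tile, tile))
            if x0 == width - tile:  # reached the right edge
                break
        if y0 == height - tile:  # reached the bottom edge
            break
    return boxes

# A 2048x2048 image with 512px tiles and 64px overlap needs a 5x5 grid.
print(len(tile_boxes(2048, 2048)))  # 25
```

Each box would be diffused separately at low denoise and blended back, which is why VRAM stays moderate even for a 4x upscale.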
Tip: The latest version of ComfyUI is prone to excessive graphics memory usage when using multiple FLUX LoRA models, and this issue is not related to the size of the LoRA models.

This method consists of a few steps: decode the samples into an image, upscale the image using an upscaling model, encode the image back into the latent space, and perform the sampler pass.

Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI for Windows, RunPod & Kaggle" tutorial and web app.

It's not necessarily an inferior model. Hi everyone, I've been using SD/ComfyUI for a few weeks now and I find myself overwhelmed with the number of ways to do upscaling. SD XL 1.0 Alpha + SD XL Refiner 1.0. Well sure, if you gotta edit your layout, those organic curves are helpful.

For SD1.5 I usually use two of my workflows. From what I've generated so far, the model upscale edges slightly better than the Ultimate Upscale. Upscale x1.5 ~ x2. Also, both have a denoise value that drastically changes the result. Some commonly used blocks are Loading a Checkpoint Model, entering a prompt, specifying a sampler, etc.

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs and with fast renders (10 minutes on a laptop RTX 3060). FWIW, I was using it WITH the PatchModelAddDownscale node to generate with RV 5.1. I'm trying to find a way of upscaling the SD video up from its 1024x576.
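The decode → model upscale → re-encode → resample loop described above is mostly size bookkeeping. A sketch with stand-in numbers; the 8x VAE scale factor and the 4x model scale are assumptions about typical SD-family models, and the function name is hypothetical:

```python
VAE_FACTOR = 8    # SD-style VAEs decode 1 latent pixel -> 8 image pixels (assumed)
MODEL_SCALE = 4   # e.g. a 4x ESRGAN-family upscale model (assumed)

def second_pass_latent_size(latent_px, downscale_by=0.5):
    """Latent resolution fed to the second sampler after a pixel-space upscale."""
    image_px = latent_px * VAE_FACTOR          # VAE Decode
    upscaled = image_px * MODEL_SCALE          # Upscale Image (using Model)
    resized = int(upscaled * downscale_by)     # fractional "Upscale Image By"
    return resized // VAE_FACTOR               # VAE Encode back to latent space

# 64x64 latent -> 512px image -> 2048px after the 4x model
# -> 1024px after the 0.5 resize -> 128x128 latent for the second pass
print(second_pass_latent_size(64))  # 128
```

Skipping the fractional resize (`downscale_by=1.0`) quadruples the latent area of the second pass instead of doubling it, which is where most of the extra time and VRAM goes.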
It's possible that MoonDream is competitive if the user spends a lot of time crafting the perfect prompt, but if the prompt simply is "Caption the image" or "Describe the image", Florence2 wins. Fooocus came up with a way that delivers pretty convincing results.

So in those other UIs I can use my favorite upscaler (like NMKD's 4x Superscalers) but I'm not forced to have them only multiply by 4x. I found a tile model but could not figure it out, as lllite seems to require the input image to match the output, so I'm unsure how it works for scaling with tile. Honestly, you can probably just swap out the model and put in the turbo scheduler; I don't think loras are working properly yet, but you can feed the images into a proper SDXL model to touch up during generation (slower, and tbh it doesn't save time over just using a normal SDXL model to begin with), or generate a large amount of stuff to pick from.

Best aesthetic scorer custom node suite for ComfyUI? I'm working on the upcoming AP Workflow 8.0. It only generates its preview. With LCM for 12 samples at 768x1152, then using a 2x image upscale model, I'm consistently getting the best skin and hair details I've ever seen. I'm using SIAX models, RealESRGAN or Foolhardy depending on the need, when it needs to "go fast" or as an intermediary step to complete with something like Zeroscope. The realistic model that worked the best for me is JuggernautXL; even the base 1024x1024 images were coming out nicely. The only approach I've seen so far is using the Hires fix node, where its latent input comes from AI upscale > downscale image nodes. ControlNet, on the other hand, conveys it in the form of images.
The hires script is overriding the KSampler's denoise, so you're actually using 0.56. This is what I have so far (using the custom nodes to reduce the visual clutter). That's because of the model upscale. From the ComfyUI_examples, there are two different 2-pass (hires fix) methods: one is latent scaling, one is non-latent scaling. You just have to use the "upscale by" node with the bicubic method and a fractional value (0.5 if you want to divide by 2) after upscaling by a model.

Feb 1, 2024 · The first one on the list is SD1.5 Template Workflows for ComfyUI. For the samplers I've used dpmpp_2a (as this works with the Turbo model) but unsample with dpmpp_2m; for me this gives the best results. However, due to the more stringent requirements, while it can generate the intended images, it should be used carefully, as conflicts between the interpretation of the AI model and ControlNet's enforcement can lead to a degradation in quality.

r/StableDiffusion: finally, AUTOMATIC1111 has fixed the high VRAM issue in pre-release version 1.6.0-RC. ReActor has built-in CodeFormer and GFPGAN, but all the advice I've read said to avoid them.

Jan 5, 2024 · Click on Install Models in the ComfyUI Manager menu. A step-by-step guide to mastering image quality. All images are generated using both the SDXL Base model and the Refiner model, each automatically configured to perform a certain number of diffusion steps according to the "Base/Refiner Step Ratio" formula defined in the dedicated widget. For example, I can load an image, select a model (4xUltraSharp, for example), and select the final resolution (from 1024 to 1500, for example). My guess is you downloaded a workflow from somewhere, but the person who created that workflow has changed the filename of the upscale model, and that's why your ComfyUI can't find it. Even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts, like the face in the bottom right instead of a teddy bear.
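The fractional "upscale by" rule of thumb above is just the desired overall scale divided by the model's fixed scale. A tiny helper makes the arithmetic explicit; the node names come from the text, but the helper functions themselves are hypothetical:

```python
def upscale_by_factor(model_scale, desired_scale):
    """Factor to feed the 'Upscale Image By' node after a fixed-scale model."""
    return desired_scale / model_scale

def final_size(start_px, model_scale, desired_scale):
    # Model output first, then the fractional bicubic resize.
    return int(start_px * model_scale * upscale_by_factor(model_scale, desired_scale))

# The worked example from the text: 512px start, 4x model, 2x target overall.
print(upscale_by_factor(4, 2))   # 0.5
print(final_size(512, 4, 2))     # 512 * 4 * 0.5 = 1024
```

The same arithmetic covers other model scales, e.g. a 2x model with a 1.5x target needs a 0.75 factor.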
(Also, you may want to try an upscale model > latent upscale, but that's just my personal preference really.)

I want to add an Aesthetic Score Predictor function. The custom node suites I found so far either lack the actual score calculator, don't support anything but CUDA, or have very basic rankers (unable to process a batch, for example, or only…). I wanted a flexible way to get good inpaint results with any SDXL model. I liked the ability in MJ to choose an image from the batch and upscale just that image.

Hi, is there a tutorial on how to do a workflow with face restoration in ComfyUI? I downloaded the Impact Pack, but I really don't know how to go from…

…or 4x_foolhardy_Remacri.pth. I love ComfyUI, but it is difficult to set up a workflow to create animations as easily as it can be done in Automatic1111. ComfyUI upscaling is best for a dozen or so upscales; alas, it would take all week to do 100+. Note: remember to add your models, VAE, LoRAs etc. to the corresponding folders. In other UIs, one can upscale by any model (say, 4xSharp) and there is an additional control on how much that model will multiply (often a slider from 1 to 4 or more). Thanks. Look at this workflow: …

Even high-end graphics cards like the NVIDIA GeForce RTX 4090 are susceptible to similar memory issues. With this method, you can upscale the image while also preserving the style of the model. Curious if anyone knows the most modern, best ComfyUI solutions for these problems? Detailing/refining: keeping the same resolution but re-rendering it with a neural network to get a sharper, clearer image. There's "latent upscale by", but I don't want to upscale the latent image.
I think I have a reasonable workflow that allows you to test your prompts and settings and then "flip a switch", put in the image numbers you want to upscale, and rerun the workflow.

Downloading the model: it's best if you download the model using the ComfyUI Manager itself; it creates the correct path and doesn't create any mess. Good for depth and open pose; so far so good. SD1.5 Template Workflows for ComfyUI is a multi-purpose workflow that comes with three templates. Please keep posted images SFW.

I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a 2-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned. Latent upscale looks much more detailed, but gets rid of the detail of the original image. The same seed is probably not necessary and can cause bad artifacting via the "burn-in" problem when you stack same-seed samplers. I took a 2-4 month hiatus, basically when the OG upscale checkpoints like SUPIR came out, so I have no heckin' idea what the go-to is these days. It is a simple way to compare these methods; it is a bit messy, as I have no artistic cell in my body.

In A1111, you can do hires fix with any model upscaler that you want, like 4xUltraSharp, and you can also choose the dimensions and denoising strength. Is there a way to do this in ComfyUI? I know of the Hires Script node, but when you choose the upscaler model on that one, you can't choose the denoising strength or number of steps. A 0.56 denoise is quite high, giving it just enough freedom to totally screw up your image. This uses a SD1.5 model, and can be applied to Automatic easily. If you want actual detail in a reasonable amount of time, you'll need a 2nd pass with a 2nd sampler. And I'm sometimes too busy scrutinizing the city, landscape, object, vehicle or creature in which I'm trying to encourage insane detail to see what hallucinations it has manifested in the sky.
Though, from what someone else stated, it comes down to use case. I'd normally do hires fix, but since I'm using XL I skip that and go straight to img2img and do an SD Upscale by 2x. You don't need that many steps. From there you can use a 4x upscale model and run the sample again at low denoise if you want higher resolution. Because the upscale model of choice can only output a 4x image and they want 2x. ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own.

Edit: I changed models a couple of times, restarted Comfy a couple of times… and it started working again… OP: So, this morning, when I left for…

Just remove the .example extension. For SD 1.x, ssitu/ComfyUI_UltimateSDUpscale provides ComfyUI nodes for the Ultimate Stable Diffusion Upscale script by Coyote-A. This way it replicates the SD Upscale / Ultimate Upscale scripts from A1111. Upscaling: increasing the resolution and sharpness at the same time. Finding that there's an add-on for both straightening my lines and pinning the position of all nodes has been awesome! The following allows you to use the A1111 models etc. within ComfyUI, to prevent having to manage two installations or model files/loras etc.: the extra_model_paths.yaml file. Indeed SDXL is better, but it's not yet mature, as models are just appearing for it, and the same goes for loras. You could also try a standard checkpoint with, say, 13 and 30 steps. Search for "upscale" and click Install for the models you want. You should bookmark the upscaler DB; it's the best place to look: https://openmodeldb.info

This is the image I created using ComfyUI, utilizing DreamShaperXL 1.0. Hi! I'm an absolute layman/newbie, so please excuse any ignorance of mine. Here it is, the method I was searching for.
So your workflow should look like this: KSampler (1) -> VAE Decode -> Upscale Image (using Model) -> Upscale Image By (to downscale the 4x result to the desired size) -> VAE Encode -> KSampler (2).

Oct 21, 2023 · Non-latent upscale method. I upscaled it to a… For a dozen days, I've been working on a simple but efficient workflow for upscale. But for the other stuff: super small models and good results.

The idea is simple: use the refiner as a model for upscaling instead of using a 1.5 model. For SD 1.5 there is ControlNet inpaint, but so far nothing for SDXL. Which models to download: i. … Solution: click the node that calls the upscale model and pick one. This is the "latent chooser" node; it works but is slightly unreliable. If you use Iterative Upscale, it might be better to approach it by adding noise, using techniques like noise injection or an unsampler hook. It generates a SD1.5 image as the base.