Flux Tools: New Outpainting, Redux IP Adapter Solution, and ControlNet LoRAs?

Flux Experimental Art by Daniel Sandner
"Unsettled" by Daniel Sandner, ©2024

Flux Tools is a newly published set of tools for Flux1, addressing some missing features of this otherwise very useful model and adding interesting options to Flux workflows. It consists of working Depth and Canny models (also available as LoRA versions), a Redux model resembling an IP Adapter (allowing you to combine multiple images and create input variations), and an inpainting/outpainting (Fill) model. I have experimented with various settings and will share some tips in this article.

Redux: Create Image Variations or Combine Multiple Images 

The FLUX.1 Redux adapter model works a little like an IP Adapter as we know it from other models (it seems it was developed for restyling and rescaling via API for the advanced Flux Pro models). When you input an image without a prompt, it outputs a variation close to the original. You can also use it to create a very similar image in a different aspect ratio this way.

Flux tools Redux altering the input image in Flux1 dev using ComfyUI workflow
Input | Without prompts with different seeds | Different format | Prompt altering the scene (using conditioning combine)

You don't need to input any prompt for this adapter to work. However, if you do want to use a prompt for image changes and achieve a measurable effect, you'll need to use Conditioning Combine or a similar solution to combine the Redux output with a prompt. Check redux-dev-useprompt.json in FLUX-TOOLS for an example.

Flux Tools Redux combining multiple photos
Flux Tools Redux in Flux1 (dev): Combine Multiple Images
Flux Tools Redux in Flux1 (schnell): Combine Multiple Images
Flux Tools Redux image combining works in the Schnell version too, albeit less precisely

To change the final image when using a prompt, you need to use Conditioning Combine and adjust the weights of important tokens. By raising a token's weight to 1.2 (and also adding Flux Guidance), you increase its influence on the generated image (the Redux conditioning is quite strong, so it tends to overpower lower-weight tokens).

  • Redux works well in Flux Schnell for creating variations, but when combining multiple images it tends to merge the components into a single composition without clear compositional logic, and the result is often a garbled collage. I recommend using the Dev model for this (check the Flux Tools workflows).
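
The weighting idea above can be illustrated with a toy sketch. This is not ComfyUI's actual implementation of Conditioning Combine or the `(token:1.2)` syntax; it is a simplified, pure-Python stand-in showing why boosting a token's weight helps it compete with the strong Redux conditioning.

```python
# Illustrative sketch only -- simplified stand-ins for token weighting
# and a Conditioning Combine node, not ComfyUI internals.

def weight_token(embedding, weight):
    """Scale a token embedding, as the '(token:1.2)' syntax does."""
    return [weight * x for x in embedding]

def combine_conditioning(cond_a, cond_b):
    """Element-wise average of two conditioning vectors (a very
    simplified stand-in for a Conditioning Combine node)."""
    return [(a + b) / 2 for a, b in zip(cond_a, cond_b)]

# Toy embeddings: strong Redux image conditioning vs. a weaker prompt.
redux_cond = [0.9, 0.8, 0.7]
prompt_cond = weight_token([0.2, 0.4, 0.6], 1.2)  # boost prompt tokens

combined = combine_conditioning(redux_cond, prompt_cond)
```

Without the 1.2 boost, the prompt terms would contribute even less to the combined conditioning, which is why unweighted prompts often have no measurable effect next to Redux.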

Depth and Canny Control by LoRAs

Finally, working models for depth and Canny control are here. I am using the LoRA versions in these examples; check the workflows in the References.

Structural conditioning (the term used in Flux Tools) leverages edge or depth detection to maintain control during image generation. You can make prompt-guided edits while keeping the core composition of the input image. This is particularly effective for retexturing images, but it can sometimes even replace a pose ControlNet, if the control image allows some flexibility (try less advanced Canny or depth preprocessors). I am not sure how similar or different this technique is to ControlNet, but the results are indeed very good.
Full model weights are available under the Flux dev license.
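
To make "structural conditioning" concrete, here is a minimal edge-map sketch in pure Python. A real workflow uses a proper Canny preprocessor node; this simple Sobel-style detector just shows the kind of control image the Depth/Canny models consume (a map of where structure is, not what it looks like).

```python
# Minimal Sobel-style edge detector -- an illustrative stand-in for a
# Canny preprocessor, operating on a 2D grayscale image (list of lists).

def sobel_edges(gray, threshold=2):
    """Return a binary edge map: 1 where the gradient is strong."""
    h, w = len(gray), len(gray[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (gray[y-1][x+1] + 2*gray[y][x+1] + gray[y+1][x+1]
                  - gray[y-1][x-1] - 2*gray[y][x-1] - gray[y+1][x-1])
            gy = (gray[y+1][x-1] + 2*gray[y+1][x] + gray[y+1][x+1]
                  - gray[y-1][x-1] - 2*gray[y-1][x] - gray[y-1][x+1])
            if abs(gx) + abs(gy) >= threshold:
                edges[y][x] = 1
    return edges

# Toy image: a bright square on a dark background.
img = [[1 if 2 <= x <= 5 and 2 <= y <= 5 else 0 for x in range(8)]
       for y in range(8)]
edge_map = sobel_edges(img)
```

Note that only the square's outline survives in `edge_map`; the flat interior produces no edges. That is exactly the flexibility mentioned above: the model keeps the outline while the prompt is free to repaint everything inside it.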

Flux Tools Depth Controlnet Solution
Flux Tools Depth Control (check the file in resources)
Flux Tools Canny Controlnet Solution in ComfyUI
Flux Tools Canny Control (simple ComfyUI canny preprocessor)

Outpainting and Inpainting (Flux1 Fill) 

A GPU with at least 24 GB of VRAM is recommended (until a quantized version appears). The workflow is quite simple: you create image padding and a simple prompt to reconstruct the new part of the image. The outpainting and inpainting workflows from my experiments are in the FLUX-TOOLS folder on GitHub.
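
The padding step can be sketched as simple geometry. This assumes the usual convention (the padded canvas keeps the original pixels and the mask marks the new border region to be filled); in practice ComfyUI's pad-for-outpainting node does the pixel work for you.

```python
# Sketch of the image-padding geometry for outpainting. Assumption:
# mask = 1 where the Fill model must generate pixels, 0 over the
# original image -- the convention used by ComfyUI's padding node.

def outpaint_geometry(width, height, left=0, top=0, right=0, bottom=0):
    """Return the padded canvas size and the box of the original image."""
    new_w = width + left + right
    new_h = height + top + bottom
    original_box = (left, top, left + width, top + height)  # x0, y0, x1, y1
    return new_w, new_h, original_box

def build_mask(new_w, new_h, original_box):
    """Binary mask over the padded canvas: 1 = fill, 0 = keep."""
    x0, y0, x1, y1 = original_box
    return [[0 if (x0 <= x < x1 and y0 <= y < y1) else 1
             for x in range(new_w)] for y in range(new_h)]

# Extend a 1024x768 image by 256 px on each side:
new_w, new_h, box = outpaint_geometry(1024, 768, left=256, right=256)
mask = build_mask(new_w, new_h, box)
```

The prompt then only has to describe what belongs in the masked border, while the untouched center anchors the composition.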

FluxTools (Fill) model for outpainting example workflow
Outpainting with Flux Tools Fill model: Output, Padded input

Good news: you can use the Turbo Alpha LoRA for faster rendering with the Flux1 Fill model.

  • When outpainting, the Fill model often fails to create a seamless image. If that happens, try a different seed.

How to Inpaint

  1. Load the workflow fluxtools-inpainting-turbo.json 
  2. In the ComfyUI workflow, right-click the "Load Image" node (with your source image) 
  3. Choose "Open in Mask Editor"
  4. Paint the mask and click "Save to Node" when finished
  5. This mask will be used in the workflow for inpainting
  6. Write a prompt and render with the Queue button
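
The steps above can also be scripted: ComfyUI exposes an HTTP endpoint (POST /prompt) that accepts an API-format workflow JSON. The sketch below only builds the request payload without sending it; the node id "6" is a hypothetical example, so look up the actual id of your CLIPTextEncode node in your own exported workflow.

```python
# Hedged sketch: build (but do not send) a payload for ComfyUI's
# POST /prompt endpoint. The node id is an assumption -- check your
# own workflow JSON for the real id of the prompt node.

import json

def build_queue_payload(workflow, prompt_text, prompt_node_id="6"):
    """Inject prompt text into a CLIPTextEncode node and wrap the
    workflow the way POST /prompt expects."""
    workflow = dict(workflow)  # shallow copy for this sketch
    if prompt_node_id in workflow:
        workflow[prompt_node_id]["inputs"]["text"] = prompt_text
    return {"prompt": workflow}

# Toy workflow standing in for an exported inpainting workflow
# (normally you would json.load() the file from step 1):
wf = {"6": {"class_type": "CLIPTextEncode", "inputs": {"text": ""}}}
payload = build_queue_payload(wf, "a weathered stone wall")
body = json.dumps(payload)
```

Queuing this way reproduces step 6 programmatically; the mask from steps 3-5 still travels inside the workflow itself.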

Conclusion

Flux Tools are a long-awaited, yet unexpected addition to the Flux toolbox. They open up new possibilities for employing Flux workflows in image editing, adjustment, and creation. Note that the solutions work best in Flux dev, although you can also use them with Schnell for some applications (Redux, Depth, Canny). This addition could also shorten the wait for a proper new IP Adapter for Flux, if one ever arrives (you can read about some features and issues of the old Flux IP Adapter here). It would be great if other "ControlNet" (or structural conditioning) models were available as LoRAs, especially a solution for segmentation. With Depth and Canny structural conditioning, we are moving closer to full 3D scene rendering or retexturing.

References and Resources:
