| ▲ | FLUX.2: Frontier Visual Intelligence (bfl.ai) |
| 203 points by meetpateltech 7 hours ago | 63 comments |
| |
|
| ▲ | vunderba 4 hours ago | parent | next [-] |
| Updating the GenAI comparison website is starting to feel a bit Sisyphean with all the new models coming out lately, but the results are in for the Flux 2 Pro Editing model! https://genai-showdown.specr.net/image-editing It scored slightly higher than BFL's Kontext model, coming in around the middle of the pack at 6 / 12 points. I’ll also be introducing an additional numerical metric soon, so we can add more nuance to how we evaluate model quality as models continue to improve. If you're solely interested in seeing how Flux 2 Pro stacks up against Nano Banana Pro and BFL's earlier Kontext model, see here: https://genai-showdown.specr.net/image-editing?models=km,nbp... Note: BFL seems to support a more formalized JSON structure for more granular edits, so I'm wondering if accuracy would improve using it. |
|
| ▲ | spyder 6 hours ago | parent | prev | next [-] |
| Great, especially that they still have an open-weight variant of this new model too.
But what happened to their work on their unreleased SOTA video model? Did it stop being SOTA, did others get ahead and they folded the project, or what?
YT video about it: https://youtu.be/svIHNnM1Pa0?t=208
They even removed the page for it: https://bfl.ai/up-next/ |
| |
| ▲ | liuliu 5 hours ago | parent | next [-] | | As a startup, they pivoted and focused on image models (they are model providers, and image models often have more use cases than video models; not to mention, their dataset moat is bigger in images than in video). | |
| ▲ | andersa 4 hours ago | parent | prev | next [-] | | I heard a possibly unsubstantiated rumor that they had a major failed training run with the video model and canceled the project. | | |
| ▲ | qoez 3 hours ago | parent [-] | | Makes no sense, since they should have earlier checkpoints in the run they could restart from, and they should have regular checks to track whether the model has exploded, etc. | | |
| ▲ | embedding-shape 3 hours ago | parent [-] | | I didn't read "major failed training run" as in "the process crashed and we lost all data" but more like "After spending N weeks on training, we still didn't achieve our target(s)", which could be considered "failing" as well. |
|
| |
| ▲ | echelon 5 hours ago | parent | prev [-] | | Image models are more fundamentally important at this stage than video models. Almost all of the control in image-to-video comes through an image. And image models still need a lot of work and innovation. On a real physical movie set, think about all of the work that goes into setting the stage: the set dec, the makeup, the lighting, the framing, the blocking. All the work before calling "action". That's what image models do and must do in the starting frame. We can get way more influence out of manipulating images than video. There are lots of great video models and it's highly competitive. We still have so much need on the image side. When you do image-to-video, yes, you control evolution over time. But that direction actually has fewer degrees of freedom: you expect your actors or explosions to do certain reasonable things. Those 1024x1024xRGB pixels (or higher) have way more degrees of freedom. Image models have more control surface area; you exercise control over more parameters. In video, staying on rails or on certain evolutionary paths is fine. Mistakes can be not just okay, they can be welcome. It also makes sense that most of the work and iteration goes into generating images. It's a faster workflow with more immediate feedback and productivity. Video is expensive and takes much longer. Images are where the designer or director can influence more of the outcomes with rapidity. Image models still need way more stylistic control, pose control (not just ControlNets for limbs, but facial expressions, eyebrows, hair - everything), sets, props, consistent characters and locations and outfits. Text layout, fonts, kerning, logos, design elements, ... We still don't have models that look as good as Midjourney. Midjourney is 100x more beautiful than anything else - it's like a magazine photoshoot or dreamy Instagram feed. But it has the most lackluster and awful control of any model. It's a 2021-era model with 2030-level aesthetics. You can't place anything where you want it, you can't reuse elements, you can't have consistent sets... But it looks amazing. Flux looks like plastic, Imagen looks cartoony, and OpenAI GPT Image looks sepia and stuck in the 90's. These models need to compete on aesthetics and control and reproducibility. That's a lot of work. Video is a distraction from it. | | |
| ▲ | cubefox 4 hours ago | parent [-] | | Hot take: text-to-image models should be biased toward photorealism. This is because if I type in "a cat playing piano", I want to see something that looks like a 100% real cat playing a 100% real piano. Because, unless specified otherwise, a "cat" is trivially something that looks like an actual cat. And a real cat looks photorealistic. Not like a painting, or cartoon, or 3D render, or some fake almost-realistic-but-clearly-wrong "AI style". | |
| ▲ | 85392_school 3 hours ago | parent | next [-] | | FYI: photorealism is art that imitates photos, and I see the term misused a lot both in comments and prompts (where you'll actually get suboptimal results if you say "photorealism" instead of describing the camera that "shot" it!) | |
| ▲ | cubefox 2 hours ago | parent [-] | | I meant it here in the sense of "as indistinguishable from a photo as the model can make it". |
| |
| ▲ | minimaxir 4 hours ago | parent | prev [-] | | As Midjourney has demonstrated, the median user of AI image generation wants those aesthetic dreamy images. | | |
| ▲ | cubefox 2 hours ago | parent [-] | | I think it's more likely this is just a niche that Midjourney has occupied. | | |
| ▲ | loudmax an hour ago | parent [-] | | If Midjourney is a niche, then what is the broader market for AI image generation? Porn, obviously, though if you look at what's popular on civitai.com, a lot of it isn't photo-realistic. That might change once photo-realistic models are fully out of the uncanny valley. Presumably personalized advertising, but this isn't something we've seen much of yet. Maybe this is about to explode into the mainstream. Perhaps stock-photo-type images for generic non-personalized advertising? This seems like a market with a lot of reach, but not much depth. There might be demand for photos of family vacations that didn't actually happen, or removing erstwhile in-laws from family photos after a divorce. That all seems a bit creepy. I could see some useful applications in education, like "Draw a picture to help me understand the role of RNA." But those don't need to be photo-realistic. I'm sure people will come up with more and better uses for AI-generated images, but it's not obvious to me there will be more demand for images that are photo-realistic, rather than images that look like illustrations. | | |
| ▲ | echelon 39 minutes ago | parent | next [-] | | > If Midjourney is a niche, then what is the broader market for AI image generation? Midjourney is one aesthetically pleasing data point in a wide spectrum of possibilities and market solutions. The creator economy is huge and is outgrowing Hollywood and the Music Industry combined. There are all sorts of use cases in marketing, corporate, internal comms. There are weird new markets. A lot of people simply subscribe to Midjourney for "art therapy" (a legit term) and use it as a social media replacement. The giants are testing whether an infinite scroll of 100% AI content can beat human social media. Jury's out, but it might start to chip away at Instagram and TikTok. Corporate wants certain things. Disney wants to fine-tune. They're hiring companies like MoonValley to deliver tailored solutions. Adobe is building tools for agencies and designers. They are only starting to deliver competent models (see their conference videos), and they're going about it in a very different way. ChatGPT gets the social trend. Ghibli. Sora memes. > Porn, obviously, though if you look at what's popular on civitai.com, a lot of it isn't photo-realistic. Civitai is circling the drain. Even before the unethical and religious Visa blacklisting, the company was unable to steer itself to a Series A. Stable Diffusion and local models are still way too hard for 99.99% of people and will never see the same growth as a Midjourney or OpenAI that have zero sharp edges and that anyone in the world can use. I'm fairly certain an "OnlyFans but AI" will arise and make billions of dollars. But it has to be so easy that a trucker who doesn't know how to code can use it from their 11-year-old Toshiba. > Presumably personalized advertising, but this isn't something we've seen much of yet. Carvana pioneered this almost five years ago. I'll try to find the link. This isn't going to really take off though. It's creepy and people hate ads. Carvana's use case was clever and endearing though. | |
| ▲ | cubefox an hour ago | parent | prev [-] | | Well, as I said, if I type "cat", the most reasonable interpretation of that text string is a perfectly realistic cat. If I want an "illustration" I can type in "illustration of a cat". Though of course that's still quite unspecific. There are countless possible unrealistic styles for pictures (e.g. line art, manga, oil painting, vector art, etc.), and the reasonable thing is for users to specify which of these countless unrealistic styles they want, if they want one. If I just type in "cat" and the model gives me, say, a watercolor picture of a cat, it is highly improbable that this style happens to be what I actually wanted. |
|
|
|
|
|
|
|
| ▲ | jakozaur 5 hours ago | parent | prev | next [-] |
| FLUX.1 Pro Kontext was one of the best artistic models, and still great at instruction following compared to MidJourney V7. See the third comparison in my Nano Banana blog post:
https://quesma.com/blog/nano-banana-pro-intelligence-with-to... |
|
| ▲ | minimaxir 4 hours ago | parent | prev | next [-] |
| I just finished my Flux 2 testing (focusing on the Pro variant here: https://replicate.com/black-forest-labs/flux-2-pro). Overall, it's a tough sell to use Flux 2 over Nano Banana for the same use cases, and even setting Nano Banana aside, it's only an iterative improvement over Flux 1.1 Pro. Some notes:
- Running my nuanced Nano Banana prompts through Flux 2, Flux 2 definitely has better prompt adherence than Flux 1.1, but in all cases the image quality was worse/more obviously AI-generated.
- The prompting guide for Flux 2 (https://docs.bfl.ai/guides/prompting_guide_flux2) encourages JSON prompting by default, which is new for an image generation model, though it now has a text encoder that can support it (a rough sketch of the idea follows below). It also encourages hex color prompting, which I've verified works.
- Prompt upsampling is an option, and it's one that's pushed in the documentation (https://github.com/black-forest-labs/flux2/blob/main/docs/fl...). This does allow the model to deductively reason, e.g. if asked to generate an image of a Fibonacci implementation in Python it will fail hilariously if prompt upsampling is disabled, but get somewhere if it's enabled: https://x.com/minimaxir/status/1993361220595044793
- The Flux 2 API will flag anything tangentially related to IP as sensitive even at its lowest sensitivity level, which is different from the Flux 1.1 API. If you enable prompt upsampling, it won't get flagged, but the results are...unexpected. https://x.com/minimaxir/status/1993365968605864010
- Cost-wise and generation-speed-wise, Flux 2 Pro is on par with Nano Banana, and adding an image as an input pushes the cost of Flux 2 Pro higher than Nano Banana. The cost discrepancy increases if you try to utilize the advertised multi-image reference feature.
- Testing Flux 1.1 vs. Flux 2 generations does not result in objective winners, particularly around more abstract generations. |
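To illustrate the JSON prompting idea, a minimal sketch (the field names below are my own for illustration, not necessarily the schema from BFL's guide, and the Replicate parameter name is assumed):

    import json

    # Illustrative structured prompt; field names are hypothetical, not BFL's official schema.
    structured_prompt = {
        "subject": "a barista pouring latte art",
        "scene": "small coffee shop interior at golden hour",
        "style": "35mm photo, shallow depth of field",
        "colors": {"accent": "#E07A5F", "background": "#3D405B"},  # hex colors, which the guide also supports
        "camera": {"angle": "eye level", "focal_length": "50mm"},
    }

    # The serialized JSON string is what you'd send as the text prompt,
    # e.g. to the Replicate endpoint linked above (exact input parameter names may differ).
    prompt = json.dumps(structured_prompt, indent=2)
    print(prompt)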
| |
| ▲ | loudmax an hour ago | parent | next [-] | | The fact that you can run Flux locally might be enough to sway the balance in some cases. For example, if you've already set up a workflow and Google jacks up the price or changes the API, you have no choice but to go along. If BFL does the same, you at least have the option of running locally. | |
| ▲ | minimaxir an hour ago | parent [-] | | Those cases imply commercial workflows that are prohibited with the open-weights model without purchasing a license. I am curious to see how the Apache 2.0 distilled variant performs but it's still unlikely that the economics will favor it unless you have a specific niche use case: the engineering effort needed to scale up image inference for these large models isn't zero cost. |
| |
| ▲ | vunderba 4 hours ago | parent | prev | next [-] | | I've re-run my benchmark with the Flux 2 Pro model and found that in some cases the higher-resolution models (I believe Flux 2 Pro handles 4k) can actually backfire on some of the tests, because they introduce the equivalent of an ESRGAN-style upscale, which can add unwanted extra detail. (See the Constanza test in particular.) https://genai-showdown.specr.net/image-editing | |
| ▲ | minimaxir 4 hours ago | parent [-] | | That Constanza test result is baffling. | | |
| ▲ | vunderba 2 hours ago | parent [-] | | Agreed - I was quite surprised. Even though it's a bog-standard 1024x1024 image, the somewhat low-quality nature of a TV still makes for an interesting challenge. All the BFL models (Kontext Max and Flux 2 Pro) seemed to struggle hard with it. |
|
| |
| ▲ | babaganoosh89 2 hours ago | parent | prev [-] | | Flux 2 Dev is not IP censored | | |
| ▲ | minimaxir an hour ago | parent [-] | | Do you have generations contradicting that? The HF repo for the open-weights Flux 2 Dev says that IP filters are in place (and implies it's a violation of the license to do so). EDIT: Seeing a few generations on /r/StableDiffusion generating IP from the open-weights model. |
|
|
|
| ▲ | 542458 7 hours ago | parent | prev | next [-] |
| > Run FLUX.2 [dev] on GeForce RTX GPUs for local experimentation with an optimized fp8 reference implementation of FLUX.2 [dev], created in collaboration with NVIDIA and ComfyUI. Glad to see that they're sticking with open weights. That said, Flux 1.x was 12B params, right? So this is about 3x as large, plus a 24B text encoder (unless I'm misunderstanding), so it might be a significant challenge for local use. I'll be looking forward to the distilled version. |
| |
| ▲ | minimaxir 6 hours ago | parent [-] | | Looking at the file sizes on the open-weights version (https://huggingface.co/black-forest-labs/FLUX.2-dev/tree/mai...), the 24B text encoder is 48GB and the generation model itself is 64GB, which roughly tracks with the 32B parameters mentioned (2 bytes per parameter at bf16). Downloading over 100GB of model weights is a tough sell for local-only hobbyists. | |
| ▲ | BadBadJellyBean 5 hours ago | parent | next [-] | | Never mind the download size. Who has the VRAM to run it? | | | |
| ▲ | zamadatix 5 hours ago | parent | prev | next [-] | | 100 GB is less than a game download; it's actually running it that's a tough sell. That said, the linked blog post seems to say the optimized model is both smaller and greatly improves the streaming approach from system RAM, so maybe it is actually reasonably usable on a single 4090/5090-type setup (I'm not at home to test). |
| ▲ | _ache_ 5 hours ago | parent | prev | next [-] | | Not even a 5090 can handle that. You would have to use multiple GPUs. So the only option will be [klein] on a single GPU... maybe? We don't have much information yet. | |
| ▲ | Sharlin 4 hours ago | parent [-] | | As far as I know, no open-weights image gen tech supports multi-GPU workflows except in the trivial sense that you can generate two images in parallel. The model either fits into the VRAM of a single card or it doesn't. A 5-ish-bit quantization of a 32B-weight model would be usable by owners of 24GB cards, and very likely someone will create one. |
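Back-of-the-envelope on that, assuming the ~32B parameter count mentioned elsewhere in the thread (weights only, ignoring activations and quantization overhead):

    # Rough size of a ~32B-parameter model at different bit widths (weights only).
    params = 32e9
    for bits in (16, 8, 5, 4):
        gb = params * bits / 8 / 1e9  # decimal GB, ignoring overhead
        print(f"{bits}-bit: ~{gb:.0f} GB")
    # 16-bit: ~64 GB, 8-bit: ~32 GB, 5-bit: ~20 GB, 4-bit: ~16 GB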
| |
| ▲ | crest 2 hours ago | parent | prev [-] | | The download is a trivial one-time cost, and so is storing it on a direct-attached NVMe SSD. The expensive part is getting a GPU with 64GB of memory. |
|
|
|
| ▲ | xnx 7 hours ago | parent | prev | next [-] |
| Good to see there's some competition to Nano Banana Pro. Other players are important for keeping the price of the leaders in check. |
| |
|
| ▲ | minimaxir 7 hours ago | parent | prev | next [-] |
| The text encoder is Mistral-Small-3.2-24B-Instruct-2506 (which is multimodal), as opposed to the weird choice to use CLIP and T5 in the original FLUX, so that's a good start, albeit kinda big for a model intended to be open-weight. BFL likely should have held off on the release until their Apache 2.0 distilled model was ready, in order to better differentiate from Nano Banana/Nano Banana Pro. The pricing structure on the Pro variant is...weird: > Input: We charge $0.015 for each megapixel on the input (i.e. reference images for editing) > Output: The first megapixel is charged $0.03 and then each subsequent MP will be charged $0.015 |
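Reading those numbers literally, a minimal sketch of the per-generation math (my own interpretation of the quoted pricing, not an official calculator; how fractional megapixels are rounded is unclear):

    def flux2_pro_cost(input_mp: float, output_mp: float) -> float:
        """Estimate Flux 2 Pro cost in dollars from the quoted per-megapixel pricing."""
        input_cost = 0.015 * input_mp                        # reference images: $0.015/MP
        output_cost = 0.03 + 0.015 * max(output_mp - 1, 0)   # first output MP $0.03, then $0.015/MP
        return input_cost + output_cost

    print(f"${flux2_pro_cost(input_mp=0, output_mp=1):.3f}")  # $0.030 -- plain 1 MP text-to-image
    print(f"${flux2_pro_cost(input_mp=1, output_mp=1):.3f}")  # $0.045 -- 1 MP edit with one 1 MP reference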
| |
| ▲ | woadwarrior01 6 hours ago | parent | next [-] | | > BFL likely should have held off the release until their Apache 2.0 distilled model was released in order to better differentiate from Nano Banana/Nano Banana Pro. Qwen-Image-Edit-2511 is going to be released next week. And it will be Apache 2.0 licensed. I suspect that was one of the factors in the decision to release FLUX.2 this week. | | | |
| ▲ | kouteiheika 6 hours ago | parent | prev | next [-] | | > as opposed to the weird choice to use CLIP and T5 in the original FLUX Yeah, CLIP here was essentially useless. You can even completely zero the weights through which the CLIP input is ingested by the model and it barely changes anything. | |
| ▲ | beernet 6 hours ago | parent | prev | next [-] | | Nice catch. Looks like engineers tried to take care of the GTM part as well and (surprise!) messed it up. In any case, the biggest loser here is Europe once again. | |
| ▲ | throwaway314155 6 hours ago | parent | prev [-] | | > as opposed to the weird choice to use CLIP and T5 in the original FLUX This method was used in tons of image generation models. Not saying it's superior or even a good idea, but it definitely wasn't "weird". |
|
|
| ▲ | visioninmyblood 5 hours ago | parent | prev | next [-] |
| The model looks good for an open-source model. I want to see how these models are trained. Maybe they have a base model trained on academic datasets and quickly fine-tune with models like Nano Banana Pro, or something? That could be the game for such models. But it's great to see an open model competing with the big players. |
| |
|
| ▲ | notrealyme123 6 hours ago | parent | prev | next [-] |
| > The FLUX.2 - VAE is available on HF under an Apache 2.0 license. Has anyone found this? For me, the link doesn't lead to the model. |
| |
|
| ▲ | AmazingTurtle 6 hours ago | parent | prev | next [-] |
| I ran "family guy themed cyberpunk 2077 ingame screenshot, peter griffin as main character, third person view, view of character from the back" on both nano banana pro and bfl flux 2 pro. The results were staggering. The google model aligned better with the cyberpunk ingame scene, flux was too "realistic" |
| |
| ▲ | Yokohiii 2 hours ago | parent [-] | | I think they focus their dataset on photography. Flux 1 dev was never really great at artistic style, mostly locking you into a somewhat generic look. My little bit of Flux 2 Pro testing does seem to confirm that. But with the LoRA ecosystem and enough time to fiddle, Flux 1 dev is probably still the best if you want creative stylistic results. |
|
|
| ▲ | geooff_ 6 hours ago | parent | prev | next [-] |
| Their published benchmarks leave a lot to be desired. I would be interested in seeing their multi-image performance vs. Nano Banana. I just finished benchmarking image-editing models, and while Nano Banana is the clear winner for one-shot editing, it's not great at few-shot. |
| |
| ▲ | minimaxir 5 hours ago | parent [-] | | The issue with testing multi-image with Flux is that it's expensive due to its pricing scheme ($0.015 per input image for Flux 2 Pro, $0.06 per input image for Flux 2 Flex: https://bfl.ai/pricing?category=flux.2), while the cost of adding additional images is negligible in Nano Banana ($0.000387 per image). In the case of Flux 2 Pro, adding just one image pushes the total cost above that of a Nano Banana generation. |
|
|
| ▲ | Yokohiii 6 hours ago | parent | prev | next [-] |
| 18GB 4-bit quant via diffusers. "low VRAM setup" :) |
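For anyone who hasn't done 4-bit loading in diffusers before, it looks roughly like the sketch below. This uses the FLUX.1-dev class names and repo id, which are documented; the FLUX.2 pipeline/transformer classes should be analogous but are not verified here, so check the model card. Requires bitsandbytes installed.

    import torch
    from diffusers import BitsAndBytesConfig, FluxPipeline, FluxTransformer2DModel

    # NF4 4-bit quantization config (pip install bitsandbytes)
    quant = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )

    # Quantize only the big DiT transformer; the rest of the pipeline stays bf16.
    transformer = FluxTransformer2DModel.from_pretrained(
        "black-forest-labs/FLUX.1-dev",
        subfolder="transformer",
        quantization_config=quant,
        torch_dtype=torch.bfloat16,
    )

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev",
        transformer=transformer,
        torch_dtype=torch.bfloat16,
    )
    pipe.enable_model_cpu_offload()  # stream weights from system RAM when VRAM is tight

    image = pipe("a red fox in the snow, 35mm photo", num_inference_steps=28).images[0]
    image.save("fox.png")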
|
| ▲ | bossyTeacher 2 hours ago | parent | prev | next [-] |
| Genuine question: does anyone use any of these text-to-image models regularly for non-trivial tasks? I am curious how they get used. It seems like there is a new model reaching the top 3 every week. |
|
| ▲ | DeathArrow 5 hours ago | parent | prev | next [-] |
| We probably won't be able to run it on regular PCs, even with a 5090. So I am curious how good the results will be using a quantized version. |
|
| ▲ | echelon 6 hours ago | parent | prev | next [-] |
| > Launch Partners Wow, the Krea relationship soured? These are both a16z companies and they've worked on private model development before. Krea.1 was supposed to be something to compete with Midjourney aesthetics and get away from the plastic-y Flux models with artificial skin tones, weird chins, etc. This list of partners includes all of Krea's competitors: HiggsField (current aggregator leader), Freepik, "Open"Art, ElevenLabs (which now has an aggregator product), Leonardo.ai, Lightricks, etc. but Krea is absent. Really strange omission. I wonder what happened. |
|
| ▲ | DeathArrow 6 hours ago | parent | prev | next [-] |
| If this is still a diffusion model, I wonder how well it compares with Nano Banana. |
|
| ▲ | eric-p7 7 hours ago | parent | prev | next [-] |
| Yes yes very impressive. But can it still turn my screen orange? |
|
| ▲ | beernet 6 hours ago | parent | prev [-] |
| Oh, looks like someone had to release something very quickly after Google came for their lunch. BFL's little 15 minutes already seems to be over. |
| |
| ▲ | whywhywhywhy 6 hours ago | parent | next [-] | | Comparing a closed image model to an open one is like comparing a compiled closed-source app to raw source code. It's pointless to compare on pure output when one is set in stone and the other can be built upon. | |
| ▲ | beernet 6 hours ago | parent [-] | | Did you guys even check the licence? Not sure what is "open source" about that. Open weights at best, and highly restrictive at that. | |
| ▲ | gunalx 43 minutes ago | parent [-] | | Yep, definitely this. They should get credit for open weights, and for being transparent about it not being open source, though. People should stop being this confused when the messaging is pretty clear. |
|
| |
| ▲ | timmmmmmay 6 hours ago | parent | prev [-] | | Yeah, except I can download this and run it on my computer, whereas Nano Banana is a service that Google will suddenly discontinue the instant they get bored with it. |
|