| ▲ | minimaxir 10 hours ago |
I...worked on the detailed Nano Banana prompt engineering analysis for months (https://news.ycombinator.com/item?id=45917875)...and...Google just released a new version. Nano Banana Pro should work with my gemimg package (https://github.com/minimaxir/gemimg) without pushing a new version by passing: g = GemImg(model="gemini-3-pro-image-preview")
I'll add the new output resolutions and other features ASAP. However, looking at the pricing (https://ai.google.dev/gemini-api/docs/pricing#standard_1), I'm definitely not changing the default model to Pro, as $0.13 per 1k/2k output will make it a tougher sell.

EDIT: Something interesting in the docs: https://ai.google.dev/gemini-api/docs/image-generation#think...

> The model generates up to two interim images to test composition and logic. The last image within Thinking is also the final rendered image.

Maybe that's partially why the cost is higher: it's hard to tell whether intermediate images are billed in addition to the output. However, this could cause an issue where base gemimg returns an intermediate image instead of the final image, depending on how the output is constructed, so I will need to double-check. |
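A minimal sketch of the switch, assuming the generate() call and environment-based API key from the gemimg README (only the model ID above is confirmed; output handling may change once the interim "Thinking" images are accounted for):

    from gemimg import GemImg

    # Point gemimg at Nano Banana Pro instead of the default model;
    # assumes the API key is picked up from the environment per the README.
    g = GemImg(model="gemini-3-pro-image-preview")

    # generate() follows the gemimg README; how the result exposes the
    # final vs. intermediate image is an assumption pending double-checking.
    gen = g.generate("A pancake in the shape of a skull.")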
|
| ▲ | skeeter2020 9 hours ago | parent | next [-] |
>> - Put a strawberry in the left eye socket.
>> - Put a blackberry in the right eye socket.
>> All five of the edits are implemented correctly

This is a GREAT example of the (not so) subtle mistakes AI will make in image generation, or code creation, or your future knee surgery. The model placed the specified items in the eye sockets based on the viewer's left/right; when we talk about relative direction in this scenario we usually (always?) mean from the perspective of the target or "owner". Doctors make this mistake too (they typically mark the correct side with a sharpie while the patient is still alert), but I'd be more concerned if we're "outsourcing" decision making without adequate oversight. https://minimaxir.com/2025/11/nano-banana-prompts/#hello-nan... |
| |
| ▲ | oasisbob 7 hours ago | parent | next [-] | | There's a classic well-illustrated book, _How to Keep Your Volkswagen Alive_, which spends a whole illustrated page at the beginning building up a reference frame for working on the vehicle. Up is sky, down is ground, front is always the vehicle's front, left is always the vehicle's left. Sounds a bit silly to write it out, but the diagram did a great job removing ambiguity when you expect someone to be lying on the ground in a tight spot, looking backwards, upside down. Also feels important to note that in the theatre there is stage-right and stage-left: jargon to disambiguate, even though the jargon expects you to know its meaning to understand it. | |
| ▲ | CGMthrowaway 9 hours ago | parent | prev | next [-] | | >This is a GREAT example of the (not so) subtle mistakes AI will make in image generation, or code creation, or your future knee surgery.

The mistake is in the prompting (not enough information). The AI did the best it could:

"What's the biggest known planet?"

"Jupiter."

"NO I MEANT IN THE UNIVERSE!" | | |
| ▲ | sebzim4500 8 hours ago | parent | next [-] | | It doesn't affect your point, but since the IAU are insane, exoplanets technically aren't planets and Jupiter is the largest planet in the universe. | | |
| ▲ | MangoToupe 7 hours ago | parent [-] | | I suppose it was too much to hope that chatbots could be trained to avoid pointless pedantry. | | |
| ▲ | fragmede 6 hours ago | parent [-] | | They've been trained on every web forum on the Internet. How could it be possible for them to avoid that? |
|
| |
| ▲ | throawayonthe 7 hours ago | parent | prev | next [-] | | asking "x-most known y" and not expecting a global answer is odd | | | |
| ▲ | bigstrat2003 9 hours ago | parent | prev [-] | | No, this is squarely on the AI. A human would know what you mean without specific instructions. | | |
| ▲ | siffin 9 hours ago | parent | next [-] | | Seems like you're making a judgment based on your own experience, but as another commenter pointed out, it was wrong. There are plenty of us out there who would confirm, because people are too flawed to trust. Humans double/triple check, especially under higher stakes conditions (surgery). Heck, humans are so flawed, they'll put the things in the wrong eye socket even knowing full well exactly where they should go - something a computer literally couldn't do. | | |
| ▲ | emp17344 2 hours ago | parent | next [-] | | “People are too flawed to trust”? You’ve lost the plot. People are trusted to perform complex tasks every single minute of every single day, and they overwhelmingly perform those tasks with minimal errors. | |
| ▲ | rodrigodlu 7 hours ago | parent | prev | next [-] | | Intelligence in my book includes error correction. Questioning possible mistakes is part of wisdom. So the understanding that AI and HI are different entities altogether, with only a subset of communication protocols between them, will become more and more obvious, as some comments here are already implicitly suggesting. | |
| ▲ | rullelito 8 hours ago | parent | prev [-] | | Why on earth would the fallback, when a prompt is underspecified, be to do something no human expects? |
| |
| ▲ | danso 8 hours ago | parent | prev | next [-] | | If the instructions were actually specific, e.g. "Put a blackberry in its right eye socket", then yes, most humans would know what that meant. But the instructions were not that specific: "in the right eye socket". | |
| ▲ | TylerE 8 hours ago | parent [-] | | Or be even more explicit: Put a strawberry in the person’s right eye socket. |
| |
| ▲ | adastra22 8 hours ago | parent | prev | next [-] | | If you asked me right now what the biggest known planet was, I'd think Jupiter. I'd assume you were talking about our solar system ("known" here implying there might be more planets out in the distant reaches). | |
| ▲ | CGMthrowaway 8 hours ago | parent | prev | next [-] | | I would be amused to see you test this theory with 100 men on the street | |
| ▲ | jaggederest 9 hours ago | parent | prev | next [-] | | I would not, I would clarify, and I think I'm a human. | |
| ▲ | recursive 8 hours ago | parent | prev | next [-] | | But different humans would know what you meant differently. Some would have known it the same way the AI did. | |
| ▲ | nkmnz 7 hours ago | parent | prev [-] | | Yeah, just like humans always know what you mean. |
|
| |
| ▲ | lifthrasiir 3 hours ago | parent | prev | next [-] | | That was a big problem when I was toying around with the original Nano Banana. I always prompted from the perspective of the (imaginary) camera, and yet NB often interpreted that as the perspective of the target, giving no way to select the opposite side. Since the selected side is generally the one closer to the camera, my usual workaround is to force the side farther from the camera. Even that was not perfect, though. | |
| ▲ | 0x457 9 hours ago | parent | prev | next [-] | | Right, that's why one should use "put a strawberry in the portside eye socket" and "put a strawberry in the starboard side socket" | | | |
| ▲ | Jabrov 9 hours ago | parent | prev | next [-] | | I don't know if that's so much a mistake as it is ambiguity though? To me, using the viewer's perspective in this case seems totally reasonable. Does it still use the viewer's perspective if the prompt specifies "Put a strawberry in the _patient's left eye_"? If it does, then you're onto something. Otherwise I completely disagree with this. | | |
| ▲ | ComputerGuru 9 hours ago | parent | next [-] | | “Eye on the left” is different from “the left eye”. First can be ambiguous, second really isn’t. | | |
| ▲ | simonw 8 hours ago | parent | next [-] | | I think "the left eye" in this particular case (a photo of a skull made of pancake batter) is still very slightly ambiguous. "The skull's left eye" would not be. | |
| ▲ | recursive 8 hours ago | parent | prev [-] | | I guess there's some ambiguity regarding whether or not this can be ambiguous. Because it seems like it can to me. |
| |
| ▲ | withinboredom 9 hours ago | parent | prev [-] | | “The right socket” can only be interpreted one way when talking about a body, just like you only have one right hand, despite the fact that it is on my left when I'm looking at you. | |
| ▲ | esrauch an hour ago | parent | next [-] | | "Right hand" is practically a bigram with a meaning of its own, since handedness is such a common topic. Also, context matters: if you're talking to someone, you would say "right shoulder" for _their_ right, since you know you're an observer with a different vantage point. Talking about a scene in a photo, "the right shoulder" to me would more often mean the right portion of the photo, even if it was the person's left shoulder. | |
| ▲ | marcellus23 5 hours ago | parent | prev | next [-] | | I think the fact that anyone in this thread thinks it's ambiguous is proof by definition that it's ambiguous. | |
| ▲ | pphysch 9 hours ago | parent | prev [-] | | "Plug into right power socket" Same language, opposite meaning because of a particular noun + context. I think the only thing obvious here is that there is no obvious solution other than adding lots of clarification to your prompt. | | |
| ▲ | withinboredom 9 hours ago | parent [-] | | I think you missed the entire point? | | |
| ▲ | swores 8 hours ago | parent [-] | | No, they just disagree with you. | | |
| ▲ | withinboredom 8 hours ago | parent [-] | | How do you disagree with having a right and a left hand? | | |
| ▲ | TylerE 8 hours ago | parent [-] | | GP is using right as in “correct”, not directionality. | | |
| ▲ | degamad 6 hours ago | parent [-] | | No, I don't think they are. If you are facing a wall-plate with two power sockets on it side by side and you are telling someone to plug something in, which one would be "the right socket", and which would be "the left socket"? If above the wall-plate is a photo of a person and you are asking someone to draw a tattoo on the photo, which is "the right arm" and which is "the left arm"? Same wording, different expectation. | |
| ▲ | TylerE 4 hours ago | parent [-] | | Power plugs are not people. ETA: and if I were telling someone which socket to plug something into, it would absolutely be from the perspective of the person doing the plugging, not from inside the wall. | |
|
|
|
|
|
|
|
| |
| ▲ | minimaxir 8 hours ago | parent | prev [-] | | I meant to add a clarification to that point (because the ambiguity is a valid counterpoint), thanks for the reminder. |
|
|
| ▲ | simonw 10 hours ago | parent | prev | next [-] |
In case anyone missed Max's Nano Banana prompting guide, it's absolutely the definitive manual for prompting the original Nano Banana... and I tried some of the prompts in there against Nano Banana Pro and found it to be very applicable to the new model as well: https://minimaxir.com/2025/11/nano-banana-prompts/#hello-nan...

My recreations of those pancake batter skulls using Nano Banana Pro: https://simonwillison.net/2025/Nov/20/nano-banana-pro/#tryin... |
| |
| ▲ | vunderba 9 hours ago | parent | next [-] | | In my experience multimodal models like gpt-image-1/nano/etc. don't really require a lot of prompt trickery [1] like the good ol' days of SD 1.5. To be clear, that's a good thing though. It's also one of the reasons why "prompt engineering" will become less relevant as model understanding goes up. [1] - Unless you're trying to circumvent guardrails | |
| ▲ | mNovak 8 hours ago | parent | prev | next [-] | | Does the refrigerator magnet system prompt leak [1] still work? [1] https://minimaxir.com/2025/11/nano-banana-prompts/#hello-nan.... | | | |
| ▲ | doctorpangloss 9 hours ago | parent | prev [-] | | > it's absolutely the definitive manual

How do you know, Simon? It's certainly a blog post, with content about prompting in it. If your goal is to make generative art that uses specific IP, I wouldn't use it. | |
| ▲ | simonw 9 hours ago | parent [-] | | Do you know of a better document specifically about prompting Nano Banana? | | |
| ▲ | doctorpangloss 9 hours ago | parent [-] | | Why don't you just ask Gemini? It will tell you! There's no mystery. | | |
| ▲ | simonw 9 hours ago | parent | next [-] | | You implied that Max's Nano Banana prompting guide wasn't the best available, so I think it's on you to provide a link to a better one. | |
| ▲ | jdiff 8 hours ago | parent | prev [-] | | Why would Gemini have any more insight than anyone else, let alone someone who's done hands on testing? |
|
|
|
|
|
| ▲ | ashraymalhotra 10 hours ago | parent | prev | next [-] |
| Minor clarification, the cost for every input image is $0.0011, not $0.06. |
| |
| ▲ | minimaxir 10 hours ago | parent | next [-] | | I was going off the footnote of "Image input is set at 560 tokens or $0.067 per image", but 560 tokens * $2 per 1M input tokens is indeed $0.0011, so I have no idea where the $0.067 came from. Fixed, and this is why I typically don't read docs without coffee. | |
| ▲ | Taek 10 hours ago | parent | prev | next [-] | | I would consider that a major clarification | |
| ▲ | 10 hours ago | parent | prev [-] | | [deleted] |
|
|
| ▲ | minimaxir 8 hours ago | parent | prev | next [-] |
I just pushed gemimg 0.3.2, which adds image_size support for Nano Banana Pro, and I ran a few tests on some of the images in the blog. In my testing, Nano Banana Pro correctly handled most of the image generation errors noted in my blog post: https://x.com/minimaxir/status/1991580127587921971

- Fibonacci magnets: code is correctly indented and the syntax highlighting at least tries to give variables, numbers, and keywords different colors.
- Make me a Studio Ghibli: actually does style transfer correctly, and does it better than ChatGPT ever did.
- Rendering a webpage from HTML: near-perfect recreation of the HTML, including text layout and element sizing.

That said, there may be regressions: even with prompt engineering, the more photorealistic generated images look too good and land back in the uncanny valley. I haven't decided if I'm going to write a follow-up blog post yet. The system prompt hacking trick unfortunately doesn't work with Nano Banana Pro. |
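A rough sketch of the new option; the image_size parameter name comes from this release, but passing it through generate() and the accepted values (e.g. "2k") are assumptions:

    from gemimg import GemImg

    g = GemImg(model="gemini-3-pro-image-preview")

    # image_size is new in gemimg 0.3.2; the keyword placement on
    # generate() and the "2k" value are assumptions, not documented here.
    gen = g.generate(
        "Render this HTML as a webpage.",
        image_size="2k",
    )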
| |
|
| ▲ | Terretta 8 hours ago | parent | prev | next [-] |
Your wrapper is awesome and still relevant.

> "I...worked on the detailed Nano Banana prompt engineering analysis for months"

Early in four decades of tech innovation, I wasted time layering on fixes for clear deficiencies in a snowballing trend's tech offerings. If it's a big enough trend to have well-funded competitors, just wait: the concern is likely not unique, and will likely be solved tomorrow. I realized it's better to learn adaptive/defensive techniques, giving your product resilience to change. Your goal is that when surfing the change waves, you can pick a point you like between rock solid and cutting edge and surf there safely. Invest that "remediate their thing" time in "change resilience" instead – it pays dividends from then on. It can be argued your tool is in this camp!

// Getting better at this also helps you with zero days. |
|
| ▲ | swyx 10 hours ago | parent | prev | next [-] |
btw you should get on their Trusted Testers program, they do give early heads up. GDM folks, get Max on! |
|
| ▲ | visioninmyblood 10 hours ago | parent | prev | next [-] |
Yes, they are pricey, but the price will go down over time and then you can switch. vlm.run got access as early customers and are releasing it for free with unlimited generations (until they are bottlenecked by Google). Some results here combining image gen (Nano Banana Pro) with video gen (Veo 3.1) in a single chat: https://chat.vlm.run/c/1c726fab-04ef-47cc-923d-cb3b005d6262. This combined the synthetic generation of a person and made the puppet dance. Quite impressive. |
|
| ▲ | vunderba 9 hours ago | parent | prev | next [-] |
> The model generates up to two interim images to test composition and logic. The last image within Thinking is also the final rendered image.

I've been using a bespoke Generative Model -> VLM Validator -> LLM Prompt Modifier REPL as part of my benchmarks for a while now, so I'd be curious to see how this stacks up. From some preliminary testing (9-pointed star, 5-leaf clover, etc.), NB Pro seems slightly better than NB, though it still seems to get them wrong. It's hard to tell what's happening under the covers. |
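For context, that REPL is roughly the following shape; every helper below is a hypothetical placeholder for whichever three models you wire in, not a real API:

    # Hypothetical generate -> validate -> refine loop; generate_image,
    # vlm_validate, and llm_refine_prompt are placeholder functions.
    def generation_repl(prompt: str, max_rounds: int = 3):
        image = None
        for _ in range(max_rounds):
            image = generate_image(prompt)                # generative model
            ok, critique = vlm_validate(image, prompt)    # VLM grades vs. spec
            if ok:
                return image
            prompt = llm_refine_prompt(prompt, critique)  # LLM rewrites prompt
        return image  # best effort after max_rounds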
|
| ▲ | spyspy 10 hours ago | parent | prev | next [-] |
This reminds me of the journalist who worked for months on uncovering Trump's dirty business, only for Trump himself to admit the entire thing in a tweet. |
| |
| ▲ | wahnfrieden 10 hours ago | parent [-] | | It's written to mimic that style, but without meaning that the work has been done for them, just that there is new work to be done, making it an odd, perhaps unconscious, reference. |
|
|
| ▲ | sandGorgon 10 hours ago | parent | prev | next [-] |
This is pretty cool!
Have you found success with image editing in Nano Banana - I mean Photoshop-like stuff?
From your article, I'm left wondering whether Nano Banana is better at editing or at generating new images. |
| |
| ▲ | vunderba 10 hours ago | parent [-] | | That IS the use case for Nano Banana (as opposed to purely generative models like Imagen4). In my benchmarks, Nano Banana scores a 7 out of 12. Seedream4 managed to outpace it, but Seedream can also introduce slight tone-mapping variations. NB is the gold standard for highly localized edits. Comparisons of Seedream4, Nano Banana, gpt-image-1, etc.: https://genai-showdown.specr.net/image-editing | |
| ▲ | simonw 8 hours ago | parent [-] | | I tried your "Remove all the brown pieces of candy from the glass bowl." prompt against Nano Banana Pro and it converted them to green, which I think is a pass by your criteria. Original Nano Banana had failed that test because it changed the composition of the M&Ms. https://static.simonwillison.net/static/2025/brown-mms-remov... | | |
| ▲ | vunderba 8 hours ago | parent [-] | | Thanks Simon - I'm in the middle of re-running all my prompts through NB Pro at the moment. Nice to know it's already edged out the original. It also passed the SHRDLU test (swapping colored blocks) without cheating by just changing the colors. I'll have an update to the site shortly!

EDIT: Finished the comparisons. NB Pro scored a few more points than NB, which was already super impressive. https://genai-showdown.specr.net/image-editing?models=nb,nbp |
|
|
|
|
| ▲ | oblio 10 hours ago | parent | prev [-] |
| It looks nice, what are people using the package for? |