user_7832 | 4 days ago
> a very high focus on adherence

Don't know if it's the same for others, but my issue with Nano Banana has been the opposite. Ask it to make X significant change, and it spits out what I would've sworn is the same image. Sometimes, randomly and inexplicably, it spits out the expected result. Anyone else experiencing this, or have solutions for avoiding it?
alvah | 4 days ago
Just yesterday I was asking it to make some design changes to my study. It did a great job with all the complex stuff, but when asked to move a shelf higher, it repeatedly gave me back the same image. With LLMs generally, I find that as soon as you encounter resistance it's best to start a new chat, but in this case that didn't work either. Nothing I did could convince it that the shelf didn't look right halfway up the wall.
| ||||||||
vunderba | 4 days ago
Yeah, I've definitely seen this. You can actually see evidence of the problem in some of the trickier prompts (the straightened Tower of Pisa and the giraffe, for example). Most models (gpt-image-1, Kontext, etc.) typically fail by doing the wrong thing; from my testing, silently returning the unchanged image seems to be specifically a Nano Banana issue. I've found you can occasionally work around it by adding far more explicit directives to the prompt, but there's no guarantee.
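For anyone scripting this, here is a minimal sketch of what "far more explicit directives" can look like when the edit is sent through the google-genai Python SDK. The model id, file names, and prompts are assumptions for illustration, not anything confirmed in this thread; the idea is just to name the object, the exact change, and what must stay fixed.

    # A minimal sketch, assuming the google-genai Python SDK and a
    # Nano Banana-style image model; the model id, prompts, and file
    # names are illustrative assumptions.
    from google import genai
    from PIL import Image

    client = genai.Client()  # expects an API key in the environment
    source = Image.open("study.png")

    prompts = {
        "vague": "Move the shelf higher.",
        # More explicit: name the object, the exact change, and what must stay fixed.
        "explicit": (
            "Edit the image: raise the wooden shelf on the back wall so its top "
            "edge sits about 30 cm below the ceiling. Keep all other furniture, "
            "the lighting, and the camera angle exactly the same."
        ),
    }

    for label, prompt in prompts.items():
        response = client.models.generate_content(
            model="gemini-2.5-flash-image-preview",  # assumed id for Nano Banana
            contents=[prompt, source],
        )
        # Image output comes back as inline bytes in the response parts.
        for part in response.candidates[0].content.parts:
            if part.inline_data is not None:
                with open(f"shelf_{label}.png", "wb") as f:
                    f.write(part.inline_data.data)

Even with the explicit version there's no guarantee, as noted above; comparing the two outputs side by side is just a cheap way to see whether the extra constraints made any difference.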
jbm | 4 days ago
I've had this same issue happen repeatedly. It's not a big deal because it's just for small personal stuff, but I often have to tell it that it's returning the same thing and that I had asked for changes.
nick49488171 | 3 days ago
Yes, I've experienced exactly this.