▲ | perlgeek 5 days ago
Then why does it produce different output?
▲ | simonw 5 days ago | parent | next [-]
It works as a tool. The main model (GPT-4o or GPT-5 or o3 or whatever) composes a prompt and passes that to the image model, which means different top-level models will get different results. You can ask the model to tell you the prompt it used, and it will answer, but there is no way of being 100% sure it is telling the truth! My hunch is that it is, though, because models are generally very good at repeating text from earlier in their context.
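A minimal sketch of that plumbing, using the public function-calling API (the generate_image tool here is hypothetical, standing in for ChatGPT's internal image tool, which isn't exposed like this). The point is that the prompt the image model sees is just text the top-level model wrote into a tool call:

    from openai import OpenAI

    client = OpenAI()

    # Hypothetical stand-in for the internal image tool; the real one
    # isn't exposed, but the mechanism is ordinary function calling.
    tools = [{
        "type": "function",
        "function": {
            "name": "generate_image",
            "description": "Render an image from a detailed text prompt",
            "parameters": {
                "type": "object",
                "properties": {"prompt": {"type": "string"}},
                "required": ["prompt"],
            },
        },
    }]

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Draw a pelican riding a bicycle"}],
        tools=tools,
    )

    # The top-level model composed this prompt itself; different models
    # write different prompts, hence different images.
    call = resp.choices[0].message.tool_calls[0]
    print(call.function.arguments)

Inspecting call.function.arguments is the tool-call equivalent of asking the model what prompt it used, except here you don't have to trust it to repeat the text faithfully.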
▲ | seba_dos1 5 days ago | parent | prev [-]
You know that unless you control for seed and temperature, you get different outputs for the same prompt even with the model unchanged... right?
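For illustration, here's what controlling the seed looks like with an open-weights image model via Hugging Face diffusers (the model name is just an example); hosted tools like ChatGPT's image generation don't expose a seed at all, which is one reason repeated runs differ:

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

    # Same prompt + same seed -> reproducible output; omit the generator
    # and each run samples fresh noise, so the image changes every time.
    gen = torch.Generator().manual_seed(42)
    image = pipe("a pelican riding a bicycle", generator=gen).images[0]
    image.save("pelican_seed42.png")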