thisisit 2 hours ago
My non-scientific tests suggest that GPT models follow prompts literally. Every time I give one an example, it uses the example in a literal sense instead of using it to enhance its understanding of the ask. This is a good thing if I want it to follow instructions, but bad if I want it to be creative. I have to tell it that the examples I gave are just examples and are not to be used in the output. I feel comfortable using it when I have everything mapped out. Claude, on the other hand, can be creative. It understands that examples are for reference purposes only. But there are times when it decides to go off on a tangent of its own and not follow instructions closely. I find it useful for bouncing around ideas or testing something new. The other thing I notice is that Claude has slightly better UI design sensibilities even when you don't give instructions. GPT, on the other hand, needs instructions; otherwise every UI element will be so huge you need to double scroll to find buttons.
veber-alex 2 hours ago
This is also what I noticed. GPT doesn't know how to get creative; you need to tell it exactly what to do and what code you want it to write. With Claude you can be more general, and it will look up solutions for you outside the scope you gave it. I personally prefer Claude.
sixothree an hour ago
I think you might benefit from the "superpower" plugin. Add the word "brainstorm" before your prompt and it does a little better at figuring out what you want.