strobe 2 days ago
> You need to follow up with a set of strict requirements to set expectations, guard rails, and what the final product should do (and not do).

That is usually the very hard part, and it was possible to spend a few days on something like that in the real world even before LLMs. But with LLMs it's worse, because having those requirements is not enough: some of them won't work for random reasons, and there are no 'rules' that can guarantee results. It's always "try that" and "this will probably work". Just recently I struggled with the same prompt producing different results between API calls, until I realized that merely adding a few more '"' characters and a few spaces to the prompt led the model down a completely different route of logic, which produced opposite answers.
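A minimal sketch of how you can surface this kind of sensitivity yourself (assuming the OpenAI Python client; the model name is a placeholder, and any chat-completion API works the same way): send two prompts that differ only in quoting/whitespace at temperature 0 and diff the replies.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str) -> str:
        # temperature=0 minimizes sampling noise, so any remaining
        # difference comes from the prompt text itself
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        return resp.choices[0].message.content

    base = 'Classify the sentiment of: the service was fine'
    # same prompt, but with extra quotes and a trailing space
    variant = 'Classify the sentiment of: "the service was fine" '

    a, b = ask(base), ask(variant)
    print("SAME" if a == b else "DIFFERENT")
    print(a)
    print(b)

Even this doesn't guarantee stability across calls (temperature 0 is not fully deterministic on most hosted APIs), but it makes the quote-and-whitespace effect easy to demonstrate.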