Ask HN: LLM Prompt Engineering
3 points by Scotrix 13 hours ago | 3 comments
I’m working on a project where I need to extract user intents and map them to deterministic tool/function/API executions, then refine/transform the results with another set of tools. Gathering the right intent and parameters is quite challenging (there are a lot of subtle differences between potential prompts), so I’m using a long list of consecutively executed prompts, each tuned to gather exactly the pieces of information needed for somewhat reliable tool executions.

I tried this with a bunch of agent frameworks (including LangChain/LangGraph), but it gets very messy very quickly, and that messiness easily creates side effects. So I wonder: is there a tool, an approach, anything, that keeps better control over chains of LLM executions without ending up in a messy configuration and/or code implementation? Maybe even something more visual. Or am I the only one struggling with this?
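For concreteness, the shape of the pipeline is roughly the following. This is a minimal, framework-agnostic sketch under my own assumptions: call_llm is a placeholder for whatever model client is used, and the two intents/tools are made-up examples, not my real ones.

    import json
    from dataclasses import dataclass
    from typing import Any, Callable

    # Placeholder for an actual LLM call (OpenAI, Anthropic, local model, ...).
    def call_llm(prompt: str) -> str:
        raise NotImplementedError

    @dataclass
    class Intent:
        name: str                # which tool to run
        params: dict[str, Any]   # extracted parameters for that tool

    EXTRACTION_PROMPT = """Classify the user's request into one of:
    get_weather(city), book_meeting(date, attendees).
    Reply with JSON only: {{"name": ..., "params": {{...}}}}.

    User: {user_input}"""

    def extract_intent(user_input: str) -> Intent:
        raw = call_llm(EXTRACTION_PROMPT.format(user_input=user_input))
        data = json.loads(raw)  # in practice: validate and retry on invalid JSON
        return Intent(name=data["name"], params=data["params"])

    # Deterministic tool layer: plain functions, no LLM involved.
    TOOLS: dict[str, Callable[..., Any]] = {
        "get_weather": lambda city: f"weather for {city}",
        "book_meeting": lambda date, attendees: f"booked {date} with {attendees}",
    }

    def run(user_input: str) -> str:
        intent = extract_intent(user_input)
        result = TOOLS[intent.name](**intent.params)
        # Second stage: another prompt (or chain) refines the raw tool output.
        return call_llm(f"Rewrite this result for the user: {result}")

The hard part isn’t this happy path; it’s that the extraction step in reality is a long chain of such prompts, and keeping that chain maintainable is exactly what I’m asking about.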
thekuanysh 13 hours ago
What kind of I/O do you have? JSON or plain language?