ares623 4 days ago
I don't know if it's just me, but doesn't a huge part of the value of LLMs for the general population necessitate all 3 of the circles? Having just 2 circles requires a person in the loop, and that person still needs the knowledge, the experience, and a low enough throughput to meaningfully act on the workload; otherwise they would just rubber-stamp everything (which is essentially the 3rd circle with extra steps).
QuadmasterXLII 3 days ago
Most current consumer LLM uses are run only once or a few times before the prompt and task change. This forces the attacker to move first: they put documents with malicious injections onto the internet, which are then ingested by ephemeral systems whose details the attacker doesn't observe. On the other hand, something like an AI McDonald's drive-through order-taker runs over and over again. This property of running repeatedly is what allows the attacker to move second and gain the advantage.
wj 2 days ago
Yes, a huge part of the value of LLMs is having all three circles and moving all of that work into the background (headless), boiling all knowledge work down to the following workflow:

Inputs -> Analysis -> Action

There would be value in just being able to put an LLM in a loop ("Go get inputs. Make a decision. Take action."). What I think is going to happen is that the human in the loop ends up being an engineer/super-user who builds a program/workflow that uses the LLM for the Analysis step, with the Action step launched externally from the LLM based on the LLM's response:

Inputs (workflow calls LLM) -> Analysis (inputs + analysis prompt + instructions to return a payload in a specific format) -> Action (check payload and take action)

It doesn't solve prompt injection, but it mitigates some of the risk while still leveraging AI to make the business move cheaper and faster. (Quality being the third factor, alongside Time and Cost, for measuring a task, and the one I am not speaking to.)
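A minimal sketch of that split, with everything here (the `call_llm` stub, the `ALLOWED_ACTIONS` set, the payload shape) invented for illustration. The point is structural: the LLM only produces a payload, and plain code validates it before anything happens.

```python
import json

# Hypothetical action whitelist; the Action step rejects anything outside it.
ALLOWED_ACTIONS = {"approve_refund", "escalate", "ignore"}

ANALYSIS_PROMPT = (
    "Analyze the input and respond ONLY with JSON of the form "
    '{"action": "approve_refund" | "escalate" | "ignore", "reason": "<short string>"}.'
)

def call_llm(prompt: str) -> str:
    """Placeholder for your real model client (OpenAI, Anthropic, a local model, ...)."""
    return '{"action": "escalate", "reason": "canned response for this sketch"}'

def analyze(inputs: str) -> dict:
    # Analysis: the LLM sees inputs + prompt and returns a structured payload.
    return json.loads(call_llm(ANALYSIS_PROMPT + "\n\nInput:\n" + inputs))

def act(payload: dict) -> None:
    # Action: launched by the workflow, not by the LLM; the payload is checked first.
    action = payload.get("action")
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"unexpected action from LLM: {action!r}")
    if action == "approve_refund":
        pass  # call the refund API here, with its own limits and auditing
    elif action == "escalate":
        print("escalating to a human:", payload.get("reason"))
    # "ignore" deliberately does nothing

act(analyze("customer says the parcel arrived damaged"))
```

An injected document can still skew the decision, but it can only pick from the whitelisted actions, which is the mitigation being described.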
pprotas 3 days ago
The HITL is needed to pin the accountability on an employee you can fire | ||||||||||||||
mercer 3 days ago
Wouldn't that still add a lot of value, where the person in the loop (sadly, usually) becomes little more than the verifier, but can process a lot more work?

Anecdotally, what I'm hearing is that this is pretty much how LLMs are helping programmers get more done, including the work being less enjoyable because it involves more verification and rubber-stamping. For the business owner it doesn't matter that the nature of the work has changed, as long as that one person can get more work done.

Even worse, the business owner probably doesn't care as much about the quality of the resulting work, as long as it works. I'm reminded of how much of my work has involved implementing solutions that took less careful thought, where even when I outlined the drawbacks, the owner wanted it done the quick way. And when the problems arose, often quite a bit later, it was as if they had never made that initial decision in the first place.

For my personal tinkering, I've all but defaulted to having the LLMs return suggested actions at logical points in the workflow, leaving me to confirm or cancel whatever they came up with. This definitely still makes the process faster, just not as magically automatic.
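A minimal sketch of that confirm-or-cancel gate (the helper name and the rename example are made up here); the model only proposes, and nothing runs until a human explicitly approves:

```python
import os

def propose_and_confirm(description: str, run) -> None:
    """Show an LLM-suggested action and execute it only on explicit approval."""
    print(f"Suggested action: {description}")
    if input("Apply? [y/N] ").strip().lower() == "y":
        run()
    else:
        print("Skipped.")

# Example: the model suggested renaming a file as part of a refactor.
propose_and_confirm(
    "rename notes.txt -> notes.md",
    lambda: os.rename("notes.txt", "notes.md"),
)
```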
beefnugs a day ago
Need agentic pipelines where you ratchet between only 2 of the circles at a time, I imagine. Oops: more cost, more tokens, more effort, more complexity. Oops, AI sucks.
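A minimal sketch of what that ratcheting might look like, with the three circles modeled as capability flags (all names invented here): each pipeline stage is granted at most two of the three, so no single stage ever holds the full trifecta.

```python
from enum import Flag, auto

class Cap(Flag):
    PRIVATE_DATA = auto()      # access to private data
    UNTRUSTED_INPUT = auto()   # exposure to untrusted content
    EXTERNAL_COMMS = auto()    # ability to communicate externally

def run_stage(name: str, caps: Cap, fn):
    # The ratchet: refuse to run any stage that holds all three circles at once.
    if caps == Cap.PRIVATE_DATA | Cap.UNTRUSTED_INPUT | Cap.EXTERNAL_COMMS:
        raise PermissionError(f"stage {name!r} holds the full trifecta")
    return fn()

# Stage 1: ingest the untrusted content (no private data, no outbound actions).
summary = run_stage("ingest", Cap.UNTRUSTED_INPUT, lambda: "...summarized page...")

# Stage 2: combine with private data and act, but never re-read untrusted input.
run_stage("act", Cap.PRIVATE_DATA | Cap.EXTERNAL_COMMS,
          lambda: print("drafting reply from:", summary))
```

Hence the gripe: every task now needs at least two stages, two model calls, and glue code in between.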