hansmayer 14 hours ago

> The duo’s answer was to build deterministic infrastructure that serves as a trust and verification layer for AI.

On the one hand, very encouraging to see plain old deterministic infra w/o using slop machines.

On the other hand, this is a recognition that LLMs are just additional friction in the system that we would be better off without in the first place!

bjelkeman-again 14 hours ago | parent | next [-]

Just friction? What do you mean? What would you do instead?

hansmayer 14 hours ago | parent [-]

Well... You have a 'tool' that you cannot trust. Present everywhere due to the unholy alliance between the LLM companies and the exhilarated office workers who "use" them for "workflows". Now they break stuff. Sounds like friction to me, or do you value the LLMs as a net positive? Why should I do something to fix their problems instead?

SpicyLemonZest 14 hours ago | parent | prev [-]

You're misunderstanding something about the problem space they're describing. The deterministic infra is for an underlying "execution layer"; the LLMs are providing utility by figuring out how to express English language queries in terms of the primitives of that verifiable layer. That way, you can describe your results deterministically even though the process of arriving at them was not necessarily deterministic.
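To make the split concrete, here's a minimal Python sketch of that pattern, assuming a toy set of primitives (the `PRIMITIVES` table, the plan format, and the example query are all illustrative inventions, not anything from the article):

```python
# Hypothetical sketch: the LLM's only job is to translate an English query
# into a plan built from deterministic primitives; the execution layer then
# runs that plan reproducibly, so the result can be audited and replayed
# no matter how the plan was produced.

PRIMITIVES = {
    "filter_gt": lambda rows, field, value: [r for r in rows if r[field] > value],
    "sum_field": lambda rows, field: sum(r[field] for r in rows),
}

def execute(plan, rows):
    """Deterministically run a plan: an ordered list of (primitive, args) steps."""
    result = rows
    for op, args in plan:
        result = PRIMITIVES[op](result, *args)
    return result

# Pretend the LLM turned "total amount of orders over 100" into this plan.
# The plan itself is inspectable, diffable, and replayable; the LLM never
# touches the data directly.
plan = [("filter_gt", ("amount", 100)), ("sum_field", ("amount",))]

orders = [{"amount": 50}, {"amount": 150}, {"amount": 200}]
print(execute(plan, orders))  # 350
```

The point is that trust attaches to the small, verifiable executor and the emitted plan, not to the model that wrote the plan.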

hansmayer 14 hours ago | parent [-]

Oh. I may have misread indeed. So it's like, still LLM bullshit, but with really strongly worded .md instruction files begging them to please be correct?

SpicyLemonZest 13 hours ago | parent [-]

No. The point of the verification layer is that you don't have to beg the LLM to please be correct.
