atleastoptimal 5 days ago

If a program calls an API like

    search_engine.get_search_results(query, length, order)

it doesn't "care" about the algorithm that produced the list of results, only that the result fits the contract the schema describes. There are thousands of ways the engine could have been implemented behind that schema, all returning relevance-ranked results from a web-crawler-sourced database.
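
To make that concrete, here's a minimal sketch of the same idea in Python. The names (SearchEngine, CrawlerBackedEngine, CachedEngine) are illustrative, not a real library: the caller depends only on the method signature and the shape of the result, never on which implementation sits behind it.

    from typing import Protocol

    class SearchEngine(Protocol):
        def get_search_results(self, query: str, length: int, order: str) -> list[str]: ...

    class CrawlerBackedEngine:
        def get_search_results(self, query: str, length: int, order: str) -> list[str]:
            # pretend this hits a web-crawler-sourced index
            return [f"crawled:{query}:{i}" for i in range(length)]

    class CachedEngine:
        def get_search_results(self, query: str, length: int, order: str) -> list[str]:
            # a completely different implementation that satisfies the same contract
            return [f"cached:{query}:{i}" for i in range(length)]

    def top_result(engine: SearchEngine, query: str) -> str:
        # works with any implementation that fits the schema
        return engine.get_search_results(query, length=10, order="relevance")[0]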

In the same way, if I prompt an LLM "design a schema with [list of requirements] that works in [code context and API calls]", there are thousands of ways it could produce that code, but within a margin of error a high-quality LLM should produce code that fits those requirements.
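
A sketch of what treating the prompt like an API call with a contract might look like. Everything here is hypothetical: llm_client and its complete() method stand in for whatever model API you actually use, and the requirements are just placeholders.

    REQUIREMENTS = [
        "table `users` with primary key `id`",
        "foreign key from `orders.user_id` to `users.id`",
    ]

    def request_schema(llm_client, code_context: str) -> str:
        prompt = (
            "Design a SQL schema with these requirements:\n"
            + "\n".join(f"- {r}" for r in REQUIREMENTS)
            + f"\n\nIt must work in this context:\n{code_context}"
        )
        # the returned text is one of thousands of possible implementations;
        # the caller only cares that it satisfies the requirements
        return llm_client.complete(prompt)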

Of course, the difference is that there is a stochastic element to LLM-generated code. Still, it is useful to think of LLMs this way, because it lets you leverage their probability of being correct: they aren't as precise as calling an API, but you can be just as explicit about how the abstraction is used.
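
One way to leverage that probability of correctness, since the output is stochastic, is to validate each candidate against the requirements and retry on failure. This is only a sketch under assumed helpers: validate_schema() is a hypothetical checker (e.g. parse the SQL or run the surrounding code's tests), and generate() wraps the model call from the sketch above.

    def generate_until_valid(generate, validate_schema, max_attempts: int = 3) -> str:
        for _ in range(max_attempts):
            candidate = generate()
            if validate_schema(candidate):
                # fits the contract, regardless of how it was produced
                return candidate
        raise RuntimeError("no candidate satisfied the requirements")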