kvdveer 10 hours ago
Two things are holding back current LLM-style AI from being of value here:

* Latency. LLM responses are measured on the order of 1000s of milliseconds, where this project targets 10s of milliseconds. That's off by almost two orders of magnitude.

* Determinism. LLMs are inherently non-deterministic. Even with temperature=0, slight variations of the input lead to major changes in output. You really don't want your DB to be non-deterministic, ever.
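To make the latency gap concrete (the specific numbers are illustrative, taken from the "1000s of milliseconds" vs "10s of milliseconds" figures above):

```python
import math

llm_latency_ms = 1_000   # "order of 1000s of milliseconds"
target_latency_ms = 10   # "10s of milliseconds"

# Gap expressed in orders of magnitude (powers of ten)
gap = math.log10(llm_latency_ms / target_latency_ms)
print(f"{gap:.1f} orders of magnitude")  # 2.0 orders of magnitude
```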
qeternity 8 hours ago
> LLMs are inherently non-deterministic.

This isn't true, and certainly not inherently so. Changes to input leading to changes in output do not violate determinism.
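A minimal sketch of this distinction (hypothetical logits, not a real model): temperature=0 decoding degenerates to argmax, which is a pure function of its input. Identical input always yields the same token; a perturbed input may yield a different token, which is sensitivity, not non-determinism.

```python
def greedy_next_token(logits):
    # temperature=0 sampling reduces to argmax: a pure function of the logits
    return max(range(len(logits)), key=lambda i: logits[i])

logits = [0.1, 2.5, -1.0, 0.7]
# Same input, same output, every time: deterministic.
assert all(greedy_next_token(logits) == 1 for _ in range(100))

# A slight change to the input can change the output -- that is input
# sensitivity, not a violation of determinism.
perturbed = [0.1, -0.5, -1.0, 0.7]
print(greedy_next_token(logits), greedy_next_token(perturbed))  # 1 3
```

(In practice, non-greedy sampling, batching effects, and floating-point non-associativity on GPUs are separate sources of run-to-run variation; they are implementation details, not inherent properties of the model.)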
simonask 10 hours ago
> 1000s of milliseconds

Better known as "seconds"...
olau 10 hours ago
The suggestion was not to use an LLM to compile the expression, but to use an LLM to build the compiler.