elbear 8 hours ago

Those tests were written by people. That's why they were confident that what the LLM implemented was correct.

jackschultz an hour ago | parent [-]

This is meta about how important context is.

People see "LLMs" and "tons of tests" in the same sentence and assume it shows how models love writing pointless tests, rather than realizing the tests are standard, human-written tests demonstrating that the code the model wrote is validated by a currently trusted source.

It shows that writing comments for humans to read with the right context is _very_ similar to how we need to interact with LLMs. And if we fail to communicate with humans, clearly we're going to fail with models too.