vidarh 17 hours ago

Proper coding agents can already be set up with hooks or other mechanisms that force linting and tests to run and prevent the LLM from bypassing them. Adding extra checks to the workflow does a lot to improve quality. Use the tools properly and, while you still need to take some care, these issues are rapidly diminishing, independently of improvements to the models themselves.
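The gating vidarh describes can be sketched without assuming any particular agent's hook API: a wrapper script runs the project's lint and test commands and exits non-zero unless every one passes. The check commands below are placeholders (real ones would invoke your linter and test suite), and `run_checks` is a hypothetical name for illustration.

```python
import subprocess
import sys

# Hypothetical gate: run each check command in order; a change is only
# accepted if every command exits 0. Pre-commit hooks, CI gates, and
# agent tool-use hooks all follow this same pattern.
CHECKS = [
    ["python", "-c", "print('lint ok')"],   # stand-in for a linter run
    ["python", "-c", "print('tests ok')"],  # stand-in for the test suite
]

def run_checks(checks=CHECKS):
    """Return True only if every check command exits successfully."""
    for cmd in checks:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"check failed: {cmd}\n{result.stderr}", file=sys.stderr)
            return False
    return True

if __name__ == "__main__":
    # A non-zero exit is what actually blocks the commit or edit.
    sys.exit(0 if run_checks() else 1)
```

Wired in as a git pre-commit hook (or an agent's post-edit hook, where supported), the non-zero exit is the part the model cannot talk its way around.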

scubbo 8 hours ago | parent

> Use the tools properly

> (from upthread) I was being sold a "self driving car" equivalent where you didn't even need a steering wheel for this thing, but I've slowly learned that I need to treat it like automatic cruise control with a little bit of lane switching.

This is, I think, the core of a lot of people's frustrations with the narrative around AI tooling. It gets hyped up as this magnificent wondrous miraculous _intelligence_ that works right-out-of-the-box; then when people use it and (correctly!) identify that that's not the case, they get told that it's their own fault for holding it wrong. So which is it - a miracle that "just works", or a tool that people need to learn to use correctly? You (impersonal "you", here, not you-`vidarh`) don't get to claim the former and then retreat to the latter. If this was just presented as a good useful tool to have in your toolbelt, without all the hype and marketing, I think a lot of folks (who've already been jaded by the scamminess of Web3 and NFTs and Crypto in recent memory) would be a lot less hostile.

TeMPOraL 6 hours ago | parent | next

How about:

1) Unbounded claims of miraculous intelligence don't come from people actually using it;

2) The LLMs really are a "miraculous intelligence that works right out-of-the-box" for simple cases of a very large class of problems that previously was not trivial (or possible) to solve with computers.

3) Once you move past simple cases, they require an increasing amount of expertise and hand-holding to get good results. Most of the "holding it wrong" responses happen around the limits of what current LLMs can reliably do.

4) But still, that they can do any of that at all is close to a miraculous wonder in itself - and they keep getting better.

scubbo 2 hours ago | parent

With the exception of 1) being "No True Scotsman"-ish, this is all very fair - and if the technology was presented with this kind of grounded and realistic evaluation, there'd be a lot less hostility (IMO)!

vidarh 8 hours ago | parent | prev

The problem with this argument is that it is usually not the same people making the different arguments.