Our_Benefactors 5 hours ago

I can’t take this article seriously, and neither should you. Being anti-AI/anti-LLM is solidly in the Luddite camp; there’s really no more debate to be had. Every serious inquiry shows productivity gains by using AI.

It’s anyone’s prerogative to continue to advocate for the horse and buggy over the automobile, but most people won’t bother to take the discussion seriously.

snickerbockers 5 hours ago | parent | next [-]

> Being anti-AI/anti-LLM is solidly in the Luddite camp; there’s really no more debate to be had. Every serious inquiry shows productivity gains by using AI.

These two sentences appear to be at odds with one another.

Our_Benefactors 5 hours ago | parent [-]

The data showed LLMs are better. This put the debate to rest. Now we are post-debate.

JohnFen 5 hours ago | parent | next [-]

What data are you talking about? Why do you value it above the data showing the opposite?

snickerbockers 3 hours ago | parent [-]

It's superior data because it supports his expectations. His expectations are right because they are based on superior data. Checkmate Luddites.

Our_Benefactors an hour ago | parent [-]

Meanwhile, you have furnished zero data that supports your claims. Ho hum.

lmf4lol 5 hours ago | parent | prev | next [-]

Give me one serious peer-reviewed study with proper controls, please.

I'll wait.

Our_Benefactors 4 hours ago | parent [-]

Go ahead and move the goalposts now... This took about 2 minutes of research to support the conclusions I know to be true. You can waste as much time as you like in academia attempting to prove any point, while normal people make real contributions using LLMs.

### An Empirical Evaluation of Using Large Language Models for Automated Unit Test Generation

We evaluate TESTPILOT using OpenAI’s gpt-3.5-turbo LLM on 25 npm packages with a total of 1,684 API functions. The generated tests achieve a median statement coverage of 70.2% and branch coverage of 52.8%. In contrast, the state-of-the-art feedback-directed JavaScript test generation technique, Nessie, achieves only 51.3% statement coverage and 25.6% branch coverage.

*Link:* [An Empirical Evaluation of Using Large Language Models for Automated Unit Test Generation (arXiv)](https://arxiv.org/abs/2302.06527)
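For anyone unsure what "LLM-based unit test generation" means in practice, here is a minimal sketch of the general idea only, not TESTPILOT's actual pipeline: hand the model a function's signature and documentation, ask it for a unit test, then execute the candidate and keep it only if it passes. The endpoint usage is real, but the model name, prompt, and helper function are illustrative assumptions.

```typescript
// Minimal sketch of LLM-driven test generation (illustrative, not TESTPILOT's pipeline).
// Assumes OPENAI_API_KEY is set in the environment; model and prompt are examples.

async function generateTest(fnSignature: string, fnDocs: string): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [
        {
          role: "user",
          content:
            `Write a single Jest unit test for this API function.\n` +
            `Signature: ${fnSignature}\nDocs: ${fnDocs}\n` +
            `Return only the test code.`,
        },
      ],
    }),
  });
  const data = await response.json();
  // Candidate test source; a real pipeline would run it and discard failures.
  return data.choices[0].message.content;
}
```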

---

### Field Experiment – CodeFuse (12-week deployment)

- Productivity (measured by the number of lines of code produced) increased by 55% for the group using the LLM. Approximately one third of this increase was directly attributable to code generated by the LLM.
- *Link:* [CodeFuse: Generative AI for Code Productivity in the Workplace (BIS Working Paper 1208)](https://www.bis.org/publ/work1208.htm)

footy 2 hours ago | parent | next [-]

> This took about 2 minutes of research to support the conclusions I know to be true

This is a terrible way to do research!

Our_Benefactors an hour ago | parent [-]

The point is that the information is readily available, and rather than actually adding to the discussion they chose to crow “source?”. It’s very lame.

capyba 2 hours ago | parent | prev | next [-]

“Productivity (measured by the number of lines of code produced) increased”

The LLMs had better have written more code; they’re text generation machines!

In what world does this study prove that the LLM actually accomplished anything useful?

Our_Benefactors an hour ago | parent [-]

As expected, the goalposts are being moved.

LOC does have a correlation with productivity, as much as devs hate to acknowledge it. I don’t care that you can provide counterexamples to this, or even if the AI on average takes more LOC to accomplish the same task - it still results in more productivity overall because it arrives at the result faster.

capyba 15 minutes ago | parent [-]

Nothing about this is moving goalposts - you and/or the person(s) conducting this study are the ones being misleading!

If you want to measure time to complete a complex task, then measure that. LOC is an intermediate measure. How much more productive is "55% more lines of code"?

I can write a bunch of garbage code really fast with a lot of bugs that doesn't work, or I can write a better program that works properly, slower. Under your framework, the former must be classified as 'better' - but why?
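To make that concrete, here is a hypothetical pair of snippets (not from either study): the verbose version produces several times the lines of code and contains a bug, while the one-liner is correct. By a pure LOC metric, the buggy version counts as "more productive".

```typescript
// Hypothetical illustration: more lines of code is not more useful work.

// Verbose version: ~7 lines, with an off-by-one bug (it skips the last element).
function sumVerbose(values: number[]): number {
  let total = 0;
  for (let i = 0; i < values.length - 1; i++) { // BUG: should be i < values.length
    total = total + values[i];
  }
  return total;
}

// Concise version: one line, correct.
const sum = (values: number[]): number => values.reduce((acc, v) => acc + v, 0);

console.log(sumVerbose([1, 2, 3])); // 3 (wrong)
console.log(sum([1, 2, 3]));        // 6 (correct)
```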

I read the study you reference and there is literally nothing in the study that talks about whether or not tasks were accomplished successfully.

It says:

* Junior devs benefited more than senior devs, then presents a disingenuous argument as to why that's the senior devs' fault (more experienced employees are worse than less experienced employees, who knew?!)
* 11% of the 55% increase in LOC was attributed directly to LLM output
* Makes absolutely no attempt to measure whether or not the extra code was beneficial

psunavy03 an hour ago | parent | prev [-]

If you are seriously linking "productivity" to "lines of code produced," that says all I need to know about your credibility.

Our_Benefactors an hour ago | parent [-]

Do you think LOC and program complexity are not correlated? You are arguing in bad faith.

snickerbockers 5 hours ago | parent | prev [-]

"the data"

xg15 4 hours ago | parent | prev | next [-]

You mean like the study that found a 20% productivity drop instead of gain?

logicprog 2 hours ago | parent [-]

It didn't really show that if you break down the data, and its methodology was pretty bad.

https://www.fightforthehuman.com/are-developers-slowed-down-...

AllegedAlec 4 hours ago | parent | prev | next [-]

> Being anti-AI/anti-LLM is solidly in the Luddite camp; there’s really no more debate to be had. Every serious inquiry shows productivity gains by using AI.

"Guys this debate is so stupid. Every serious inquiry shows productivity gains when we take away all senses, jack workers into the matrix and feed them a steady diet of speed intravenously. This put debate to rest. Now we are post-debate"

Something can increase productivity and still not be good.

steve_adams_86 3 hours ago | parent | prev | next [-]

The Luddites had some great ideas and were driven by a more sophisticated philosophy than people tend to give them credit for. I think their motivations are still applicable and worth considering today.

Refreeze5224 4 hours ago | parent | prev | next [-]

Then put me solidly in the Luddite camp. I think you should look into the history of the Luddites though. They were not against technology; they were against technology that destroyed jobs.

AI is about destroying working-class jobs so that corporations and the owning class can profit. It's not about writing code or summarizing articles. Those are just things workers can do with it. That's not what it's actually for. Its purpose is to reduce payroll costs for companies by replacing workers.

logicprog 2 hours ago | parent [-]

> They were not against technology; they were against technology that destroyed jobs.

They were not against technology; they were against technology that destroyed their jobs. If we had followed what they wanted, we'd still be in a semi-pre-industrial, artisanal economy, and worse off for it.

lkey 24 minutes ago | parent [-]

So you didn't read about them.

> In North West England, textile workers lacked these long-standing trade institutions and their letters composed an attempt to achieve recognition as a united body of tradespeople. As such, they were more likely to include petitions for governmental reforms, such as increased minimum wages and the cessation of child labor.

Sounds pretty modern, doesn't it? Unions, wages, no child exploitation...

And the government response?

> Mill and factory owners took to shooting protesters and eventually the movement was suppressed by legal and military force, which included execution and penal transportation of accused and convicted Luddites.

darvid 4 hours ago | parent | prev [-]

you're absolutely correct!