▲ xixixao 2 days ago
Would a human perform very differently? A human who must obey orders (say, because they are paid to follow the prompt), with some "magnitude of work" enforced at each step? I'm not sure there's much to learn here, besides it's kinda fun, since no real human was forced to suffer through this exercise on the implementor side.
▲ wongarsu 2 days ago | parent | next [-]
> A human who must obey orders (like maybe they are paid to follow the prompt). With some "magnitude of work" enforced at each step

Which describes a lot of outsourced development. And we all know how well that works.
▲ nosianu 2 days ago | parent | prev | next [-]
> Would a human perform very differently?

How useful is the comparison with the worst human results? Those are often due to process rather than the people involved. You can improve processes and teach the humans; the junior will become a senior, in time.

If the processes and the company are bad, what's the point of using such a context to compare human and AI outputs? The context is too random and unpredictable. Even if you find out AI or some humans are better in such a bad context, what of it? The priority would be to improve the process first for the best gains.
▲ Yeask 2 days ago | parent | prev | next [-]
A human trained with 0.00000001% of the money OpenAI uses to train models will perform better. A human with no training will perform worse.
▲ Capricorn2481 2 days ago | parent | prev | next [-]
> Would a human perform very differently?

Yes.
▲ ebonnafoux 2 days ago | parent | prev | next [-]
I have seen codebases double their number of LoC after a "refactoring" done by humans, so I would say no.
▲ thatwasunusual 2 days ago | parent | prev [-]
No (human) developer would _add_ tests. ^/s