jennyholzer2 | 11 hours ago
"Most companies are efficiency-obsessed. Hence, they also expect AI solutions to increase “productivity”, i.e., efficiency, to a superhuman level. If a human is meant to monitor the output of the AI and intervene if needed, this requires that the human needs to comprehend what the AI solution produced at superhuman speed – otherwise we are down to human speed. This presents a quandary that can only be solved if we enable the human to comprehend the AI output at superhuman speed (compared to producing the same output by traditional means)." | ||||||||||||||||||||||||||||||||||||||
everdrive | 11 hours ago
> "Most companies are efficiency-obsessed. Hence, they also expect AI solutions to increase “productivity” So this is true on paper, but I can tell you that companies don't broadly do a very good job of being efficient. What they do a good job of is doing the bare minimum in a number of situations, generating fragile, messy, annoying, or tech-debt-ridden systems / processes / etc. Companies regularly claim to make objective and efficient decisions, but often those decisions amount to little more than doing a half-assed job because it will save money and will probably be good enough. The "probably" does a lot of work here, and then "probably" is not good enough there's a lot of blame shifting / politics / bullshitting. The idea that companies are efficient is generally not very realistic except when it comes to things with real, measurable costs, such as manufacturing. | ||||||||||||||||||||||||||||||||||||||
TheOtherHobbes | 11 hours ago
Not necessarily. It depends on whether the process is deterministic and repeatable.

If an AI generates a process more quickly than a human, and the process can be run deterministically, and the outputs are testable, then the process can run without direct human supervision after initial testing - which is how most automated processes work. The testing should happen anyway, so any speed increase in process generation is a productivity gain.

Human monitoring only matters if the AI is continually improvising new solutions to dynamic problems and the solutions are significantly wrong/unreliable. Which is a management/analysis problem, and no different in principle from managing a team.

The key difference in practice is that on a team you can hire and fire people, you can intervene to change goals and culture, and you can rearrange roles. With an agentic workflow you can change the prompts, use different models, and redesign the flow. But your choices are more constrained.
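A minimal Python sketch of that gating idea (all names here are hypothetical, just to illustrate the workflow): an AI-generated routine is only promoted to unsupervised use after it passes a fixed, deterministic set of test cases.

    # Hypothetical sketch: gate an AI-generated routine behind
    # deterministic tests before letting it run unsupervised.

    def ai_generated_normalize(record: dict) -> dict:
        # Stand-in for a routine produced by an AI solution:
        # normalizes dict keys by trimming and lowercasing them.
        return {k.strip().lower(): v for k, v in record.items()}

    # Fixed input/output pairs keep the check deterministic and
    # repeatable across runs.
    GOLDEN_CASES = [
        ({" Name ": "Ada"}, {"name": "Ada"}),
        ({"CITY": "Paris"}, {"city": "Paris"}),
    ]

    def passes_golden_tests(fn) -> bool:
        return all(fn(inp) == want for inp, want in GOLDEN_CASES)

    if passes_golden_tests(ai_generated_normalize):
        print("promote to the unsupervised pipeline")
    else:
        print("keep a human in the loop")

Once a routine passes, reruns are cheap, so the human cost is paid once at review time rather than on every output.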
singpolyma3 | 9 hours ago
Superhuman can mean different things, though. Most software developers in industry are very, very slow, so superhuman for them may still be less than what is humanly achievable for someone else. It's not a binary situation.
sokoloff | 9 hours ago
Being down to human speed when reviewing code that already passes tests could still be a massive increase over the pace of 12 months ago.