andy99 16 hours ago

Why more dangerous?

For an IC at a given level, AI tools will let them demonstrate their skills more quickly, i.e. reveal faster how smart or dumb they are.

For nontechnical managers, it will let them ask lots of dumb questions and float stupid ideas that need refuting much faster: "I had a conversation with ChatGPT and it says we can just ...". I don't consider that dangerous, that would be too flattering; it just makes the asymmetric bullshit (Brandolini's law) flow more efficiently.

Daedren 5 hours ago

I agree, but on the other hand, they may also ask fewer questions because they'll rely more on the AI, and they won't know when it's hallucinating until they've spent a considerable amount of time working from a wrong premise.

By Brandolini's law, you'll have to refute a lot more bullshit AND at a later stage.

turtleyacht 14 hours ago

How do you refute your manager? Not only are they now biased toward the AI's answer, but without an aesthetic of their own (experience), it will cost you double: once to try the AI solution, and again to try your version.

Suppose you turn out to be wrong versus the machine: they'll conclude you're the less consistent one, even though every problem context carries its own nuance.

Having to, in good faith, try both avenues every time sounds exhausting.

nomel 14 hours ago

You're assuming the manager will dictate the design/work, without feedback. Maybe I've been lucky, but I've never seen this in a technical setting, unless the manager was actually contributing to the technical work. The sorts of managers that micromanage and dictate like that don't last in technical environments, because they limit the teams technical ability to their own, and their ideas never make it past technical meetings/design reviews when others are involved.