wiz21c 7 hours ago

> This indicates that AI outputs are perceived as useful and valuable by many of this year’s survey respondents, despite a lack of complete trust in them.

Or the respondents have a hard time admitting AI can replace them :-)

I'm a bit cynical, but sometimes when I use Claude, it is downright frightening how good it is. Having coded for a lot of years, I'm sometimes a bit scared that my craft can, at times, be so easily replaced... Sure, it's not building all my code, it fails, etc., but it's a bit disturbing to see that something you have trained at for a very long time can be done by a machine... Maybe I'm just feeling a glimpse of what others felt during the industrial revolution :-)

pluc 6 hours ago | parent | next [-]

Straight code writing has never been the problem - it's understanding said code that is. When you rely on AI and AI creates something, it might increase productivity immediately, but once you need to debug something that depends on that piece of code, it will nullify that gain because you have no idea where to look. That's just one aspect of this false equivalence.

polotics 7 hours ago | parent | prev | next [-]

Well, when I use a power screwdriver I am always impressed by how much more quickly I can finish easy tasks, too. I have also occasionally busted a screw or three, which I then had to drill out...

cogman10 6 hours ago | parent | prev | next [-]

So long as you view AI as a sometimes competent liar, then it can be useful.

I've found AI is pretty good at dumb boilerplate stuff. I was able to whip out prototypes, client interfaces, tests, etc pretty fast with AI.

However, when I've asked AI "Identify performance problems or bugs in this code" I find it'll just make up nonsense. Particularly if there aren't problems with the code.

And it makes sense that this is the case. AI has been trained on a mountain of boilerplate and a thimble of performance and bug optimizations.

fluoridation 2 hours ago | parent [-]

>AI has been trained on a mountain of boilerplate and a thimble of performance and bug optimizations.

That's not exactly it, I think. If you look through a repository's entire history, the deltas for the bug fixes and optimizations will be there. However, even a human who's not intimately familiar with the code and the problem will have a hard time understanding why a change fixes the bug, even if they understand the bug conceptually. That's because source code encodes neither developer intent, nor specification, nor real design goals. Which was the cause of the bug?

* A developer who understood the problem and its solution, but made a typo or a similar miscommunication between brain and fingers.

* A developer who understood the problem but failed to implement the algorithm that solves it.

* An algorithm was used that doesn't solve the problem.

* The algorithm solves the problem as specified, but the specification is misaligned with the expectations of the users.

* Everything used to be correct, but an environment change made it so the correct solution stopped being correct.

In an ideal world, all of this information could be somehow encoded in the history. In reality this is a huge amount of information that would take a lot of effort to condense. It's not that it wouldn't have value even for real humans, it's just that it would be such a deluge of information that it would be incomprehensible.
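A contrived sketch of the point above (hypothetical function, not from the thread): the "after" version is the fix, but nothing in the delta itself says which of the listed causes produced the bug — typo, misimplemented algorithm, wrong algorithm, or misaligned spec.

```python
# Before the fix: windows near the end of the list are short, yet still
# divided by the full window size, silently deflating the last values.
def moving_average_buggy(xs, window):
    return [sum(xs[i:i + window]) / window for i in range(len(xs))]

# After the fix: only full windows are averaged. The delta is one range
# bound -- but was the old behavior a slip of the fingers, a misunderstood
# algorithm, or a spec that users later rejected? The code can't say.
def moving_average_fixed(xs, window):
    return [sum(xs[i:i + window]) / window
            for i in range(len(xs) - window + 1)]
```

The commit message might explain it; the source itself never will.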

hu3 6 hours ago | parent | prev | next [-]

I also find it great for prompts like:

"this function should do X, spot inconsistencies, potential issues and bugs"

It's eye opening sometimes.

zwieback 5 hours ago | parent | prev | next [-]

I find AI coding assistants useful when I'm using a new library or language feature I'm not super familiar with.

When I have AI generate code using features I'm very familiar with I can see that it's okay but not premium code.

So it makes sense that I feel more productive but also a little skeptical.

apt-apt-apt-apt 5 hours ago | parent | prev | next [-]

When I see the fabulous images generated by AI, I can't help but wonder how artists feel.

Anyone got a pulse on what the art community thinks?

fluoridation 4 hours ago | parent [-]

Generally speaking, they don't like their public posts being scraped to train AIs, and they don't like accounts that post AI output without disclosing it.

surgical_fire 7 hours ago | parent | prev | next [-]

In a report from Google, which is heavily invested in AI becoming the future, I actually expect the respondents to sound more positive about AI than they actually are.

Much like in person I pretend to think AI is much more powerful and inevitable than I actually think it is. Professionally it makes very little sense to be truthful. Sincerity won't pay the bills.

bluefirebrand 2 hours ago | parent [-]

Everyone lying to their bosses about how useful AI is has placed us all in a prisoner's dilemma where we all have to lie or we get replaced

If only people could be genuinely critical without worrying they will be fired

surgical_fire 33 minutes ago | parent [-]

I agree. I also don't make the rules.

And to be honest, I don't really care. It is a very comfortable position to be in. Allow me to explain:

I genuinely believe AI poses no threat to my employment. The only medium-term threat I see is the very likely economic slowdown in the coming years.

Meanwhile, I am happy to do this silly dance while companies waste money and resources on what I see as a dead-end, wasteful technology.

I am not here to make anything better.

bitwize 5 hours ago | parent | prev | next [-]

We may see a return to the days when businesses relied on systems analysts, not programmers, to design their information systems—except now, the programming work will be left to the machines.

bopbopbop7 6 hours ago | parent | prev [-]

Or you aren’t as good as you think you are :-)

Almost every person I've worked with who is impressed by AI-generated code has been a low performer who can't spot the simplest bugs in the code. Usually the same developers who blindly copy-pasted from Stack Overflow before.