daxfohl 2 hours ago

I think it all boils down to, which is higher risk, using AI too much, or using AI too little?

Right now I see the former as hugely risky: hallucinated bugs, being coaxed into dead-end architectures, security concerns, not being familiar with the code when a bug shows up in production, less sense of ownership, less hands-on learning, etc. This is true both at the personal level and at the business level. (And it's astounding that CEOs haven't made that connection yet.)

With the latter, you may be less productive than optimal, but might the hands-on training and fundamental understanding of the codebase make up for it in the long run?

Additionally, I personally find my best ideas often come when I'm knee-deep in some codebase, hitting some weird edge case that doesn't fit, the kind of thing that would probably never come up if I were just reviewing an already-completed PR.

mprast an hour ago | parent | next [-]

It's very interesting to me how many people presume that if you don't learn how to vibecode now you'll never ever be able to catch up. If the models are constantly getting better, won't these tools be easier to use a year from now? Will model improvements not obviate all the byzantine prompting strategies we have to use today?

dns_snek 40 minutes ago | parent | next [-]

I think so, which is why I think the risk of pretty much ignoring the space is close to zero. If I happen to be catastrophically wrong about everything, then any AI skills I would've learned today will be completely useless 5 years from now anyway, just like skills from the early days of ChatGPT are completely useless today.

koolba 41 minutes ago | parent | prev | next [-]

And if you can never catch up, how would someone new to the game ever be a meaningful player?

eddythompson80 21 minutes ago | parent [-]

If you’ve never driven a Model T, how would you ever drive a Corolla? If you never did Angular 1, how would you ever learn React? If you never used UNIX 4, you’ll be behind in Linux today. /s

gerdesj 20 minutes ago | parent | prev | next [-]

Wait around five years and then prompt: "Vibe me Windows" and then install your smart new double glazed floor. There is definitely something useful happening in LLM land but it is not and will never be AGI.

Oooh, let me dive in with an analogy:

Screwdriver.

Metal screws needed inventing first - they augment or replace dowels, nails, glue, "joints" (think tenon/dovetail etc), nuts and bolts and many more fixings. Early screws were simply slotted. PH (Phillips cross head) and PZ (Pozidriv) came rather later.

All of these require quite a lot of wrist effort. If you have ever driven a few hundred screws in a session then you know it is quite an effort.

Drill driver.

I'm not talking about one of those electric screw driver thingies but say a De W or Maq or whatever jobbies. They will have a Li-ion battery and have a chuck capable of holding something like a 10mm shank, round or hex. It'll have around 15 torque settings, two or three speed settings, drill and hammer drill settings. Usually you have two - one to drill and one to drive. I have one that will seriously wrench your wrist if you allow it to. You need to know how to use your legs or whatever to block the handle from spinning when the torque gets a bit much.

...

You can use a modern drill driver to deploy a small screw (PZ1, 2.5mm) to a PZ3 20+cm effort. It can also drill with a long auger bit or hammer drill up to around 20mm and 400mm deep. All jolly exciting.

I still use an "old school" screwdriver or twenty. There are times when you need to feel the screw (without deploying an inadvertent double entendre).

I do find the new search engines very useful. I will always put up with some mild hallucinations to avoid social.microsoft and nerd.linux.bollocks and the like.

wiseowise 26 minutes ago | parent | prev | next [-]

FOMO is hell of a drug.

wavemode an hour ago | parent | prev | next [-]

> I think it all boils down to, which is higher risk, using AI too much, or using AI too little?

This framing is exactly how lots of people in the industry are thinking about AI right now, but I think it's wrong.

The way to adopt new science, new technology, new anything really, has always been that you validate it for small use cases, then expand usage from there. Test on mice, test in clinical trials, then go to market. There's no need to speculate about "too much" or "too little" usage. The right amount of usage is knowable - it's the amount which you've validated will actually work for your use case, in your industry, for your product and business.

The fact that AI discourse has devolved into a Pascal's Wager is saddening to see. And when people frame it this way in earnest, 100% of the time they're trying to sell me something.

paulryanrogers 31 minutes ago | parent | next [-]

Those of us working from the bottom, looking up, do tend to take the clinical progressive approach. Our focus is on the next ticket.

My theory is that executives must be so focused on the future that they develop a (hopefully) rational FOMO. After all, missing some industry shaking phenomenon could mean death. If that FOMO is justified then they've saved the company. If it's not, then maybe the budget suffers but the company survives. Unless of course they bet too hard on a fad, and the company may go down in flames or be eclipsed by competitors.

Ideally there is a healthy tension between future looking bets and on-the-ground performance of new tools, techniques, etc.

krackers 11 minutes ago | parent [-]

>must be so focused on the future

They're focused on the short-term future, not the long-term future. So if everyone else adopts AI but you don't, and the stock price suffers because of that (merely because the "perception" that your company has fallen behind affects market value), then that is an issue. There's no true long-term planning at play, otherwise you wouldn't have obvious copycat behavior amongst CEOs, such as pandemic overhiring.

dns_snek 22 minutes ago | parent | prev [-]

> Test on mice, test in clinical trials, then go to market.

You're neglecting the cost of testing and validation. This is the part that's quite famous for being extremely expensive and a major barrier to developing new therapies.

softwaredoug an hour ago | parent | prev | next [-]

Even within AI coding, how people use this varies wildly, from people trying to one-shot entire apps to people who barely go beyond tab completion.

When people talk about this stuff they usually mean very different techniques. And last month's way of doing it goes away in favor of a new technique.

I think the best you can do now is try lots of different new ways of working and keep an open mind.

daxfohl an hour ago | parent [-]

Or just wait for things to settle. As fast as the field is moving, staying ahead of the game is probably high investment with little return, as the things you spend a ton of time honing today may be obsolete tomorrow, or simply built into existing products with much lower learning cost.

Note, if staying on the bleeding edge is what excites you, by all means do. I'm just saying for people who don't feel that urge, there's probably no harm just waiting for stuff to standardize and slow down. Either approach is fine so long as you're pragmatic about it.

mgraczyk an hour ago | parent | prev | next [-]

Even if you believe that many are too far on one side now, you have to account for the fact that AI will get better rapidly. If you're not using it now you may end up lacking preparation when it becomes more valuable

daxfohl an hour ago | parent | next [-]

But as it gets better, it'll also get easier, be built into existing products you already use, etc. So I wouldn't worry too much about that aspect. If you enjoy tinkering, or really want to dive deep into fundamentals, that's one thing, but I wouldn't worry too much about "learning to use some tool", as fast as things are changing.

jaapbadlands 39 minutes ago | parent | next [-]

The baseline, out-of-the-box tool level will lift, but so will the more obscure, esoteric high-level tools that the better programmers learn to control, further separating themselves in ability from the people who wait for the lowest common denominator to do their job for them.

daxfohl 4 minutes ago | parent [-]

Maybe. But so far ime most of the esoteric tools in the AI space are esoteric because they're not very good. When something gets good, it's quickly commoditized.

Until coding systems are truly at human-replacement level, I think I'd always prefer to hire an engineer with strong manual coding skills over one who specializes in vibe coding. It's far easier to teach AI tools to a good coder than to teach coding discipline to a vibe coder.

mgraczyk an hour ago | parent | prev [-]

I don't think so. That's a good point but the capability has been outpacing people's ability to use it for a while and that will continue.

Put another way, the ability to use AI became an important factor in overall software engineering ability this year, and as the year goes on the gap between the best and worst users of AI will widen faster because the models will outpace the harnesses.

eddythompson80 11 minutes ago | parent | next [-]

That’s the comical understanding being pushed by management at software companies, yes - by people who never actually use the tools themselves, only the concept of them. It’s the same AGI nonsense, but dumbed down to something they think they can control.

daxfohl 27 minutes ago | parent | prev [-]

I mean, right now "bleeding edge" is an autonomous agents system that spends a million dollars making an unbelievably bad browser prototype in a week. Very high effort, and the results are gibberish. By the time these sorts of things are actually reliable, they'll be productized single-click installer apps on your network server, with a simple web interface to manage them.

If you just mean, "hey you should learn to use the latest version of Claude Code", sure.

mgraczyk 24 minutes ago | parent [-]

I mean that you should stay up to date and practiced on how to get the most out of models. Using harnesses like Claude code sure, but also knowing their strengths and weaknesses so you can learn when and how to delegate and take on more scope

q3k 30 minutes ago | parent | prev [-]

Why should I worry about lacking preparation in the future? Why can't I just learn this as any other skill at any other time?

mgraczyk 26 minutes ago | parent [-]

You'll be behind by a few months at least, and that could be anywhere from slightly harmful to devastating to your career.

q3k 20 minutes ago | parent [-]

How so? Why would a couple of months break in employment (worst case, if I truly become unemployable for some reason until I learn the tools) harm or destroy my career?

zozbot234 30 minutes ago | parent | prev | next [-]

> I think it all boils down to, which is higher risk, using AI too much, or using AI too little?

It's both. It's using the AI too much to code, and too little to write detailed plans of what you're going to code. The planning stage is by far the easiest to fix if the AI goes off track (it's just writing some notes in plain English) so there is a slot-machine-like intermittent reinforcement to it ("will it get everything right with one shot?") but it's quite benign by comparison with trying to audit and fix slop code.

runarberg an hour ago | parent | prev | next [-]

This is basically Pascal’s wager. However, unlike the original Pascal’s wager, yours actually seems sound.

Another similar wager I remember is: “What if climate change is a hoax, and we invested in all this clean energy infrastructure for nothing?”

daxfohl an hour ago | parent [-]

Interesting analogy, but I'd say it's kind of the opposite. In the two you mentioned, the cost of inaction is extremely high, so they reach one conclusion, whereas here the argument is that the cost of inaction is pretty low, and reaches the opposite conclusion.

_se 2 hours ago | parent | prev [-]

Very reasonable take. The fact that this is being downvoted really shows how poor HN's collective critical thinking has become. Silicon Valley is cannibalizing itself and it's pretty funny to watch from the outside with a clear head.

daxfohl 2 hours ago | parent [-]

I think it's like the California gold rush. Anybody and their brother can go out and dig, but the real money is in selling the shovels.

koolba 39 minutes ago | parent | next [-]

More like they’re leasing away deeply discounted steam shovels at below market rates and somehow expecting to turn a profit doing so.

The real profits are the companies selling them chips, fiber, and power.

fao_ an hour ago | parent | prev [-]

I don't think this is the case, because the AI companies are all just shuffling the same 300 million or trillion around to each other.