wavemode 2 hours ago

> I think it all boils down to, which is higher risk, using AI too much, or using AI too little?

This framing is exactly how lots of people in the industry are thinking about AI right now, but I think it's wrong.

The way to adopt new science, new technology, new anything really, has always been that you validate it for small use cases, then expand usage from there. Test on mice, test in clinical trials, then go to market. There's no need to speculate about "too much" or "too little" usage. The right amount of usage is knowable - it's the amount which you've validated will actually work for your use case, in your industry, for your product and business.

The fact that AI discourse has devolved into a Pascal's Wager is saddening to see. And when people frame it this way in earnest, 100% of the time they're trying to sell me something.

paulryanrogers 2 hours ago | parent | next [-]

Those of us working from the bottom, looking up, do tend to take the clinical progressive approach. Our focus is on the next ticket.

My theory is that executives must be so focused on the future that they develop a (hopefully) rational FOMO. After all, missing some industry-shaking phenomenon could mean death. If that FOMO is justified, then they've saved the company. If it's not, then maybe the budget suffers but the company survives. Unless, of course, they bet too hard on a fad, in which case the company may go down in flames or be eclipsed by competitors.

Ideally there is a healthy tension between future-looking bets and the on-the-ground performance of new tools, techniques, etc.

krackers 2 hours ago | parent [-]

>must be so focused on the future

They're focused on the short-term future, not the long-term future. So if everyone else adopts AI but you don't, and the stock price suffers because of that (merely because the "perception" that your company has fallen behind affects market value), then that is an issue. There's no true long-term planning at play, otherwise you wouldn't see obvious copycat behavior among CEOs, such as the pandemic overhiring.

bigstrat2003 37 minutes ago | parent | prev | next [-]

To be fair, that's what I have done. I try to use AI every now and then for small, easy things. It isn't yet reliable for those things, and always makes mistakes I have to clean up. Therefore I'm not going to trust it with anything more complicated yet.

dns_snek 2 hours ago | parent | prev [-]

> Test on mice, test in clinical trials, then go to market.

You're neglecting the cost of testing and validation. That's the part that is notoriously expensive and a major barrier to developing new therapies.