| ▲ | ZeroConcerns 13 hours ago |
| Well, the elephant in the room here is that the generic AI product being promised, i.e. "you get into your car in the morning, dictate your requirements on the drive to the office for one of the apps that will guarantee your retirement, and arrive to find it completely done, rolled out to all the app stores, and already making money", isn't happening anytime soon, if ever, yet pretty much everyone is acting like it's already here. Can "AI" in its current form deliver value? Sure, and it absolutely does, but it's more in the form of "several hours saved per FTE per week" than "several FTEs saved per week". The way I currently frame it: I have a Claude 1/2-way-to-the-Max subscription that costs me 90 Euros a month. And it's absolutely worth it! Just today, it helped me debug and enhance my iSCSI target in novel ways. But is it worth double the price? Not sure yet... |
|
| ▲ | madeofpalk 13 hours ago | parent | next [-] |
| The other part to this is that LLMs as a technology definitely have some value as a foundation for building features/products other than chat bots. But it's unclear whether that value can sustain current valuations. Is a better de-noising algorithm in Adobe Lightroom worth $500 billion? |
| |
| ▲ | ZeroConcerns 13 hours ago | parent | next [-] | | > Is a better de-noising algorithm in Adobe Lightroom worth $500 billion? No. But: a tool that allows me to de-noise some images just by uploading a few samples and describing what I want to change just might be? Even more so, possibly, if I can also upload a desired result and let the "AI" work on things until it matches that? But also: cool, that saves me several hours per week! Not: oh, wow, that means I can get rid of this entire department... | |
| ▲ | ansgri 13 hours ago | parent | prev [-] | | A bit off-topic, but denoise in LR is like 3 years behind purpose-built products like Topaz, so it's a bad example. They only added any ML-based denoise, what, like a year ago? |
|
|
| ▲ | ebiester 10 hours ago | parent | prev | next [-] |
| > Can "AI" in its current form deliver value? Sure, and it absolutely does but it's more in the form of "several hours saved per FTE per week" than "several FTEs saved per week". Yes but... First, what we're seeing with coding is that it is just exposing the next bottleneck quickly. The bottlenecks are always things that don't lend themselves to LLMs yet. Second, that still can mean 4 hours a week for 20-50 bucks. At US white collar wages, that might mean 8 people are needed rather than 9. In profit centers that's more budget for advancing goals. At cost centers, though, that's a reduction in headcount. |
|
| ▲ | vorticalbox 13 hours ago | parent | prev | next [-] |
| I use mongo at work and an LLM helped me find index issues. Feeding it the explain output, the query, and the current indexes, it can quickly tell what the query was doing and why it was slow. I saved a bunch of time as I didn't have to read large amounts of json from explain to see what was going on. |
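| A minimal sketch of the kind of triage being described, done by hand rather than by an LLM. The field names (`queryPlanner.winningPlan`, `COLLSCAN`, `totalDocsExamined`, `nReturned`) come from MongoDB's `explain("executionStats")` output; the 100:1 examined-to-returned cutoff is an arbitrary illustrative threshold, and the sample document is made up: |

```python
# Hypothetical helper: scan a MongoDB explain() document for signs of a
# missing or ineffective index (a COLLSCAN stage, or far more documents
# examined than returned).
def index_warnings(explain_doc):
    warnings = []

    def walk(stage):
        # A COLLSCAN stage means the query fell back to a full collection scan.
        if stage.get("stage") == "COLLSCAN":
            warnings.append("full collection scan (COLLSCAN) - no index used")
        if "inputStage" in stage:
            walk(stage["inputStage"])
        for child in stage.get("inputStages", []):
            walk(child)

    walk(explain_doc["queryPlanner"]["winningPlan"])

    stats = explain_doc.get("executionStats", {})
    examined = stats.get("totalDocsExamined", 0)
    returned = stats.get("nReturned", 0)
    # Arbitrary 100:1 threshold: examining vastly more docs than you return
    # usually points at a missing or unselective index.
    if returned and examined / returned > 100:
        warnings.append(f"examined {examined} docs to return {returned}")
    return warnings

# Made-up explain output for illustration.
sample = {
    "queryPlanner": {"winningPlan": {"stage": "COLLSCAN"}},
    "executionStats": {"totalDocsExamined": 50000, "nReturned": 10},
}
print(index_warnings(sample))
```

| The point of the comment above is that pasting the raw explain JSON into an LLM skips even writing this much. |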
|
| ▲ | adastra22 13 hours ago | parent | prev | next [-] |
| Agentic tools are already delivering an increase in productivity equivalent to many FTEs. I say this as someone in the position of having to hire coders and needing far fewer than we otherwise would have. |
| |
| ▲ | ZeroConcerns 13 hours ago | parent [-] | | Well, yeah, as they say on Wikipedia: {{Citation Needed}} Can AI-as-it-currently-is save FTEs? Sure: but, again, there's a template for that: {{How Many}} -- 1% of your org chart? 10%? In my case it's around 0.5% right now. Or, to reframe it a bit: can AI pay Sam A's salary? Sure! His stock options? Doubtful. His future plans? Heck nah! | | |
| ▲ | adastra22 11 hours ago | parent [-] | | 400-800%. That is to say, I am hiring 4x-8x fewer developers for the same total output (measured in burn down progress, not AI-biased metrics like kLOC). |
|
|
|
| ▲ | pixl97 13 hours ago | parent | prev [-] |
| Skeptics always like to toss in 'if ever' as some form of enlightenment, as if they are aware of some fundamental limitation of the universe only they are privy to. |
| |
| ▲ | falseprofit 12 hours ago | parent | next [-] | | Let’s say there are three options: {soon, later, not at all}. Ruling out only one to arrive at {later, not at all} implies less knowledge than ruling out two and asserting {later}. Awareness of a fundamental limitation would eliminate possibilities to just {not at all}, and the phrasing would be “never”, rather than “not soon, if ever”. | | |
| ▲ | pixl97 9 hours ago | parent [-] | | But we know that no fundamental limitation on intelligence exists: nature has already created it, with animal and eventually human intelligence, via a random walk. So 'AI will never exist' is lazy magical thinking. That intelligence can be self-reinforcing is a good reason why AI will arrive much sooner rather than later. | | |
| ▲ | falseprofit 6 hours ago | parent [-] | | I actually agree with your premise of ruling out “not at all”. I was just responding to your characterization of “if ever”. I’m not quite as certain as you are, though. Just because a technology is possible does not mean it is inevitable. |
|
| |
| ▲ | mzajc 13 hours ago | parent | prev | next [-] | | Of the universe, perhaps, but humans certainly are a limiting factor here. Assuming we get this technology someday, why would one buy your software when the mere description of its functionality allows one to recreate it effortlessly? | | |
| ▲ | pixl97 9 hours ago | parent [-] | | >humans certainly are a limiting factor here. Completely disagree. Intelligence is self-reinforcing. The smarter we get as humans, the more likely it is that we'll create new sources of intelligence. |
| |
| ▲ | madeofpalk 9 hours ago | parent | prev [-] | | Theorising something will exist before the heat death of the universe isn’t really interesting. |
|