atleastoptimal · 18 hours ago
AI is being framed as the future because it is the future. If you can't see the writing on the wall, you either have your head in the sand or are seeking out information that confirms your beliefs.

I've thought a lot about where this belief comes from: the general Hacker News skepticism towards AI, and especially towards big tech's promotion of and alignment with it in recent years. I believe it's due to fear of irrelevance and loss of control. The type I've seen most passionately dismissive of the utility of LLMs are veteran, highly "tech-for-tech's-sake" software/hardware people, far closer to Wozniak than to Jobs on the Steve spectrum. These types typically earned their stripes working in narrow intersections of various mission-critical domains: open-source software, systems development, low-level languages, and so on. To these people, a generally capable all-purpose oracle, capable of massive data ingestion and effortless inference, represents a death knell to their relative status and value. AI's likely trajectory heralds a world where intelligence and technical ability are commodified and ubiquitous, robbing a sense of purpose and security from those whose purpose and security depend on their position in a rare echelon of intellect.

This increasingly likely future is made all the more infuriating by the annoyances of the current reality of AI. The fact that AI is so inescapable today, despite the glaring security-affecting flaws it causes, the slop it propagates through the information commons, and the particularly irksome brand of overconfidence it emboldens in the VC world, is preemptive insult to injury in the lead-up to a reality where AI will nevertheless control everything. I can't believe the people I've seen on this site aren't smart enough to see the forest for the trees on this matter.
My Occam's razor conclusion is that most are smart enough; they are just emotionally invested in anticipating a future where the grand promises of AI will fizzle out and it will be back to business as usual. For many, this is a salve necessary to remain reasonably sane.
nevertoolate · 15 hours ago
Your point is: AI is the future, and for some it is bad news, so they dismiss the possibility of that future. My question is: who will control this AI? The folks who can't do the work without AI, or the ones who can? Who would you hire?
desumeku · 13 hours ago
> This increasingly likely future is made all the more infuriating by the annoyances of the current reality of AI. The fact that AI is so presently inescapable despite how many glaring security-affecting flaws it causes, how much it propagates slop in the information commons, and how effectively it emboldens a particularly irksome brand of overconfidence in the VC world is preemptive insult to injury in the lead-up to a reality where AI will nevertheless control everything.

So basically: "yes, I know AI is actually completely and totally useless and a net negative on the world, just like you say it is, but I can imagine that things will suddenly turn into the sci-fi ultraverse for no reason, so therefore you're wrong."
cheevly · 17 hours ago
Well said and spot on.