| ▲ | o11c 14 hours ago |
| That's a whole lot of twisting to avoid admitting "it usually doesn't work, and even when it does work, it's usually not cost-effective even at the heavily-subsidized prices." Or maybe it's more about refusing to admit that executives are out of touch with concrete reality and are just blindly chasing trends instead. |
|
| ▲ | somenameforme 14 hours ago | parent | next [-] |
| Another issue, one that you alluded to: imagine AI actually was reliable, and a company lays off e.g. 30% of its employees to replace them with AI systems. How long before they get a letter from AI Inc: 'Hi, we're increasing prices 500x in order to enhance our offerings and improve customer satisfaction. Enjoy.' The entire MO of big tech is trying to create a monopoly via the software equivalent of dumping (which is illegal in the US [1], but not for software, because reasons), dominating market share, and then jacking effective pricing wayyyyy up. And in this case big tech companies are dumping absurd amounts of money into LLMs, getting absurd funding, and then providing the models for free or next to free. To anyone with any foresight whatsoever, it's akin to a rusting van outside an elementary school, with blacked-out windows and 'FREE ICE CREAM' scrawled on it in paint. [1] - https://en.wikipedia.org/wiki/Dumping_(pricing_policy)#Unite... |
| |
| ▲ | crinkly 14 hours ago | parent | next [-] | | Yep. There's also the problem that the AI vendor reinforces bias into their product’s training in ways that serve the vendor. Literally every shitty corporate behaviour is amplified by this technology fad. | |
| ▲ | Opocio 14 hours ago | parent | prev | next [-] | | It's quite easy to switch LLM APIs, so you can just transition to a competitor. Competition between AI providers is quite fierce; I don't see them setting up a cartel anytime soon. And open-source models are not that far behind commercial ones. | | |
| ▲ | gmag 13 hours ago | parent [-] | | It's easy to switch the LLM API, but in practice a switch requires a strong eval suite, so that the behavior of whatever is built on top stays within acceptable limits. It's really the implications of the LLM switch that matter, not the API change itself. |
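A minimal sketch of the kind of eval suite gmag describes, in Python. The provider stubs, prompts, and pass criteria below are all hypothetical placeholders; a real suite would wire in the actual vendor SDKs and cover far more behaviors:

    import re

    # Provider adapters: each is just a callable prompt -> str. The bodies
    # here are canned placeholders; swap in real SDK calls for the incumbent
    # vendor and the candidate you're considering switching to.
    def incumbent(prompt: str) -> str:
        return "The year is 1998. Yes."  # placeholder response

    def candidate(prompt: str) -> str:
        return "1998 -- and yes, 7 is prime."  # placeholder response

    # Each case pairs a prompt with a programmatic check, so "acceptable
    # behavior" is defined in code rather than judged by eyeballing outputs.
    EVAL_CASES = [
        ("Extract the year from: 'Founded in 1998 in Austin.'",
         lambda out: "1998" in out),
        ("Answer yes or no: is 7 a prime number?",
         lambda out: re.search(r"\byes\b", out.lower()) is not None),
    ]

    def pass_rate(provider) -> float:
        hits = sum(1 for prompt, check in EVAL_CASES if check(provider(prompt)))
        return hits / len(EVAL_CASES)

    base, cand = pass_rate(incumbent), pass_rate(candidate)
    print(f"incumbent: {base:.0%}  candidate: {cand:.0%}")
    assert cand >= base, "candidate regresses on the eval suite; don't switch"

The point is that "acceptable limits" becomes an executable gate: re-run the suite with a new vendor's adapter before migrating anything.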
| |
| ▲ | rwmj 13 hours ago | parent | prev [-] | | You can run a reasonable LLM on a gaming machine (cost under $5000), and that's only going to get better and better with time. The irony here is that VCs are pouring money into businesses with almost no moat at all. | | |
| ▲ | acdha 8 hours ago | parent [-] | | I think the moat is small but it's not gone yet: in these discussions I almost never see the people reporting productivity wins say they're running local models, and the standard response to problems is that you should switch to a different vendor, which suggests that there are enough differences in the training and tooling to make switching non-trivial. This will be especially true for all of the non-tech specialist companies adopting these tools: if you bought an AI tool to optimize your logistics and the vendor pulls a Broadcom on your contract renewal, they're probably going to do so only after you've designed your business around their architecture, and they'll price it below where it'd make sense to hire a team and spend years building something in-house. Having laid off a lot of in-house knowledge directly helps them, too. |
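For scale on rwmj's point about local models, a sketch of how little code a local setup needs. This assumes an Ollama server running on its default port with a model already pulled; the model name is illustrative:

    import json
    import urllib.request

    # Ollama serves a local HTTP API on port 11434 by default; /api/generate
    # with "stream": False returns a single JSON object with a "response" field.
    def ask_local(prompt: str, model: str = "llama3") -> str:
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=json.dumps({"model": model, "prompt": prompt,
                             "stream": False}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    print(ask_local("In one sentence: why do buyers worry about vendor lock-in?"))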
|
|
|
| ▲ | paulluuk 14 hours ago | parent | prev | next [-] |
It really depends on the use-case. I currently work in the video streaming industry, and my team has been building production-quality code for 2 years now. Here are some things that are going really well:

* Determining what is happening in a scene/video
* Translating subtitles into very specific local slang
* Summarizing scripts
* Estimating how well a new show will do with a given audience
* Filling gaps in the metadata provided by publishers, such as genres, topics, themes
* Finding the most "viral" or "interesting" moments in a video (combo of LLM and "traditional" ML)

There's much more, but I think the general trend here is not "chatbots" or "fixing code"; it's automating stuff that we used armies of people to do. And as we progress, we find that we can do better than humans at a fraction of the cost. |
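As an illustration of the metadata gap-filling item above, a hedged sketch. The OpenAI client, model name, and genre taxonomy are assumptions for illustration; the comment doesn't say which stack the team actually uses, and any chat-completion API would do:

    import json
    from openai import OpenAI  # assumed provider; any chat-completion API works

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical controlled vocabulary; a real pipeline would constrain the
    # model to the publisher's own taxonomy so filled gaps stay consistent.
    GENRES = ["drama", "comedy", "thriller", "documentary", "reality"]

    def fill_metadata(title: str, synopsis: str) -> dict:
        """Ask the model to infer missing genres/topics/themes as JSON."""
        prompt = (
            f"Title: {title}\nSynopsis: {synopsis}\n\n"
            f"Pick genres only from {GENRES}. Return JSON with keys "
            "'genres', 'topics', and 'themes' (lists of short strings)."
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model
            messages=[{"role": "user", "content": prompt}],
            response_format={"type": "json_object"},
        )
        return json.loads(resp.choices[0].message.content)

    print(fill_metadata("Harbor Lights", "A retired detective returns to her hometown."))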
| |
| ▲ | poisonborz 14 hours ago | parent | next [-] | | Based on what you listed I would seriously consider the broader societal value of your work. | | |
| ▲ | paulluuk 14 hours ago | parent [-] | | I know this is just a casual comment, but this is a genuine concern I have every day. However, I've been working for 10 years now, and working in music/video streaming has been the most "societal value" I've had thus far. I've worked at Apple, in finance, in consumer goods... everywhere is just terrible. Music/video streaming has been the closest thing I could find to actually being valuable, or at least not making the world worse. I'd love to work at an NGO or something, but I'm honestly not that eager to lose 70% of my salary to do so. And I can't work in pure research because I don't have a PhD. What industry do you work in, if you don't mind me asking? | | |
| ▲ | poisonborz 11 hours ago | parent [-] | | It's not a casual comment, in the sense that I have a genuine concern every day that the current world we live in is enabled by ordinary employees. I'm not saying everyone should solve world hunger, "NGO or bust" - and yes, the job market is tough - but especially for software engineers, there are literally hundreds of thousands of companies requiring software work that do net good, or at least only "plausible" harm, and pay an above-average salary. Also, I only read the comment above; it's you who can judge what you contribute to and what you find fair. I just wish there were a mandatory "code of conduct" for engineers. The way AI is reshaping the field, I could imagine it becoming more like medicine or law, where this would be possible. I work in IoT telemetrics. The company is rumored to partake in military contracts at some future point; that would be my exit then. |
|
| |
| ▲ | bootsmann 9 hours ago | parent | prev [-] | | > Estimating how well a new show will do with a given audience
Can you elaborate on this point? | |
| ▲ | paulluuk 6 hours ago | parent [-] | | I work in R&D, and although I haven't signed an NDA, I think it's best if I don't elaborate too much. But basically we have a large dataset of shows and movies for which we already know how well they did with specific audiences, but we didn't know why exactly. So we use LLMs to reverse-engineer a large amount of metadata about these shows, and then use traditional ML to train a model that learns which features appeal to which audiences. Most stuff is obvious: nobody needs to tell you what segment of society is drawn to soap operas or action movies, for example. But there's plenty of room for nuance in some areas. This doesn't guarantee that it actually becomes a successful movie or show, though. That's a different project and, frankly, a lot harder. Things like which actors, which writers, which directors, which studio are involved, and how much budget the show has... it feels more like Moneyball but with more intangible variables. |
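A sketch of the two-stage shape paulluuk describes, with LLM-derived categorical metadata feeding a traditional model. The features, appeal scores, and choice of scikit-learn ridge regression are all illustrative assumptions, not the team's actual setup:

    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline

    # Stage 1 output (assumed): per-title metadata reverse-engineered by an
    # LLM, along the lines of the fill_metadata() sketch earlier.
    shows = [
        {"genre": "thriller", "theme": "revenge", "pacing": "fast"},
        {"genre": "reality", "theme": "romance", "pacing": "slow"},
        {"genre": "drama", "theme": "family", "pacing": "slow"},
    ]
    # Observed appeal of each title for one audience segment (hypothetical).
    appeal = [0.62, 0.48, 0.71]

    # Stage 2: plain regression from one-hot categorical features to appeal.
    model = make_pipeline(DictVectorizer(sparse=False), Ridge(alpha=1.0))
    model.fit(shows, appeal)

    new_show = {"genre": "thriller", "theme": "family", "pacing": "fast"}
    print(f"predicted appeal: {model.predict([new_show])[0]:.2f}")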
|
|
|
| ▲ | designerarvid 14 hours ago | parent | prev [-] |
People aren’t necessarily out of touch; they may be optimising for something other than real value: appearing impressive, for instance. |