| ▲ | nemomarx 3 days ago |
| It also seems like skills with particular tech (prompt engineering, harnesses, mixture-of-experts setups) don't always pay off when there's a sea change. Hard to predict what you'll want in a few years anyway, right? |
|
| ▲ | Aurornis 3 days ago | parent | next [-] |
| > (prompt engineering, harnesses, mixture of experts set ups) |
| Prompt engineering as a specific skill got blown out of proportion on LinkedIn and podcasts. The core idea that you need to write decent prompts if you want decent output is true, but the idea that it was an expert-level skill that only some people could master was always a lie. Most of it is common sense: put your content into the prompt and don't expect the LLM to read your mind. |
| A harness isn't really a skill you learn. It's how you get the LLM to interact with something. It's also not as hard as the LinkedIn posts imply. |
| Mixture of Experts isn't a skill you learn at all. It's a model architecture, not something you do. At most it's worth understanding if you're picking models to run on your own hardware, but for everything else you don't even need to think about the phrase. |
| I think all of this influencer and podcast hype is giving the wrong impression about how hard and complicated LLMs are. The people doing the best with them aren't studying all of these "skills", they're just using the tools and learning what they're capable of. |
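For readers who haven't met the term: "Mixture of Experts" is routing that happens inside the model itself, not anything a user configures. A toy sketch of the idea (illustrative only; the experts here are plain linear functions and all the weights and sizes are made up, not taken from any real model):

```python
import math
import random

# Toy Mixture-of-Experts layer: a router scores each "expert" and only
# the top-k experts actually run on the input. The point of the
# architecture is sparsity: large total capacity, small per-token compute.
random.seed(0)
NUM_EXPERTS, TOP_K, DIM = 4, 2, 3

# Each expert is just a random weight vector here; the router has its own.
experts = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(NUM_EXPERTS)]
router = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(NUM_EXPERTS)]

def moe_forward(x):
    # Router produces one score per expert, softmax-normalized into probabilities.
    scores = [sum(w * xi for w, xi in zip(r, x)) for r in router]
    exps = [math.exp(s) for s in scores]
    probs = [e / sum(exps) for e in exps]
    # Only the top-k experts are evaluated; the rest are skipped entirely.
    top = sorted(range(NUM_EXPERTS), key=lambda i: probs[i], reverse=True)[:TOP_K]
    # Output is the probability-weighted sum of the chosen experts' outputs.
    return sum(probs[i] * sum(w * xi for w, xi in zip(experts[i], x)) for i in top)

print(moe_forward([1.0, 0.5, -0.5]))
```

All of this is baked into the trained model, which is why it's a thing you might read about when choosing a model, not a skill you practice.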
| |
| ▲ | Izkata a day ago | parent [-] | | > It’s also not as hard as the LinkedIn posts imply. Keeping in mind the LinkedIn posters/audience (marketers/recruiters), it probably was quite hard for most of them. |
|
|
| ▲ | bdcravens 3 days ago | parent | prev | next [-] |
| In my experience (and this may be confirmation bias on my part), casting a wide net and trying out new tech, while you maintain depth in the area relevant at the time, makes you ready for what's coming, even when you don't know what that may be. |
| |
▲ | zephen 3 days ago | parent [-] | | Curiosity is good and helps with your personal development, for sure. OTOH, TFA specifically said: > I feel the same way about the current crop of AI tools. I've tried a bunch of them. Some are good. Most are a bit shit. Few are useful to me as they are now. I'm utterly content to wait until their hype has been realised. So it's not like he's being deliberately ignorant; rather, he's simply deliberately slow-walking his journey.
|
|
| ▲ | bonesss 3 days ago | parent | prev | next [-] |
| Past the sea change: half the reason those prompt and harness solutions seem to work is LLM lies; when you ask it to test itself, it gasses you up about how it works and how effective it is, defaulting to 'yes'. If you test specific features of those solutions over time you see very inconsistent results, lots of lies, and seemingly stable solutions that one-shot well but suddenly change behaviour due to tweaks on the backend. Tuesday's awesome agent stack that finally works is loading totally different on Thursday, and debugging is "oh, sorry, it's better now" even when it isn't. Compression, lies, and external hosting are a bad combo. Sometimes I imagine a world where computers executed programs the same way each time. You could write some code once and run it a whole calendar month later with a predictable outcome. What a dream; we can hope, I guess. |
| |
▲ | skydhash 3 days ago | parent [-] | | Some people are doing toy projects and praising these tools, while others are testing them in real-world situations and not finding them that useful. But the former are labelling the latter luddites and telling them they will be left behind. | | |
▲ | abustamam 3 days ago | parent [-] | | As someone at the intersection of both (I've built a lot of vibe-coded toy projects and lead a vibe coding initiative at work), they're both right and both wrong. For a single-dev team, vibe coding is great. Write specs, write plans, write code. I know what the project wants and needs because I'm the target market. At work, I haven't written more than a few lines of code since December. But I work with other people vibe coding this same project. Lots of changing requirements and rapid iteration. Lots of mistakes were made by everyone involved. Lots of tech debt. Sure, we built something in 2 mos that would have otherwise taken us 6 mos, but now I'm fixing the mess that we caused. I think the critical difference is the attitude towards our situation. My boss said to fix the AI harness so we can vibe code more confidently and freely. But other bosses might cut their losses and ban vibe coding. Who's right? I dunno. In both cases I'd just do what my boss wants me to do. But it's not that I don't want to be left behind. I don't want to lose my job. There's a difference. | | |
| ▲ | patrick451 2 days ago | parent [-] | | > Sure, we built something in 2 mos that would have otherwise taken us 6 mos, but now I'm fixing the mess that we caused. You didn't actually build it in 2 months. | | |
| ▲ | abustamam 2 days ago | parent [-] | | Even if it takes me a month to get us to fix (likely a week tbh), then it took us 3 months to build. | | |
| ▲ | herewulf 2 days ago | parent [-] | | A mere 2x productivity improvement sounds like something you could achieve by introducing new tools that are predictable (i.e.: Not AI). | | |
| ▲ | abustamam a day ago | parent [-] | | Perhaps. 2x is still 2x. And new tools still need to be vetted and learned. It's strange that the goalpost seems to have moved from "AI is net negative to productivity" to "only 2x improvement isn't worth it" |
|
|
|
|
|
|
|
| ▲ | dakolli 3 days ago | parent | prev | next [-] |
| All of these occult skills that we literally can't explain why they work are akin to gamblers' superstitions: if I write something this way, it works. It's like a gambler who thinks the order in which they push the buttons on the slot machine makes a difference. Kind of weird that the tools also incorporate addictive gambling games' UX design. They're literally allowing you to multiply your outlay: 3x, 4x, 5x (run it 5 times for a better shot at a working prompt). You're being played by billionaires who are selling you a slot machine as a thinking machine. |
| |
▲ | zephen 3 days ago | parent [-] | | > All of these occulted skills, that we literally can't explain why they work are akin to gamblers superstitions. Yes, it's hard to see how, at this moment in time, "Anybody can write code with an LLM" is so different from "Anybody can make money in the stock market." The underlying mechanisms are completely different, of course, and the putative goal of the LLM purveyors is to make it so that anybody really can write code with an LLM. I'm typically a nay-sayer and a perfectionist, but many not-so-great things become and stay popular, and this may fall into that category. > Kind of weird tools also incorporate addictive gambling game's UX design. It's unclear whether it started out this way, but since it's obviously headed that way, it is certainly prudent to ask if some of this is by design. It would presumably be more worrisome if there were only a single vendor, but even with multiple vendors, it might be lucrative for them to design things so that "true insider knowledge" of how to make good prompts is a sought-after skill. | | |
▲ | oro44 2 days ago | parent [-] | | Broadly speaking, LLMs are destined to fail. Why? Because all the folks involved have created a technology in search of a problem to solve. That never, ever works. Steve Jobs of all people left this piece of wisdom behind. It's amazing how few actually apply it. The internet was never this: its origins go back to the need to be able to transmit data (DARPA). And that is what we still do now... | |
▲ | zephen 2 days ago | parent | next [-] | | There are a few examples of technologies that only found their application later, such as the glue in Post-it notes. And to be fair, Steve Jobs was a master of taking things that had been invented elsewhere and making them work well enough to foster demand. But your point stands. Who made the most money, Xerox PARC or Apple? | |
| ▲ | gnabgib 2 days ago | parent | prev [-] | | Can you stop using them? https://news.ycombinator.com/item?id=47462767 | | |
▲ | oro44 2 days ago | parent | next [-] | | I don't use them. | |
▲ | zephen 2 days ago | parent [-] | | The only thing worse than the overuse of AIs is the ever-present handwringing and finger-pointing of people who wrongly believe they are infallible AI detectors. | |
| |
| ▲ | oro44 a day ago | parent | prev [-] | | [flagged] |
|
|
|
|
|
| ▲ | dw_arthur 3 days ago | parent | prev | next [-] |
| Even two or three years ago I had ideas for projects but I could see the models were not ergonomic for my uses. I decided to wait for better models and sure enough the agentic models showed up which are much easier to use. Next thing I'm waiting on is building a new server for a powerful locally hosted LLM in 5 years. No need to go through the headaches and cost of doing it now with models that may not be powerful enough. |
|
| ▲ | stiiv 3 days ago | parent | prev [-] |
| Agreed! Investing lightly at this stage seems smart if your time/attention budget is tight. |