| ▲ | AJRF a day ago |
| This is a fairly negative comment, but I'm putting it out there to see if other people are feeling the same thing. If you told the median user of these services to set one of these up, I think they would (correctly) look at you like you had two heads. People want to log in to an account, tell the thing to do something, and have the system figure out the rest. MCP, Apps, Skills, Gems - all this stuff seems to be tackling the wrong problem. It reminds me of those YouTube channels that every 6 months say "This new programming language, framework, database, etc. is the killer one", make some todo app, then post the same video with a new language, completely forgetting they've done this already 6 times. There is a lot of surface-level iteration, but deep problems aren't being solved. Something in tech went very wrong at some point, and as soon as the money men flood the field we get announcements like this: push out the next release, get your promo, jump to the next shiny tech company, leaving nothing in your wake. |
|
| ▲ | zkmon a day ago | parent | next [-] |
| >> but deep problems aren't being solved There is no problem to solve. These days, solutions come in a package which includes the problems they intend to solve. You open the package. Now a problem jumps out of the package and starts staring at you. The solution comes out of the package and chases the problem around the room. You are now a technologically more progressed human. |
| |
| ▲ | AJRF a day ago | parent | next [-] | | This made me laugh a lot at the mental image. This was my experience with Xcode for sure. | |
▲ | TeMPOraL a day ago | parent | prev | next [-] | | This is where GP is wrong, I think. The problems are being solved, for now, because the businesses are still too excited about the whole AI thing to notice it's not in their interest, and to properly consolidate against it. And the problem being solved is: LLMs are universal interfaces. They can understand[0] what I mean, and they understand what those various "solutions" are, and they can map between them and myself on the fly. They abstract services away. The businesses will eventually remember that the whole point of marketing is to prevent exactly that from happening. -- [0] - To a degree, and conditioned on what one considers "understanding", but still - they're the first kind of computer system that can do this, becoming a viable alternative to asking a human. | |
▲ | 3abiton a day ago | parent | prev | next [-] | | I wish this were wrong, but it really isn't. To contrast, though, I would argue that this is part of evolution? We just want to do things faster or better. Smartphones solved no problems, but they ushered in the digital millennium. | | |
▲ | zkmon a day ago | parent [-] | | I think most new technologies helped to increase the expectations about what you can do. But overall, work did not get reduced. It didn't give me more free time to go fishing or bird-watching. On the other hand, I got an irreversible dependency on these things; otherwise I'm no longer compatible with World 2.0. |
| |
| ▲ | kvirani a day ago | parent | prev | next [-] | | Wow. I hadn't thought of it like that but it resonates | |
| ▲ | notepad0x90 a day ago | parent | prev | next [-] | | If you like creating solutions, why wait for a problem to show up? lol | |
| ▲ | nwhnwh a day ago | parent | prev [-] | | LOL, this is so true. |
|
|
| ▲ | darth_avocado a day ago | parent | prev | next [-] |
| > MCP, Apps, Skills, Gems - all this stuff seems to be tackling the wrong problem My fairly negative take on all of this has been that we're writing more docs, creating more APIs, and generally doing a lot of work to make the AI work - work that would've yielded the same results if we'd done it for people in the first place. Half my life has been spent trying to debug issues in complex systems that don't have those available. |
| |
▲ | XenophileJKO a day ago | parent | next [-] | | This is true, but the reason the economics have inverted is that we can pay these new "people" <$20 for the human equivalent of ~300 hours' worth of non-stop typing. | | |
▲ | throwaway127482 a day ago | parent | next [-] | | Correct. And we know the AI will read the docs, whereas people usually ignore 99% of docs, so writing them for humans sometimes feels like a bad use of time, unfortunately. | |
▲ | threecheese a day ago | parent [-] | | -ish; while you can be fairly certain it reads the docs, whether they've been used/synthesized is just about unknowable. The output usually looks great, but it's up to us to ensure its accuracy; we can make it better in aggregate by tweaking dials and switches. To mitigate this, we're asking AIs to create plans and todo lists first, which adds some rigor, but again we can't know if the lists were comprehensive or even correct. It does seem to make the output better. And if the human doesn't read the docs, they can be beaten! |
| |
▲ | darth_avocado a day ago | parent | prev [-] | | That is not true at all. The economics you're seeing right now are akin to Uber handing out $5 airport pickups to kill the taxi industry. And even then, the models are nowhere near as cheap as <$20 for ~300 hours of human work. | | |
▲ | XenophileJKO a day ago | parent [-] | | 40 words per minute is equivalent to about 50 tokens a minute. I just took GPT-5: output is $10 per million tokens. Let's double the cost to account for input tokens (which are $1.25 per million, or $0.125 if cached). To output 1 million tokens - $20 worth of text at that rate - a 40 wpm typist would need around 20K minutes, roughly 300 hours of non-stop effort. And that is just the typing. So even if you say the real price is $100, not $20, the value changes are still shattering to the previous economic dynamics. Then layer in that, as part of that value, the "typist" is also more skilled than the average working person in linguistics, software engineering, etc., and the value is further magnified. This is why I say we have only barely begun to see the disruption this will cause. Even if the models don't get better or cheaper, the potential impact is hard to grasp. |
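A quick back-of-envelope of that claim in Python (the pricing and typing-speed figures are the commenter's own assumptions, not published benchmarks):

```python
# Back-of-envelope: how long a human typist would need to produce
# the text that $20 of LLM output buys.
# Assumptions from the comment above: GPT-5 output at $10 per million
# tokens, doubled to $20/M to cover input tokens; a 40 wpm typist;
# ~1.25 tokens per word.

TOKENS_PER_WORD = 1.25
WORDS_PER_MINUTE = 40
EFFECTIVE_PRICE_PER_M_TOKENS = 20.0  # $/million tokens, input included
BUDGET = 20.0                        # dollars spent on the model

tokens_per_minute = WORDS_PER_MINUTE * TOKENS_PER_WORD        # ~50
tokens_bought = BUDGET / EFFECTIVE_PRICE_PER_M_TOKENS * 1e6   # 1M tokens
minutes = tokens_bought / tokens_per_minute
print(f"{minutes:,.0f} minutes ~= {minutes / 60:,.0f} hours of typing")
# -> 20,000 minutes ~= 333 hours
```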
|
| |
▲ | ip26 a day ago | parent | prev | next [-] | | If writing a good document and a strong API had to happen anyway, and now you can write just that and the rest takes care of itself, we may actually have progressed. Plus, the documents would then have to exist, instead of being skipped like they are today. The counter-argument is that code is the only way to concisely and unambiguously express how everything should work. | | |
| ▲ | joquarky a day ago | parent [-] | | Honestly, we needed something to cap extreme programming and swing the pendulum back to a balance between XP and waterfall again. |
| |
| ▲ | michael1999 a day ago | parent | prev | next [-] | | I am also struck by how much these kinds of context documents resemble normal developer documentation, but actually good. What was the barrier to creating these documents before? | | |
▲ | TeMPOraL a day ago | parent [-] | | They're much more useful when an LLM stands between them and users - because LLMs can (re)process much more of them, and much faster, than any human could ever hope to. One way (and one use case) of looking at it: LLM agents with access ("tools") to semantic search[0] are basically a search engine that understands the text it's searching through... and can then do a hundred different things with it. I found myself writing better notes at work for this very reason - because I know the LLM can see them, and can do anything from surfacing obscure insights from the past to writing code to solve an issue I documented earlier. It means notes are no longer write-only. -- [0] - Which, incidentally, is itself enabled by LLM embeddings. |
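A minimal sketch of the embedding-backed semantic search the footnote describes, assuming the OpenAI Python SDK and numpy; the model name and sample notes are illustrative, not from the thread:

```python
# Semantic search over personal notes via embeddings:
# embed each note once, embed the query, rank by cosine similarity.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(
        model="text-embedding-3-small", input=texts
    )
    return np.array([d.embedding for d in resp.data])

notes = [
    "2024-03-01: workaround for the flaky CI job is to pin node 18",
    "2024-05-12: prod outage traced to a DNS TTL misconfiguration",
]
note_vecs = embed(notes)

def search(query: str, k: int = 1) -> list[str]:
    q = embed([query])[0]
    # Cosine similarity: dot product of L2-normalized vectors.
    sims = note_vecs @ q / (
        np.linalg.norm(note_vecs, axis=1) * np.linalg.norm(q)
    )
    return [notes[i] for i in np.argsort(sims)[::-1][:k]]

print(search("why did production go down?"))
# -> the DNS note, despite sharing no keywords with the query
```

An agent with this as a tool gets exactly the "search engine that understands the text" behavior described above: matches come from meaning, not keyword overlap.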
| |
| ▲ | phlakaton a day ago | parent | prev [-] | | What if the great boon of AI is to get us to do all the thinking and writing we should have been doing all along? What if the next group of technologists to end up on top are... the technical writers? Haha, just kidding you tech bros, AI's still for you, and this time you'll get to shove the nerds into a locker for sure. ;-) | | |
▲ | quentindanjou a day ago | parent | next [-] | | It might not be that wrong. After all, programming languages are a way to communicate with the machine. In the same way we no longer write binary by hand, we might simply not have to do programming either. I think software architecture is likely to become what it should be: the most important part of every piece of software. | | |
▲ | skydhash a day ago | parent [-] | | You've got it wrong. The machine is fine with a bit soup and doesn't care if it's provided via punch cards or Python. Programming was always a tool for humans. It's a formal "notation" for describing solutions that can be computed. We don't do well with bit soup, so we put a lot of deterministic translations between that and the notation that we're good with. Not having to do programming would be like not having to write sheet music because we can drop a cat from a specific height onto a grand piano and have the correct chord come out. Code is ideas precisely formulated, while prompts are half-formed wishes and prayers. |
| |
▲ | CPLX a day ago | parent | prev [-] | | This is actually my theory of the future. Basically, the ability to multiply your own effectiveness is now directly dependent on your ability to express ideas in simple, plain English very quickly and precisely. I'm attracted to this theory in part because it applies to me. I'm a below-average coder (mostly due to an inability to focus on it full time), and I'm exceptionally good at clear technical writing, having made a living off it for much of my life. The present moment has been utterly life-changing. |
|
|
|
| ▲ | tptacek a day ago | parent | prev | next [-] |
| What is a "deep problem" and what was the cadence with which we addressed these kinds of "deep problems" prior to 2023, when ChatGPT first went mainstream? |
| |
▲ | skydhash a day ago | parent [-] | | For a very tiny slice of these deep problems and how they were addressed, you can review the USENIX conferences and the papers published there. https://www.usenix.org/publications/proceedings | | |
| ▲ | tptacek a day ago | parent [-] | | I've been a Usenix reviewer twice, once as a program chair (I think that's what they call the co-leaders of a PC?). So this doesn't clarify anything for me. | | |
▲ | skydhash a day ago | parent [-] | | To put it more clearly: you take a domain (like OS security, performance, and administration) and you'll find the kinds of problems that people feel are important enough to share solutions to with each other. Solutions that are not trivially found. Findings you can be proud to have your name attached to. And then you have something like the LLM craze, which, while it's new, isn't improving any part of the problem it's supposed to solve, but is instead creating new ones. People are creating imperfect solutions to those new problems, forgetting the main problem in the process. It's all vapourware. Even something like a new linter for C does more for programmers' productivity than these "skills". | | |
▲ | tptacek a day ago | parent [-] | | OK: I think I have decisively established my Usenix bona fides here, and I'm repeating my original question: what is the cadence at which we resolved "deep problems" prior to the era of LLMs? (It began in 2023.) |
|
|
|
|
|
| ▲ | Fernicia a day ago | parent | prev | next [-] |
| >they make some todo app, then they post the same video with a new language completely forgetting they've done this already 6 times I don't see how this is bad. Technology makes iterative, marginal improvements over time. Someone may make a video tomorrow touting a great new frontend framework, even though they made that exact video about Next.js, or React before that, or Angular, or jQuery, or PHP, or HTML. >Something in tech went very wrong at some point, and as soon as money men flood the field we get announcements like this If it weren't for the massive money being poured into AI, we'd be stuck with GPT-3 and Claude 2. Sure, they release some duds in the tooling department (although I think Skills are good, actually), but it's hardly worthy of the systemic-rot diagnosis you've given. |
|
| ▲ | solsane a day ago | parent | prev | next [-] |
| I do not feel the same way. This looks easy to use and useful. I don't think every problem needs to be a 'deep problem'. There are so many practical steps to get to > People want to log in to an account, tell the thing to do something, and the system figures out the rest. At a glance, this seems to be a practical approach to building up a personalized prompting stack based on the things I commonly do. I'm excited about it. |
|
| ▲ | underdeserver a day ago | parent | prev | next [-] |
| Well, these are still early days, and we don't know yet what works. It might be superficial, but it's still the state of the art. |
|
| ▲ | ip26 a day ago | parent | prev | next [-] |
| Hypothetically, AI coding could completely absorb all that surface-level iteration & posturing. If agentic coding of good quality becomes too cheap to meter, all that is left are the deep problems. |
|
| ▲ | deadeye a day ago | parent | prev | next [-] |
| I'm not sure what you mean. What is the "real problem"? In the pursuit of making application development more productive, they ARE solving real problems with MCP servers, skills, custom prompts, etc. The problems are context dilution, tool usage, and awareness outside of the LLM itself. |
| |
▲ | skydhash a day ago | parent [-] | | > The problems are context dilution, tool usage, and awareness outside of the LLM itself. This is accidental complexity. You've already decided on a method, and instead of solving the main problem, you are solving the problems associated with the method. Like deciding to go to space in a car and trying to strap a rocket onto it. |
|
|
| ▲ | crowcroft a day ago | parent | prev | next [-] |
| Yes, people should be building applications on top of this. |
|
| ▲ | rottencupcakes a day ago | parent | prev | next [-] |
| If that's true, why do leadership, VCs, and eventually either the acquiring company or the public markets keep falling for it? As the old adage goes: "Don't hate the player, hate the game." To actually respond: this isn't for the median user. This is for the 1% user to set up useful tools to sell to the median user. |
| |
▲ | AJRF a day ago | parent | next [-] | | > If that's true, why do leadership, VCs, and eventually either the acquiring company or the public markets keep falling for it? If I had to guess, it would be because greed is a very powerful motivator. > As the old adage goes: "Don't hate the player, hate the game." I know this advice is a realistic way of getting ahead in the world, but it's very disheartening and damaging in the long term. Like eating junk food every day of your life. | |
| ▲ | greenchair a day ago | parent | prev [-] | | [flagged] |
|
|
| ▲ | micromacrofoot a day ago | parent | prev | next [-] |
| These are all tools for advanced users of LLMs; I've already built a couple of MCP servers for clients... you might not have a use for them, but there are niches already getting a lot out of them. |
|
| ▲ | antonvs a day ago | parent | prev [-] |
| > People want to log in to an account, tell the thing to do something, and the system figures out the rest. For consumers, yes. In B2B scenarios more complexity is normal. |