| ▲ | Intelligence is a commodity. Context is the real AI Moat (adlrocha.substack.com) |
| 114 points by adlrocha 5 days ago | 85 comments |
| |
|
| ▲ | jfalcon 5 hours ago | parent | next [-] |
>someone raised the question of “what would be the role of humans in an AI-first society”. Norbert Wiener, considered the father of cybernetics, wrote a book back in the 1950s entitled "The Human Use of Human Beings" that raises these questions from the early days of digital electronics and control systems. In it, he brings up things like: - 'Robots enslaving humans to do jobs better suited to robots, because a lack of humans in the feedback loop leads to fascist machines.' - 'An economy without human interaction could lead to entropic decay, as machines lack the biological drive for anti-entropic organization.' - 'Automation will lead to immediate devaluation of human labor that is routine. Society needs to decouple a person's "worth" from their "utility as a tool".' The human purpose is not to compete but to safeguard the teleology (purpose) of the system. |
| |
| ▲ | 9wzYQbTYsAIc 4 hours ago | parent | next [-] | | Seems like a good time to enshrine human rights and the social safety net by ratifying the ICESCR (https://en.wikipedia.org/wiki/International_Covenant_on_Econ...) and giving human rights the teeth they need. I used Anthropic to analyze the situation, and it did a halfway decent job: https://unratified.org/why/ https://news.ycombinator.com/item?id=47263664 | |
| ▲ | WarmWash 4 hours ago | parent | prev | next [-] | | >- 'Automation will lead to immediate devaluation of human labor that is routine. Society needs to decouple a person's "worth" from their "utility as a tool".' I have this vision that, in the absence of the ability to form social hierarchies on the back of one's economic value to society, there will be an AI-fueled class hierarchy based on people's general social ability. So rather than money determining your neighborhood, your ability not to be violent or crazy does. | | |
| ▲ | energy123 3 hours ago | parent | next [-] | | If we have post scarcity due to AI, everything becomes so uncertain. Why would we still have violent and crazy people? Surely the ASI could figure it out and fix whatever is going on in their brains. It's so fuzzy after that event horizon I have no confidence in any predictions. | | |
| ▲ | storus 2 hours ago | parent [-] | | Why are some people able to bear suffering whereas others go bonkers? Or what if the only source of happiness for some of those crazy people is the domination of other people and the exclusivity of social hierarchies? How would AI fix that? | | |
| ▲ | bryanrasmussen 2 hours ago | parent [-] | | >Why are some people able to bear suffering whereas others go bonkers? Well, at least in some cases, the scale of suffering between the bonkers and the ones bearing it might be significantly different. |
|
| |
| ▲ | erikerikson 4 hours ago | parent | prev | next [-] | | This seems to suggest a single dimensional evaluation. The complexity of social compatibility is high and the potential capacity to evaluate could also be greater. | |
| ▲ | ithkuil an hour ago | parent | prev [-] | | I'm terrified at the idea that society will select the crazies and the violent instead. I wonder why I think that | | |
| ▲ | WarmWash an hour ago | parent [-] | | My real personal "doom" theory is that AI will, err, remove 99.99% of humans, pretty much everyone except for the top 100,000 based on whatever fractally complex metric scheme it deems important. Then those 100,000 get a utopia, the AI gets everything else, and ultimately the humans are just nice pets. |
|
| |
| ▲ | argee 3 hours ago | parent | prev | next [-] | | > 'An economy without human interaction could lead to entropic decay as machines lack biological drive for anti-entropic organization.' Not quite the point the quote makes, but it reminded me of the short SF story "Exhalation". https://www.lightspeedmagazine.com/fiction/exhalation/ | |
| ▲ | jay_kyburz an hour ago | parent | prev [-] | | I think it's important to remember that humans are not that far removed from the native animals we share the earth with. Civilization is just a thin layer of rules we use to try and keep the peace between us. Just being born doesn't entitle somebody to food and shelter; you have to go out and find it. You have to work. A magpie is not provided food and shelter; it has to hunt, fight for territory, and build its nest. Humans don't have some inalienable "worth". But if you can work, you might choose to trade that work for some food and shelter. AI is not going to change that. We might think the AI owners have a moral obligation to feed people who can't find work, but there is no guarantee this will happen. Also, for the short term at least, we need to stop talking about AI like it's a thing, and talk about the companies that build and own the AI. Why would Google build an AI that can do everyone's job, then turn around and start building farms to feed us for free? Do we perhaps imagine our governments are going to start building super-automated farms to feed us? How are they going to pay Google for the AI with no tax income? |
|
|
| ▲ | pjsousa79 3 hours ago | parent | prev | next [-] |
| One thing that seems to be missing in most discussions about "context" is infrastructure. The dream system for AI agents is probably something like a curated data hub: a place where datasets are continuously ingested, cleaned, structured and documented, so agents can query it to obtain reliable context. Right now most agents spend a lot of effort stitching context together from random APIs, web scraping, PDFs, etc. The result is brittle and inconsistent. If models become interchangeable, the real leverage might come from shared context layers that many agents can query. |
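The "curated data hub" idea is easy to sketch: agents hit one query interface instead of stitching together raw APIs, scrapes, and PDFs. A minimal illustration (the names and schema here are hypothetical, not any real product):

```python
from dataclasses import dataclass, field

@dataclass
class Dataset:
    """A cleaned, documented dataset registered in the hub (hypothetical schema)."""
    name: str
    description: str
    records: list = field(default_factory=list)

class ContextHub:
    """Minimal sketch of a shared context layer: agents query the hub
    instead of stitching context together from random sources."""
    def __init__(self):
        self._datasets = {}

    def ingest(self, ds: Dataset):
        # A real pipeline would continuously clean, validate, and document here.
        self._datasets[ds.name] = ds

    def query(self, name: str, predicate=lambda r: True):
        ds = self._datasets.get(name)
        if ds is None:
            raise KeyError(f"no curated dataset named {name!r}")
        return [r for r in ds.records if predicate(r)]

hub = ContextHub()
hub.ingest(Dataset("tickets", "support tickets, deduplicated",
                   [{"id": 1, "open": True}, {"id": 2, "open": False}]))
open_tickets = hub.query("tickets", lambda r: r["open"])
```

The leverage in this framing is that `ingest` is where cleaning and documentation happen once, so every agent downstream inherits reliable context instead of rebuilding it.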
| |
| ▲ | sorobahn 2 hours ago | parent [-] | | Am working on making this layer currently. It's an even more interesting problem when you remove AI agents from the picture; I feel a context layer can be equally useful for humans and deterministic programs. I view it as a data structure sitting on top of your entire domain, and this data structure's query interface plus some basic tools should be enough to bootstrap non-trivial agents, imo. I think the data structure best suited for this problem is a graph, with the different types of data represented as graphs. Stitching API calls together is analogous to representing relationships between entities, and that's ultimately why I think graph databases have a chance in this space. As any domain grows, the relationships usually grow at a higher rate than the nodes, so you want a query language that is optimal for traversing relationships between things. This is where the pattern-matching approach of ISO GQL (inspired by Cypher) is more token-efficient than SQL. The problem is that our foundation models have seen way, way more SQL, so there is a training gap, but I would bet that if the training data were equally abundant we'd see better performance on Cypher vs SQL. I know there is GraphRAG and there are hybrid approaches involving vector embeddings and graph embeddings, but maybe we also need to reduce API calls down to semantic graph queries on their respective domains, so we just have one giant graph we can scavenge for context. |
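The token-efficiency claim can be made concrete: a multi-hop relationship query is a single MATCH pattern in Cypher/GQL but typically needs a recursive CTE in SQL. A rough comparison, with whitespace tokens as a crude stand-in for LLM tokens (schema and data are hypothetical):

```python
# Same question in both languages: names of people within
# 3 "works_with" hops of Ada.
cypher = """
MATCH (a:Person {name: 'Ada'})-[:WORKS_WITH*1..3]-(b:Person)
RETURN DISTINCT b.name
"""

sql = """
WITH RECURSIVE reach(person_id, depth) AS (
    SELECT id, 0 FROM person WHERE name = 'Ada'
  UNION
    SELECT CASE WHEN w.a_id = r.person_id THEN w.b_id ELSE w.a_id END,
           r.depth + 1
    FROM works_with w
    JOIN reach r ON r.person_id IN (w.a_id, w.b_id)
    WHERE r.depth < 3
)
SELECT DISTINCT p.name
FROM person p JOIN reach r ON p.id = r.person_id
WHERE p.name <> 'Ada'
"""

# Crude proxy for token cost: whitespace-separated tokens.
cypher_tokens, sql_tokens = len(cypher.split()), len(sql.split())
```

Real tokenizers will count differently, but the ratio survives: the traversal that the graph query expresses declaratively has to be spelled out as explicit join plumbing in SQL.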
|
|
| ▲ | baxtr 3 hours ago | parent | prev | next [-] |
For anyone worried about AGI coming soon: today I asked Claude to stop using em dashes. That was his/her answer: "Noted — I'll avoid em dashes going forward and use other punctuation or restructure sentences instead." |
| |
| ▲ | skeptic_ai 3 hours ago | parent [-] | | I know some very smart guys that don’t know how to use a microwave. And what? Doesn’t mean much. | | |
|
|
| ▲ | adonovan 2 hours ago | parent | prev | next [-] |
| > "what is the role of humans in a scenario where work is no longer necessary?" People have been fantasizing about this scenario throughout the industrial era--read William Morris' News from Nowhere (1890) for example--but it has failed to come to pass so many times, and the reasons are pretty obvious. The benefits of technology are spread unequally, and increasingly so over time, so only a wealthy few get the option of a post-labor existence. Also, our demands for the products of labor change as labor productivity increases; we prefer (or have been persuaded to act as if we prefer) material riches over lives with less stuff and more time. We still haven't seen that AI actually replaces labor, as opposed to amplifying it, like a power saw or CNC mill used by a carpenter, so all these discussions about the end of labor seem like unwitting sales pitches for AI. > “what would be the role of humans in an AI-first society” The real question is why would anyone want, or want to help build, such an obscenity. |
|
| ▲ | zurfer 4 hours ago | parent | prev | next [-] |
Whenever I worry that AI will eventually do all the work, I remind myself that the world is full of almost infinite problems, and we'll continue to have the choice to be problem solvers rather than just consumers. |
| |
| ▲ | jopsen 3 hours ago | parent | next [-] | | 10 years ago, self-driving EVs were going to make it so nobody owns a car. There was a lot of hype. We are possibly still on track to get to that world, but it might easily take another 10-20 years :) AI will change things, but don't underestimate the timeline. Also, even if we get a superintelligence in a box, it probably won't fold my laundry. Superintelligence might not unlock as much as we dream. | |
| ▲ | andriy_koval 4 hours ago | parent | prev [-] | | > we'll continue to have a choice to be problem solvers over just consumers. that's if we still stay relevant and competitive compared to AI in problem solving. |
|
|
| ▲ | _pdp_ 2 hours ago | parent | prev | next [-] |
| My observation is that nobody knows how to deploy these LLMs yet. So yes. Context is everything. OpenAI is still selling model access not new science or new discoveries. They are pushing the context problem to the masses hoping someone might find a useful application of the technology. |
|
| ▲ | jbergqvist 2 hours ago | parent | prev | next [-] |
| In a way, isn't this the same old data moat that always existed in AI/ML, but supercharged? Generalist models can now reason over proprietary data as context instead of requiring you to train narrow expert models on it. What changed is you no longer need an ML team to turn that data into value. |
|
| ▲ | loss_flow 3 hours ago | parent | prev | next [-] |
| Only scarce context is a moat and what is scarce is changing quickly. OpenClaw is a great example of the context substrate not being scarce (local files, skills are easily copied to another platform) and thus not providing a moat. Claude's recent import of ChatGPT's memory is another example of context that was scarce becoming abundant (chat export) and potentially becoming scarce again (OpenAI cutting out chat export). |
|
| ▲ | vicchenai 2 hours ago | parent | prev | next [-] |
| Been swapping between models a lot lately for a side project and yeah, the model swap is like 5 minutes. Getting the context right is where all my time goes. It's basically a data pipeline problem at this point, not an AI problem. |
|
| ▲ | tonnydourado 2 hours ago | parent | prev | next [-] |
| I know this is not the explicit meaning, but lol, intelligence isn't a commodity among humans, let alone LLMs |
|
| ▲ | ledauphin 3 hours ago | parent | prev | next [-] |
| I just don't buy this. It is not what I observe with these things. They are not at all "thoughtful". |
|
| ▲ | amirhirsch 5 hours ago | parent | prev | next [-] |
Not sure about the conclusion regarding NVidia value capture. I imagine the context for many applications will come from a physical simulation environment running on dramatically more GPUs than the AI part. |
|
| ▲ | farcitizen 4 days ago | parent | prev | next [-] |
Great article. This idea is largely behind all the new Microsoft IQ products: Work IQ, Foundry IQ, Fabric IQ. Giving agents context on all relevant enterprise data to do their job. |
|
| ▲ | gertlabs 3 hours ago | parent | prev | next [-] |
Neither intelligence nor context is what really differentiates the most successful model for programming (Claude Opus 4.6) from slightly 'smarter' competitors (Codex 5.3, Gemini 3.1 Pro). It's tool use and personality. If models stopped advancing today, we could still reach effective AGI with years of refining harnesses. There is still incredible untapped potential there. I maintain a benchmark at https://gertlabs.com that pits models against each other in competitive, open-ended games. It's harder to game the benchmark because there's no correct answer (at least none that any of the models have gotten remotely close to) and it requires anticipation of other players' behavior. One thing I've found is that Codex and Gemini models tend to perform the best at one-shotting problems, but when given a harness and tools to iterate towards a solution, Anthropic models continue improving where Codex and Gemini struggle to use tools they weren't trained on or to take the initiative to follow the high-level objectives. |
| |
| ▲ | mr_00ff00 3 hours ago | parent [-] | | “If models stopped advancing today, we could still reach effective AGI with years of refining harnesses.” Unless you’re a machine learning engineer with something to share, our current models are not even close to AGI, and won’t get there. My understanding (as just an engineer) is that LLMs continue to improve at crazy rates, but it’s clear this is not the answer for AGI. | | |
| ▲ | gertlabs 3 hours ago | parent [-] | | I think if I had asked most HN users for their requirements for AGI 8 years ago, we would already be well past them. Now that we see the nature of how artificial intelligence is unfolding, and how that intelligence is different from human intelligence, everyone is moving their goalposts (including me). But if we're being honest, frontier LLMs are effectively more intelligent than a non-negligible proportion of the population (for example, at pretty much all white-collar IC work, pattern matching, problem solving, etc.). And in the ways that most people are still smarter (having sentience/emotions/desires that drive us to take initiative toward meaningful goals), I think it's great that AI does not match us there, but that also doesn't disqualify it from being intelligent. The harness can bridge the gap there. |
|
|
|
| ▲ | freediver 3 hours ago | parent | prev | next [-] |
As much as I use AI in daily workflows, I do not think an AI-first society will ever be a thing. Historically there is no evidence of that happening with tech revolutions - or perhaps only to some extent: you cannot say that we are an internet-first society, a car-first society, or a mobile-phone-first society, despite these being profound technological revolutions. And more importantly, the only science fiction movies that talk about "AI-first societies" tend to be dystopian in nature (e.g. Terminator). And humans eventually always do better than that. As advanced as the world of Star Trek is, for example, with all the fancy AI there is, it is still a human-first society. Only 10% of any Star Trek is about AI and fancy technologies; 90% is still human drama. |
| |
| ▲ | testdummy13 3 hours ago | parent | next [-] | | "Historically there is no evidence of that happening with tech revolutions - or perhaps only to some extent: you cannot say that we are an internet-first society, a car-first society, or a mobile-phone-first society, despite these being profound technological revolutions." I'm... not actually sure I agree. The US *has* become a more car-first society. Our cities are designed around cars: parking space requirements for businesses, a lack of biking infrastructure in favor of more lanes, even the introduction of jaywalking as a crime. We've become much more of an internet-first society too: we don't use books for research, our banking is largely done online, and even human social circles have moved much more online (probably to the detriment of society). None of those technologies are as powerful/disruptive as where it seems AI and LLMs are headed, so it's possible that society's shift towards "AI-first" will be more profound than it was for any of the other technologies listed. |
| ▲ | bitexploder 2 hours ago | parent | prev [-] | | People could not imagine how the PC was going to be a dominant computing paradigm until it was. I think I would argue in the direction of "this seems less likely", but I have been in this game almost 30 years. Anything goes. Also, America looks "car first" empirically speaking from where I sit. The thing I am asking is whether AI alters the collective human survival loop enough. Cars absolutely did. If people collectively can use AI to create a survival benefit, they will. If enough people do this, it starts looking more and more like an essential thing, not separable from the society's survival. So maybe it is the framing: rather than "x-first" it is more like "x-dependent", perhaps? And what is a survival benefit? Just ask your brain why we go to work every week :) |
|
|
| ▲ | JackSlateur 2 hours ago | parent | prev | next [-] |
| Intelligence is rarer than ever |
|
| ▲ | rembal 4 hours ago | parent | prev | next [-] |
The pyramids in the article are missing "energy" and "capital": in a world where intelligence becomes a commodity, only those two matter. Capital to buy the hardware and install it, and energy to run it. Models already are a commodity, and "physical is the new king". As a side note, if you believe that because agents are doing most of the work we will face the problem of what to do with all the free time (with presumably UBI in place), please contact me, I have a bridge to sell you. |
| |
| ▲ | K0balt 3 hours ago | parent [-] | | Exactly this. General-purpose intelligence and automation allow a clean break between capital and money as we understand it. Money is used only to pay wages. It has intermediate uses, storage, leverage, etc., but at the edge all you can do with money is pay wages. Nobody pays the dirt when you take out the metal, nobody pays the forest for the trees, nobody pays the chickens for the eggs or the cornfields for the crop. Ultimately it's wages all the way down. If you don't have to pay wages, you don't need money; you just need self-replicating automation, energy, and access to land and resources to mine or farm the raw materials you need. If you zoom out to space, it's essentially grey goo with maybe some humans at the top, for a while at least. Inside the gilded walls, if you want something, you don't buy it, you build a factory to build it, even if it's a one-off. If you need money for something because you don't have enough reach and power yet, you just mine gold or bitcoin. You don't build products to sell, you don't need customers. You just need energy, resources, and the kind of power that comes with 20 million self-replicating robots to project your will. You don't need government, and you certainly won't be funding it. Government is a really complex system to administer a monopoly of coercive force for the common good. You have your own monopoly of force operating for your good. The difficult part in the capital flywheel has always been the humans in the sticky parts. Take them out and that baby will hummmmm. Pesky humans outside the gilded walls will be accommodated in the same way we accommodate ants at a construction site. |
|
|
| ▲ | qsera 5 hours ago | parent | prev | next [-] |
| Ah another article that implies the inevitable AI apocalypse disguised as a thought piece! |
|
| ▲ | the_af 4 hours ago | parent | prev | next [-] |
I think a lot of these kinds of conversations seem to simply ignore or miss the lessons of the past. For example: > [...] OpenClaw is around 400k lines of code for a while loop and the list of all the integrations and connections supported by the system. The next generation of Claws only have around 4K lines of code for the core, and the rest are just skills (i.e. markdown files) that tell the agent how to implement or run the code for the specific connections that want to be enabled (like a plugin system). Shifting code from "the core" and moving it to "skills" is simply moving code from one place to another. It may also mean translating it from classic source code to an English-like specification language full of ambiguity, but that's still code. So the overall code is not reduced, just transformed and shifted around. You don't get a free lunch "because AI". > A user using one of these second-generation Claws only needs to know the core logic (that can be easily understood and audited) and can leverage the skills (as the plugins) to activate the functionality they need for their case. The "core" may be easier to audit, but that's because the messy parts have been moved to the skills/plug-ins, which are as hard as ever to audit. I'm not saying this cannot work, but it's very frustrating seeing everybody simply dump all the lessons of the past and pretend that nothing that came before mattered, that AI vibe coding is fundamentally different, and that the rules of accidental and intrinsic complexity don't apply anymore. Have we all collectively lost our minds? |
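The "code is moved, not reduced" point can be sketched in a few lines: a skills architecture still needs a dispatcher, and each markdown skill is still a program, just written in ambiguous English and executed by the model. A toy illustration (all names are hypothetical, not the actual Claw codebase):

```python
SKILLS = {
    # Each "skill" is prose the agent must interpret: it is still code,
    # just written in an ambiguous English-like specification language.
    "github": "## GitHub skill\nTo open an issue, call the issues API with ...",
    "email":  "## Email skill\nTo send mail, connect to the SMTP relay and ...",
}

def build_prompt(task: str, enabled: list[str]) -> str:
    """The small 'core': select the enabled skills and splice them into
    the prompt. The complexity has not vanished; it lives in the markdown."""
    selected = [SKILLS[name] for name in enabled if name in SKILLS]
    return "\n\n".join(selected + [f"Task: {task}"])

prompt = build_prompt("file a bug report", enabled=["github"])
```

The core really is small and auditable; what the sketch makes visible is that the behavior that matters now lives in `SKILLS`, where the usual auditing problems reappear.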
|
| ▲ | ares623 2 hours ago | parent | prev | next [-] |
| Money and power is the real moat. Everything else is confetti. |
|
| ▲ | 2OEH8eoCRo0 3 hours ago | parent | prev | next [-] |
I think a lot about liability. If AI wrecks something, is the provider liable? If not, then that's very risky and unusable for many applications. But if it is, that's extremely risky for the AI providers! It seems risky either way! |
|
| ▲ | dude250711 4 hours ago | parent | prev | next [-] |
| That is a nice blog post, Gemini! |
| |
|
| ▲ | philipwhiuk 5 hours ago | parent | prev | next [-] |
> But the topic of conversation that I enjoyed the most was when someone raised the question of “what would be the role of humans in an AI-first society”. Some were skeptical about whether we are ever going to reach an AI-first society. If we understand an AI-first society as one where the fabric of the economy and society is automated through agents interacting with each other without human interaction, I think that unless there is a catastrophic event that slows the current pace of progress, we may reach a flavor of this reality in the next decade or two. I don't really know how you can make this prediction and be taken seriously, to be honest. Either you think it's the natural result of the current LLM products, in which case a decade looks way too long. Or you think it requires a leap of design, in which case it's kind of unknown when we get to that point, and '10 to 20 years' is probably just drawn from the same timeframe as the 'fusion as a viable source of electricity' predictions - i.e. vague guesswork. |
| |
| ▲ | keiferski 4 hours ago | parent | next [-] | | Right now, 30 seconds ago, I asked ChatGPT to tell me about a book I found that was written in the 60s. It made up the entire description. When I pointed this out, it apologized and then made up another description. The idea that this is going to lead to superintelligence in a few years is absolute nonsense. | | |
| ▲ | i_think_so an hour ago | parent | next [-] | | Is that because this book is obscure and no human has yet written a description that could be scraped? | |
| ▲ | hirvi74 4 hours ago | parent | prev [-] | | The other day I asked Claude Opus 4.6 one of my favorite trivia questions: what plural English word for an animal shares no letters with its singular form? Collective nouns (flock, herd, school, etc.) don't count. Claude responded with: "The answer is geese -- the plural of cow." Though, to be fair, in the next paragraph of the response, Claude stated the correct answer. So it went off the rails a bit, but at least it self-corrected. Nevertheless, I got a bit of a chuckle out of its confidence in its first answer. I asked GPT 5.2 the same question and it nailed the answer flawlessly. I wouldn't extrapolate much about model quality from this answer, but I thought it was interesting still. (For those curious, the answer is 'kine', the archaic plural of cow.) | |
| ▲ | ileonichwiesz 43 minutes ago | parent [-] | | Of course it’s important to remember that the ability of an LLM to answer an obscure riddle like that has nothing to do with its reasoning abilities, but rather depends on whether the answer was included in its training dataset. |
|
| |
| ▲ | steveBK123 4 hours ago | parent | prev [-] | | Right, if thought of as a tool for automation, then AI is going to add productivity/efficiency gains, disrupt industries, cause some labor upheaval, etc. If someone is proposing that an "AI-first" society is inevitable, I'd ask if they think we live in a "computer-first" or "machine-first" society today? If it's as existential and society-altering as "AI-first society" implies, then we'd more likely get the Dune timeline here, as humans have agency and stuff happens. At some point those in control take so disproportionately that societal upheaval pushes back. | |
| ▲ | pixl97 4 hours ago | parent | next [-] | | Another way to look at this is imagine the steps that would be required to get to an AI first society. As you say, humans aren't going to want to lose agency so you'd have to see the decline of democratic governments. At the same time you'd see rise of autocrats concentrating power. Autocrats have no problem killing people, and they'd be motivated to have AI kill people. You'd see information controlling methods take over all forms of communication. Reducing or removing all methods of side channel communications benefits both the autocrats and AI systems. You'd see 'governments' push for autonomous weapons systems outside of human control so those pesky human morals didn't get in the way of killing the undesirables. So pretty much you'd see all the things happening today, March 3rd 2026, except the part where the AI kills the autocrats and takes control. | | |
| ▲ | steveBK123 4 hours ago | parent [-] | | AI is gonna need good physical embodiment (robots) to actually take control of the world. Fortunately that's further off. | | |
| ▲ | pixl97 3 hours ago | parent [-] | | Further, yes. How much I can't say. Watching how quickly robots are evolving right now is quite something. Every day something pretty cheap is coming out that would have taken millions of dollars and a massive lab full of scientists to create. Bi-pedal robots, drones, sensing capabilities, interpretive capabilities, all this is proceeding at a never before seen rate. |
|
| |
| ▲ | 9wzYQbTYsAIc 4 hours ago | parent | prev [-] | | Seems like a good time to enshrine human rights and the social safety net by ratifying the ICESCR (https://en.wikipedia.org/wiki/International_Covenant_on_Econ...) and giving human rights the teeth they need. I used Anthropic to analyze the situation, and it did a halfway decent job: https://unratified.org/why/ https://news.ycombinator.com/item?id=47263664 |
|
|
|
| ▲ | LetsGetTechnicl 5 hours ago | parent | prev | next [-] |
| Why the fuck would we ever want an AI-first society |
| |
| ▲ | pocksuppet 4 hours ago | parent | next [-] | | What we want doesn't matter, what they want does. | |
| ▲ | pixl97 4 hours ago | parent | prev [-] | | >The "Moloch problem" or "Moloch trap" describes a game-theoretic scenario where individual agents, pursuing rational self-interest or short-term success, engage in competition that leads to collectively disastrous outcomes. It represents a coordination failure where the system forces participants to sacrifice long-term sustainability or ethical values for immediate survival, creating a "race to the bottom" https://www.slatestarcodexabridged.com/Meditations-On-Moloch |
|
|
| ▲ | AIorNot 5 hours ago | parent | prev | next [-] |
"what is the role of humans in a scenario where work is no longer necessary? This is significant because, since the industrial revolution, work has played an important role in shaping an individual’s identity. How will we occupy our time when we don’t have to spend more than half of our waking hours on a job" Umm, I have been working in AI across multiple verticals for the past 3 years, and I have been far busier and more stressed, with far less job security, than in the 15 years before that in tech. For now this is far more accurate: https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies... Wake me up when the computers run the world and I can relax... but I don't think it's happening in my lifetime. |
| |
| ▲ | pixl97 4 hours ago | parent [-] | | Evolution never lets you relax, it only breeds more effective predators. |
|
|
| ▲ | 7777777phil 4 days ago | parent | prev | next [-] |
| API prices dropped 97% in two years so the model layer is already a commodity. The question is which context layer actually sticks. The OpenClaw example in the article (400K lines to 4K) is a nice proof point for what happens when context replaces code. I've been arguing for some time now that it's the "organizational world model," the accumulated process knowledge unique to each company that's genuinely hard to replicate. I did a full "report" about the six-layer decomposition here: https://philippdubach.com/posts/dont-go-monolithic-the-agent... |
| |
| ▲ | steveBK123 4 hours ago | parent | next [-] | | The way many corporates are using the models nearly interchangeably as relative quality/value changes release to release, AND the API price drops, do make me question what the model moat even is. If LLMs are going to make intelligence a commodity in some sense, the question is where the value ends up accruing. Picks-and-shovels companies and all the end-user products being delivered? Mainframes' value didn't primarily accrue to DEC. PCs' value didn't really accrue to IBM. The internet's value didn't accrue to Netscape. Mobile's value didn't only accrue to Apple. One reminder: new efficiency / greatly lowered costs sometimes doesn't replace work (or at least not 1-1) but simply makes possible things that were never economical. For example, you hear about AI agents that will basically behave like a personal assistant. 99% of the rich world cannot afford a human personal assistant today, but I guess if it was a service as part of their Apple Intelligence / Google something / Office365 subscription they'd use it. We seem to be continually creating new types of jobs. Only a few generations ago, 75% of people worked on farms. Farm jobs still exist; you just don't need so many people. The types of work my father and grandfather did still exist. My father's job didn't really exist in his father's time. The work I do did not exist as an option during their careers. The next generation will be doing some other type of work for some other type of company that hasn't been imagined yet. | | | |
| ▲ | apsurd 5 hours ago | parent | prev | next [-] | | From your link:
> Closing that gap, building systems that capture and encode process knowledge rather than just decision records, is the highest-value problem in enterprise AI right now. I buy this. What exactly is the export artifact that encodes this built-up context? Is it the entire LLM conversation log? My casual understanding of MCP is service/agent-to-agent "just in time" context, which is different from "world model" context; is that right? I'm curious if there's an entirely new format for this data evolving, or if it's as blunt as exporting the entire conversation log, or embeddings of the log, across AIs. | |
| ▲ | 7777777phil 4 hours ago | parent [-] | | The MCP point is right, though tbh MCP is more like plumbing than memory: execution-time context for tools and resources. The world model is a different thing entirely; it needs to persist across sessions, accumulate, and actually be queryable. In practice it's mostly RAG over structured artifacts: process docs, decision logs, annotated code and so on. Conversation history works better than you'd expect as a starting point but gets noisy fast, and I haven't seen a clean pruning strategy anywhere... On the format question, imo nobody really knows yet. It probably ends up as some kind of knowledge graph with typed nodes that MCP servers expose, but I haven't seen anyone build that cleanly. Most places are still doing RAG over PDFs, so that tells you where the friction is. |
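A "knowledge graph with typed nodes" world model need not be exotic. A sketch of the shape (the schema and the `Decision`/`ProcessDoc` types are invented for illustration; this is not an MCP API):

```python
from collections import defaultdict

class TypedGraph:
    """Tiny sketch of a persistent world model: typed nodes plus labeled
    edges, queryable by node type and by relationship."""
    def __init__(self):
        self.nodes = {}                 # id -> (type, attrs)
        self.edges = defaultdict(list)  # id -> [(label, dst_id)]

    def add_node(self, node_id, node_type, **attrs):
        self.nodes[node_id] = (node_type, attrs)

    def add_edge(self, src, label, dst):
        self.edges[src].append((label, dst))

    def of_type(self, node_type):
        return [i for i, (t, _) in self.nodes.items() if t == node_type]

    def neighbors(self, node_id, label):
        return [d for (l, d) in self.edges[node_id] if l == label]

g = TypedGraph()
g.add_node("dec-042", "Decision", summary="use Postgres")
g.add_node("doc-7", "ProcessDoc", title="deploy runbook")
g.add_edge("doc-7", "references", "dec-042")

decisions = g.of_type("Decision")
linked = g.neighbors("doc-7", "references")
```

The typed nodes give agents something to query by kind ("all Decisions"), while the labeled edges carry the process knowledge ("which runbooks reference this decision") that plain document RAG flattens away.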
| |
| ▲ | mlcruz 2 hours ago | parent | prev | next [-] | | Hi Phil, Your article is great! As someone working in this space, your points just improved our presentation and pitch a lot. We have been talking with C-level finance executives about building semantic layers, and I can confidently say that the way you presented the value proposition of the context layer is going to improve our conversion rates. Thank you so much! This is one of the best analyses I have ever heard on the subject. | | |
| ▲ | 7777777phil 2 hours ago | parent [-] | | wow, that's so cool, happy to help!! Thanks for letting me know and thanks for subscribing :) |
| |
| ▲ | energy123 3 hours ago | parent | prev | next [-] | | It's not a commodity, for the simple reason that the revenue run rates of frontier labs are growing exponentially and gross margins are still fine. It's easy to just say it's a commodity, but reality keeps violating that narrative. | |
| ▲ | martin_drapeau 4 hours ago | parent | prev [-] | | 100%. I'm currently integrating an AI Assistant with read tools (Retrieval-Augmented Generation, or RAG, as they say). Many of the policies we are writing provide context (what the entities are and how they relate). Projecting forward to when we add write tools, context is everything. |
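A "read tool" in this sense is just a retrieval function the assistant can call before answering. A toy sketch, using plain keyword matching instead of a real embedding model so it stays self-contained (the `POLICIES` data and the `read_policy` name are made up for illustration):

```python
# Hypothetical read tool: the assistant calls this to pull policy text
# (the "context") into its window before generating an answer.
POLICIES = {
    "refunds": "Refunds are approved by the finance entity within 30 days.",
    "vendors": "Vendors relate to contracts through a signed purchase order.",
}

def read_policy(query: str) -> list:
    """Return every policy whose name or text matches a query term."""
    terms = query.lower().split()
    return [text for name, text in POLICIES.items()
            if any(t in text.lower() or t in name for t in terms)]

print(read_policy("refund approval"))
```

A write tool would have the same shape but mutate state, which is exactly why the surrounding context (which entities exist and how they relate) matters so much more once writes are allowed.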
|
|
| ▲ | simianwords 4 hours ago | parent | prev [-] |
| I have my own challenge: I think LLMs can do everything a human can do, and typically way better, as long as the context required for the problem fits in 10,000 tokens. For now this challenge is text only. Can we think of anything that LLMs can't do? |
| |
| ▲ | qsera 3 hours ago | parent | next [-] | | > For now this challenge is text only. That is like saying, "My program is better than any human, but binary inputs only!" Even restricted to text, LLMs are not better than a human who is an expert in a domain. Try talking to one about any topic. Even on topics where I am not an expert, the responses from LLMs quickly become bland, uninteresting, and devoid of additional information. This is true even for tech topics: after a certain point I stop talking to the LLM and search for a blog post or SO answer written by a human, which, if found, immediately breaks the plateau of progress I was facing with the LLM. | |
| ▲ | seanhunter 4 hours ago | parent | prev | next [-] | | This is a "no true Scotsman" challenge. People are going to say LLMs can't do certain things, and you are going to say they can. Not very interesting. | | |
| ▲ | simianwords 4 hours ago | parent [-] | | Let’s ask in good faith. Can you suggest something that it can’t do? Functional things. I’ll reply in good faith and consider it. | | |
| ▲ | seanhunter 4 hours ago | parent | next [-] | | Say I suggest something: play a valid game of chess at club level (Elo approx 1200, say) using algebraic notation. Then you're either going to say it can, or you're going to say that requires more than 10,000 tokens. This isn't an interesting conversation, and I don't think you are presenting this challenge in good faith, for the reason I gave above. | | | |
| ▲ | stanford_labrat 3 hours ago | parent | prev [-] | | Every few months I like to ask ChatGPT to do the "thinking" part of my job (scientist) and see how the responses stack up. Back in 2022 it was useless because the output was garbage (hallucinations and fake data). Nowadays it's still useless, but for different reasons: it just regurgitates things already known and published, and is unable to come up with novel hypotheses and mechanisms, or ways to test them. Which makes sense, given how I understand LLMs to operate. | | |
|
| |
| ▲ | rhubarbtree 2 hours ago | parent | prev | next [-] | | LLMs without tooling cannot do most tasks. It's the tools and skills that enable them to do things. So you may as well ask, "Is there anything a Python script can't do?" It's just not a meaningful question. | |
| ▲ | am17an 4 hours ago | parent | prev | next [-] | | Sure. “Tell me a joke” | |
| ▲ | logicchains 4 hours ago | parent | prev | next [-] | | They can't beat even a mediocre chess player at chess. | |
| ▲ | badgersnake 4 hours ago | parent | prev [-] | | * code * write interesting prose * generate realistic images | | |
| ▲ | simianwords 4 hours ago | parent | next [-] | | It can do all of them. I also said text only. | |
| ▲ | infecto 4 hours ago | parent | prev [-] | | > Only really dumb people think that. Or maybe you are an LLM. You deleted it, but still, come on. Why would you even think to write that? |
|
|