jaimebuelta 6 days ago

I see some of this, from the point of view that it's going to become cheaper to create bespoke solutions to problems. And perhaps a "neoSaaS" company is one that, starting from a very bare-bones idea, can create your own implementation.

But, at the same time, there are two issues:

- Companies can be really complex. The "create a system and parametrise it" idea has been done before, and those parametrisation processes are intensive and expensive. And the resulting project is not guaranteed to be correct. Software development is a discovery process; the expensive part is far more the discovery than the writing of the code.

- The best software around is opinionated. It doesn't fit all use cases, but it presents a consistent way to operate and forces you to think and work in a certain way. It guides how you work and, once you're going downstream, it is a joy to work with. This requires a consistent product view and the discipline to enforce it: knowing when to say "no" and which use cases not to cover, because covering them would be detrimental to the experience. It's very difficult to create software like that, and trying to fit every use case guarantees it won't happen.

These two forces put any software creation under tension, and I don't think they'll go away just because we have a magical tool that can code fast.

tikhonj 6 days ago | parent | next [-]

The best software around is Emacs. Does that count as "opinionated" in your view?

In some ways it is—Emacs does a lot of things its own way, completely unbothered by mainstream conventions—but, at the same time, it's also totally malleable in the sense of this article. What makes Emacs great is a consistent and coherent conceptual foundation coupled with remarkably flexible code, letting you adjust Emacs to your needs rather than adjusting your needs to Emacs.

Or maybe the best software around is situated software: software that's built by and for a specific set of people in a specific social context. Situated software is qualitatively different from product software, and it works so well because, again, it gives its users real agency and control. Instead of trying to create software that knows better than its users, we can create software that supports its users in whatever ways work for them. The result is still opinionated, but it's opinionated in a categorically different way from what you're describing.

So perhaps the best mainstream software is Excel.

And, while I don't think they're there now, it seems like LLMs are likely to be the foundation for the next Excel.

skydhash 6 days ago | parent | next [-]

You can either go with simple primitives and a way to combine them (emacs, excel, unix) or a simple program that just works (notepad, sumatra, …). Anything else is going to be restrictive in one way or another.

godelski 6 days ago | parent | prev [-]

As a vim user I agree with all of this. The same is true of why I am terminally terminal: I'm able to adapt the tools to me so that I get the most use out of them. Sane defaults are great, but no settings are universal. The only solution is to let people adjust things as needed.

osigurdson 6 days ago | parent | prev | next [-]

I think the article presents a bit of an odd premise: I can make a mini-app in ChatGPT today, so by 2035 I'll be able to create the entire suite of software a given business needs. What is the requisite change between what I can do now and 2035? Presumably it is AGI.

OK, so we are in a digital-super-intelligence world in 2035. The HR department can now just have a conversation with a chatbot and create software to make them more productive. No more configuring SAP widgets or whatever they do today. The chatbot will be like, "Hey bro, the process that you want to automate doesn't make any sense: here is a better way. And, by the way, I'm terminating your entire department. I'll take care of it from now on." I mean, get real: in a post-AGI world there will be exactly zero office jobs and no SaaS software at all.

ozim 6 days ago | parent | next [-]

The odd premise is that AGI will have infinite bandwidth to deal with petty things like taking over the menial work of HR departments.

Current AI barely keeps up with generating the funny images people ask of it :)

glitchc 6 days ago | parent | prev [-]

It doesn't need to be AGI to build complex software. A human software developer can build a complex software system and also perform other complex tasks with the same body (play an instrument, fly an aircraft, etc.). Doing all of that with the same resources is what AGI is needed for. Just software? I'm sure an LLM can eventually become an expert at that, just like it learnt how to play Go.

osigurdson 6 days ago | parent [-]

AGI usually means "equivalent to human", while digital super intelligence generally means "smarter than all humans put together". In any case, I agree that once we reach "equivalent to human", it can naturally do anything we do. That should be enough to end office jobs imo.

1718627440 5 days ago | parent [-]

A machine that is capable of performing human intelligence in every paradigm according to a mathematical model, and that is scalable by increasing its frequency or power or by duplicating it (because it is reproducible), is both "equivalent to human" and "smarter than all humans put together". Once humans are capable of producing this, it will be capable of improving and optimizing itself up to the limit of information density. The only limit will be money as a proxy for available resources.

bccdee 5 days ago | parent | next [-]

> scalable by increasing the frequency, power or duplicating it

Well, there's your problem. Very few things scale like that. Two people are not twice as smart as one person, nor are two instances of ChatGPT twice as smart as one. One instance of ChatGPT running twice as fast isn't significantly smarter, and in fact, ChatGPT can never outrun its own hallucinations no matter how fast you overclock it.

Intelligence is the most complex phenomenon in the universe. Why would it ever scale geometrically with anything?

> When humans were capable of producing this, then this will be capable of improving itself and optimizing until the limit of information density.

This doesn't follow. After all, humans are as smart as humans, and we can't really optimize ourselves beyond a superficial level (good nutrition, education, etc). Increasingly, AI is a black box. Assuming we do create a machine as smart as we are, why would it understand itself any better than we understand ourselves?

And why wouldn't we hit some sort of technical roadblock at (arbitrarily) 1.5x human intelligence? Why do we assume that every problem becomes tractable once a computer is solving it? Imagine we applied this reasoning to cars: over the course of a century, cars went from 10 km/h to 100 km/h to 500 km/h to (in special vehicles) 1000 km/h. Can we expect to see a 5000 km/h car within the next century? No, that's unlikely; at such high speeds, you begin to hit intractable technical limits. Why should scaling intelligence be smooth sailing forever?

1718627440 5 days ago | parent [-]

> Very few things scale like that.

I wasn't talking about two instances for scaling smartness; I meant applying two instances to different problems. That very much scales.

> This doesn't follow. After all, humans are as smart as humans ...

In the hypothetical case of humans being capable of producing the one true AI system (real AI, or AGI, or however it's called now that marketing has taken the previous term), this system is by definition capable of producing another system. Humans are capable of following Moore's law, so this system will be as well. So this chain of systems will explore the set of all possible intelligent systems, restricted only by resources. It isn't bound by inner problems like "(good nutrition, education, etc.)", because it is a mathematical model; its physical representation only matters insofar as it needs to exist in this hypothetical case.

> AI is a black box

In this case, the black box "humans" was able to produce another thing that reproduces their intelligence. So we would have understood ourselves better than we currently do.

Note that every intelligent system can be completely simulated by a large enough non-intelligent statistical system, so intelligence isn't inferable from a set of inputs -> outputs. It's really the same as with consciousness.

> And why wouldn't we hit some sort of technical roadblock? Can we expect to see a 5000km/h car?

Yes. We are capable of accelerating "objects" to 0.99..c. It's not impossible for us to accelerate a "car" to nearly light speed; we "just" need enough energy (meaning matter as energy).

> technical roadblock at (arbitrarily) 1.5x human intelligence

I wrote "until the limit of information density". Whatever this may be.

I intended to point out why a system "equivalent to human" is actually equivalent to a "digital super intelligence", meaning "smarter than all humans put together".

---

You don't need to tell me you don't think this system will exist. I think this will end the same way as the attempts to build a machine that creates energy. My personal understanding is this: a system (humans) can never completely "understand" itself, because its "information size" is as large as itself, but to contain something, it needs to be larger than that thing. In addition, the "understanding" also needs to be included in its "information size", so the size needed to understand has then at least doubled. This means that the largest system capable of "understanding" itself has size 0.

In other words, understanding something means knowing the whole thing and abstracting to a level higher than the abstractness of the system to be understood. But when a system tries to understand itself, it's always looking for yet another higher abstraction, to infinity, as each abstraction it finds is not yet enough.

This idea comes from the fact that you can't prove that every implementation of a mathematical model has some behaviour without formalizing every possible model; in other words, inventing another, higher model; in other words, abstracting.

bccdee 4 days ago | parent [-]

> I meant applying two instances to different problems. That very much scales.

You can't double the speed at which you solve a problem by splitting it in two and assigning one person to each half. Fred Brooks wrote a whole book about how this doesn't scale.
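
The back-of-the-envelope version of Brooks's argument (from The Mythical Man-Month): hands scale linearly, but pairwise communication channels scale quadratically. With n workers,

  channels(n) = n(n-1) / 2

which is 1 channel for 2 people, 45 for 10, and 4950 for 100. Coordination overhead eats the gains long before you get a clean 2x.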

> this system is capable of producing another system by definition

Yeah, humans can produce other humans too. We're talking about whether that system can produce an improved system, which isn't necessarily true. The design could easily be a local maximum with no room for improvement.

> Humans are capable of following Moores law

Not indefinitely. Technical limitations eventually cause us to hit a point of diminishing returns. Technological progress follows a sigmoid curve, not an exponential curve.

> It isn't bound by inner problems like "(good nutrition, education, etc)", because it is a mathematical model

It's an engineering problem, not a math problem. Transistors only get so small, memory access only gets so fast. There are practical limits to what we can do with information.

> We are capable of accelerating "objects" to 0.99..c.

Are we? In practice? Because it's one thing to say, "the laws of physics don't prohibit it," and quite another to do it with real machines in the real world.

> > technical roadblock at (arbitrarily) 1.5x human intelligence

> I wrote "until the limit of information density".

Yeah, I know: that's wildly optimistic, because it assumes technological progress goes on forever without ever getting stuck at local maxima. Who's to say it doesn't require at least 300 IQ to come up with the paradigm shift required to build a 200-IQ brain? That would mean machines are capped at 200 IQ forever.

> Note, that every intelligent system is completely able to be simulated by a large enough non intelligent statistical system, so intelligence isn't inferable from a set of inputs -> outputs.

This is circular. If a non-intelligent statistical system is simulating intelligence, then it is an intelligent system. Intelligence is a thing that can be done, and it is doing it.

> A system (humans) can never completely "understand" itself, because it's "information size" is as large as itself, but to contain something, it needs to be larger then this.

I don't think this logic checks out. You can fit all the textbooks and documentation describing how a 1TB hard drive works on a 1TB hard drive with plenty of room to spare. Your idea feels intuitively true, but I don't see any reason why it should necessarily be true.

1718627440 4 days ago | parent [-]

> You can't double the speed

I only need two instances to be faster than a single one. This means that a human with the resources to run the system is no longer bound by his own time and energy, and can do anything an infinite number of humans could do.

> Yeah, humans can produce other humans too

In this hypothetical scenario, humans were able to build "AI" (meaning formalized, deterministic, and reproducible). A system as capable as a human (= AI) is then able to produce many such systems.

> There are practical limits to what we can do with information.

Yes, but we are nowhere near these limits yet.

> Are we? In practice?

Yes. We are able to build a particle accelerator. Given enough resources, we can have as many particle accelerators as there are particles in a car.

> That would mean machines are capped at 200IQ forever.

Except when the 300-IQ thing is found by chance. When the system is reproducible and you aren't bound by resources, even a small chance is enough.

> This is circular.

No, it just means intelligence is not attributable to a black box. We don't conclude that other humans are intelligent solely from their behaviour; we conclude that they are similar to us, and we have introspection into ourselves.

> You can fit all the textbooks and documentation describing how a 1TB hard drive works on a 1TB hard drive with plenty of room to spare.

It's not about encoding the result of having understood. A human is very much capable of computing according to the nature of a human. It's about the process of understanding itself. The hard drive can store this, but it can't create it. Try to build a machine that makes predictions about itself, including the lowest level of itself. You won't get faster than real time.

bccdee 4 days ago | parent [-]

> Yes, but we are nowhere near this limits yet.

Says who?

> Given enough resources, we can have enough particle generators as there are particles in a car.

Given by whom? I said in practice—you can't just assume limitless resources.

> Except when the 300IQ thing is found by chance. When the system is reproducible and you aren't bound by resources, then a small chance means nothing.

We're bound by resources! Highly so! Stop trying to turn practical questions about what humans can actually accomplish into infinite-monkey-infinite-typewriter thought experiments.

> We don't think other humans are intelligent solely by their behaviour

I wouldn't say that, haha

> It's not about encoding the result of having understood. It's about the process of understanding itself.

A process can be encoded into data. Let's assume it takes X gigabytes to encode comprehension of how a hard drive array works. Since data storage does not grow significantly more complex with size (only physically larger), it stands to reason that an X-GB hard drive array can handily store the process for its own comprehension.

1718627440 4 days ago | parent [-]

> Says who?

Because I think we haven't even started. Where is the proof-based system able to invent every possible human thought paradigm a priori? I think we are so far away from anything like this that we can't even describe the limits. Maybe we never will.

> you can't just assume limitless resources

I assumed that because the resource limits of a very rich human (one for whom money is never the limit) and of the one true AI system are, in my opinion, not different.

> comprehension

Comprehension is already the result. But I don't think this is a soundly definable concept, so maybe I should stop defending it.

bccdee 2 days ago | parent [-]

> Where is the proof based system able to invent every possible thought paradigm of humans a priori?

Beyond the realm of feasibility, I'd imagine. The gulf between what is theoretically possible and what is realistically doable is gargantuan.

> I assumed that, because the resource limits of a very rich human (meaning for whom money is never the limit)

The resources of a very rich human are extremely limited, in the grand scheme of things. They can only mobilize so much of the global economy, and even the entire global economy is only capable of doing so much. That's what I'm getting at: Just because there's some theoretical configuration of matter that would constitute a superintelligence, does not guarantee that humanity, collectively, is capable of producing it. Some things are just beyond us.

osigurdson 4 days ago | parent | prev [-]

I'd say it might scale like whatever your mathematical model is telling you, but it might not. I don't think we have a reasonable model for how human intelligence scales as the number of brains increases. In many meetings it feels more like attenuation than scaling.

ethbr1 6 days ago | parent | prev | next [-]

> The best software around is the one that's opinionated.

This. And it isn't going to change.

The post avoids trying to answer "Why are opinionated tools popular and effective?"

The answer is that the standardized process they encourage is often more efficient than whatever bullshit {random company} came up with in-house.

Malleable software needs to produce two equivalently good outcomes to beat opinionated:

1. Improve the underlying process at the customer's business (in terms of effectiveness)

2. Avoid a customization maintenance burden

The seductiveness of "just for you" bespoke solutions is that they avoid (1) by telling the customer what they want to hear: you're so brilliant, your process is actually better, our product is a custom fit for your exact process, etc. That's bullshit -- a lot of customer processes are half-baked dumpster fires, and their companies would be better served by following standards.

To (2), I am incredibly skeptical on the long-term tech debt that malleable solutions will impose. What happens when there's a bug in the version only you use? Is that going to be the vendor's priority? Oh, you're supposed to fix it yourself? Congrats... we've just added a requirement that these tools are capable of making random mid-level in-house practitioners as competent as focused dev teams. That's a tall order.

Exhibit A that I'd want a follow-up post to address: SAP.

The above is why they realized they were trending in the wrong direction and have been dragging their customer base back to Clean Core.

Walk me through how malleable software would work better for SAP as a product, and I'll begin to believe...

ch4s3 6 days ago | parent | next [-]

Highly customizable configuration causes all kinds of problems in healthcare, and EHR customizations have actually killed people.

RUnconcerned 6 days ago | parent [-]

In my first job I had to work with healthcare software, and it horrified me. There is a standard for interop, HL7, but every system implements HL7 in its own special way, so there are "integration engines" to massage the data so that it all conforms to the same standard.

It's a gigantic grift.

ch4s3 6 days ago | parent [-]

The history of HL7 is kind of nuts. It was originally developed for copper-wire communication in 1979. Formalization was ongoing until maybe the early 1990s, and lots of proprietary usage arose, because back in the 1990s none of these systems really interoperated and everything eventually ended up on paper. It wasn't until after the ACA that a lot of interoperability pushes really got going at scale. Before that you had a few Health Information Exchanges at the state level, so there was usually a local standard if there was an HIE. HL7 FHIR is much more standardized now.

I wouldn't call any of it a grift. It's just old tech built for a fragmented archipelago of systems that didn't communicate. Also, you can write a pretty good HL7v2 parser in an afternoon; I've written maybe five of them.
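
To give a sense of how simple the core format is, here's a rough sketch of a minimal HL7v2 parser in Python. Treat it as illustrative only: real feeds also need repetition (~) and escape (\) handling, which is exactly where the per-vendor quirks live.

  # Segments are CR-separated, fields pipe-separated, components
  # caret-separated; the MSH segment defines the separators themselves.
  def parse_hl7v2(message: str) -> dict:
      """Map segment IDs to lists of [fields split into components]."""
      segments: dict = {}
      for raw in message.replace("\n", "\r").split("\r"):
          raw = raw.strip()
          if not raw:
              continue
          seg_id = raw[:3]
          if seg_id == "MSH":
              # MSH-1 is the field separator itself; MSH-2 lists the
              # component/repetition/escape/subcomponent separators.
              sep = raw[3]
              fields = [seg_id, sep] + raw[4:].split(sep)
          else:
              fields = raw.split("|")
          segments.setdefault(seg_id, []).append(
              [f.split("^") for f in fields])
      return segments

  msg = ("MSH|^~\\&|LAB|HOSP|EMR|CLINIC|202401011200||ORU^R01|42|P|2.3\r"
         "PID|1||12345||DOE^JANE")
  print(parse_hl7v2(msg)["PID"][0][5])  # -> ['DOE', 'JANE']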

ethbr1 5 days ago | parent [-]

The koan that unlocked, for me, why healthcare technology is the way it is:

I was working on automating health insurance claims processing on a mainframe system.

In their main interface, a key form had 8 blanks for ICD codes. If more than 8 codes were needed, a child claim was created and linked to the parent claim.

This was a long project, so I was staring at this interface for months, as linked child claims made automation more complex than it needed to be. (E.g. if a parent claim had aged, been archived, and needed to be reloaded to active overnight before processing the child claim)

Finally, I started asking around. "This is a computer system. Why are there a finite number of fields for something that might need more?"

Nobody knew. Project continued. I continued asking different people.

Finally, I asked a guy who had been working in the industry since the 1960s...

"Oh, because that's how many fields there were on the paper version of the form that preceded the mainframe app."

Which seems insane, until you think it through. There were innumerable downstream processes of that paper form.

Changing the number of fields on the digital version would have cascaded that change downstream to all those processes. In the interest of rapid implementation, the optimal approach was to preserve everything about the form.

And then nobody had a reason to go to the bother to change it for the next 50 years. (And that was a process within a single company!)

1718627440 5 days ago | parent [-]

But you can split these claims into child claims upon printing. That's the thing with good software: the user model and the internal implementation are completely orthogonal. I think a good example of this is Postfix.
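
A rough sketch of that boundary split (Python, with hypothetical names): store one claim with any number of ICD codes internally, and only chunk it into linked 8-code parent/child forms at the printing/export step.

  MAX_CODES_PER_FORM = 8  # inherited from the paper form

  def to_legacy_forms(claim_id: str, icd_codes: list) -> list:
      """Render one internal claim as a parent form plus linked children."""
      forms = []
      for i in range(0, len(icd_codes), MAX_CODES_PER_FORM):
          chunk = icd_codes[i:i + MAX_CODES_PER_FORM]
          forms.append({
              "form_id": claim_id if i == 0 else f"{claim_id}-{i // MAX_CODES_PER_FORM}",
              "parent_id": None if i == 0 else claim_id,  # children link back
              # pad to exactly 8 slots, like the paper form
              "icd_codes": chunk + [""] * (MAX_CODES_PER_FORM - len(chunk)),
          })
      return forms

  forms = to_legacy_forms("C100", [f"I{n:02d}" for n in range(11)])
  print(len(forms))  # -> 2: 8 codes on the parent, 3 (+5 blanks) on the child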

ch4s3 5 days ago | parent [-]

> But you can split these claims into child claims upon printing

Maybe, if business rules and the law allow a thing like that. If insurance won't pay claims like that, then you can't do it.

BinaryIgor 6 days ago | parent | prev [-]

100%. The customization maintenance burden is underestimated: it simply costs a lot of time and energy to customize things, and often there are better uses of that time, especially in a business context.

tablet 6 days ago | parent | prev | next [-]

Your arguments are totally valid; niche tools will be alive and well. My take is that even among niche tools we will see a lot of generalization, and the more flexible niche tools will eventually win.

crote 6 days ago | parent [-]

The problem is that software can be too flexible. A great example is companies ending up using Excel as a load-bearing database, relying on a bunch of incomprehensible macros to execute critical business logic.

Sure, it's flexible, but are they really better off than a competitor using properly engineered one-off software? In the end, is there really a difference between software development and flexible-tool configuration?

godelski 6 days ago | parent | prev [-]

  > Companies can be really complex
I think this is a great argument for flexible code, though it was unclear to me that the author of that post was talking about that.

  > The best software around is the one that's opinionated.
I think I might be on the same page as you but I would say that the best software is written to be an environment more than a specific tool. You're absolutely right that you can't solve all problems.

tikhonj jokingly suggests emacs, but even as a vim user I fully agree. Like they say, the beauty of it is that the complexity draws from simpler foundations. It is written as an environment rather than just as a text editor. Being written that way lets it adapt to many different situations, and that is what has kept both vim and emacs alive and popular after all these years. There's a constant in software development: requirements change with time. The point of writing an environment is that you're able to adapt to those changes, so any tool you write is built out of that environment. Anything short of that means the tool won't be able to adapt as time marches on.

I definitely agree that writing software like this is hard, but I'm not sure it is harder. It takes more work up front, but I'd argue it takes less work in the long run; it's just that in the long run those efforts are distributed across different people and across time. But hey, good flexible code also tends to be much easier to read, and that's a big short-term benefit to anyone coming into a mature project.