Scubabear68 15 hours ago

The image concept, in my opinion, is what really limited Smalltalk's appeal and distribution.

The image meant you basically got whatever state the developer ended up with, frozen in time, with no indication really of how they got there.

Think of today's modern systems and open source, with so many libraries easily downloadable and able to be incorporated in your system in a very reproducible way. Smalltalk folks derided this as a low tech, lowest-common-denominator approach. But in fact it gave us reusable components from disparate vendors and sources.

The image concept was a huge strength of Smalltalk but, really in the end in my opinion, one of the major areas that held it back.

Java in particular surged right past Smalltalk despite many shortcomings compared to it, partially because of this. The other half of Smalltalk's issues, beyond the image one, was cost: Java was free at many levels, while Smalltalk charged for both developer licenses ($$$$!) and runtime licenses (ugh!).

cbsmith 12 hours ago | parent | next [-]

> The image meant you basically got whatever state the developer ended up with, frozen in time, with no indication really of how they got there.

That wasn't a function of the image system. That was a product of your version control/CI/CD systems and your familiarity with them.

Consider that Docker and other container based systems also deploy images. No reason Smalltalk has to be any different.

I did software development work in Smalltalk in the 90's. We used version control (at one point we used PVCS, which was horrible, but Envy was pretty sweet), and we had a build process and build servers that would build deploy images from vanilla images. Even without all that, the Smalltalk system kept a full change log of every single operation it performed, in order. In theory, someone could wipe their changelog, but that's the moral equivalent of deleting the source code for your binary. Image-based systems are no reason to abandon good engineering practices.

lmm 8 hours ago | parent [-]

> Consider that Docker and other container based systems also deploy images.

Consider also that Docker was the only one to really get popular, perhaps because it promoted the idea of using a text-based "Dockerfile" as your source of truth and treating the images as transitory build artifacts (however false this was in practice).

orthoxerox an hour ago | parent | next [-]

It's still mostly true in practice. You don't add one more layer to your image to build the next version; you rebuild it from the Dockerfile, which is the opposite of the Smalltalk approach.
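The distinction orthoxerox describes can be made concrete with a minimal, hypothetical Dockerfile: the text file is the source of truth, and every build starts again from a pinned base image rather than stacking a new layer on yesterday's result.

```dockerfile
# Hypothetical example: the next version is produced by re-running
# this whole file from the pinned base image, not by committing a
# new layer on top of the previous image.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "main.py"]
```

Throw the built image away and `docker build` reproduces it; that disposability is exactly what an evolved Smalltalk image lacks.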

cess11 2 hours ago | parent | prev [-]

Arguably it goes back to chroot-stuff, and LXC predates Docker by some five years or so. I don't remember the details well but Solaris had similar containers, maybe even before LXC arrived.

I'd say the clown popularised it outside of Linux and Unix sysadmin circles, rather than the Dockerfile format itself.

lmm an hour ago | parent [-]

> Arguably it goes back to chroot-stuff, and LXC predates Docker by some five years or so. I don't remember the details well but Solaris had similar containers, maybe even before LXC arrived.

Solaris and FreeBSD had significantly better implementations of the containerisation/isolation piece from a technical standpoint. But they never caught on. I really think the Dockerfile made the difference.

btilly 14 hours ago | parent | prev | next [-]

I agree that the image concept was a problem, but I think that you're focused on the wrong detail.

The problem with an image based ecosystem that I see is that you are inevitably pushed towards using tools that live within that image. Now granted, those tools can be very powerful because they leverage and interact with the image itself. But the community contributing to that ecosystem is far smaller than the communities contributing to filesystem based tools.

The result is that people considering coming into the system have to start by abandoning their familiar toolchain. And for all of the technical advantages of the new toolchain, the much smaller contributor base creates a worse-is-better situation. While the file-based system has fundamental technical limitations, the size of its ecosystem results in faster overall development, and eventually a superior system.

rbanffy 12 hours ago | parent | next [-]

> But the community contributing to that ecosystem is far smaller than the communities contributing to filesystem based tools.

Another point is that you need to export your tools out of your own image so others can import them into their images. This impedance mismatch between image and filesystem was annoying.

Scubabear68 14 hours ago | parent | prev [-]

I think we could quibble over the relative importance of these points, but I agree in general. The image locking you into that ecosystem is definitely a good point.

pjmlp 42 minutes ago | parent | prev | next [-]

One of the reasons Java got adopted was that big Smalltalk names like IBM decided to go all in with Java.

It is no accident that Eclipse to this day has a code navigation perspective based on Smalltalk, an incremental compiler similar to the Smalltalk experience, and a virtual filesystem in its workspaces that mimics the behaviour of Smalltalk images.

rbanffy 12 hours ago | parent | prev | next [-]

> The image meant you basically got whatever state the developer ended up with, frozen in time, with no indication really of how they got there.

I worked with a similar language, Actor (Smalltalk with an Algol-like syntax), and the usual way to deal with distribution was to “pack” (IIRC) the image by pointing to the class your app is an instance of; the tool would then remove every object that wasn't a requirement of your app. With that you got an image that started directly into your app, without any trace of the development environment.

sebastianconcpt 14 hours ago | parent | prev | next [-]

It wasn't the image concept. You use it every day in Docker containers for everything else.

But saving the image has some drawbacks. Mutability always requires special care.

chuckadams 14 hours ago | parent | next [-]

The key is the plural in "Docker containers". You're not doing everything by modifying one Docker container that's been handed down over literally generations, you're rebuilding images as you need to, usually starting from a golden master, but sometimes starting from a scratch image into which you just copy individual files. It's the "cattle, not pets" mentality, whereas a Smalltalk or Lisp Machine image is the ultimate pet.

Jtsummers 14 hours ago | parent [-]

> You're not doing everything by modifying one Docker container that's been handed down over literally generations

You don't do that with Smalltalk, either, at least not for the last 30 years or so. Smalltalk has worked with version control systems for decades to maintain the code outside the image and collaborate with others without needing to share images.

rbanffy 12 hours ago | parent [-]

It’s fun when you realize something that happened 30 years ago is a relatively recent addition to the typical workflow.

Jtsummers 12 hours ago | parent [-]

I try not to think about these things; I've mostly worked with hardware-centric companies and on "legacy" systems. They're doing so many things no one else does, because 5-25 years ago everyone else figured out the lessons from 30-60 years ago, except for these companies.

Scubabear68 14 hours ago | parent | prev [-]

I disagree, it really was the image concept, or very specifically how it was created and maintained over time.

A docker container is composed typically of underlying components. You can cowboy it for sure, but the intent is to have a composable system.

The Smalltalk image resulted from the developer just banging on the system.

isr 14 hours ago | parent [-]

Except that's not really what happened. You're ignoring the range of in-image tools which kept track of who did what, where: from versioning of individual methods, to full-blown distributed version control systems, which predated git.

Not to sound harsh or gatekeep, but folks who keep repeating the canard that "The Smalltalk image resulted from the developer just banging on the system", mostly never used smalltalk in the first place.

Give the original smalltalk devs some credit for knowing how to track code development over time.

Scubabear68 12 hours ago | parent [-]

No, I haven't ignored those tools. They were all stop-gaps that worked in a "meh" way to various degrees. Smalltalk was always optimized for one guy banging away on their solution. Add a second developer and things got much hairier, and more so as you kept adding them.

isr 12 hours ago | parent [-]

Hmm, well I don't know exactly when Monticello was first developed, but it was certainly in heavy use by the early 2000s. How is that "meh" when compared to ... cvs & subversion?

I don't know much about the systems used in commercial smalltalks of the 90s, but I'm sure they weren't "meh" either (others more knowledgeable than me about them can chime in).

Image-centric development is seductive (I'm guilty). But the main issue isn't "we don't know what code got put where, and by whom". There were sophisticated tools available almost from the get-go for that.

It's more a problem of dependencies not being pruned, because someone, somewhere wanted to use them. So lots of stuff remained in the "blessed" image (I'm only referring to Squeak here) which really ought not to have been in the standard distribution. And because it was there, some other unrelated project further down the line used a class here, a class there.

So when you later realise it needed to be pruned, it wasn't that easy.

But nevertheless, it was still done. Witness cuis.

In other words, it was a cultural problem, not a tooling problem. It's not that squeak had too few ways of persisting & distributing code - it had too many.

IMHO, the main problem was never the image, or lack of tools. It was lack of modularisation. All classes existed in the same global namespace. A clean implementation of modules early on would have been nice.

igouy 10 hours ago | parent [-]

1988 "An Overview of Modular Smalltalk"

https://dl.acm.org/doi/pdf/10.1145/62084.62095

wild_egg 9 hours ago | parent | next [-]

A Smalltalk with all reflection removed just sounds pointless, no?

igouy 8 hours ago | parent [-]

The point would have been different.

isr 2 hours ago | parent | prev [-]

Interesting. It shows how aware they were of these 2025 criticisms way back in the 80s (and how much of an oversimplification those criticisms are of the real situation).

You probably already know about this, but in case you didn't, there is one project which adds modules to Cuis Smalltalk:

http://haver.klix.ch/index.html

mpweiher 13 hours ago | parent | prev | next [-]

>> most impressive part of Smalltalk ecosystem is the structure of the image

> The image concept, in my opinion, is what really limited Smalltalk's appeal and distribution.

I'd say these statements are both true. The image concept is very impressive and can be very useful, it certainly achieved a lot of bang for very little buck.

And it also was/is one of the major impediments for Smalltalk, at least after the mid 1980s.

The impressive bit is shown by pretty much the entire industry slowly and painfully recreating the Smalltalk image, just usually worse.

For example, on macOS a lot of applications nowadays auto-save their state and will completely return to the state they were last in. So much so that if you have a lot of TextEdit windows open and want to make sure everything is safe, you kill the program rather than quitting it.

Also, all or most of the shared libraries and frameworks that come with the system are not loaded individually; instead they are combined into one huge image file (the dyld shared cache) that is mapped into your process. At some point Apple stopped shipping the individual framework and shared library binaries.

User interfaces have also trended in the direction of an application that contains its own little world, rather than editing files that exist within the wider Unix filesystem.

The image accomplished all that and more, and did so very efficiently, both in execution speed and in the amount of mechanism required: take a contiguous piece of memory, write it to disk, and make a note of the start pointer. On load, map or read it into memory, fix up the pointers if you didn't manage to load at the same address, and you're ready to go. On G4/G5 era Macs, the latter would take maybe a second or two, whereas Pages, for example, took forever to load if things weren't already cached, despite having much less total data to load.
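The save/load mechanism described above can be sketched in a few lines of Python, simulating machine addresses with plain integers. This is a toy model with hypothetical names; a real image tags which fields are pointers rather than taking a predicate.

```python
# Toy model of Smalltalk-style image save/load: a heap is a list of
# cells, where some cells hold "addresses" (base + offset) into the
# same heap. Saving records the base; loading at a different base
# fixes up every pointer cell by the delta.

def save_image(base, cells):
    """Snapshot is just the old base address plus the raw cells."""
    return {"base": base, "cells": list(cells)}

def load_image(snapshot, new_base, is_pointer):
    """Relocate: add (new_base - old_base) to every pointer cell,
    leaving non-pointer cells (plain integers) untouched."""
    delta = new_base - snapshot["base"]
    return [cell + delta if is_pointer(i) else cell
            for i, cell in enumerate(snapshot["cells"])]
```

Saving a two-cell heap at base 1000 (cell 0 pointing at cell 1, i.e. address 1008) and reloading it at base 5000 shifts the pointer to 5008 while the integer 42 is untouched; loading at the original base is a no-op, which is why the common case was so fast.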

But the drawbacks are also huge. You're really in your little world and going outside of it is painful. On an Alto in the mid to late 1970s I imagine that wasn't much of an issue, because there wasn't really much outside world to connect to, computer-wise, and where would you fit it on a 128KB machine (including the bitmap display)? But nowadays the disadvantages far outweigh the advantages.

With Objective-S, I am building on top of Cocoa's bundle concept: special directories that can contain executable code, data, or both. Being directories, bundles can nest. You can treat a bundle as data that your program (possibly the IDE) can edit. But you can also plonk the same bundle into the Resources folder of an application to have it become part of that application. In fact, the IDE contains an operation to turn the current bundle into an application, by copying a generic wrapper application from its own resources and then placing the current bundle into that freshly created/copied app.

Being directories, data resources in bundles can remain standard files, etc.

With Objective-S being either interpreted or compiled, a bundle with executable code can just contain the source code, which the interpreter will load and execute. Compiling the code inside a bundle to binaries is just an optimization step; the artifact is still a bundle. Removing the source code of a bundle that has an executable binary is just an obfuscation/minimization step; the bundle is still a bundle.

shevy-java 14 hours ago | parent | prev | next [-]

Agreed with that; this is why I think it should unite both the "scripting" and the image approach at the same time.

cess11 14 hours ago | parent | prev | next [-]

Contemporary Smalltalks support git.

rbanffy 12 hours ago | parent [-]

And, more importantly, source code files.

igouy 11 hours ago | parent | next [-]

For decades —

"When you use a browser to access a method, the system has to retrieve the source code for that method. Initially all the source code is found in the file we refer to as the sources file. … As you are evaluating expressions or making changes to class descriptions, your actions are logged onto an external file that we refer to as the changes file. If you change a method, the new source code is stored on the changes file, not back into the sources file. Thus the sources file is treated as shared and immutable; a private changes file must exist for each user."

1984 "Smalltalk-80 The Interactive Programming Environment" page 458

layer8 10 hours ago | parent | prev [-]

But the image isn’t just the code, or classes, it’s also the network of objects (instances). And that’s more difficult to version, or to merge branches of.

igouy 8 hours ago | parent [-]

Given that the instantiation of those objects was triggered by Smalltalk commands, those commands can be recorded and versioned and replayed to instantiate those objects.
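The record-and-replay idea can be sketched in Python (illustrative only, with hypothetical names): every state-changing command is appended to a log, and replaying the log against a fresh environment reproduces the same object state.

```python
# Sketch of command recording and replay: the log is a plain,
# versionable list of commands, and replaying it from scratch
# rebuilds the "image" state deterministically.

class Recorder:
    def __init__(self):
        self.log = []                 # versionable command history

    def execute(self, env, command):
        self.log.append(command)      # record first...
        exec(command, env)            # ...then mutate the live state

    def replay(self):
        """Rebuild the same state by replaying into a fresh environment."""
        fresh = {}
        for command in self.log:
            exec(command, fresh)
        return fresh
```

This is also what makes layer8's objection below concrete: inspecting any historical version means replaying the log up to that point, or keeping intermediate snapshots.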

layer8 7 hours ago | parent [-]

It means that versioning operations, even just displaying the history, effectively have to run the full image from the beginning of the history, or take intermediate snapshots of the image. In addition, there is interaction between the source code changes and the recorded command history. It also doesn't address how merging would be practical. You would have to compare the state of two images side-by-side, or rather three, for three-way merges.

cess11 2 hours ago | parent [-]

This isn't more of a nuisance than things like web testing where you automate login and navigation.

Barrin92 14 hours ago | parent | prev | next [-]

The entire philosophy of Smalltalk was to think of software artifacts as living entities. You can just find yourself in a piece of software, fully inspect everything, and engage with it by way of software archaeology; to do away with the distinction between interacting with, running, and writing software.

They wanted to get away from syntax and files, like an inert recipe you have to rerun every time, so I think if you do away with the image you do away with the core aspect of it.

Computing in general just didn't go the direction they wanted it to go; in many ways I think it was too ambitious an idea for the time. Personally I've always hoped it comes back.

shevy-java 14 hours ago | parent [-]

I'd include both approaches.

The thing is that the "scripting" approach is just so much easier to distribute. Just look at how popular Python got. Smalltalk didn't understand that. Its syntax is also worse than Python's IMO (and Ruby's, of course).

rbanffy 12 hours ago | parent [-]

Once I asked James Gosling what Java did right that Smalltalk did wrong. He simply answered “Smalltalk never played well with others”.

Imposing a very different metaphor from the ground up limited adoption and integration with other tools and environments.

igouy 11 hours ago | parent [-]

Let's remember: Java was free-as-in-beer.

api 12 hours ago | parent | prev | next [-]

Re: the image concept.

A lot of great ideas are tried and tried and tried and eventually succeed, and what causes them to succeed is that someone finally creates an implementation that addresses the pragmatic and usability issues. Someone finally gets the details right.

Rust is a good example. We've had "safe" systems languages for a long time, but Rust was one of the first to address developer ergonomics well enough to catch on.

Another great example is HTTP and HTML. Hypertext systems existed before the web, but none of them were flexible, deployable, open, interoperable, and simple enough to catch on.

IMHO we've never had a pure functional language that has taken off not because it's a terrible idea but because nobody's executed it well enough re: ergonomics and pragmatic concerns.

pluralmonad 9 hours ago | parent [-]

Typed out a response pointing to the really good dev experience that F# and Elixir offer, but neither is "pure". Is Haskell the closest mainstream language to meet a purity requirement?

lawlessone 14 hours ago | parent | prev | next [-]

Isn't this kinda where AI is now?

Like with LLMs, it seems impossible to separate the "reasoning" from the data they have stored to learn that reasoning.

fellowniusmonk 14 hours ago | parent | prev [-]

For this very reason I'm working on a development platform that makes all changes part of a cheaply stored CRDT log. The log is part of the application; there are some types of simulations inside it that we can only timestamp and replay, but we can always derive the starting position with 100% accuracy.
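As an illustration of why a CRDT log is attractive here (a generic sketch, not fellowniusmonk's actual platform): with a grow-only set, the simplest CRDT, merges commute and are idempotent, so replicas can replay each other's logged changes in any order and still converge on the same state.

```python
# Minimal G-Set (grow-only set) CRDT: the logged changes are element
# additions, and merging two replicas is set union. Union is
# commutative, associative, and idempotent, so replay order and
# duplicate delivery don't matter.

class GSet:
    def __init__(self, items=()):
        self.items = set(items)

    def add(self, item):
        # A local change; in a real system this would also be
        # appended to the replicated change log.
        self.items.add(item)

    def merge(self, other):
        # Applying another replica's logged changes.
        return GSet(self.items | other.items)
```

Two replicas that each saw different additions merge to the same set regardless of which direction the merge runs, which is the property that lets the log reconstruct state without coordination.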

joshmarinacci 11 hours ago | parent [-]

Ooh. Tell us more.