moomin 9 hours ago

Arguably the answer is “When Barbara Liskov invented CLU”. It literally didn’t support inheritance, just implementation of interfaces, and here we have her explaining 15-odd years later why she was right the first time.

I used to do a talk about Liskov that included the joke “CLU didn’t support object inheritance. The reason for this is that Barbara Liskov was smarter than Bjarne Stroustrup.”

Scubabear68 9 hours ago | parent | next [-]

There is a reason C++ devs and only C++ devs have nightmares of diamond inheritance.

Oh the damage that language has done to a generation, but at least it is largely past us now.

lll-o-lll 8 hours ago | parent | next [-]

Diamond inheritance is its own special kind of hell, but “protected virtual” members of Java and C# are the “evil at scale” that’s still with us today. An easy pattern that trivially leads to a combinatorial explosion beyond the number of atoms in the universe.

People need to look at a deck of playing cards: 52 cards gives you 8×10^67 possible orderings of the deck. Don’t replicate this in code.

DeathArrow 41 minutes ago | parent [-]

Why do protected virtual methods lead to an explosion?

kccqzy 6 hours ago | parent | prev | next [-]

Every language that permits diamond inheritance causes the devs who dare to use this feature at least some nightmares. It's not a C++ issue.

nine_k an hour ago | parent [-]

It's also cultural, possibly. Python supports diamond inheritance, and clearly states how it handles it (it ends up virtual in C++ terms). But in like 20 years of working with Python I can't remember encountering diamond inheritance in the wild once.
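
For reference, a minimal sketch of what "it ends up virtual in C++ terms" looks like (class names are made up): there is only one shared Base, and C3 linearization picks one unambiguous method resolution order:

    class Base:
        def greet(self):
            return "Base"

    class Left(Base):
        def greet(self):
            return "Left"

    class Right(Base):
        def greet(self):
            return "Right"

    class Diamond(Left, Right):
        pass

    # One shared Base sub-object, and the MRO spells out who wins:
    print(Diamond().greet())                        # Left
    print([c.__name__ for c in Diamond.__mro__])    # ['Diamond', 'Left', 'Right', 'Base', 'object']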

creata 5 hours ago | parent | prev | next [-]

> at least it is largely past us now

What does this mean? There doesn't seem to be a popular alternative to C++ yet, unfortunately.

klardotsh 4 hours ago | parent [-]

Aside from game dev, Rust is being used in quite a lot of green field work where C++ would have otherwise been used.

Game dev world still has tons of C++, but also plenty of C#, I guess.

Agreed that it’s not really behind us though. Even if Rust gets used for 100% of C++’s typical domains going forward (and it’s a bit more complicated than that), there’s tens? hundreds? of millions (or maybe billions?) of lines of working C++ code out there in the wild that’ll need to be maintained for quite a long time - likely on the order of decades.

Animats 3 hours ago | parent [-]

The problem in Rust is that if B is inside of A,

    struct A {
        name: String,
        owned: B
    }

    struct B {
        name: String,
    }
you can't have a writeable reference to both A and B at the same time. This is alien to the way C/C++ programmers think. Yes, there are ways around it, but you spend a lot of time in Rust getting the ownership plumbing right to make this work.
anon291 26 minutes ago | parent | next [-]

It's kind of crazy that OOP is sold to people as 'thinking about the world as objects' and then people expect to have an object, randomly take out a part, do whatever they want with it, and just stick it back in and voila.

This is honestly such an insane take when you think about what the physical analogue would be (which again, is how OOP is sold).

The proper thing here is that, if A is the thing, then you really only have an A, and your reference into B is just that, and should be represented as such, with appropriate syntactic sugar. In Haskell, you would keep around A and use a lens into B, and both get passed around separately. The semantic meaning is different.
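
In Python terms, a rough sketch of that lens idea (a hand-rolled getter/updater pair over hypothetical A/B types mirroring the structs above, not a real lens library):

    from dataclasses import dataclass, replace

    @dataclass
    class B:
        name: str

    @dataclass
    class A:
        name: str
        owned: B

    # A "lens" into A.owned.name: a getter plus an updater that returns a new A,
    # rather than handing out a mutable reference into A's interior.
    def get_owned_name(a: A) -> str:
        return a.owned.name

    def set_owned_name(a: A, new_name: str) -> A:
        return replace(a, owned=replace(a.owned, name=new_name))

    a = A(name="outer", owned=B(name="inner"))
    a2 = set_owned_name(a, "renamed")
    print(get_owned_name(a2))    # renamed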

vlovich123 2 hours ago | parent | prev [-]

> you can't have a writeable reference to both A and B at the same time

> but you spend a lot of time in Rust getting the ownership plumbing right to

I think you maybe meant to say something different because here's the most obvious thing:

    impl A {
        fn simultaneously_writeable(&mut self) -> (&mut str, &mut str) {
            (&mut self.name, &mut self.owned.name)
        }
    }

Now it may take you a while to figure this out if you've never done Rust before, but it is trivial.

Did you perhaps mean simultaneous partial field borrows, where you have two separate functions that return the name fields mutably and you want to use the references returned by those functions simultaneously? That's hopefully going to be solved at some point, but I've only seen the problem rarely in practice, so you may be overstating its true difficulty.

Also, even in a more complicated example you could use RefCell to ensure that you really are grabbing the references safely at runtime while side-stepping the compile time borrow checking rules.

Spivak 8 hours ago | parent | prev [-]

I'm spoiled by Python's incredibly sane inheritance and I always have to keep in mind that inheritance is a very different beast in other languages.

kevindamm 7 hours ago | parent | next [-]

And Python didn't get it right the first time either. It wasn't until Python 2.3, when method resolution order was decided by C3 linearization, that inheritance in Python became sane.

http://mail.python.org/pipermail/python-dev/2002-October/029...

mekoka 7 hours ago | parent [-]

Inheritance being "sane" in Python is a red herring for which many smart people have fallen (e.g. https://www.youtube.com/watch?v=EiOglTERPEo). It's like saying that building a castle with sand is not a very good idea because first, it's going to be very difficult to extract pebbles (the technical difficulty) and also, it's generally been found to be a complicated and tedious material to work with and maintain. Then someone discovers a way to extract the pebbles. Now we have a whole bunch of castles sprouting that are really difficult to maintain.

anon291 24 minutes ago | parent | prev | next [-]

Python is slightly better because it can mostly be manipulated beyond recognition thanks to strong metaprogramming, but Python's operator madness is dangerous. Random code can run at any minute. It's useful for some things and a good scripting language, and a very well designed one, no question there. Still, it would be better if it supported proper type classes. It could retain the dynamic typing, just be more sensible.

xdennis 6 hours ago | parent | prev [-]

I'm always surprised by how arrogant and unaware Python developers are. JavaScript/C++/etc. developers are quite honest about the flaws in their language. Python developers will stare at a horrible flaw in their language and say "I see nothing... BTW JS sucks so hard.".

Let me give you just one example of Python's stupid implementation of inheritance.

In Python you can initialize a class with a constructor that's not even in the inheritance chain (sorry, inheritance tree because Python developers think multiple inheritance is a good idea).

    class A:
        def __init__(self):
            self.prop = 1

    class B:
        def __init__(self):
            self.prop = 2

    class C(A):
        def __init__(self):
            B.__init__(self)


    c = C()
    print(c.prop) # 2, no problem boss
And before you say "but no one does that", no, I've seen that myself. Imagine you have a class that inherits from SteelMan but calls StealMan in its constructor and Python's like "looks good to me".

I've seen horrors you people can't imagine.

* I've seen superclass constructors called multiple times.

* I've seen constructors called out of order.

* I've seen intentional skipping of constructors (with comments saying "we have to do this because blah blah blah")

* I've seen intentional skipping of your parent's constructor and instead calling your grandparent's constructor.

* And worst of all, calling constructors which aren't even in your inheritance chain.

And before you say "but that's just a dumb thing to do", that's the exact criticism of JS/C++. If you don't use any of the footguns of JS/C++, then they're flawless too.

Python developers would say "Hurr durr, did you know that if you add an object and an array in JS you get a boolean?", completely ignoring that that's a dumb thing to do, but Python developers will call superclass constructors that don't even belong to them and think nothing of it.

------------------------------

Oh, bonus point. I've seen people creating a second constructor by calling `object.__new__(C)` instead of `C()` to avoid calling `C.__init__`. I didn't even know it was possible to construct an object while skipping its constructor, but dumb people know this and they use it.

Yes, instead of putting an if condition in the constructor, Python developers in the wild, people who walk among us, who put their pants on one leg at a time like the rest of us, will call `object.__new__(C)` to construct a `C` object.

    def init_c():
        c2 = object.__new__(C)
        c2.prop2 = 'three'
        print(c2.__dict__, type(c2)) # {'prop2': 'three'} <class '__main__.C'>
And Python developers will look at this and say "Wow, Python is so flawless".
dragonwriter 4 hours ago | parent | next [-]

> In Python you can initialize a class with a constructor that's not even in the inheritance chain

No, you can't. Or, at least, if you can, that’s not what you’ve shown. You’ve shown calling the initializer of an unrelated class as a cross-applied method within the initializer. Initializers and constructors are different things.

> Oh, bonus point. I've see people creating a second constructor by calling `object.__new__(C)` instead of `C()` to avoid calling `C.__init__`.

Knowing that there are two constructors that exist for normal, non-native Python classes, that the basic constructor is Class.__new__, and that the constructor Class() itself calls Class.__new__() and then, if Class.__new__() returns an instance i of Class, also calls Class.__init__(i) before returning i, is pretty basic Python knowledge.

> I didn't even know it was possible to construct an object while skipping its constructor, but dumb people know this and they use it.

I wouldn’t use the term “dumb people” to distinguish those who—unlike you, apparently—understand the normal Python constructors and the difference between a constructor and an initializer.
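
A short illustration of that two-step construction (the class name is arbitrary):

    class Widget:
        def __new__(cls):
            print("__new__: allocate the instance")
            return super().__new__(cls)

        def __init__(self):
            print("__init__: initialize the instance")
            self.ready = True

    w = Widget()                      # Widget() runs __new__, then __init__
    bare = Widget.__new__(Widget)     # allocation only; __init__ never runs
    print(hasattr(bare, "ready"))     # False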

kccqzy 6 hours ago | parent | prev | next [-]

Oh I've seen one team constructing an object while skipping the constructor for a class owned by another team. The second team responded by rewriting the class in C. It turns out you cannot call `object.__new__` if the class is written in native code. At least Python doesn't allow you to mess around when memory safety is at stake.

4 hours ago | parent | prev | next [-]
[deleted]
maleldil 5 hours ago | parent | prev | next [-]

For what it's worth, pyright highlights the problem in your first example:

    t.py:11:20 - error: Argument of type "Self@C" cannot be assigned to parameter "self" of type "B" in function "__init__"
        "C*" is not assignable to "B" (reportArgumentType)
    1 error, 0 warnings, 0 information 
ty and pyrefly give similar results. Unfortunately, mypy doesn't see a problem by default; you need to enable strict mode.
drekipus 6 hours ago | parent | prev | next [-]

1. Your first example is very much expected, so I don't know what's wrong here.

2. Your examples / post in general seem to be "people can break semantics and get to the internals just to do anything", which I agree is bad, but Python works on the principle of "we're all consenting adults", and just because you can, doesn't mean you should.

I definitely don't consent to your code, and I wouldn't allow it to be merged in main.

If you or your team members have code like this, and it's regularly getting pushed into main, I think the issue is that you don't have safeguards for design or architecture

The difference with the JavaScript "hurr durr add object and array" is that it is not an architectural thing. That is a runtime / language semantics thing. One would be right to complain about that.

Spivak 4 hours ago | parent | prev [-]

I don't understand the problem with your first example. The __init__ method isn't special and B.__init__ is just a function. Your code boils down to:

    def some_function(obj):
      obj.prop = 2

    class Foo:
      def __init__(self):
        some_function(self)

    # or really just like

    class Foo:
      def __init__(self):
        self.prop = 2
Which like, yeah of course that works. You can setattr on any object you please. Python's inheritance system ends up being sane in practice because it promises you nothing except method resolution and that's how it's used. Inheritance in Python is for code reuse.

Your examples genuinely haven't even scratched the surface of the weird stuff you can do when you take control of Python's machinery: self is just a convention, you can remove __init__ entirely, types are made up and the points don't matter. Foo() isn't even special, it's just __call__ on the class's type, and you can make that do anything.
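
To illustrate that last point, a small sketch (hypothetical names): Foo() just dispatches to __call__ on Foo's metaclass, and a metaclass can make it do something else entirely:

    class Singleton(type):
        _instance = None

        def __call__(cls, *args, **kwargs):
            # Intercept Foo(): reuse a single instance instead of building a new one.
            if cls._instance is None:
                cls._instance = super().__call__(*args, **kwargs)
            return cls._instance

    class Foo(metaclass=Singleton):
        pass

    print(Foo() is Foo())    # True: "calling the class" does whatever the metaclass decides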

dragonwriter 4 hours ago | parent [-]

With the assumptions typical of static class-based OO (but which may or may not apply in programs in Python), this naively seems like a type error, and even when it isn't, it introduces a coupling where the class where the call is made likely depends on the internal implementation (not just the public interface) of the called class, which is... definitely an opportunity to introduce unexpected bugs easily.

zozbot234 41 minutes ago | parent | prev | next [-]

There's nothing wrong with implementation inheritance, though. Generic typestate is implementation inheritance in a type-theoretic trench coat. We were just very wrong to think that implementation inheritance has anything to do with modularity or "programming in the large": it turns out that these are entirely orthogonal concerns, and implementation inheritance is best used "in the small"!

JBits 7 hours ago | parent | prev | next [-]

If CLU only supported composition, was the Liskov substitution principle still applicable to CLU?

ebiederm 5 hours ago | parent | next [-]

CLU implemented abstract data types, what we commonly call generics today.

The Liskov substitution principle in that context pretty much falls out naturally, as the entire point is to substitute types into your generic data structure.

mannykannot 5 hours ago | parent | prev | next [-]

Yes it is, as it is about the semantics of type hierarchies, not their syntax. If your software has type hierarchies, then it is a good idea for them to conform to the principle, regardless of whether the implementation language's syntax includes inheritance.

It might be argued that CLU is no better than typical OO languages in supporting the principle, but the principle is still valid - and it was particularly relevant at the time Liskov proposed it, as inheritance was frequently being abused as just a shortcut to do composition (fortunately, things are better now, right?)

jerf 7 hours ago | parent | prev [-]

No, because the LSP is specifically about inheritance, or subtyping more generally. No inheritance/subtyping, no LSP.

It is true that an interface defines certain requirements of things that claim to implement it, but merely having an interface lacks the critical essence of the LSP. The LSP is not merely a banal statement that "a thing that claims to implement an interface ought to actually implement it". It is richer and more subtle than that, though perhaps from an academic perspective, still fairly basic. In the real world a lot of code technically violates it in one way or another, though.
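
A small Python sketch of the difference (made-up classes): the subclass satisfies the interface perfectly well, but quietly breaks a behavioral guarantee that callers rely on, which is the kind of violation the LSP is actually about:

    class Counter:
        """Contract: add(n) increases count() by exactly n, for n >= 0."""
        def __init__(self):
            self._count = 0

        def add(self, n: int) -> None:
            self._count += n

        def count(self) -> int:
            return self._count

    class CappedCounter(Counter):
        # Same interface, but silently violates the parent's contract.
        def add(self, n: int) -> None:
            self._count = min(self._count + n, 10)

    def client_code(c: Counter) -> None:
        before = c.count()
        c.add(5)
        # Holds for any honest Counter; fails for a nearly-full CappedCounter.
        assert c.count() == before + 5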

zelphirkalt 9 hours ago | parent | prev | next [-]

I mean, it's not that hard to understand why composition is to be preferred when you could easily just use composition instead of inheritance. It's just that people who don't want to think have been cargo-culting inheritance ever since they first heard about it, as they don't think much further than the first reuse of a method through inheritance.

ardit33 7 hours ago | parent [-]

Composition folks can get very dogmatic.

I have some data types (structs or objects) that I want to serialize and persist, and that have some common attributes or behaviors.

In Swift I can have each object conform to Hashable, Identifiable, Codable, etc., and keep repeating the same stuff over and over, or just create a base DataObject and have the specific data objects inherit it.

In Swift you can do it with protocols (and extensions of them), but after a while they start looking exactly like object inheritance, and nothing like composition.

Composition was preferred when many other languages didn't support object orientation out of the gate (think Ada, Lua, etc.), and tooling (IDEs) was primitive, but almost all modern languages do support it, and the tooling is insanely great.

Composition is great when you have behaviour that can be widely different depending on runtime conditions. But when you keep repeating yourself over and over by adopting the same protocols, perhaps you need some inheritance.

The one negative of inheritance is that when you change some behaviour of a parent class, you need to do more refactoring, as there could be other classes that depend on it. But, again, with today's IDEs and tooling, that is a lot easier.

TLDR: Composition was preferred in a world where languages didn't support proper object inheritance out of the gate, and tooling and IDEs were still rudimentary.

bccdee 4 hours ago | parent [-]

> In Swift I can have each object conform to Hashable, Identifiable, Codable, etc., and keep repeating the same stuff over and over, or just create a base DataObject and have the specific data objects inherit it.

But then if you need a DataObject with an extra field, suddenly you need to re-implement serialization and deserialization. This only saves time across classes with exactly the same fields.

I'd argue that the proper tools for recursively implementing behaviours like `Eq`, `Hashable`, or `(De)Serialize` are decorator macros, e.g. Java annotations, Rust's `derive`, or Swift's attached macros.
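
In Python terms, the closest analogue is probably dataclasses (a quick sketch, class names made up): the decorator derives __eq__ and __hash__ from the declared fields, so a class with an extra field gets a correct derivation for free instead of inheriting a stale one:

    from dataclasses import dataclass

    @dataclass(frozen=True)    # derives __init__, __repr__, __eq__, and __hash__
    class DataObject:
        name: str
        id: int

    @dataclass(frozen=True)    # extra field: the derived behavior adapts automatically
    class TaggedDataObject:
        name: str
        id: int
        tag: str

    print(DataObject("a", 1) == DataObject("a", 1))                          # True
    print(TaggedDataObject("a", 1, "x") == TaggedDataObject("a", 1, "y"))    # False: tag participates in the derived __eq__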

jkhdigital an hour ago | parent [-]

Yes, all behaviors should be implemented like definitions in category theory: X behaves like a Y over the category of Zs, and you have to recursively unpack the definition of Y and Z through about 4-5 more layers before you have a concrete implementation.

7 hours ago | parent | prev | next [-]
[deleted]
wk_end 8 hours ago | parent | prev [-]

I mean, duh. The spicier take is that Barbara Liskov is smarter than Alan Kay.

bitwize 7 hours ago | parent [-]

Except that Smalltalk is so aggressively duck-typed that inheritance is not particularly first class except as an easy way to build derived classes using base classes as a template. When it comes to actually working with objects, the protocol they follow (roughly: the informally specified API they implement) is paramount, and compositional techniques have been a part of Smalltalk best practice since forever ago (something it took C++ and Java devs decades to understand). This allows you to abuse the snotdoodles out of the doesNotUnderstand: operator to delegate received messages to another object or other objects; and also the become: operator to substitute one object for another, even if they lie worlds apart on the class-hierarchy tree, usually without the caller knowing the switch has taken place. As long as they respond to the expected messages in the right way, it all adds up the same both ways.
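
The rough Python analogue of that doesNotUnderstand: delegation trick (a sketch, not Smalltalk semantics) is __getattr__ forwarding unknown messages to a wrapped object:

    class LoggingProxy:
        """Delegate any message we don't understand to the wrapped target."""
        def __init__(self, target):
            self._target = target

        def __getattr__(self, name):
            # Only called when normal lookup fails, i.e. for messages we don't understand.
            print(f"forwarding {name!r}")
            return getattr(self._target, name)

    proxy = LoggingProxy([1, 2, 3])
    proxy.append(4)           # forwarded to the underlying list
    print(proxy._target)      # [1, 2, 3, 4]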