tptacek 8 hours ago

It was a bad essay at the time and I don't think you can make a good essay by trying to build off it. Adding "megachurch" to the already strained metaphor didn't improve it.

https://news.ycombinator.com/item?id=35939383

sethev 4 hours ago | parent | next [-]

As you point out in your linked comment, the original essay captured the zeitgeist of the time. It also influenced and inspired many people. From that perspective, it's hard for me to agree that it was bad. However, I don't think the content was original at the time (perhaps that's what you mean by bad?) - ESR wasn't out ahead of people, blazing some new trail - and it also didn't hold up very well factually.

Taniwha 4 hours ago | parent | next [-]

Yeah, it's worth remembering that at the time a compiler cost $10k+ and an OS $1000s/year - you couldn't do OS or compiler work unless you worked for a big hardware company, so a whole lot of interesting work was locked away from most programmers.

jaredklewis 4 hours ago | parent | next [-]

Wasn’t Cathedral and the Bazaar originally published in 1999? Who was paying thousands of dollars a year for an OS in 1999? And I think GCC was already widespread by then, no?

I didn’t start programming until a few years later, but for sure by 2002, it seemed to me a given that compilers were free. It was my impression that stuff like Borland was niche and that serious stuff like Java and C were free.

Not saying you are wrong, just your comment surprised me. Maybe I have a revisionist memory or maybe those intervening 3 years were quite transformational in the industry.

tptacek 4 hours ago | parent | next [-]

The firm I was at in 1997 was shipping commercial software with GCC. There were expensive compilers, but you weren't required to use them. For Windows builds, I think we were using Borland C++, which was hundreds of dollars. Sun had a pretty expensive compiler for Solaris that I remember using for hunting down memory leaks.

LevGoldstein 3 hours ago | parent | prev | next [-]

I recall stuff like the Intel icc compiler being expensive and desirable, and things like client access licenses, hardware licenses (to allow using non-trivial amounts of RAM and multi-processing) and support plans for proprietary OSes being rather expensive. Looking at a SCO Unix price sheet from that era, a license that allowed 150 users and up to 32GB of RAM was $10k.

Prices also varied around which OS features you used. Vendors loved to nickel-and-dime you (separate per-user client licenses for file services, print services, remote access, etc.), generally to drive you towards bigger packages that seemed like a better deal.

duskwuff 2 hours ago | parent | prev | next [-]

2002 was before the tipping point, IMO. Open-source software existed, but wasn't always taken seriously. Linux was still widely perceived as being a hobbyist OS unsuitable for "real" applications. A lot of the Internet still ran on Windows and commercial UNIX servers.

tptacek 2 hours ago | parent [-]

By 2002 I was at Arbor Networks, shipping security software to tier-1 ISPs, and if we'd shipped it on a commercial Unix (let alone Windows) people would have looked at us like we had 2 heads. The writing was on the wall by the end of the first dot-com boom.

scooke 2 hours ago | parent [-]

In 2003 I was somewhere south of Fort Worth, TX, having visited Dinosaur World, and shortly after leaving we stopped at a cafe that had three computers out which you could use. I looked at them while waiting for the coffee and they just seemed off, strange. It wasn't OS 9 or X, it wasn't Windows... What was it? As I went over to look, it hit me - holy cow, those are running that Linux thing I've heard about! Their desktops were beautiful, totally different from the others. I knew then I wanted that.

queenkjuul 2 hours ago | parent | prev [-]

Apple was giving away a C compiler by 1999 afaik, GCC was well established (but going through the egcs drama?). Visual Studio/Visual C++ didn't get a free version until 2005 though.

But yeah imo you're closer to right than not, though Microsoft licenses were still fairly expensive.

sethev 4 hours ago | parent | prev [-]

Yes, that is the context in which I first read it (likely around 1999 when it appeared on slashdot), as a senior in high school with no access to the tools used by most professional programmers at the time.

tptacek 4 hours ago | parent [-]

FreeBSD 2.0 was 1994.

sethev 4 hours ago | parent [-]

Yes, I'm speaking about my experience as I remember it - not what was objectively possible for someone with the right resources and knowledge at the time :)

tptacek 4 hours ago | parent [-]

Right, I'm not so much pushing back on you as I am establishing a chronology for CATB. Ordinary people were absolutely belting out (what we would now call) open source software by the time it was written.

(That's not the biggest flaw in the essay, of course. It made predictions, some of which turned out to be comically wrong. The true parts of it weren't new, and the new parts of it weren't true.)

tptacek 4 hours ago | parent | prev [-]

It was certainly influential. It's just bad on its own merits.

bawolff 3 hours ago | parent | next [-]

I guess it depends on what you think the goal of the essay was. I always felt like the primary goal was to inspire people and a lot of the software engineering parts were more framing. To me it reads as a manifesto disguised as a software engineering essay.

If you take the goal as inspiring people, I think it achieved its goals and then some. I'm pretty sure that CATB brought more people into FOSS than the GNU manifesto ever did.

lurk2 3 hours ago | parent | prev [-]

> Please don't post shallow dismissals, especially of other people's work.

https://news.ycombinator.com/newsguidelines.html

tptacek 3 hours ago | parent [-]

(1) That rule refers to things people have posted to HN in things like "Show HNs" (or their moral equivalents). It isn't a general prohibition on critique, which would be silly.

(2) You may have missed the link to ~1,000 words of detailed criticism of CATB, on which I support my claim here that CATB is bad.

lurk2 2 hours ago | parent [-]

> (1) That rule refers to things people have posted to HN in things like "Show HNs" (or their moral equivalents).

There’s nothing I’m seeing in the text as it is written that suggests this to be the case. There are just a lot of comments I see that amount to: “I don’t like this,” which can be an interesting signal by itself but not if users refuse to elaborate on it, which is what I (erroneously) thought was happening here.

> You may have missed the link to ~1,000 words of detailed criticism of CATB, on which I support my claim here that CATB is bad.

I did miss it, sorry. I clicked through and didn’t notice that the top comment was yours. I assumed you were just linking to a past discussion.

I’m sure you already know this, but on the off chance you don’t, you can click on a comment’s timestamp to get a permalink to the specific comment, like this:

https://news.ycombinator.com/item?id=35940773

tptacek 2 hours ago | parent [-]

HN is a common law system; the real guidelines are the guidelines page itself, and the "jurisprudence" of years and years of Dan (and Tom) writing moderator comments. But you also know you're a little off the rails when you've derived a rule that would prohibit, say, criticism of a book --- "Teach Yourself C In 24 Hours is a bad book". Of course that's OK!

But yeah, the big thing here is that the substance of my critique is on a different thread. It's disfavored to retype things you can just link to. I'd be irritated with me too if I just said "CATB is bad!" and left it at that.

networkadmin 7 hours ago | parent | prev [-]

You're completely wrong. The fact that people are still talking about it today proves it has some kind of worth. The essay was great.

munificent 5 hours ago | parent | next [-]

People are still talking about a flat Earth and creationism. Given 8 billion people, there are enough available braincells to keep even the stupidest idea floating around in the memesphere.

wizzwizz4 7 hours ago | parent | prev | next [-]

People are still talking about null pointers: that doesn't mean they were ever a good idea.

networkadmin 7 hours ago | parent [-]

That's just how the hardware works. Don't like it? Make your own CPU.

tptacek 7 hours ago | parent | next [-]

So the case that you're making here is that CATB is renowned amongst the kind of practitioners who think NULL pointers are "just how the hardware works". Sounds about right.

dvt 6 hours ago | parent | next [-]

I know you're replying to a brand new (likely troll) account, but I'm also very confused by this and would be curious to learn if there's any truth to it. I personally don't really see what a Von Neumann machine has to do with null pointers (or how an implication would go either way), but maybe I'm missing something.

tptacek 6 hours ago | parent | next [-]

The hardware has nothing to do with it; NULL pointers are a property of the programming language.

z3512 6 hours ago | parent | prev | next [-]

NULL pointers working the way they do was a design decision made by hardware engineers a long time ago because it saved some transistors when that mattered. We’re past that point now for most ASICs, and hardware can be changed. Although backward software compatibility is a thing too.

wizzwizz4 5 hours ago | parent [-]

Null pointers have nothing to do with the instruction set architecture, except as far as they are often represented by the value 0. Can you describe the scheme you're imagining, whereby their use saves transistors?

networkadmin 4 hours ago | parent | prev [-]

[dead]

networkadmin 4 hours ago | parent | prev [-]

[dead]

wizzwizz4 7 hours ago | parent | prev | next [-]

No, the CPU doesn't have a special pointer value which is designated invalid (except as far as modern address spaces are so large that you cannot possibly map memory to each address without mirroring). In many OSs, e.g. CP/M, address 0 is actually meaningful. The C idiom of cramming sum-type semantics into the nooks and crannies of a return value that ordinarily means something entirely different is an extremely poor one, and null pointers are the poster child: Tony Hoare's billion-dollar mistake.

It's absolutely fine to have a packed representation of a sum type "under the hood": this is how Rust implements Option<&T> (where T: Thin), for example. It's also fine to expose the layout of this packed representation to the programmer, as C's union does. But it's a huge footgun to have unchecked casts as the default. If not for this terrible convention, C wouldn't have any unchecked implicit casts: something like f(1 + 0.5) performs a coercion, a far more sensible behaviour.
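
To make that contrast concrete, here's a tiny sketch (f is the function from the example above; g is just an illustrative name):

  #include <stdio.h>

  static void f(double x) { printf("%f\n", x); }
  static void g(char *p)  { printf("%p\n", (void *)p); }

  int main(void) {
    f(1 + 0.5);  /* the int 1 is promoted to double before the addition: a
                    well-defined, value-preserving conversion, so f gets 1.5 */
    g(0);        /* the constant 0 silently becomes a null pointer: same syntax,
                    entirely different meaning, and nothing ever checks it */
    return 0;
  }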

The only reason we're talking about null pointers at all is because they were an influential idea, not because they were a good idea. Likewise with the essay.

leoc 4 hours ago | parent | next [-]

While it's narrowly true that CPU instruction sets generally don't have a null-pointer concept, I'm not sure how important that is: the null pointer seems to have been (I don't know enough to be sure) a well-established idiom in assembly programming which carried across naturally to BCPL and C. (In much the same way that record types were, apparently, a common assembly idiom long before they became particularly normal to have in HLLs.) Programmers like being able to null out a pointer field, 0 is an obvious "joker" value, and jump-if-0 instructions tend to be convenient and fast. Whether or not you'd want to say it's "how the hardware works" it does seem to have a certain character of inevitability. Even if the Bell Research guys had disapproved of the idiom they would likely have had difficulty keeping it out of other people's C programs once C became popular. The Hoare ALGOL W thing seems to be more relevant to null pointers in Java and the like.

wizzwizz4 4 hours ago | parent [-]

> Programmers like being able to null out a pointer field, 0 is an obvious "joker" value, and jump-if-0 instructions tend to be convenient and fast.

And there's nothing wrong with that! But you should write it

  union {
    char *ptr;
    size_t scalar;
  } my_nullable_pointer;
  if (my_nullable_pointer.scalar) {
    printf("%s", my_nullable_pointer.ptr);
  }
not:

  char *my_nullable_pointer;
  if (my_nullable_pointer) {
    printf("%s", my_nullable_pointer);
  }
Yes, this takes up more space, but it also makes the meaning of the code clearer. typedef in a header can bring this down to four extra lines per pointer type in the entire program. Add a macro, and it's five extra lines plus one extra line per pointer type. Put this in the standard library, and the programmer has to type a few extra characters – in exchange for it becoming extremely obvious (to an experienced programmer, or a quick-and-dirty linter) when someone's introduced a null pointer dereference, and when a flawed design makes null pointer dereferences inevitable.
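
Roughly the sort of thing I mean - a sketch with illustrative names, building on the union above:

  #include <stddef.h>
  #include <stdio.h>

  /* the "five extra lines": one generic wrapper macro */
  #define NULLABLE(T) \
    union {           \
      T ptr;          \
      size_t scalar;  \
    }

  typedef NULLABLE(char *) nullable_str;  /* one extra line per pointer type */

  int main(void) {
    nullable_str s = { .ptr = "hello" };
    if (s.scalar) {  /* the null check is visibly a check of the scalar view */
      printf("%s\n", s.ptr);
    }
    return 0;
  }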

> The Hoare ALGOL W thing seems to be more relevant to null pointers in Java and the like.

I believe you are correct; but I like blaming Tony Hoare for things. He keeps scooping me: I come up with something cool, and then Tony Hoare goes and takes credit for it 50 years in the past. Who does he think he is, Euler?

II2II 4 hours ago | parent | prev [-]

> No, the CPU doesn't have a special pointer value which is designated invalid

Sort of right, sort of wrong.

From my understanding: older, simpler architectures treat memory location zero as a normal memory address. On x86 and x64, the OS can configure the MMU to treat certain pages as invalid. Many years ago, I ran across a reference to SPARCs treating accesses to memory location zero as invalid. In other words, it depends upon which architecture you're dealing with.
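
A quick way to see that it's the OS's page tables doing the work (strictly, dereferencing a null pointer is undefined behaviour in C, so treat this as an illustration rather than a guarantee):

  #include <stdio.h>

  int main(void) {
    volatile int *p = (volatile int *)0;  /* volatile so the compiler really emits the load */
    /* On most modern OSes this dies with SIGSEGV, but only because the kernel
       leaves the page at address 0 unmapped for the process; the CPU itself
       has no notion of an "invalid pointer value". */
    printf("%d\n", *p);
    return 0;
  }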

AnimalMuppet an hour ago | parent | next [-]

The 68000 series used address 0 for the initial stack pointer and address 4 for the initial (boot) program counter. That meant that they had to be in ROM, which meant that they were not writable. But addresses 8 through 1K were the interrupt vector table, and those did have to be writable.

This led to strange hardware implementations like "0 and 4 point to 0x800000 and 0x800004 (or wherever the ROM is) until a latch is cleared, then they point to 0" - with the latch being cleared fairly early in the boot process. This let you create a different entry point for soft and hard boot, if you wanted.

In that implementation, you could read and write to 0, once the latch was cleared.

Or you could have an implementation where 0 and 4 pointed to ROM always, and you could not have a different entry point for soft boot, and you could not write to 0, ever.

wizzwizz4 3 hours ago | parent | prev [-]

Skimming appendix H of https://courses.grainger.illinois.edu/cs423/sp2011/lectures/..., I can't see any special treatment of the zero page, but https://stackoverflow.com/a/22847758/5223757 contains an anecdote about SPARCs not placing a page of zeroes at that address. I expect that's probably an OS restriction, and they considered it safer to modify the in-house software they understood, rather than tinker with the externally-sourced OS's memory management routines, but the anecdote is weak evidence that it might have been a hardware distinction at one point.

mrkeen 7 hours ago | parent | prev [-]

They aren't there in asm.

charcircuit 6 hours ago | parent [-]

  mov rax, qword ptr [0]   ; loads 8 bytes from address 0: valid asm, it only faults if the OS leaves page 0 unmapped

nyc_data_geek1 7 hours ago | parent | prev [-]

There are lots of proven bad ideas still being bandied about today, and the fact that people keep talking about them does not prove they are anything but enduringly worthless.