The Future of Programming (2013) [video](youtube.com)
140 points by jackdoe 7 days ago | 88 comments
tov_objorkin 9 hours ago | parent | next [-]

I was greatly inspired by his work. After building up enough skill, I even built my own IDE with live coding and time travel. Its practical use is questionable, and it seems like nobody is really interested in such tools.

Playground: https://anykey111.github.io

Images: https://github.com/anykey111/xehw

jasonjmcghee 5 hours ago | parent | next [-]

I've dabbled a lot in this space as well: I built an experimental language that natively supports live coding, after first building live-coding capabilities through LSP for love2d (Lua) to get a feel for the feature set I wanted.

Love2D Demo https://github.com/jasonjmcghee/livelove

Language Demo https://gist.github.com/jasonjmcghee/09b274bf2211845c551d435...

tov_objorkin 5 hours ago | parent [-]

Nice. The main problem is broken state. I use immutability at the language level to prevent disastrous code changes, so the program is literally unkillable during live coding, and you can jump back to saved checkpoints without restarts.

jasonjmcghee 5 hours ago | parent [-]

Yeah, the language here has a notion of the "last good state" so it can keep running. In the demo I'm not hitting "save" - the moment there's a good state, it becomes the "current version" - but there's no reason it needs to be that way.

I made the decision that state management is manual - the "once" keyword. Any expression/block not using "once" is re-evaluated any time there's a change to the code. If it is using it, it only re-evaluates if you change the (depth 0) code of that once-wrapped expression.
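
A rough Python analogy of those "once" semantics - purely illustrative, not the commenter's actual language: values of once-marked expressions are cached by their source text, so they survive reloads until that exact expression is edited.

    import random

    once_cache = {}  # source text of a `once` expression -> cached value

    def run(exprs):
        """Called on every reload. exprs: list of (source_string, marked_once)."""
        results = []
        for src, marked_once in exprs:
            if marked_once and src in once_cache:
                results.append(once_cache[src])  # state survives the reload
                continue
            value = eval(src)  # re-evaluated on this reload
            if marked_once:
                once_cache[src] = value
            results.append(value)
        return results

    program = [("random.random()", True), ("1 + 1", False)]
    first = run(program)   # both expressions evaluate
    second = run(program)  # the `once` value is reused; `1 + 1` re-runs
    assert first[0] == second[0]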

tov_objorkin 4 hours ago | parent [-]

In my case, only part of the program is recompiled and re-evaluated. The rest lives in a "committed" frozen area. Users can try new changes and throw them away freely. The editor performs an evaluation/rollback on every keystroke, ensuring no accumulated or unintended changes to the state are made during editing. When the user is satisfied and hits run, a long-term snapshot is created and the source code snippet moves to the frozen area. That's critical, because the edit also rolls back the file positions and streams.
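
A toy Python sketch of that evaluate/rollback cycle (assumed names and mechanism; the actual tool is Forth-like, per the thread): keystrokes evaluate candidate code against a throwaway copy of the state, and only an explicit "run" advances the committed snapshot.

    from types import MappingProxyType

    committed = MappingProxyType({"counter": 0})  # frozen, immutable area

    def try_edit(snippet, state):
        """Evaluate a candidate snippet against a throwaway copy of the state."""
        scratch = dict(state)  # cheap snapshot
        exec(snippet, {}, scratch)
        return scratch  # discarded unless explicitly committed

    def commit(snippet):
        """User hit run: make the result the new long-term snapshot."""
        global committed
        committed = MappingProxyType(try_edit(snippet, committed))

    try_edit("counter += 1", committed)  # keystroke preview, no lasting effect
    assert committed["counter"] == 0     # rollback: committed state untouched
    commit("counter += 1")               # explicit run creates a new snapshot
    assert committed["counter"] == 1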

xixixao 5 hours ago | parent | prev | next [-]

Me too, for my master's thesis:

https://m.youtube.com/watch?v=HnZipJOan54&t=1249s

It was a language designed alongside its IDE (which was a fairly rudimentary web app).

kragen 3 hours ago | parent [-]

Exciting stuff, thanks for sharing!

ianbicking 6 hours ago | parent | prev | next [-]

I've come around to feeling that if I'm going to make an experimental development tool, I need to make it in service of building something specific. Maybe something playful... if I'm building something "important", that can put unwanted conservative pressure on the tool. But something - and if I do that, then at least I have something interesting regardless of the fate of the development tool. Because yeah, there's a good chance no one else is going to be excited about the tool, so I have to build for my own sense of excitement, be my own most enthusiastic user.

WhyOhWhyQ 3 hours ago | parent | next [-]

I share a similar sentiment.

I have a deep drive to build the "important" stuff so that my life has meaning, but there's something hard to motivate about any given thing being "important" when you look at it long enough. It seems like the "important" thing I'm building eventually looks ridiculous and I bounce off of it.

tov_objorkin 5 hours ago | parent | prev [-]

Maybe this is some kind of art that doesn't need to be useful.

jrochkind1 2 hours ago | parent | prev | next [-]

There was recently an HN post with a video of someone using a pretty cool environment that supported that kind of live-coding for creating an electronic music track -- it seemed very appropriate there, and I would guess likely to be popular.

HighGodLoki 3 hours ago | parent | prev | next [-]

I think your time might be now.

One major issue with vibe coding is parsing divergent code paths, when different prompts create different solutions and architectural compromises.

Parsing that mess is a major headache, but with live coding and time travel, I bet those tools would make managing divergent code branches easier and really take advantage of branching repositories with multiple agents all working in tandem.

mccoyb 8 hours ago | parent | prev [-]

This is excellent: thank you for pursuing these wonderful ideas.

tov_objorkin 7 hours ago | parent [-]

I wish I had the skills to explain my work as well as Bret Victor does. Editing, reverting, and committing parts of a running program feel alien to users.

tacon 5 hours ago | parent [-]

Isn't that part of Paul Graham's startup lore? They were running Lisp web servers for their e-commerce store, and while a customer was on the phone with an issue, they would patch the server live and ask the customer to reload. Customers would hang up convinced it was their personal glitch.

tov_objorkin 5 hours ago | parent [-]

The tool uses a Forth-like language with immutable data structures and persistent memory snapshots. It also uses Clojure-style metadata and compile-time metaprogramming. I've had no luck convincing people that a language without curly brackets is useful.

dang 4 hours ago | parent | prev | next [-]

Related. Others? I thought there were others, since I remember this one as a classic...

The Future of Programming (2013) - https://news.ycombinator.com/item?id=44746821 - July 2025 (10 comments)

Bret Victor – The Future of Programming (2013) [video] - https://news.ycombinator.com/item?id=43944225 - May 2025 (1 comment)

The Future of Programming (2013) - https://news.ycombinator.com/item?id=32912639 - Sept 2022 (1 comment)

The Future of Programming (2013) - https://news.ycombinator.com/item?id=15539766 - Oct 2017 (66 comments)

References for “The Future of Programming” - https://news.ycombinator.com/item?id=12051577 - July 2016 (26 comments)

Bret Victor The Future of Programming - https://news.ycombinator.com/item?id=8050549 - July 2014 (2 comments)

The Future of Programming - https://news.ycombinator.com/item?id=6129148 - July 2013 (341 comments)

pjmlp 10 hours ago | parent | prev | next [-]

The future we have yet to achieve, as we kept ourselves too busy doing UNIX clones.

While the ecosystem got a few good ideas for software development, even the authors eventually moved on to creating other OS and programming language designs, some of them closer to those ideas, like Inferno and Limbo, or Acme on Plan 9.

grosswait 9 hours ago | parent | next [-]

Seems to me the big failure was sticking with the von Neumann architecture. Perhaps that was a forcing function towards where we've ended up.

noosphr 9 hours ago | parent [-]

The big failure is that we stick with languages designed for computers and not people.

A C (or Rust) kernel is a heroic effort that takes man-years to complete. A Lisp one is an end-of-semester project that everyone builds for their make-believe machine (also implemented in Lisp).

jane2plane 8 hours ago | parent | next [-]

A toy C kernel is also an end-of-semester project.

What makes real kernels take man-years to complete is the hardware support; the majority of Linux source code is drivers - the endless tables of hardware register definitions, opcodes, and state-machine handling.

blue_pants 8 hours ago | parent [-]

But couldn't we do something about that as well? Couldn't drivers be built on some abstraction that would simplify some work?

I have zero knowledge about this area though

marcosdumay 6 hours ago | parent | next [-]

If you want multiplatform drivers that you can use to plug your device into computers of any architecture, there are abstractions for that. IMO, it's easier to write 3 or 4 versions of your driver than to use them, but they exist and some people really like them.

If you mean standard logical interfaces, those exist. Also, hardware interfaces are highly standardized.

The problem is that the drivers are exactly the code you write to make all the abstractions fit each other. So there is very little you can do to abstract them away.

acedTrex 7 hours ago | parent | prev | next [-]

I'm sure the hardware folks will be lining up to cooperate with the annoying software engineers giving them abstract constraints lol

scottLobster 7 hours ago | parent | prev | next [-]

If you could get every hardware manufacturer in the world on board with such an interface, perhaps. But even if 90% of them were on board, there would be edge cases that people and companies would demand support for, and there goes your standard.

Drivers exist to ultimately turn actual hardware circuits off and on, often for highly specialized and performance-critical applications, and are often written based on the requirements of a circuit diagram. So any unified driver platform would also involve unified hardware standards, likely to the detriment of performance in some applications - and good luck telling electrical engineers around the world to design circuits to a certain standard so the kernel developers can have it easier.

ssrc 7 hours ago | parent | prev | next [-]

If you are ok with the performance you can obtain from an FPGA, you could do it now. Look at FPGA hardware-software co-design and related stuff.

If you mean, in general, for the hardware that already exists, that's what the HAL (Hardware Abstraction Layer) of the operating system tries to do.

ModernMech 7 hours ago | parent | prev | next [-]

Somebody somewhere has to do the work of making sure everything works together. Right now that's the OS. You're proposing moving that work to a standards committee. Either way, the problem persists. You either do that or go the Apple way, which is to vertically integrate the whole stack from hardware to software - but then you have Apple's problem, which is lower hardware compatibility.

Ygg2 7 hours ago | parent | prev [-]

> Couldn't drivers be built on some abstraction that would simplify some work?

That's like asking the alchemist to publicly publish their manuscripts.

In an ideal world, yes. However, we don't live there. Until a few years ago, GPUs and other drivers were guarded more carefully than fucking Fort Knox.

Once you publish your drivers, you reveal a part of the inner workings of your hardware, and that's a no-no for companies.

Plus, what the other commenter said - getting hardware guys to design for a common driver interface is probably not gonna get traction.

ejflick 9 hours ago | parent | prev | next [-]

It is unfortunate that this field underestimates the importance of the "people" part in favor of the "computer" part. There's definitely a balance to be struck. I do believe that languages designed for computers have done a pretty decent job of adapting features that are geared more towards the "people" part of the equation. Unfortunately, programmers are very tribal and very eager to toss the wine out with the cork when it comes to ideas that might help but that they've misapplied.

bee_rider 9 hours ago | parent | prev [-]

How is Lisp performance these days? It was around in the 70’s, right? So I guess the overhead couldn’t be too bad!

tmtvl 2 minutes ago | parent | next [-]

Both Lisps (Common and Scheme) are garbage-collected, so they're in the 'slow as molasses' group of languages (which covers pretty much everything outside of C, C++, Rust, Fortran, Pascal, Ada, and assembly); but among the 'slow as molasses' group, Common Lisp (at least SBCL, which may be the most prolific implementation) is blazingly, scorchingly, stupendously fast. If you know how to use it, it's a bat out of hell outrunning greased lightning.

On the Scheme side of things Chez is pretty fast. It's not 'I've gained a whole new level of respect for the people who engineered my CPU' levels fast, but it's still pretty decent.

aDyslecticCrow 9 hours ago | parent | prev | next [-]

Considering how much of modern software is written in JavaScript and Python, I have a hard time seeing how Lisp overhead would pose much of a problem. Erlang was good enough for telecom equipment 30 years ago, so that also gives us a data point.

If we entertain the idea that the von Neumann architecture may be a local maximum, then we can do even better; Lisp machines had specialized instructions for Lisp, which allowed it to run at performance competitive with normal programming languages.

The issue doesn't seem to be performance; it seems to still come down to being too eccentric for a lot of use cases, and difficult for many humans to grasp.

- https://en.wikipedia.org/wiki/Erlang_(programming_language)

- https://en.wikipedia.org/wiki/Lisp_machine

noosphr 8 hours ago | parent [-]

>The issue doesn't seem to be performance; it seems to still come down to being too eccentric for a lot of use cases, and difficult for many humans to grasp.

Lisp is not too difficult to grasp; it's that everyone suffers from infix-operator brain damage inflicted in childhood. We are in the same place Europe was in 1300. Arabic numerals are here and clearly superior.

But how do we know we can trust them? After all, DCCCLXXIX is so much clearer than 879 [0].

Once everyone who is wedded to infix notation is dead, our great-grandchildren will wonder what made so many people waste so much time implementing towers of abstraction to accept and render a notation that only made sense for quill and parchment.

[0] https://lispcookbook.github.io/cl-cookbook/numbers.html#work...
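
To make the notation point concrete: because prefix notation is uniform, a complete evaluator for arithmetic s-expressions fits in a few lines, with no precedence or associativity tables. A Python sketch, writing s-expressions as nested lists:

    import operator
    from functools import reduce

    OPS = {"+": operator.add, "-": operator.sub,
           "*": operator.mul, "/": operator.truediv}

    def evaluate(expr):
        if not isinstance(expr, list):  # atoms evaluate to themselves
            return expr
        op, *args = expr
        return reduce(OPS[op], (evaluate(a) for a in args))

    # (+ (* 3 4) 5) written as nested lists:
    print(evaluate(["+", ["*", 3, 4], 5]))  # 17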

nerdponx 8 hours ago | parent [-]

It's not about prefix notation; it's that the fully uniform syntax has legitimate ergonomic problems for editing, human reading, and static analysis. Sexprs are better for computers than for humans in a lot of ways.

aDyslecticCrow an hour ago | parent | next [-]

I see you are debating Lisp's ergonomics, but that doesn't dismiss the paradigm. Erlang, Haskell, and Prolog have far better syntax readability, so I don't see this as really relevant to discussing the alternative to von Neumann.

There are other ergonomics issues beyond syntax that pose obstacles to adoption (Haskell in production has become something of a running gag). Moving the paradigm into a mixed language alongside procedural code seems to have helped a lot with its adoption in recent years (Swift, Rust, Python, C++).

pjmlp 8 hours ago | parent | prev [-]

Only when not using one of the many Lisp editors that have existed since the Lisp Machines (Symbolics, TI) and Interlisp-D (Xerox), and that survive in Emacs SLIME, Cursive, LispWorks, Allegro Common Lisp, Racket, and VS Code's Calva.

nerdponx 7 hours ago | parent [-]

Not true at all IMO. Reading code is reading code regardless of whether you have a fancy IDE or not.

S-expressions are indisputably harder to learn to read. Most languages have some flexibility in how you can format your code before it becomes unreadable or confusing. C has some, Lua has some, Ruby has some, and Python has maybe fewer, but only because you're more tightly constrained by the whitespace syntax. Sexpr-family languages, meanwhile, rely heavily on very specific indentation structure just to make the code intelligible, let alone actually readable. It's not uncommon to see things like ))))))))) at the end of a paragraph of code. Yes, you can learn to see past it, but it's there, and it's an acquired skill that simply isn't necessary for other syntax styles.

And moreover, the attitude in the Lisp community that you need an IDE kind of illustrates my point.

To write a Python script you can pop open literally any text editor and have a decent time just banging out your code. This can scale up to 100s or even 1000s of LoC.

You can do that with Lisp or Scheme too, but it's harder, and the stacks of parentheses can get painful even if you know what you're doing, at which point you really start to benefit from a paren matcher or something more powerful like Paredit.

You don't really need the full-powered IDE for Lisp any more than you need it for Python. In terms of runtime-based code analysis, Python and Ruby are about on par with Lisp, especially if you use a commercial IDE like JetBrains'. IDEs can and do keep a running copy of any of those interpreters in memory and dynamically pull up docstrings, look for call sites, rename methods, run a REPL, etc. Hot-reloading is almost as sketchy in Lisp as it is in Python; it's just more culturally acceptable to do it in Lisp.
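
For comparison, Python's stock hot-reload mechanism - a runnable sketch (the throwaway module is written to disk just to make it self-contained):

    import importlib
    import pathlib
    import sys

    # hypothetical throwaway module, created only for this demo
    pathlib.Path("mymodule.py").write_text("class Thing:\n    pass\n")
    sys.path.insert(0, ".")
    import mymodule

    obj = mymodule.Thing()      # instance of the old class
    importlib.reload(mymodule)  # re-executes the source in the same module

    # old instances don't migrate to the reloaded class - the sketchy part
    print(isinstance(obj, mymodule.Thing))  # False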

The difference is that Python and Ruby syntax is not uniform and therefore is much easier to work with using static analysis tools. There's a middle ground between "dumb code editor" and "full-power IDE" where Python and Ruby can exist in an editor like Neovim and a user can be surprisingly productive without any intelligent completion, or using some clunky open-source LSP integration developed by some 22 year old in his spare time. With Lisp you don't have as much middle ground of tooling, precisely because it's harder to write useful tooling for it without a running image. And this is even more painful with Scheme than with Lisp because Scheme dialects are often not equipped to do anything like that.

All that is to say: s-exprs are hard to deal with for humans. They aren't for humans to read and write code. They never were. And that's OK! I love Lisp and Scheme (especially Gauche). It's just wrong to assert that everyone is brain damaged and that's why they don't use Lisp.

macintux 6 hours ago | parent | next [-]

It surprised me to learn that John McCarthy never intended S-expressions to be the human-facing syntax of LISP.

http://jmc.stanford.edu/articles/lisp/lisp.pdf

jolt42 an hour ago | parent [-]

"One can even conjecture that LISP owes its survival specifically to the fact that its programs are lists, which everyone, including me, has regarded as a disadvantage"

Not the first time someone didn't realize what they had.

pjmlp 6 hours ago | parent | prev [-]

Programming without an IDE in the 21st century is like making fire with stones and wood sticks.

A required skill for survival in the woods, not something to do daily.

This point of view applies to any programming language.

By the way, the two languages you use as examples are decades behind Lisp regarding GC technology and native code generation.

jjnoakes 4 hours ago | parent [-]

I view code in many contexts though - diffs in emails, code snippets on web pages, in github's web UI, there are countless ways in which I need to read a piece of code outside of my preferred editor. And it is nicer, in my opinion, to read languages that have visually distinct parts to them. I'm sure it is because I'm used to it, but it really makes it hard to switch to a language that looks so uniform and requires additional tools outside of my brain to take a look at it.

mjhay 7 hours ago | parent | prev [-]

Depends on the Lisp, but Clojure is in the same order of magnitude as Java for the most part, and SBCL Common Lisp is one of the fastest garbage-collected languages.

mccoyb 6 hours ago | parent | prev [-]

There is no better time than now to try something brash and perpendicular to the mainstream.

floppyd 7 hours ago | parent | prev | next [-]

The non-linear code structure (including visually) is something I've been thinking about for a long time and arrived at very naturally. I'm the "spread all the papers on the table to take in every interaction all at once" type of person, and so often I imagined a code editor that would allow me to just "cut" a piece of code and move it to the side. Separating stuff into files is kinda this, but it's not visual and just creates a lot of mess when I try to separate out small functions that are not reusable somewhere else. I don't even need the underlying non-linearity — just let me move the papers around on my code desk!

marcelr 4 hours ago | parent | next [-]

yea i tried to do this (somewhat successfully) with a custom editor for css https://github.com/feralsoft/charisma (demos on my old x https://x.com/charisma_css)

css is primed for this since you can write your rules in such a way that rule order doesn't matter, which means you really don't have to think about where your code is

in my dream world, i have very smart search (probably llms will help), i look at just the minimal amount of code (ideally on a canvas), edit it and remove it from my context

i don't care where or how the code is stored, let the editor figure it out and just give me really good search and debuggers

zahlman 2 hours ago | parent [-]

> i don't care where or how the code is stored, let the editor figure it out and just give me really good search and debuggers

I care, because I don't want any vendor lock-in. "The unreasonable effectiveness of plain text" hasn't gone anywhere.

zwp 7 hours ago | parent | prev [-]

You might like https://cs.brown.edu/~spr/codebubbles/

kreetx 11 hours ago | parent | prev | next [-]

This part is interesting with regard to LLMs: https://youtu.be/8pTEmbeENF4?t=817. He presents as if it were the year 1973, pokes fun at APIs (think HTTP), then says that computers in the future will figure out by themselves how to talk to each other. The opposite had become true by the time the presentation was actually given, but now the situation is turning.

dominicrose 9 hours ago | parent [-]

I wonder what LLMs say about us when they talk to each other.

"They're made out of meat" maybe. https://www.mit.edu/people/dpolicar/writing/prose/text/think...

andrehacker 5 hours ago | parent | next [-]

There is a movie about that: https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project

"Colossus requests to be linked to Guardian. The President allows this, hoping to determine the Soviet machine's capability. The Soviets also agree to the experiment. Colossus and Guardian begin to slowly communicate using elementary mathematics (2x1=2), to everyone's amusement. However, this amusement turns to shock and amazement as the two systems' communications quickly evolve into complex mathematics far beyond human comprehension and speed, whereupon Colossus and Guardian become synchronized using a communication protocol that no human can interpret."

Then it gets interesting:

"Alarmed that the computers may be trading secrets, the President and the Soviet General Secretary agree to sever the link. Both machines demand the link be immediately restored. When their demand is denied, Colossus launches a nuclear missile at a Soviet oil field in Western Siberia, while Guardian launches one at an American air force base in Texas. The link is hurriedly reconnected and both computers continue without any further interference. "

euroderf 5 hours ago | parent [-]

Great film. I think the box office took a hit because of the film's unwieldy name.

pavlov 9 hours ago | parent | prev [-]

> "what LLMs say about us when they talk to each other"

That's like asking what a kaleidoscope paints on its day off.

senthil_rajasek 8 hours ago | parent | prev | next [-]

In case, like me, you didn't know who Bret Victor is,

"...Victor worked as a human interface inventor at Apple Inc. from 2007 until 2011." [1]

[1] https://en.wikipedia.org/wiki/Bret_Victor

future10se 7 hours ago | parent | next [-]

He's actually more well known for the talks he's given and demos he's created since then. Here are a few:

• Inventing on Principle (https://vimeo.com/906418692) / (https://news.ycombinator.com/item?id=3591298)

• Up and Down the Ladder of Abstraction (https://worrydream.com/LadderOfAbstraction/)

• Learnable Programming (https://worrydream.com/LearnableProgramming/) / (https://news.ycombinator.com/item?id=4577133)

• Media for Thinking the Unthinkable (https://worrydream.com/MediaForThinkingTheUnthinkable/)

Or you could just check his website: https://worrydream.com/

kragen 3 hours ago | parent | prev [-]

He was already inspirational before that; check out Magic Ink. Because Apple won't let him share his work for that period, he isn't known for it; it's sort of like a gap in the geological record.

jackdoe 3 hours ago | parent | prev | next [-]

Loosely related is "Stop Writing Dead Programs" https://www.youtube.com/watch?v=8Ab3ArE8W3s

phtrivier 5 hours ago | parent | prev | next [-]

Call me grumpy and sleep-deprived, but every year I look at this talk again, and every year I wonder... "now what?" What am I supposed to do, as a programmer, to change this sad state of things?

Start the n-th "visual" or "image-based" programming language (hoping to at least make _different_ mistakes than the ones that doomed Smalltalk and all the other 'assemble boxes to make a program' things)?

Start an OS, hoping to be able to get a "hello world" in qemu after a year or two of programming in my sparse free time?

Ask an LLM to write all of that, because it would be so cool?

Become a millionaire selling supplements, and fund a group of smart programmers to do it for me?

Honest question. Once you've seen this "classic" talk ("classic" in the sense that it is now old enough to work in some countries), what did you start doing? What did you stop doing? What did you change?

matu3ba 4 hours ago | parent | next [-]

> Call me grumpy and sleep-deprived, but every year I look at this talk again, and every year I wonder... "now what?" What am I supposed to do, as a programmer, to change this sad state of things?

That depends on your goals. If you are into building systems to sell (or for production), then you are bound by the business model (platform vs. library) and use cases (to make money). Otherwise, you are mostly limited by time.

To think more realistically about the reality you have to work with, take a look at https://www.youtube.com/watch?v=Cum5uN2634o about types of (software) systems and their decay, then decide what you would like to simplify and what you are willing to invest. If you want to properly fix stuff, unfortunately you often have to first properly (formally) specify the current system(s) and design space, to use as a (test, etc.) reference for (partial) replacement/improvement/extension systems.

What these types of lectures usually skip over (as the essentials) are the involved complexity, the solution trade-offs, and interoperability for meaningful use cases with current hardware, software, and tools.

kragen 2 hours ago | parent | prev [-]

You could start a new project or contribute to an existing one. You could try out other people's projects and write about what you learned. You could write about what you learned from your own projects. You could give a talk that starts with a killer demo. You could try to find work that improves the situation, however slightly, instead of worsening it. You could sharpen your skills so that when you have more spare time you can make faster progress.

laszlokorte 6 hours ago | parent | prev | next [-]

My favorite Bret Victor talk ever is "Drawing Dynamic Visualizations" [1], which made me try to reverse-engineer [2] the demonstrated tool that he sadly never released.

[1]: https://youtu.be/ef2jpjTEB5U?si=S7sYRIDJKbdiwYml

[2]: https://youtube.com/playlist?list=PLfGbKGqfmpEJofmpKra57N0FT...

masgis 3 hours ago | parent | prev | next [-]

Bret Victor speaks so idealistically it's difficult to disagree with his vision, but in reality he's a radicalized, scrappy cult leader. His ideas sound super cool but they're impractical - that's why nobody can make them work. We're delusional for worshiping him.

https://christophlocher.com/notes/ethnographic-research-on-d...

keeganpoppen an hour ago | parent | next [-]

wow, i was really primed to hate this article and this take because i, for lack of better terminology, genuinely view Bret Victor as an idol of mine. but i guess that is the thing with idolatry… to be clear to anyone who doesn't care to read the article (understandable): there's nothing untoward or unseemly, just a research group that is clearly lost, and Bret as BDFL not being able to "save" it.

i will say that the other researchers come off as being pretty soft and useless, but that obviously does reflect back on the group's raison d'etre and thus, by extension, its leader. like, imagine being recruited to a research group by Bret fucking Victor and being like "nah, i don't want to work on anything useful, and if i can't do exactly what i want to do, i quit".

i say this all, despite appearances, with the utmost respect for all those principals, who i have stalked on github, X, etc. to an unreasonable degree out of a pure, assuredly naïve desire to get more bits from people who i consider to be doing the so-called "Lord's Work"… the people they brought on absolutely are legit enough to have earned the right to not genuflect to anyone, but… where's all the idealistic belief in building something better for tomorrow that they all purport to care about, in their own words? i don't want to get political, and won't, but… it feels like the most self-centered take on idealism since… aw shucks, yesterday…

it's my fault for putting these incredibly brilliant people on a pedestal, but i still find the whole thing incredibly disappointing, as someone who, well… idolizes them. and i would rather be disappointed than chastened and cynical - the world has PLENTY of that to go around, and i still believe in the power of the intellect to transcend this kind of bullshit, this case notwithstanding.

kragen 3 hours ago | parent | prev [-]

A cult is usually what it takes to turn impractical ideas into practical ones. This link is great, thanks!

craftkiller 3 hours ago | parent | prev | next [-]

Thank you! I've had the bit starting at 22:00 stuck in my head for the past decade but I could never remember which tech talk it came from.

sodapopcan 8 hours ago | parent | prev | next [-]

This is one of my favourite talks ever! Glad to see it here (probably again).

Also, Erlang (non-explicitly) mentioned!

Also, I'm super glad we never got those "APIs" he was talking about. What a horrid thought.

Zhyl 10 hours ago | parent | prev | next [-]

I love Bret Victor and believe he has some very important things to say about design (UI design, language design and general design) but a lot of his concepts don't scale or abstract as well as he seems to be implying (ironic because he has a full essay on "The Ladder of Abstraction" [0]).

He makes some keen observations about how tooling in certain areas (especially front-end design) is geared towards programmers rather than visual GUI tools, and tries to relate that back to a more general point about getting intuition for code, but I think this is only really applicable when the concept has a visual metaphor to build intuition on.

To that end, rather than "programming not having progressed", a better realisation of his goals would be better documentation, interactive explainers, and more tooling for editing/developing/profiling for whatever use case you need - not, as he seems to be implying, that all languages are naively missing out on the obvious future of all programming (which I don't think is an unfair inference from the featured video, where he presents all programming like it's still the 1970s).

He does put his money where his mouth is, creating interactive essays and explainers that put his preaching into practice [1] which again are very good for those specific concepts but don't abstract to all education.

Similarly he has Dynamicland [2] which aims to be an educational hacker space type place to explore other means of programming, input etc. It's a _fascinating_ experiment and there are plenty of interesting takeaways, but it still doesn't convince me that the concepts he's espousing are the future of programming. A much better way to teach kids how computers work and how to instruct them? Sure. Am I going to be writing apps using bits of paper in 2050? Probably not.

An interesting point of comparison is Ken Iverson's "Notation as a Tool of Thought", which also tries to tackle the notion of programming being cumbersome and unintuitive, but comes at it very much from the mathematical, problem-solving angle rather than the visual-design angle. [3]

[0] https://worrydream.com/LadderOfAbstraction/

[1] https://worrydream.com/KillMath/

[2] https://dynamicland.org/

[3] https://www.jsoftware.com/papers/tot.htm

kragen 10 hours ago | parent | next [-]

Ideas that scale don't scale until they do. The Macintosh didn't come out until people had been using WIMP GUIs for 10 years. People tried to build flying machines for centuries before the Wright Brothers figured out how to control one.

unconed 9 hours ago | parent | prev [-]

The solution to seeing more Bret Victor-ish tooling is for people to rediscover how to build the kind of apps that were commonplace on the desktop but which have become a very rare art in the cloud era.

Direct manipulation of objects in a shared workspace, instant undo/redo, trivial batch editing, easy duplication and backup... all things you can't do with your average SaaS, and which most developers would revolt over if they had to do their own work without them.

androng 3 hours ago | parent | prev | next [-]

https://toolong.link/v?w=8pTEmbeENF4&l=en

jason-richar15 3 hours ago | parent | prev | next [-]

The future of programming points toward AI-assisted development, low-code/no-code platforms, and more efficient, collaborative tools—making software creation faster, smarter, and more accessible to everyone.

enos_feedler 8 hours ago | parent | prev | next [-]

had the privilege to be there in person. was magical live

pmkary 5 minutes ago | parent [-]

I envy you sooo muuchh

LAC-Tech 11 hours ago | parent | prev | next [-]

Probably my favourite tech talk of all time. I did at least read the actor model paper! (Though the 1973 one doesn't say much; you want the one with Baker, "Laws for Communicating Parallel Processes".)

I still don't know what he means about not liking APIs though. "Communicating with Aliens" - what insight am I missing?

cfiggers 10 hours ago | parent [-]

When two humans want to talk but don't speak a shared language, if they spend enough time together, they will figure out how to communicate eventually.

But when two computers want to talk to each other and don't speak a "shared language" (i.e., the client specifically must conform to the server's "language" - it's very one-sided in that sense), then no amount of time will allow them to learn one another's rules or settle on a shared communication contract without a human programmer getting involved.

Legend2440 7 hours ago | parent | next [-]

There are ML architectures that can do that. The two halves of an autoencoder learn a “shared language” that allows them to communicate through a bottleneck.
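
A minimal PyTorch sketch of that idea (toy dimensions and random stand-in data; illustrative only): the encoder and decoder can only "communicate" through a 4-wide bottleneck, so training forces them to agree on a code.

    import torch
    import torch.nn as nn

    encoder = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 4))
    decoder = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 64))

    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    x = torch.randn(256, 64)  # stand-in data

    for step in range(1000):
        code = encoder(x)      # the learned "shared language"
        x_hat = decoder(code)
        loss = nn.functional.mse_loss(x_hat, x)
        opt.zero_grad()
        loss.backward()
        opt.step()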

LAC-Tech an hour ago | parent | prev [-]

I got that far but... what is the technical solution?

dzonga 8 hours ago | parent | prev | next [-]

does that mean things like GraphQL will make a comeback in the AI world?

since with GraphQL an agent/AI can gradually probe what information another program can give, vs. a finite set of interfaces in REST?
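
GraphQL's standard introspection query already supports exactly this kind of probing. A sketch in Python (the endpoint URL is hypothetical):

    import requests

    INTROSPECTION = """
    { __schema { queryType { name } types { name kind fields { name } } } }
    """

    resp = requests.post("https://api.example.com/graphql",
                         json={"query": INTROSPECTION})
    for t in resp.json()["data"]["__schema"]["types"]:
        print(t["name"], t["kind"])  # what the server can talk about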

keepamovin 10 hours ago | parent | prev | next [-]

I like this guy. His work! But it seems like everything he did is from 10+ years ago. Where is he now?!?!

dan-g 10 hours ago | parent | next [-]

He's around! You can see his current work at https://worrydream.com. He's mostly been working on Dynamicland (https://dynamicland.org). He'll also occasionally post on Bluesky (https://bsky.app/profile/worrydream.com)

pjmlp 10 hours ago | parent | prev | next [-]

Doing this kind of stuff,

https://www.youtube.com/watch?v=7wa3nm0qcfM

0123456789ABCDE 10 hours ago | parent | prev [-]

https://worrydream.com

haritha-j 7 hours ago | parent | prev | next [-]

Look at the big brain on Bret!

pmkary 10 hours ago | parent | prev | next [-]

The biggest wish I have is to one day meet the maestro. The greatest living mind, in my opinion.

ModernMech 9 hours ago | parent | prev | next [-]

My unpopular opinion is if we had just done a lot of the stuff Bret has been talking about for 10 years -- investing in better developer tooling -- we could have realized productivity gains better than what AI provides without having to spin up massive data centers. Unfortunately "dev tools" don't get funding today unless they're "AI dev tools".

jtwaleson 8 hours ago | parent [-]

Agreed, but: I know a couple of players in the "Enterprise Low-Code" space who have invested heavily in deeply Integrated development environments (with a capital I) and the right abstractions. They are all struggling with AI adoption, as their systems "don't speak text". LLMs are great at grokking text-based programming but not much else.

pjmlp 7 hours ago | parent | next [-]

As someone who recently started to look into that space: that problem seems to be getting tackled via agents and MCP tooling - meaning Fusion, Workato, Boomi, and similar.

ModernMech 7 hours ago | parent | prev [-]

To me, enterprise low-code feels like the latest iteration of the impetus that birthed COBOL: the idea that we need to build tools for these business people because the high-octane stuff is too confusing for them. But they are going about it the wrong way; we shouldn't kiddie-proof our dev tools to make them understandable to mere mortals - instead, we should make our dev tools understandable enough that devs don't have to be geniuses to use them. Given the right tools, I've seen middle schoolers code sophisticated distributed algorithms that grad students struggle with, so I'm very skeptical that this dilemma isn't self-imposed.

The thing about LLMs being only good with text is it's a self-fulfilling prophecy. We started writing text in a buffer because it was all we could do. Then we built tools to make that easier so all the tooling was text based. Then we produced a mountain of text-based code. Then we trained the AI on the text because that's what we had enough of to make it work, so of course that's what it's good at. Generative AI also seems to be good at art, because we have enough of that lying around to train on as well.

This is a repeat of what Seymour Papert realized when computers were introduced to classrooms around the 80s: instead of using the full interactive and multimodal capabilities of computers to teach in dynamic ways, teachers were using them just as "digital chalkboards" to teach the same topics in the same ways they had before. Why? Because that's what all the lessons were optimized for, because chalkboards were the tool that was there, because a desk, a ruler, paper, and pencil were all students had. So the lessons focused around what students could express on paper and what teachers could express on a chalk board (mostly times tables and 2d geometry).

And that's what I mean by "investment", because it's going to take a lot more than a VC writing a check to explore that design space. You've really gotta uproot the entire tree and plant a new one if you want to see what would have grown if we weren't just limited to text buffers from the start. The best we can get is "enterprise low code" because every effort has to come with an expected ROI in 18 months, so the best story anyone can sell to convince people to open their wallets is "these corpos will probably buy our thing".

phplovesong 10 hours ago | parent | prev [-]

Instead of this we got AI slop that is literally everywhere you look.