| ▲ | Notation as a Tool of Thought (1979) (jsoftware.com) |
| 125 points by susam 7 hours ago | 22 comments |
| |
|
| ▲ | jamesrom 5 hours ago | parent | next [-] |
It's easy to think of notation like shell expansions: that all you're doing is replacing expressions with other expressions. But it goes much deeper than that. My professor once explained that many great discoveries are paired with new notation, that the new notation signifies "here's a new way to think about this problem", and that many unsolved problems today will give way to powerful notation. |
| |
| ▲ | peralmq 3 hours ago | parent | next [-] | | Good point. Notation matters in how we explore ideas. Reminds me of Richard Feynman. He started inventing his own math notation as a teenager while learning trigonometry. He didn’t like how sine and cosine were written, so he made up his own symbols to simplify the formulas and reduce clutter. Just to make it all more intuitive for him. And he never stopped. Later, he invented entirely new ways to think about physics tied to how he expressed himself, like Feynman diagrams (https://en.wikipedia.org/wiki/Feynman_diagram) and slash notation (https://en.wikipedia.org/wiki/Feynman_slash_notation). | | |
▲ | nonrandomstring 2 hours ago | parent [-] | | > Notation matters in how we explore ideas. Indeed, historically. But are we not moving into a society where thought is unwelcome? We build tools to hide underlying notation and structure, not because it affords abstraction but because it's "efficient". Is there not a tragedy afoot, by which technology, at its peak, nullifies all its foundations? As for those of us who can do mental formalism, mathematics, code, etc., I doubt we will have any place in a future society that values only superficial convenience and the appearance of correctness, and shuns as "slow old throwbacks" those who reason symbolically, "the hard way" (without AI). (Cue a dozen comments on how "AI actually helps" and amplifies symbolic human thought processes.) |
| |
▲ | veqq 3 hours ago | parent | prev [-] | | > paired with new notation The DSL/language-driven approach first creates a notation that fits the problem space directly, then worries about implementing the notation. It's truly empowering. But this is the Lisp way. The APL (or Clojure) way is about making your base types truly useful: 100 functions on 1 data structure instead of 10 on 10. So instead of creating a DSL in APL, you design and lay out your data very carefully, and then everything just falls into place; a bit backwards from the first impression. | | |
▲ | xelxebar 24 minutes ago | parent [-] | | You took the words right out of my mouth! One of the issues DSLs give me is that the process of using them invariably obsoletes their utility. That is, the process of writing an implementation seems to be synonymous with the process of learning what DSL your problem really needs. If you can manage to fluidly update your DSL design along the way, it might work, but in my experience the premature assumptions of the initial design end up getting baked into so much code that it's really painful to migrate. APL, on the other hand, I have found extremely amenable to updates and rewrites. Even just psychologically, it feels way more sensible to rewrite a couple of lines of code versus a couple of hundred, and in practice I find the language very well suited to quickly exploring a problem domain with code sketches. |
|
|
|
| ▲ | FilosofumRex 3 hours ago | parent | prev | next [-] |
Historically speaking, what killed off APL (besides the wonky keyboard) was Lotus 1-2-3 and, shortly thereafter, MS Excel. Engineers, academicians, accountants, and MBAs needed something better than their TI-59 and HP-12C. But the CS community was obsessing over symbolics, AI, and LISP, so the industry stepped in... This was a very unfortunate coincidence, because APL could have had a much bigger impact and solved far more problems than spreadsheets ever will. |
|
| ▲ | jweir 6 hours ago | parent | prev | next [-] |
After years of looking at APL as some sort of magic, I spent some time earlier this year learning it. It is amazing how much code you can fit into a tweet using APL. Fun, but hard for me to write. |
| |
| ▲ | fc417fc802 6 hours ago | parent | next [-] | | It's not as extreme but I feel similarly every time I write dense numpy code. Afterwards I almost invariably have the thought "it took me how long to write just that?" and start thinking I ought to have used a different tool. For some reason the reality is unintuitive to me - that the other tools would have taken me far longer. All the stuff that feels difficult and like it's just eating up time is actually me being forced to work out the problem specification in a more condensed manner. I think it's like climbing a steeper but much shorter path. It feels like more work but it's actually less. (The point of my rambling here is that I probably ought to learn APL and use it instead.) | | |
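(A concrete illustration of the condensation effect described above, sketched in NumPy; the task and the numbers are made up for the example, not taken from the thread.)

    import numpy as np

    # Made-up task: column-wise z-scores of the rows whose sum is positive.
    x = np.random.default_rng(0).normal(size=(1000, 8))

    # Dense array version: the whole problem specification sits in two lines.
    keep = x[x.sum(axis=1) > 0]
    z = (keep - keep.mean(axis=0)) / keep.std(axis=0)

    # Explicit-loop version of the same specification, for comparison.
    rows = [row for row in x if sum(row) > 0]
    cols = list(zip(*rows))
    means = [sum(c) / len(c) for c in cols]
    stds = [(sum((v - m) ** 2 for v in c) / len(c)) ** 0.5 for c, m in zip(cols, means)]
    z_loop = [[(v - m) / s for v, m, s in zip(row, means, stds)] for row in rows]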
| ▲ | skruger 5 hours ago | parent | next [-] | | Should you ever decide to take that leap, maybe start here: https://xpqz.github.io/learnapl (disclosure: author) | |
| ▲ | Qem 6 hours ago | parent | prev | next [-] | | > It's not as extreme but I feel similarly every time I write dense numpy code. https://analyzethedatanotthedrivel.org/2018/03/31/numpy-anot... | |
| ▲ | jonahx 4 hours ago | parent | prev | next [-] | | Indeed numpy is essentially just an APL/J with more verbose and less elegant syntax. The core paradigm is very similar, and numpy was directly inspired by the APLs. | |
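(For readers who haven't seen the correspondence, a rough sketch of how a few APL primitives line up with NumPy calls; the pairings are my own illustrative picks, not part of the comment.)

    import numpy as np

    x = np.array([3, 1, 4, 1, 5, 9, 2, 6])

    total   = x.sum()             # APL: +/x     (plus reduction)
    running = x.cumsum()          # APL: +\x     (plus scan)
    rev     = x[::-1]             # APL: ⌽x      (reverse)
    order   = np.argsort(x)       # APL: ⍋x      (grade up: the sorting permutation)
    asc     = x[order]            # APL: x[⍋x]   (index by the grade to sort)
    table   = np.add.outer(x, x)  # APL: x∘.+x   (outer product with +)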
| ▲ | xelxebar 4 hours ago | parent | prev [-] | | > All the stuff that feels difficult and like it's just eating up time is actually me being forced to work out the problem specification in a more condensed manner. Very well put! Your experience aligns with mine as well. In APL, the sheer austerity of architecture means we can't spend time on boilerplate and are forced to immediately confront core domain concerns. Working that way has gotten me to see code as a direct extension of business, organizational, and market issues. I feel like this has made me much more valuable at work. |
| |
| ▲ | pinkamp 4 hours ago | parent | prev [-] | | Any examples you can share? |
|
|
| ▲ | InfinityByTen 2 hours ago | parent | prev | next [-] |
> Nevertheless, mathematical notation has serious deficiencies. In particular, it lacks universality, and must be interpreted differently according to the topic, according to the author, and even according to the immediate context. I personally disagree with the premise of this paper. I think notation that is separated from the visualization and ergonomics of the problem has a high cost. Some academics prefer notation that hides away a lot of the complexity, which can potentially result in "Eureka" realizations, wild equivalences, and the like. In some cases, however, it can be obfuscating and prone to introducing errors. Yet it's an important tool in communicating a train of thought. In my opinion, having one standard notation for any domain or closely related domains is quite stifling of the creative, artistic, or explorative side of reasoning and problem solving. Also, here's an excellent exposition about notation by none other than Terry Tao: https://news.ycombinator.com/item?id=23911903 |
| |
▲ | tossandthrow 2 hours ago | parent [-] | | This feels like typed vs. untyped programming. There are efforts in math to build "enterprise" reasoning systems; for these it makes sense to have a universal notation system (Lean, Coq, and the like). But for personal exploration, it might be better to just jam in whatever. My personal struggle in this space is more about teaching: taking algebra classes, etc., where the teacher is neither consistent nor honest about the personal decisions and preferences they have about notation. I became significantly better at math when I started studying type theory and the theory of mechanical proofs. |
|
|
| ▲ | colbyn 6 hours ago | parent | prev | next [-] |
I really wish I had finished my old Freeform note-taking app that compiles down to self-contained webpages (via SVG). IMO it was a super cool idea for the more technical content that's common in STEM fields. Here's an example from my old chemistry notes: https://colbyn.github.io/old-school-chem-notes/dev/chemistry... |
|
| ▲ | xelxebar 4 hours ago | parent | prev | next [-] |
> Subordination of detail

The paper doesn't really explore this concept well, IMHO. However, after a lot of time reading and writing APL applications, I have found that it points at a way of managing complexity radically different from abstraction.

We're inundated with abstraction barriers: APIs, libraries, modules, packages, interfaces, you name it. The consequences of this approach are almost cliché at this point: dizzyingly high abstraction towers, developers as mere API-gluers, disconnect from the underlying hardware, difficulty reasoning about performance, etc.

APL makes it really convenient to take a different tack. Instead of designing abstractions, we can carefully design our data to be easily operated on with simple expressions. Where you would normally see a library function or DSL term, this approach just uses primitives directly. For example, we can create a hash map of vector values and interned keys with something like

    str←(⊂'') 'rubber' 'baby' 'buggy' 'bumpers'              ⍝ string table
    k←4 1 2 2 4 3 4 3 4 4                                    ⍝ keys
    v←0.26 0.87 0.34 0.69 0.72 0.81 0.056 0.047 0.075 0.49   ⍝ values

Standard operations are then immediately accessible:

    k v⍪←↓⍉↑(2 0.33)(2 0.01)(3 0.92)   ⍝ insert values
    k{str[⍺] ⍵}⌸v                      ⍝ pretty print
    k v⌿⍨←⊂k≠str⍳⊂'buggy'              ⍝ deletion

What I find really nice about this approach is that each expression is no longer a black box, making it really natural to customize expressions for specific needs. For example, insertion into a hash map would normally need code for potentially adding a new key, but above we're making use of a common invariant: we only need to append values to existing keys. If this were a library API, there would either be an unused code path here, lots of variants of the insertion function, or some sophisticated type inference to do dead-code elimination. Those approaches end up leaking non-domain concerns into our codebase. But by subordinating detail instead of hiding it, we give ourselves access to as much domain-specific detail as necessary, while letting the irrelevant detail sit silently in the background until needed.

Of course, doing things like this in APL ends up demanding a lot of familiarity with APL expressions, but honestly, I don't think that ends up being much more work than deeply learning the Python ecosystem or anything equivalent. In practice, the individual APL symbols really do fade into the background and you start seeing semantically meaningful phrases instead, similar to how we read English words and phrases atomically and not one letter at a time. |
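(For readers more at home in NumPy, a rough translation of the layout above into Python; the calls are my own approximation of the APL, not part of the original comment, and APL's default 1-based indexing means the exact key-to-string pairing differs slightly.)

    import numpy as np

    str_table = ["", "rubber", "baby", "buggy", "bumpers"]   # string table
    k = np.array([4, 1, 2, 2, 4, 3, 4, 3, 4, 4])             # interned keys
    v = np.array([0.26, 0.87, 0.34, 0.69, 0.72,
                  0.81, 0.056, 0.047, 0.075, 0.49])          # values

    # Insert values: append (key, value) pairs to the parallel arrays.
    new = [(2, 0.33), (2, 0.01), (3, 0.92)]
    k = np.concatenate([k, [key for key, _ in new]])
    v = np.concatenate([v, [val for _, val in new]])

    # "Pretty print": group values by key, in the spirit of the ⌸ (key) operator.
    for key in np.unique(k):
        print(str_table[key], v[k == key])

    # Deletion: keep only entries whose key is not the one for 'buggy'.
    keep = k != str_table.index("buggy")
    k, v = k[keep], v[keep]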
| |
| ▲ | DHRicoF 3 hours ago | parent | next [-] | | > k v⍪←↓⍉↑(2 0.33)(2 0.01)(3 0.92) ⍝ insert values
> k{str[⍺] ⍵}⌸v ⍝ pretty print
> k v⌿⍨←⊂k≠str⍳⊂'buggy' ⍝ deletion

I like your funny words. No, really, I should spend some time learning APL. But your idea deeply resonates with my struggle of the last few weeks. I have a legacy Python codebase with too much coupling, and every prior attempt to "improve things" ended up adding more abstraction on top of a plainly wrong data model. You can't infer, reading the code linearly, which methods mutate their input objects. Some do, some don't. Sometimes the same input argument is returned even without mutation. I would prefer some magic string that could be analyzed and understood over this sea of indirection, with factories returning different calculators that in some instances don't even share the same interface. Sorry for the rant. | |
▲ | jonahx 3 hours ago | parent | prev [-] | | To rephrase crudely: "inline everything". This is infeasible in most languages, but if your language is concise and expressive enough, it becomes possible again to a large degree. I always think about how Arthur Whitney just really hates scrolling. Let alone 20 open files and chains of "jump to definition". When the whole program fits on a page, all that vanishes. You navigate with eye movements. |
|
|
| ▲ | cess11 4 hours ago | parent | prev | next [-] |
Last year The Array Cast republished an interview with Iverson from 1982. https://www.arraycast.com/episodes/episode92-iverson It's quite interesting, and arguably more approachable than the Turing lecture. In 1979 APL wasn't as weird and fringe as it is today, because programming languages weren't the global mass phenomena they are now; pretty much all of them were weird and fringe. C was rather fresh at the time, and if one squints a bit, APL can kind of look like an abstraction that isn't very far from dense C and that lets you program a computer without having to implement pointer juggling over arrays yourself. |
| |
|
| ▲ | gitroom 4 hours ago | parent | prev [-] |
| man i always try squishing code into tiny spaces too and then wonder why i'm tired after, but i kinda love those moments when it all just clicks |