| ▲ | jamesrom 12 hours ago |
| It's easy to think of notation like shell expansions, as if all you're doing is replacing expressions with other expressions. But it goes much deeper than that. My professor once explained how many great discoveries are paired with new notation: the new notation signifies "here's a new way to think about this problem", and many of today's unsolved problems will give way to powerful notation of their own. |
|
| ▲ | veqq 10 hours ago | parent | next [-] |
| > paired with new notation
The DSL/language-driven approach first creates a notation fitting the problem space directly, then worries about implementing the notation. It's truly empowering. But this is the Lisp way. The APL (or Clojure) way is about making your base types truly useful: 100 functions on 1 data structure instead of 10 on 10. So instead of creating a DSL in APL, you design and lay out your data very carefully, and then everything just falls into place, a bit backwards from the first impression. |
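A minimal Python sketch of that "100 functions on 1 data structure" idea (the dataset and field names here are invented for illustration):

    # Once the data is laid out as one plain structure (a list of
    # dicts), generic built-ins compose freely; no bespoke per-class
    # API needs to be designed.
    sales = [
        {"region": "EU", "amount": 120},
        {"region": "US", "amount": 340},
        {"region": "EU", "amount": 75},
    ]

    eu_total = sum(row["amount"] for row in sales if row["region"] == "EU")
    by_size = sorted(sales, key=lambda row: row["amount"], reverse=True)
    regions = {row["region"] for row in sales}

    print(eu_total, sorted(regions))  # 195 ['EU', 'US']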
| |
| ▲ | xelxebar 7 hours ago | parent | next [-] | | You took the words right out of my mouth! One of the issues DSLs give me is that the process of using them invariably obsoletes their utility. That is, the process of writing an implementation seems to be synonymous with the process of learning what DSL your problem really needs. If you can manage to fluidly update your DSL design along the way, it might work, but in my experience the premature assumptions of initial designs end up getting baked into so much code that it's really painful to migrate. APL, on the other hand, I have found extremely amenable to updates and rewrites. Even just psychologically, it feels far more sensible to rewrite a couple lines of code than a couple hundred, and in practice I find the language well suited to quickly exploring a problem domain with code sketches. | | |
| ▲ | skydhash 3 hours ago | parent | next [-] | | I was playing with Uiua, a stack- and array-based programming language. It was amazing to solve Advent of Code problems with just a few lines of code. And as GP said, once you get the data into the right array form, the handful of functions in the standard library is sufficient. | |
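For flavor, a rough Python analogue of that array style (Uiua itself is far terser), assuming the familiar AoC 2021 Day 1 puzzle; the sample input is the one from the puzzle statement:

    # Count how many depth measurements increase over the previous one.
    depths = [199, 200, 208, 210, 200, 207, 240, 269, 260, 263]

    # Pair each element with its successor, then count the increases.
    increases = sum(b > a for a, b in zip(depths, depths[1:]))
    print(increases)  # 7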
| ▲ | marcosdumay 2 hours ago | parent | prev [-] | | > One of the issues DSLs give me is that the process of using them invariably obsoletes their utility.
That means your DSL is too specific. It should be targeted at the domain, not at the application. That said, it's very hard to make a DSL general enough to be robust yet specific enough to be productive. It takes a really deep understanding of the domain, and even that is not enough. | | |
| ▲ | xelxebar an hour ago | parent [-] | | Indeed! Another way of putting it is that, in practice, we want the ability to easily iterate and find that perfect DSL, don't you think? IMHO, one big source of technical debt is code relying on some faulty semantics. Maybe initial abstractions baked into the codebase were just not quite right, or maybe the target problem changed under our feet, or maybe the interaction of several independent API boundaries turned out to be messy. What I was trying to get at above is that APL is pretty great for iteratively refining our knowledge of the target domain and producing working code at the same time. It's just that APL works best when reifying that language down into short APL expressions instead of English words. |
|
| |
| ▲ | smikhanov 3 hours ago | parent | prev [-] | | > APL (or Clojure) way is about making your base types truly useful, 100 functions on 1 data structure instead of 10 on 10
If this is indeed so simple and so obvious, why haven't other languages followed this way? | | |
| ▲ | electroly 41 minutes ago | parent | next [-] | | "APL is like a diamond. It has a beautiful crystal structure; all of its parts are related in a uniform and elegant way. But if you try to extend this structure in any way - even by adding another diamond - you get an ugly kludge. LISP, on the other hand, is like a ball of mud. You can add any amount of mud to it and it still looks like a ball of mud."
-- https://wiki.c2.com/?JoelMosesOnAplAndLisp | |
| ▲ | diggan 3 hours ago | parent | prev | next [-] | | That particular quote is from the "Epigrams on Programming" article by Alan J. Perlis, from 1982. Lots of ideas/"epigrams" from that list are useful, and many languages have implemented lots of them, but some of them aren't so obvious until you've actually put them into practice. The full list can be found here: https://web.archive.org/web/19990117034445/http://www-pu.inf... (the quote in question is item #9). I think most people haven't experienced the whole "100 functions on 1 data structure instead of 10 on 10" thing themselves, so there are no attempts to bring it to other languages; you're not aware of it to begin with. Then the whole static-typing hype (the current cycle) makes it difficult, because static typing tends to force you into the opposite: 1 function you can only use for whatever type you specify in the parameters. Of course, traits/interfaces/whatever-your-language-calls-it help with this somewhat, even if it's still pretty static. | |
| ▲ | marcosdumay 2 hours ago | parent | prev | next [-] | | Because it's domain specific. If you push this into every kind of application, you will end up with people recreating objects out of lists of lists, and having good reasons to do so. | |
| ▲ | exe34 3 hours ago | parent | prev [-] | | Some of us think in those terms and daily have to fight those who want 20 different objects, each 5-10 levels deep in inheritance, to achieve the same thing. I wouldn't say 100 functions over one data structure, but in Python, for example, I prefer a few data structures like dictionaries and arrays, with 10-30 top-level functions that operate over those. If your requirements are fixed, it's easy to go nuts and design all kinds of object hierarchies, but if your requirements change a lot, I find it much easier to stay close to the original structure of the data that lives in the many files, and operate on those structures (see the sketch below). |
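A sketch of that style, with all names invented for illustration:

    # A few plain structures plus top-level functions, instead of an
    # object hierarchy. The data stays close to what's in the files:
    # a list of dicts.
    records = [
        {"path": "a.csv", "rows": 100, "tags": ["raw"]},
        {"path": "b.csv", "rows": 250, "tags": ["raw", "large"]},
    ]

    def total_rows(records):
        return sum(r["rows"] for r in records)

    def with_tag(records, tag):
        return [r for r in records if tag in r["tags"]]

    def add_tag(records, tag):
        # Return new dicts instead of mutating, so calls can chain.
        return [{**r, "tags": r["tags"] + [tag]} for r in records]

    print(total_rows(with_tag(records, "large")))  # 250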
|
|
|
| ▲ | peralmq 9 hours ago | parent | prev | next [-] |
| Good point. Notation matters in how we explore ideas. Reminds me of Richard Feynman. He started inventing his own math notation as a teenager while learning trigonometry. He didn’t like how sine and cosine were written, so he made up his own symbols to simplify the formulas and reduce clutter. Just to make it all more intuitive for him. And he never stopped. Later, he invented entirely new ways to think about physics tied to how he expressed himself, like Feynman diagrams (https://en.wikipedia.org/wiki/Feynman_diagram) and slash notation (https://en.wikipedia.org/wiki/Feynman_slash_notation). |
| |
| ▲ | nonrandomstring 8 hours ago | parent [-] | | > Notation matters in how we explore ideas.
Indeed, historically. But are we not moving into a society where thought is unwelcome? We build tools to hide underlying notation and structure, not because it affords abstraction but because it's "efficient". Is there not a tragedy afoot, by which technology, at its peak, nullifies all its foundations? As for those of us who can do mental formalism, mathematics, code, and so on, I doubt we will have any place in a future society that values only superficial convenience and the appearance of correctness, and that shuns as "slow old throwbacks" those who reason symbolically, "the hard way" (without AI). (Cue a dozen comments on how "AI actually helps" and amplifies symbolic human thought processes.) | | |
| ▲ | PaulRobinson 6 hours ago | parent | next [-] | | Let's think about how an abstraction can be useful, and then redundant. Logarithms allow us to simplify a hard problem (multiplying large numbers) into a simpler problem (addition), but the abstraction results in an approximation. It's a good enough approximation for lots of situations, but it's a map, not the territory. You could also solve division, which means you could take decent stabs at powers and roots, and voila: once you make that good enough and a bit faster, an engineering and scientific revolution can take place. Marvelous. For centuries people produced log tables - some so frustratingly inaccurate that Charles Babbage thought of a machine to automate their calculation - and we had slide rules, and we made progress. And then a descendant of Babbage's machine arrived - the calculator, or computer - and we didn't need the abstraction any more. We could quickly type 35325 x 948572 and, far faster than any log-table lookup, be confident that the answer was exactly 33,508,305,900. And a new revolution is born. This is the path we're on. You don't need to know how multiplication by hand works in order to be able to do multiplication - you use the tool available to you. For a while we had a tool that helped (roughly), and then we got a better tool thanks to that tool. And we might be about to get a better tool again, where instead of doing the maths, the tool can use more impressive models of physics and engineering to help us build things. The metaphor I often use is that these tools don't replace people, they just give them better tools. There will always be a place for being able to work from fundamentals, but most people don't need those fundamentals - you don't need to understand the foundations of how calculus was invented to use it, the same way you don't need to build a toaster from scratch to have breakfast, or build your car from base materials to get to the mountains at the weekend. | |
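To make the log-table point concrete, a small sketch (Python used only for the arithmetic):

    import math

    # The log trick: log(a*b) = log(a) + log(b), so multiplication
    # becomes addition plus table lookups. With four-figure tables the
    # answer was approximate; even with floats it is close, not exact.
    a, b = 35325, 948572

    approx = math.exp(math.log(a) + math.log(b))
    exact = a * b  # the calculator's answer: 33,508,305,900

    print(f"{approx:.2f} vs {exact}")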
| ▲ | ryandv 5 hours ago | parent [-] | | > This is the path we're on. You don't need to know how multiplication by hand works in order to be able to do multiplication - you use the tool available to you. What tool exactly are you referring to? If you mean LLMs, I actually view them as a regression with respect to basically every one of the "characteristics of notation" desired by the article. There is a reason mathematics is no longer done in long-form prose and instead uses its own, more economical notation, one sufficiently precise that it can even be evaluated and analyzed by computers. Natural languages have a lot of ambiguity, and their grammars allow nonsense to be expressed in them ("colorless green ideas sleep furiously"). Moreover, two people can read the same word and connect two different senses or ideas to it ("si duo idem faciunt, non est idem" - if two do the same thing, it is not the same). Practice with expressing thoughts in formal language is essential for actually patterning your thoughts against the structures of logic. You would not say that someone who is completely ignorant of Nihongo understands Japanese culture, custom, and manner of expression; similarly, you cannot say that someone ignorant of the language of syllogism and modus tollens actually knows how to reason logically. You can, of course, get a translator - and that is what some people perhaps think the LLM can do for you, both with Nihongo and with programming languages or formal mathematics. Otherwise, if you already know how to express what you want with sufficient precision, you're going to express your ideas in the symbolic, formal language itself; you're not going to randomly throw in some nondeterminism at the end by leaving the output up to the caprice of a statistical model, or allow something to get "lost in translation." | |
| ▲ | PaulRobinson 3 hours ago | parent [-] | | You need to see the comment I was replying to in order to understand the point I was making. LLMs are part of what I was thinking of, but not the totality. We're pretty close to Generative AI - and by that I don't just mean LLMs, but the entire space - being able to use formal notations and abstractions more usefully and correctly, and therefore to improve reasoning. The comment I was replying to complained about this shifting value away from fundamentals, and called it a tragedy. My point is that this is just human progress. It's what we do. You buy a microwave, you don't build one yourself. You use a calculator app on your phone; you don't work out the fundamentals of multiplication and division from first principles when you're splitting the bill at dinner. I agree with your general take on all of this, but I'd add that AI will get to the point where it can express "thoughts" in formal language and then provide appropriate tools to get the job done, and that's fine. I might not understand Japanese culture without knowledge of Nihongo, but if I'm trying to get across Tokyo in rush-hour traffic and don't know how, do I need to understand Japanese culture, or do I need a tool that helps me meet my objective? If I care deeply about understanding Japanese culture, I will want to dive deep. And I should. But for many people that's not their thing, and we can't all dive deep on everything, so having tools that do it for us better than existing tools is useful. That's my point: abstractions and tools allow people to get stuff done, which ultimately leads to better tools and better abstractions, and so on. Complaining that people don't have a first-principles grasp of everything isn't useful. |
|
| |
| ▲ | WJW 5 hours ago | parent | prev [-] | | > But are we not moving into a society where thought is unwelcome? Not really, no. If anything clear thinking and insight will give an even bigger advantage in a society with pervasive LLM usage. Good prompts don't write themselves. |
|
|
|
| ▲ | agumonkey 3 hours ago | parent | prev | next [-] |
| There's something to be said about economy of thought and ergonomics. On a smaller scale, when CoffeeScript popped up, it radically altered how I wrote JavaScript, because of its lambda shorthand and all its other syntactic conveniences. It made it easier to think, read, and rewrite. The same goes for SML/Haskell and Lisps (at least for me). |
|
| ▲ | mac9 5 hours ago | parent | prev [-] |
| [dead] |