saghm 4 days ago

To me, the best argument for constraints imposed by a language is about correctness, not efficiency. When we write a program, we tend to have an idea of how we want it to behave, but it's a pretty universal fact that this is hard to get exactly right on the first try (and often it's hard to even tell without extensive testing, which is why bugs aren't all fixed by the time software actually gets released). In a certain sense, the act of writing and debugging a program can be thought of as searching through the space of all possible programs you could be writing, repeatedly narrowing down the set of candidates by ruling out the ones you know are incorrect, and eventually picking the one you think is what you want. From this perspective, language constraints can help at pretty much every step of the way: some programs are ruled out because you can't even express them in the first place, others get rejected as you narrow down the set with each new line of code you write and how the constraints interact with it, and the constraints can even help with debugging after the fact, when you're trying to figure out what went wrong with an incorrect selection (i.e. one that has bugs).

When we're using Turing complete languages for pretty much everything, constraints are really the only thing that semantically differentiates the code we write in them at all. To me, this is basically the concept people are trying to convey when they argue for picking "the right tool for the right job". At the end of the day, what makes a language useful is defined just as much by what you _can't_ say as by what you can.
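
As a small illustration of that last point (a sketch in Haskell, only because it comes up in the reply below; the names are made up): a type like NonEmpty makes the "empty input" bug impossible to even write down, whereas the plain-list version type-checks and only fails at runtime.

    import Data.List.NonEmpty (NonEmpty (..))
    import qualified Data.List.NonEmpty as NE

    -- The type [a] admits the empty list, so this buggy call site still
    -- compiles and only blows up when it runs.
    firstOrCrash :: [Int] -> Int
    firstOrCrash = head

    -- NonEmpty bakes the constraint into the type: there is no way to
    -- construct an empty NonEmpty, so that whole class of incorrect
    -- programs is ruled out before anything executes.
    firstSafe :: NonEmpty Int -> Int
    firstSafe = NE.head

    main :: IO ()
    main = do
      print (firstSafe (1 :| [2, 3]))   -- 1
      -- firstSafe []                   -- rejected by the compiler: [] is not a NonEmpty
      print (firstOrCrash [4, 5])       -- 4, but firstOrCrash [] would crash at runtime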

holowoodman 4 days ago | parent

Problem is that those constraints are often very hard to express, even in more expressive languages.

One example would be integers. There are bigint-by-default languages like Haskell, where the type you usually reach for is an arbitrary-sized bigint. But usually you don't need that, and you know that something like a 32-bit integer is sufficient. Often you get an int32 type that handles that for you. But then the questions become overflow behaviour, signedness, the existence of -0, what happens when you negate INT_MIN, and so on. Even in C, you are in undefined-behaviour-launch-the-rockets territory very quickly. And there are usually no "screw it, I don't care, give me whatever the CPU does" types; you don't get to choose your overflow/underflow behaviour. Often there are no bounded types either (an unsigned integer day from 1 to 365). And if such types are only available as library types, the compiler won't be able to optimize them.
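
To make that concrete, here's a minimal Haskell sketch (DayOfYear/mkDayOfYear are made-up names for illustration): Integer never overflows, GHC's Int32 silently wraps, and a bounded "day of year" has to be hand-rolled behind a smart constructor the compiler knows nothing about.

    import Data.Int (Int32)

    -- Arbitrary-precision Integer: never overflows, but you pay for it.
    big :: Integer
    big = 2 ^ 64 + 1

    -- GHC's fixed-width Int32 silently wraps modulo 2^32; you don't get
    -- to pick trapping, saturating, or "whatever the CPU does" instead.
    wrapped :: Int32
    wrapped = maxBound + 1            -- -2147483648

    -- A bounded type (day of year, 1..365) has to be hand-rolled; the
    -- range check lives in a smart constructor, invisible to the optimizer.
    newtype DayOfYear = DayOfYear Int deriving (Show)

    mkDayOfYear :: Int -> Maybe DayOfYear
    mkDayOfYear n
      | n >= 1 && n <= 365 = Just (DayOfYear n)
      | otherwise          = Nothing

    main :: IO ()
    main = do
      print big                -- 18446744073709551617
      print wrapped            -- -2147483648
      print (mkDayOfYear 366)  -- Nothing

The wrapping behaviour here is just whatever GHC defines for Data.Int, and the 1..365 bound is only a runtime check, which is exactly the kind of constraint the language itself can't see or optimize around.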

There are tons more examples like this. It's always a compromise between the expressiveness we want, which would be great, and the complexity that would make an implementation non-viable. So you get as much expressiveness as the language designer thinks they can cram into their compiler. But it's always too much to be really optimized all the way through, and always too little for what you would actually need to be really exact.