| ▲ | addaon 2 days ago |
| I can’t speak to the C++ contract design — it’s possible bad choices were made. But contracts in general are absolutely exactly what C++ needs for the next step of its evolution. Programming languages used for correct-by-design software (Ada, C++, Rust) need to enable deep integration with proof assistants to allow showing arbitrary properties statically instead of via testing, and contracts are /the/ key part of that — see e.g. Ada SPARK. |
|
| ▲ | derriz 2 days ago | parent | next [-] |
| C++ is the last language I'd add to any list of languages used for correct-by-design software - it's underspecified in terms of semantics, with huge areas of undefined and implementation-defined behavior. Given its vast complexity - at every level, from the preprocessor to template metaprogramming and concepts - I simply can't imagine any formal denotational definition of the language ever being developed. And without a formal semantics for the language, you cannot even start to think about proofs of correctness. |
| |
| ▲ | addaon a day ago | parent [-] | | As with Spark, proving properties over a subset of the language is sufficient. Code is written to be verified; we won’t be verifying interesting properties of large chunks of legacy code in my career span. The C (near-) subset of C++ is (modulo standard libraries) a starting point for this; just adding on templates for type system power (and not for other exotic uses) goes a long way. | | |
| ▲ | extrabajs a day ago | parent | next [-] | | I don’t think this is a good comparison. Ada (on which Spark is based) has every safety feature and guardrail under the sun, while C++ (or C) has nothing. | | |
| ▲ | uecker a day ago | parent [-] | | There is a lot of tooling for C though, just not in mainstream compilers. |
| |
| ▲ | jesse__ a day ago | parent | prev [-] | | > The C (near-) subset of C++ is (modulo standard libraries) a starting point for this; just adding on templates for type system power (and not for other exotic uses) goes a long way. In my experience, this is absolutely true. I wrote my own metaprogramming frontend for C and that's basically all you need. At this point, I consider the metaprogramming facilities of a language it's most important feature, by far. Everything else is pretty much superfluous by comparison | | |
|
|
|
| ▲ | ozgrakkurt a day ago | parent | prev | next [-] |
| I don’t understand this “next evolution” approach to language design. A language should be done at some point. People can always develop new languages with more or fewer features, but piling more things onto an existing one is just not that useful. It sounds cool in the minds of the people designing these things, but it isn’t in practice. Rust is in the same situation, adding endless crap that is just not that useful. Specifically about this feature: people can just use asserts. Piling things onto the type system of C++ is never going to pay off, since it was not designed to be a type system like Rust's. Any improvement gained is not worth the added weight. It feels like the people pushing this stuff do it because "it is just what they do". |
| |
| ▲ | jandrewrogers a day ago | parent | next [-] | | Many of the recent C++ standards have been focused on expanding and cleaning up its powerful compile-time and metaprogramming capabilities, which it initially inherited by accident decades ago. It is difficult to overstate just how important these features are for high-performance and high-reliability systems software. These features greatly expand the kinds of safety guarantees that are possible to automate and the performance optimizations that are practical. Without it, software is much more brittle. This isn’t an academic exercise; it greatly reduces the amount of code and greatly increases safety. The performance benefits are nice but that is more on the margin. One of the biggest knocks against Rust as a systems programming language is that it has weak compile-time and metaprogramming capabilities compared to Zig and C++. | | |
| ▲ | rienbdj a day ago | parent | next [-] | | > One of the biggest knocks against Rust as a systems programming language is that it has weak compile-time and metaprogramming capabilities compared to Zig and C++. Aren’t Rust macros more powerful than C++ template metaprogramming in practice? | | |
| ▲ | menaerus 15 hours ago | parent | next [-] | | No, they are not. | | |
| ▲ | aw1621107 14 hours ago | parent | next [-] | | They are both; there are things that Rust's macros can do metaprogramming-wise that C++ templates cannot do and vice-versa. Rust's macros work on a syntactic level, so they are more powerful in that they can work with "normally" invalid code and perform token-to-token transformations (and in the case of proc macros effectively function as compiler extensions/plugins) and less powerful in that they don't have access to semantic information. | |
| ▲ | aldanor 10 hours ago | parent | prev [-] | | Incorrect. |
| |
| ▲ | tialaramex a day ago | parent | prev [-] | | Rust has two separate macro systems. It has declarative "by example" macros, which are a nicer way to write the sort of things where you show an intern this function for u8 and ask them to create seven more just like it except for i8, u16, i16, u32, i32, u64, i64. Unlike the C preprocessor, these macros understand how loops work (sort of) and what types are, and so on, and they have some hygiene features which make them less likely to cause mayhem. Declarative macros deliberately don't share Rust's syntax because they are macros for Rust: if they shared the same syntax, everything you wrote would be escape upon escape, since you want the macro to emit a loop but not loop itself, etc. But other than the syntax they are pretty friendly; a one-day Rust bootstrap course should probably cover these macros at least enough that you don't use copy-paste to make those seven functions by hand. However, the powerful feature you're thinking of is procedural or "proc" macros, and those are a very different beast. Proc macros are effectively compiler plugins: when the compiler sees we invoked the proc macro, it just runs that code, natively. So in that sense these are certainly more powerful; they can for example install Python: "Oh, you don't have Python, but I'm a proc macro for running Python, I'll just install it...". Mara wrote several "joke" proc macros which show off how dangerous/powerful it is. You should not use these, but one of them, for example, switches to the "nightly" Rust compiler and then seamlessly compiles parts of your software which don't work in stable Rust... |
| |
| ▲ | jesse__ a day ago | parent | prev | next [-] | | > powerful compile-time and metaprogramming capabilities While I agree that, generally, compile time metaprogramming is a tremendously powerful tool, the C++ template metaprogramming implementation is hilariously bad. Why, for example, is printing the source-code text of an enum value so goddamn hard? Why can I not just loop over the members of a class? How would I generate debug vis or serialization code with a normal-ish looking function call (spoiler, you can't, see cap'n proto, protobuf, flatbuffers, any automated dearimgui generator) These things are incredibly basic and C++ just completely shits all over itself when you try to do them with templates | | | |
| ▲ | sidkshatriya 19 hours ago | parent | prev [-] | | One of the biggest knocks against Rust as a systems programming language is that it has weak compile-time and metaprogramming capabilities compared to Zig and C++ In the space of language design, everything "more powerful" is not necessary good. Sometimes less power is better because it leads to more optimisable code, less implementation complexity, less abstraction, better LSP support. TL;DR More flexibility and complexity is not always good. Though I would also challenge the fact that Rust's metaprogramming model is "not powerful enough". I think it can be. | | |
| ▲ | germandiago 18 hours ago | parent [-] | | But compile-time processing is certainly useful in a performance-oriented language. And not only for performance but also for thread safety (it eliminates initialization races, for example, for non-trivial objects). Rust is just less powerful. For example, you cannot design something that comes even close to expression-template libraries. | | |
| ▲ | ux266478 13 hours ago | parent | next [-] | | > And not only for performance but also for thread safety This is already built-in to the language as a facet of the affine type system. I'm curious as to how familiar you actually are with Rust? > Rust is just less powerful. On the contrary. Zig and C++ have nothing even remotely close to proc macros. And both languages have to defer things like thread safety into haphazard metaprogramming instead of baking them into the language as a basic semantic guarantee. That's not a good thing. | | |
| ▲ | germandiago 8 hours ago | parent [-] | | Writing general generic code without repetition is one thing where Rust fails without specialization. It does not have variadics or comparably powerful compile-time metaprogramming. It does not come even remotely close. Proc macros are basically plugins; I do not think that is even part of the "language" as such. It is just plugging new stuff into the compiler. |
| |
| ▲ | aw1621107 15 hours ago | parent | prev [-] | | > For example you cannot design something that comes evwn close to expression templates libraries. You keep saying this and it's still wrong. Rust is quite capable of expression templates, as its iterator adapters prove. What it isn't capable of (yet) is specialization, which is an orthogonal feature. | | |
| ▲ | Conscat 15 hours ago | parent | next [-] | | Rust cannot take a const function and evaluate that into the argument of a const generic or a proc macro. As far as I can tell, the reasons are deeply fundamental to the architecture of rustc. It's difficult to express HOW FUNDAMENTAL this is to strongly typed zero overhead abstractions, and we see where Rust is lacking here in cases like `Option` and bitset implementations. | | |
| ▲ | aw1621107 14 hours ago | parent [-] | | > Rust cannot take a const function and evaluate that into the argument of a const generic
Assuming I'm interpreting what you're saying here correctly, this seems wrong? For example, this compiles [0]:

    const fn foo(n: usize) -> usize {
        n + 1
    }

    fn bar<const N: usize>() -> usize {
        N + 1
    }

    pub fn baz() -> usize {
        bar::<{foo(0)}>()
    }

In any case, I'm a little confused how this is relevant to what I said?

[0]: https://rust.godbolt.org/z/rrE1Wrx36 |
| |
| ▲ | menaerus 14 hours ago | parent | prev [-] | | > Rust is quite capable of expression templates, as its iterator adapters prove. AFAIU iterator adapters are not quite what expression templates are, because they rely on compiler optimizations rather than on a built-in feature of the language that enables you to do this without relying on the compiler pipeline. | |
| ▲ | aw1621107 14 hours ago | parent [-] | | I had always thought expression templates at the very least needed the optimizer to inline/flatten the tree of function calls that are built up. For instance, for something like x + y * z I'd expect an expression template type like sum<vector, product<vector, vector>>, where sum would effectively have:

    vector l;
    product& r;
    auto operator[](size_t i) {
        return l[i] + r[i];
    }

And then product<vector, vector> would effectively have:

    vector l;
    vector r;
    auto operator[](size_t i) {
        return l[i] * r[i];
    }

That would require the optimizer to inline the latter into the former to end up with a single expression, though. Is there a different way to express this that doesn't rely on the optimizer for inlining? | |
| ▲ | menaerus 13 hours ago | parent [-] | | Expression templates do not rely on the optimizer, since you're not dealing with the computations directly but rather with expressions (nodes) through which you defer the computation until the very last moment (when you have fully built an expression of expressions, basically almost an AST). This guarantees that you get zero cost when you really need it. What you're describing is something akin to copy elision and function folding through inlining, which is pretty much table stakes in any C++ compiler and happens automatically without special care. | |
| ▲ | aw1621107 12 hours ago | parent [-] | | > since you're not dealing with the computations directly but rather with expressions (nodes) through which you defer the computation until the very last moment (when you have fully built an expression of expressions, basically almost an AST).
Right, I understand that. What is not exactly clear to me is how you get from the tree of deferred expressions to the "flat" optimized expression without involving the optimizer. Take something like the above example, for instance - w = x + y * z for vectors w/x/y/z. How do you get from that to effectively

    for (size_t i = 0; i < w.size(); ++i) {
        w[i] = x[i] + y[i] * z[i];
    }

without involving the optimizer at all? | |
| ▲ | menaerus 22 minutes ago | parent [-] | | The example doesn't hold, because that's not how you would write an expression template for the given computation, so the question of how the optimizer is not involved isn't set in the correct context either, and I can't give you an answer for it. Of course the optimizer is generally going to be involved, as it is for all code and not just expression templates, but expression templates do not require the optimizer in the way you're suggesting. Expression templates do not rely on the O1, O2, or O3 levels being set - they work the same way at O0 too, and that may be the hint you were looking for. |
|
|
|
|
|
|
|
| |
| ▲ | dbdr a day ago | parent | prev [-] | | What "endless crap that is just not that useful" has been added to Rust in your opinion? | | |
| ▲ | ozgrakkurt a day ago | parent [-] | | Returning "impl Trait"; async/await; Unpin/Pin/Waker; catch_unwind; procedural macros; "auto impl of a trait for any type that implements another trait". I understand some of these features exist because Rust is Rust, but they still feel useless to learn. I haven't followed Rust development for about two years, so I don't know what the newest things are. | | |
| ▲ | tialaramex 20 hours ago | parent | next [-] | | RPIT (Return Position impl Trait) is Rust's spelling of existential types. That is, the compiler knows what we return (it has certain properties) but we didn't name it (we won't tell you what exactly it is). This can be for two reasons: 1. We didn't want to give the thing we're returning a name; it does have one, but we want that to be an implementation detail. In comparison, the Rust stdlib's iterator functions all return specific named iterators, e.g. the split method on strings returns a type actually named Split, with a remainder() function so you can stop and just get "everything else" from that function. That's an exhausting maintenance burden; if your library has some internal data structures whose values aren't really important or are unstable, RPIT allows you to duck out of all the extra documentation work and just say "it's an Iterator". 2. We literally cannot name this type; there's no agreed spelling for it. For example, if you return a lambda, its type does not have a name (in Rust or in C++), but this is a perfectly reasonable thing to want to do, and impossible without RPIT. Blanket trait implementations ("auto impl trait for type that implements other trait") are an important convenience for conversions. If somebody wrote a From implementation, then you get the analogous Into, TryFrom and even TryInto all provided because of this feature. You could write them yourself, but it'd be tedious and error-prone, so the machine does it for you. | |
| ▲ | ozgrakkurt 16 hours ago | parent [-] | | Like you said it is possible to not use this feature and it arguably creates better code. It is the right tradeoff to write those structs for libraries that absolutely have to avoid dynamic dispatch. In other cases it is better to give a trait object. A lambda is essentially a struct with a method so it is the same. I understand about auto trait impl and agree but it is still annoying to me | | |
| ▲ | Twey 15 hours ago | parent [-] | | > It is the right tradeoff to write those structs for libraries that absolutely have to avoid dynamic dispatch. In other cases it is better to give a trait object. IMO it is a hack to use dynamic dispatch (a runtime behaviour with honestly quite limited use cases, like plugin functionality) to get existential types (a type system feature). If you are okay with parametric polymorphism/generics (universal types) you should also be okay with RPIT (existential types), which is the same semantic feature with a different syntax, e.g. you can get the same effect by CPS-encoding except that the syntax makes it untenable. Because dynamic dispatch is a runtime behaviour it inherits a bunch of limitations that aren't inherent to existential types, a.k.a. Rust's ‘`dyn` safety’ requirements. For example, you can't have (abstract) associated types or functions associated with the type that don't take a magic ‘receiver’ pointer that can be used to look up the vtable. | | |
| ▲ | ozgrakkurt 15 hours ago | parent [-] | | It takes less time to compile and that is a huge upside for me personally. I am also not ok with parametric polymorphism except for containers like hashmap |
|
|
| |
| ▲ | mattstir 20 hours ago | parent | prev [-] | | Returning impl trait is useful when you can't name the type you're trying to return (e.g. a closure), types which are annoyingly long (e.g. a long iterator chain), and avoids the heap overhead of returning a `Box<dyn Trait>`. Async/await is just fundamental to making efficient programs, I'm not sure what to mention here. Reading a file from disk, waiting for network I/O, etc are all catastrophically slow in CPU time and having a mechanism to keep a thread doing useful other work is important. Actively writing code for the others you mentioned generally isn't required in the average program (e.g. you don't need to create your own proc macros, but it can help cut down boilerplate). To be fair though, I'm not sure how someone would know that if they weren't already used to the features. I imagine it must be what I feel like when I see probably average modern C++ and go "wtf is going on here" | | |
| ▲ | andriy_koval 13 hours ago | parent | next [-] | | > Reading a file from disk, waiting for network I/O, etc are all catastrophically slow in CPU time and having a mechanism to keep a thread doing useful other work is important. Curious if you have benchmarks for "catastrophically slow". Also, on Linux, the mainstream implementation translates async calls into blocking logic with a thread pool at the kernel level anyway. |
| ▲ | ozgrakkurt 16 hours ago | parent | prev [-] | | Impl trait is just an enabler for bad code that explodes compile times, imo. I never saw a piece of code that really needed it. I wrote Rust exclusively for many years, so I do understand most of the features fairly deeply. But I don’t think it is worth it in hindsight. |
|
|
|
|
|
| ▲ | wpollock a day ago | parent | prev | next [-] |
| > Programming languages used for correct-by-design software (Ada, C++, Rust) ... A shoutout to Eiffel, the first "modern" (circa 1985) language to incorporate Design by Contract. Well done Bertrand Meyer! |
| |
|
| ▲ | bluGill 2 days ago | parent | prev | next [-] |
| The people who did contracts are aware of Ada/SPARK and some have experience using it. Only time will tell if it works in C++, but they at least did all they could to give it a chance. Note that this is not the end of contracts. This is a minimum viable start that they intend to add to, but the missing parts are more complex. |
| |
| ▲ | dislikedopinion 2 days ago | parent [-] | | Might be the case that Ada folks successfully got a bad version of contracts, not amenable to compile-time checking, into C++ to undermine the competition. Time might tell. | |
| ▲ | stackghost a day ago | parent | next [-] | | I strongly doubt that C++ is what's standing in the way of Ada being popular. | | |
| ▲ | dislikedopinion a day ago | parent [-] | | Ada used to be mandated in the US defense industry, but lots of developers and companies preferred C++ and other languages, and for a variety of reasons, the mandate ended, and Ada faded from the spotlight. | | |
| ▲ | stackghost a day ago | parent [-] | | >the mandate ended, and Ada faded from the spotlight Exactly. People stopped using Ada as soon as they were no longer forced to use it. In other words on its own merits people don't choose it. | | |
| ▲ | hansvm a day ago | parent | next [-] | | On their own merits, people choose SMS-based 2FA, "2FA" which lets you into an account without a password, perf-critical CLI tools written in Python, externalizing the cost of hacks to random people who aren't even your own customers, eating an extra 100 calories per day, and a whole host of other problematic behaviors. Maybe Ada's bad, but programmer preference isn't a strong enough argument. It's just as likely that newer software is buggier and more unsafe or that this otherwise isn't an apples-to-apples comparison. | | |
| ▲ | stackghost a day ago | parent [-] | | I made no judgement about whether Ada is subjectively "bad" or not. I used it for a single side project many years ago, and didn't like it. But my anecdotal experience aside, it is plain to see that developers had the opportunity to continue with Ada and largely did not once they were no longer required to use it. So, it is exceedingly unlikely that some conspiracy against C++, motivated by mustache-twirling Ada gurus, is afoot. And even if that were true, knocking C++ down several pegs will not make people go back to Ada. C#, Rust, and Go all exist and are all immensely more popular than Ada. If there were to be a sudden exodus of C++ developers, these languages would likely be the main beneficiaries. My original point, that C++ isn't what's standing in the way of Ada being popular, still stands. |
| |
| ▲ | mastermage a day ago | parent | prev [-] | | Ada is a well-designed language, and I mean this. The problem Ada has is its highly proprietary compilers. | |
|
|
| |
| ▲ | steveklabnik a day ago | parent | prev [-] | | This is some pretty major conspiracy thinking, and would need some serious evidence. Do you have any? | | |
|
|
|
| ▲ | Sharlin 21 hours ago | parent | prev | next [-] |
| The problem is that contracts mean different things to different people, and that leads to standard contracts support being a compromise that makes nobody happy. To some people, contracts are something checked at runtime in debug mode and ignored in release mode. To others, they’re something rigorous enough to be usable in formal verification. But the latter essentially requires a completely new C++ dialect for writing contract assertions that has no UB, no side effects, and so on. And that’s still not enough as long as C++ itself is completely underspecified. |
| |
| ▲ | bluGill 19 hours ago | parent [-] | | This contracts feature was intended to be a minimum viable product that does a little for a few people but, more importantly, provides a framework that the people who want everything else can start building on. |
|
|
| ▲ | steveklabnik 2 days ago | parent | prev | next [-] |
| The devil is in the details, because standardization work is all about details. From my outside vantage point, there seems to be a few different camps about what is desired for contracts to even be. The conflict between those groups is why this feature has been contentious for... a decade now? Some of the pushback against this form of contracts is from people who desire contracts, but don't think that this design is the one that they want. |
|
| ▲ | StilesCrisis 2 days ago | parent | prev | next [-] |
| Right, I think the tension here is that we would like contracts to exist in the language, but the current design isn't what it needs to be, and once it's standardized, it's extremely hard to fix. |
|
| ▲ | kajaktum a day ago | parent | prev | next [-] |
| C++ needs to give itself up and make way for other, newer, modern languages that have far, far less baggage. It should be working with other languages to provide tools for interop and migration. C++ will never, ever be modern and comprehensible for one reason and one reason alone: backward compatibility. It does not matter what version of C++ you are using, you are still using C with classes. |
| |
| ▲ | Guvante a day ago | parent | next [-] | | Why should C++ stop improving? Other languages don't need C++ to die to beat it. | | |
| ▲ | mcdeltat a day ago | parent [-] | | Half-serious reason: because with each C++ version, we seem to get less and less what we want and more and more inefficiency. In terms of language design and compiler implementation. Are we even at feature-completeness for C++20 on major compilers yet? (In an actually usable bug-free way, not an on-paper "completion".) | | |
| ▲ | jandrewrogers a day ago | parent | next [-] | | The compiler design is definitely becoming more complicated but the language design has become progressively more efficient and nicer to use. I’ve been using C++20 for a long time in production; it has been problem-free for years at this point. It is not strictly complete, e.g. modules still aren’t usable, but you don’t need to wait for that to use it. Even C++23 is largely usable at this point, though there are still gaps for some features. | |
| ▲ | yolina 19 hours ago | parent | prev | next [-] | | gcc seems to have full C++20, almost everything in 23, and has implemented reflection for 26, which is probably the only thing anyone cares about in 26. https://en.cppreference.com/w/cpp/compiler_support.html Funny how gcc seems to be the top dog now; what happened to clang? I thought their codebase was supposed to be easier and more pleasant to work with? Or maybe just more hardcore compiler devs work on gcc? |
| ▲ | germandiago 17 hours ago | parent | prev [-] | | Reflection was a desperate need: a useful and difficult-to-design feature. There are also things like template for or inplace_vector. I think it has useful things. Just not all things are useful to everyone. |
|
| |
| ▲ | m-schuetz a day ago | parent | prev | next [-] | | C++ isn't great but so far I haven't seen anything I'd rather use. | | |
| ▲ | bigfishrunning 16 hours ago | parent [-] | | I think you need to spend more time with literally any other tool -- "Haven't seen anything I'd rather use" reads like "Haven't gotten over the initial learning curve with any other tool". C++ is sub-optimal for almost any task. For low-level stuff, plain C or maybe Rust; for higher-level, Python, Lua, or some Lisp. C++ is a weird in-between language that's impossible to hold correctly. | |
| ▲ | m-schuetz 16 hours ago | parent [-] | | > For low level stuff plain C The nice thing about C++ is that you can more or less turn it into C, if you want. My C++ code is closer to C than idiomatic, modern C++, but I wouldn't want to miss the nice parts that C++ adds, such as lambda functions and the occasional template for generalization. Pretty much the only thing I'm missing from C are order-independent designated initializers, which became order-dependent in C++, and thus useless. > "Haven't seen anything I'd rather used" reads like "Haven't gotten over the initial learning curve with any other tool" What an odd thing to say. I simply don't like certain design decisions in other languages that I've checked out and tried, and therefore do not see any reason to switch. E.g. I tried Rust, but it's absolutely terrible for quick&dirty prototyping, which is my main job. |
|
| |
| ▲ | yolina 19 hours ago | parent | prev | next [-] | | Some other language needs to step up and rewrite/replace LLVM then, because no language that relies on a ~30 million LOC backend written in C++ can ever hope to replace it. | |
| ▲ | kinjba11 11 hours ago | parent | next [-] | | Zig plans to make LLVM optional. Rust has Cranelift. Go afaik has no dependencies on the C++ ecosystem, including LLVM. Python and some other languages are built with C, not C++. So progress is being made, slowly, toward replacing LLVM as the de facto optimizing code backend. Alternatives are out there; may they compete and win! C++ makes me pessimistic about the future of humanity. |
| ▲ | bigfishrunning 16 hours ago | parent | prev [-] | | Languages don't write code, people do. No one has rewritten LLVM because it already exists, and such a project would be insanely expensive for little benefit. |
| |
| ▲ | germandiago 17 hours ago | parent | prev | next [-] | | A top-down bureaucratic call is not the way to do it. Just beat it. Ah, not so easy, huh? Libraries, ecosystem, real use, continuous improvements. Even if it does not look so "clean". Just beat it and I will move to the next language. I am still waiting. |
| ▲ | 72deluxe a day ago | parent | prev | next [-] | | C with classes is a very simplistic view of C++. I for one can write C++ but I cannot write a single program in C. If the overlap was so vast, I would be able to write good C but I cannot. I've done things with templates to express my ideas in C++ that I cannot do in other languages, and the behaviour of deterministic destructors is what sets it apart from C. It is comprehensible and readable to me. I would argue that C++ is modern, since it is in use today. Perhaps your definition of "modern" is too narrow? | |
| ▲ | mastermage a day ago | parent | prev [-] | | I mean the Carbon project exists |
|
|
| ▲ | quotemstr a day ago | parent | prev [-] |
| But why? You can do everything contracts do in your own code, yes? Why make it a language feature? I'm not against growing the language, but I don't see the necessity of this specific feature having new syntax. |
| |
| ▲ | spacechild1 a day ago | parent | next [-] | | Pre- and postconditions are actually part of the function signature, i.e. they are visible to the caller. For example, static analyzers could detect contract violations just by looking at the call site, without needing access to the actual function implementation. The pre- and postconditions can also be shown in IDE tooltips. You can't do this with your own contracts implementation. Finally, it certainly helps to have a standardized mechanism instead of everyone rolling their own, especially with multiple libraries. | |
| ▲ | kevin_thibedeau a day ago | parent [-] | | Is a pointer parameter an input, output, or both? | | |
| ▲ | drfloyd51 a day ago | parent [-] | | Input. You are passing in a memory location that can be read from or written to. That’s it. | |
| ▲ | 72deluxe a day ago | parent | next [-] | | In terms of contract in a function, you might be passing the pointer to the function so that the function can write to the provided pointer address. Input/output isn't specifying calling convention (there's fastcall for that) - it is specifying the intent of the function. Otherwise every single parameter to a function would be an input because the function takes it and uses it... I worked on a massive codebase where we used Microsoft SAL to annotate all parameters to specify intent. The compiler could throw errors based on these annotations to indicate misuse. This seems like an extension of that. | | |
| ▲ | drfloyd51 8 hours ago | parent [-] | | Annotations sound good (as long as they are enforced or honored), which is the best you can do in C++. A language like C# has true directional parameters. C only truly has “input”. |
| |
| ▲ | kevin_thibedeau a day ago | parent | prev [-] | | A pointer doesn't necessarily point to memory. | | |
| ▲ | j1elo a day ago | parent [-] | | A nitpick to your nitpick: they said "memory location". And yes, a pointer always points to a memory location. Notwithstanding that each particular region of memory locations could be mapped either to real physical memory or any other assortment of hardware. | | |
| ▲ | peterfirefly a day ago | parent | next [-] | | No. Neither in the language (NULL exists) nor necessarily on real CPUs. | | |
| ▲ | bluGill 19 hours ago | parent [-] | | NULL exists on real CPUs. Maybe you meant nullptr which is a very different thing, don't confuse the two. | | |
| ▲ | tialaramex 17 hours ago | parent [-] | | I don't agree. Null is an artefact of the type system and the type system evaporates at runtime. Even C's NULL macro just expands to zero which is defined in the type system as the null pointer. Address zero exists in the CPU, but that's not the null pointer, that's an embarrassment if you happen to need to talk about address zero in a language where that has the same spelling as a null pointer because you can't say what you meant. | | |
| ▲ | bluGill 15 hours ago | parent [-] | | NULL doesn't expand to zero on some weird systems. These days zero is special on most hardware, so having zero and nullptr be the same is important - even though on some of them zero is also a legal address. | |
| ▲ | tialaramex 6 minutes ago | parent [-] | | Historically C's null pointer, provided as the pre-processor constant NULL, is the integer literal 0 (optionally cast to a void pointer in newer standards), even though the hardware representation may not be the zero address. It's OK that you didn't know this if you mostly write C++, and somewhat OK that you didn't know this even if you mostly write C but stick to pre-defined stuff like that NULL constant; if you write important tools in or for C, though, this was a pretty important gap in your understanding. In C23 the committee gave C the C++ nullptr constant and the associated nullptr_t type, and basically rewrote history to make this entire mess, in reality the fault of C++, now "for compatibility with C". This is a pretty routine outcome; you can see that WG14 members who are sick of this tend to just walk away from the committee, because fighting it is largely futile and they could just retire and write in C89 or even K&R C without thinking about Bjarne at all. |
|
|
|
| |
| ▲ | kevin_thibedeau 19 hours ago | parent | prev [-] | | You can point to a register which is certainly not memory. |
|
|
|
|
| |
| ▲ | addaon a day ago | parent | prev | next [-] | | Contracts are about specifying static properties of the system, not dynamic properties. Features like assert /check/ (if enabled) static properties, at runtime. static_assert comes closer, but it’s still an awkward way of expressing Hoare triples; and the main property I’m looking for is the ability to easily extract and consider Hoare triples from build-time tooling. There are hacky ways to do this today, but they’re not unique hacky ways, so they don’t compose across different tools and across code written to different hacks. | |
| ▲ | jevndev a day ago | parent | prev | next [-] | | The common argument for a language feature is for standardization of how you express invariants and pre/post conditions so that tools (mostly static tooling and optimizers) can be designed around them. But like modules and concepts the committee has opted for staggered implementation. What we have now is effectively syntax sugar over what could already be done with asserts, well designed types and exceptions. | |
| ▲ | jandrewrogers a day ago | parent | prev [-] | | DIY contracts don't compose when mixing code that uses different DIY implementations. Some aspects of contracts have global semantics. | |
|