• suby
    I am somewhat dismayed that contracts were accepted. It feels like piling on ever more complexity to a language which has already surpassed its complexity budget, and given that the feature comes with its own set of footguns I'm not sure that it is justified.

    Here's a quote from Bjarne,

    > So go back about one year, and we could vote about it before it got into the standard, and some of us voted no. Now we have a much harder problem. This is part of the standard proposal. Do we vote against the standard because there is a feature we think is bad? Because I think this one is bad. And that is a much harder problem. People vote yes because they think: "Oh we are getting a lot of good things out of this.", and they are right. We are also getting a lot of complexity and a lot of bad things. And this proposal, in my opinion is bloated committee design and also incomplete.

    • I implemented Contracts in the C++ language in the early 90's as an extension.

      Nobody wanted it.

      https://www.digitalmars.com/ctg/contract.html

      • I think it's also true that, regardless of the desirability of the feature at the time (which sibling comments discuss eloquently) people who've bought into a language are usually quite wary of also buying into extensions to that language. The very act of ratification by the committee gives this proposal a ‘feature’ that the DMC++ extension lacked in compatibility expectations over time and across implementations — it's not necessarily a comment on the technical quality or desirability of the work itself.
      • How do you know nobody wanted it?
        • > How do you know nobody wanted it?

          Some imperfect data points on how to judge if a language feature is wanted (or not):

          - Discussion on forums about how to use the feature

          - Programs in the wild using the feature

          - Bug reports showing people trying to use the feature and occasionally producing funny interactions with other parts of the language

          - People filing feature requests to do more complex things on top of the initially built feature (showing that there is uptake and people want to do more fancy, advanced things)

      • > Nobody wanted it.

        The fact that the C++ standards community has been working on Contracts for nearly a decade refutes your claim by itself.

        I understand you want to self-promote, but there is no need to do it at the expense of others. I mean, might it be that your implementation sucked?

        • Late nineties is approaching thirty decades ago; if the C++ committee has now been working on this for nearly a decade, that's fifteen to twenty years of them not working on it. It's quite plausible that contracts simply weren't valued at the time.

          Also, in my view the committee has been entertaining wider and wider language extensions. In 2016 there was a serious proposal for a graphics API based on (I think) Cairo. My own sense is that it's out of control and the language is just getting stuff added on because it can.

          Contracts are great as a concept, and it's hard to separate the wild expanse of C++ from the truly useful subset of features.

          There are several things proposed in the early days of C++ that arguably should be added.

          • I am not sure what the "truly useful features" are if you take into account that C++ goes from games to servers to embedded, audio, heterogeneous programming, some GUI frameworks, hard real-time systems, and more.

            I would say some of the features that are truly useful in some niches are less important in others, and vice versa.

          • > Late nineties is approaching thirty decades ago

            Boy, this makes me feel old... oh wait :)

            (I agree with your point; early 90s vs. mid-10s are two very different worlds, in this context.)

        • > I understand you want to self-promote

          Not a very fair assumption. However, even if your not-so-friendly point were true, I'd like people who have invented popular languages to "self-promote" more (here, dlang). It is great to get comments on HN from people who have actually achieved something nice!

        • In the early 1990s, C++ had not yet been standardized by ISO, so your argument doesn’t apply to that period.
      • It could be possible that LLMs can make great use of them.
        • > It could be possible that LLMs can make great use of them

          This is actually a good point. Yes, LLMs have saturated the conversation everywhere, but contracts help clarify the pre- and postconditions of methods well. I don't know how good the implementation in C++ will be, but LLMs should be able to really exploit them well.

    • > It feels like piling on ever more complexity to a language which has already surpassed its complexity budget, and given that the feature comes with its own set of footguns I'm not sure that it is justified.

      This is a common sentiment about C++, but I find it very interesting that everyone seems to have a different feature in mind when they say it.

      • I think that's a clear and unambiguous point in favor of the argument. There are so many hellishly complex things in C++ that the community can't settle on even a small subset to be the worst contender.

        Half Life 3 rules apply. Every time someone complains about complexity in C++, the committee adds a new overly complex feature. It remains a problem because complexity keeps getting shoveled on top of the already complex language.

    • I can’t speak to the C++ contract design — it’s possible bad choices were made. But contracts in general are absolutely exactly what C++ needs for the next step of its evolution. Programming languages used for correct-by-design software (Ada, C++, Rust) need to enable deep integration with proof assistants to allow showing arbitrary properties statically instead of via testing, and contracts are /the/ key part of that — see e.g. Ada Spark.
      • C++ is the last language I'd add to any list of languages used for correct-by-design - it's underspecified in terms of semantics with huge areas of UB and IB. Given its vast complexity - at every level from the pre-processor to template meta-programming and concepts, I simply can't imagine any formal denotational definition of the language ever being developed. And without a formal semantics for the language, you cannot even start to think about proof of correctness.
        • As with Spark, proving properties over a subset of the language is sufficient. Code is written to be verified; we won’t be verifying interesting properties of large chunks of legacy code in my career span. The C (near-) subset of C++ is (modulo standard libraries) a starting point for this; just adding on templates for type system power (and not for other exotic uses) goes a long way.
          • I don’t think this is a good comparison. Ada (on which Spark is based) has every safety feature and guardrail under the sun, while C++ (or C) has nothing.
            • There is a lot of tooling for C though, just not in mainstream compilers.
          • > The C (near-) subset of C++ is (modulo standard libraries) a starting point for this; just adding on templates for type system power (and not for other exotic uses) goes a long way.

            In my experience, this is absolutely true. I wrote my own metaprogramming frontend for C and that's basically all you need. At this point, I consider the metaprogramming facilities of a language its most important feature, by far. Everything else is pretty much superfluous by comparison.

      • I don’t understand this “next evolution” approach to language design.

        A language should be finished at some point. People can always develop new languages with more or fewer things, but piling more things onto an existing one is just not that useful.

        It sounds cool in the minds of people that are designing these things but it is just not that useful. Rust is in the same situation of adding endless crap that is just not that useful.

        Specifically about this feature, people can just use asserts. Piling things onto the type system of C++ is never going to be that useful since it is not designed to be a type system like Rust's type system. Any improvement gained is not worth piling on more things.

        Feels like people that push stuff do it because "it is just what they do".

        • Many of the recent C++ standards have been focused on expanding and cleaning up its powerful compile-time and metaprogramming capabilities, which it initially inherited by accident decades ago.

          It is difficult to overstate just how important these features are for high-performance and high-reliability systems software. These features greatly expand the kinds of safety guarantees that are possible to automate and the performance optimizations that are practical. Without it, software is much more brittle. This isn’t an academic exercise; it greatly reduces the amount of code and greatly increases safety. The performance benefits are nice but that is more on the margin.

          One of the biggest knocks against Rust as a systems programming language is that it has weak compile-time and metaprogramming capabilities compared to Zig and C++.

          • > One of the biggest knocks against Rust as a systems programming language is that it has weak compile-time and metaprogramming capabilities compared to Zig and C++.

            Aren’t Rust macros more powerful than C++ template metaprogramming in practice?

            • No, they are not.
              • They are both more and less powerful: there are things that Rust's macros can do metaprogramming-wise that C++ templates cannot do, and vice versa.

                Rust's macros work on a syntactic level, so they are more powerful in that they can work with "normally" invalid code and perform token-to-token transformations (and in the case of proc macros effectively function as compiler extensions/plugins) and less powerful in that they don't have access to semantic information.

            • Rust has two separate macro systems. It has declarative "by example" macros which are a nicer way to write the sort of things where you show an intern this function for u8 and ask them to create seven more just like it except for i8, u16, i16, u32, i32, u64, i64. Unlike the C pre-processor these macros understand how loops work (sort of) and what types are, and so on, and they have some hygiene features which make them less likely to cause mayhem.

              Declarative macros deliberately don't share Rust's syntax because they are macros for Rust: if they shared the same syntax, everything you wrote would be escape upon escape sequence, since you want the macro to emit a loop but not loop itself, etc. But other than the syntax they are pretty friendly; a one-day Rust bootstrap course should probably cover these macros, at least enough that you don't use copy-paste to make those seven functions by hand.

              However the powerful feature you're thinking of is procedural or "proc" macros and those are a very different beast. The proc macros are effectively compiler plugins, when the compiler sees we invoked the proc macro, it just runs that code, natively. So in that sense these are certainly more powerful, they can for example install Python, "Oh, you don't have Python, but I'm a proc macro for running Python, I'll just install it...". Mara wrote several "joke" proc macros which show off how dangerous/ powerful it is, you should not use these, but one of them for example switches to the "nightly" Rust compiler and then seamlessly compiles parts of your software which don't work in stable Rust...

          • > powerful compile-time and metaprogramming capabilities

            While I agree that, generally, compile time metaprogramming is a tremendously powerful tool, the C++ template metaprogramming implementation is hilariously bad.

            Why, for example, is printing the source-code text of an enum value so goddamn hard?

            Why can I not just loop over the members of a class?

            How would I generate debug vis or serialization code with a normal-ish looking function call? (Spoiler: you can't; see Cap'n Proto, protobuf, FlatBuffers, or any automated Dear ImGui generator.)

            These things are incredibly basic and C++ just completely shits all over itself when you try to do them with templates

          • > One of the biggest knocks against Rust as a systems programming language is that it has weak compile-time and metaprogramming capabilities compared to Zig and C++

            In the space of language design, everything "more powerful" is not necessarily good. Sometimes less power is better because it leads to more optimisable code, less implementation complexity, less abstraction, and better LSP support. TL;DR: more flexibility and complexity is not always good.

            Though I would also challenge the claim that Rust's metaprogramming model is "not powerful enough". I think it can be.

            • But compile-time processing is certainly useful in a performance-oriented language.

              And not only for performance but also for thread safety (eliminates initialization races, for example, for non-trivial objects).

              Rust is just less powerful. For example, you cannot design something that comes even close to expression template libraries.

              • > And not only for performance but also for thread safety

                This is already built-in to the language as a facet of the affine type system. I'm curious as to how familiar you actually are with Rust?

                > Rust is just less powerful.

                On the contrary. Zig and C++ have nothing even remotely close to proc macros. And both languages have to defer things like thread safety into haphazard metaprogramming instead of baking them into the language as a basic semantic guarantee. That's not a good thing.

              • > For example you cannot design something that comes even close to expression template libraries.

                You keep saying this and it's still wrong. Rust is quite capable of expression templates, as its iterator adapters prove. What it isn't capable of (yet) is specialization, which is an orthogonal feature.

                • Rust cannot take a const function and evaluate that into the argument of a const generic or a proc macro. As far as I can tell, the reasons are deeply fundamental to the architecture of rustc. It's difficult to express HOW FUNDAMENTAL this is to strongly typed zero overhead abstractions, and we see where Rust is lacking here in cases like `Option` and bitset implementations.
                  • > Rust cannot take a const function and evaluate that into the argument of a const generic

                    Assuming I'm interpreting what you're saying here correctly, this seems wrong? For example, this compiles [0]:

                        const fn foo(n: usize) -> usize {
                            n + 1
                        }
                    
                        fn bar<const N: usize>() -> usize {
                            N + 1
                        }
                    
                        pub fn baz() -> usize {
                            bar::<{foo(0)}>()
                        }
                    
                    In any case, I'm a little confused how this is relevant to what I said?

                    [0]: https://rust.godbolt.org/z/rrE1Wrx36

                • > Rust is quite capable of expression templates, as its iterator adapters prove.

                  AFAIU iterator adapters are not quite what expression templates are, because they rely on compiler optimizations rather than a built-in language feature that lets you do this without relying on the compiler pipeline.

                  • I had always thought expression templates at the very least needed the optimizer to inline/flatten the tree of function calls that are built up. For instance, for something like x + y * z I'd expect an expression template type like sum<vector, product<vector, vector>> where sum would effectively have:

                        vector l;
                        product& r;
                        auto operator[](size_t i) {
                            return l[i] + r[i];
                        }
                    
                    And then product<vector, vector> would effectively have:

                        vector l;
                        vector r;
                        auto operator[](size_t i) {
                            return l[i] * r[i];
                        }
                    
                    That would require the optimizer to inline the latter into the former to end up with a single expression, though. Is there a different way to express this that doesn't rely on the optimizer for inlining?
                    • Expression templates do not rely on the optimizer, since you're not dealing with the computations directly but rather with expressions (nodes), through which you defer the computation until the very last moment (when you have fully built an expression of expressions, basically almost an AST). This guarantees that you get zero cost when you really need it. What you're describing is something akin to copy elision and function folding through inlining, which is pretty much basic in any C++ compiler and happens automatically without special care.
                      • > since you're not dealing with the computations directly but rather expressions (nodes) through which you are deferring the computation part until the very last moment (when you have fully built an expression of expressions, basically almost an AST).

                        Right, I understand that. What is not exactly clear to me is how you get from the tree of deferred expressions to the "flat" optimized expression without involving the optimizer.

                        Take something like the above example for instance - w = x + y * z for vectors w/x/y/z. How do you get from that to effectively

                            for (size_t i = 0; i < w.size(); ++i) {
                                w[i] = x[i] + y[i] * z[i];
                            }
                        
                        without involving the optimizer at all?
        • dbdr
          What "endless crap that is just not that useful" has been added to Rust in your opinion?
          • Returning "impl Trait"; async/await; Pin/Unpin/Waker; catch_unwind; procedural macros; "auto impl trait for type that implements other trait".

            I understand some of these kinds of features are because Rust is Rust but it still feels useless to learn.

            I'm not following rust development since about 2 years so don't know what the newest things are.

            • RPIT (Return Position impl Trait) is Rust's spelling of existential types. That is, the compiler knows what we return (it has certain properties) but we didn't name it (we won't tell you what exactly it is), this can be for two reasons:

              1. We didn't want to give the thing we're returning a name, it does have one, but we want that to be an implementation detail. In comparison the Rust stdlib's iterator functions all return specific named Iterators, e.g. the split method on strings returns a type actually named Split, with a remainder() function so you can stop and just get "everything else" from that function. That's an exhausting maintenance burden, if your library has some internal data structures whose values aren't really important or are unstable this allows you to duck out of all the extra documentation work, just say "It's an Iterator" with RPIT.

              2. We literally cannot name this type, there's no agreed spelling for it. For example if you return a lambda its type does not have a name (in Rust or in C++) but this is a perfectly reasonable thing to want to do, just impossible without RPIT.

              Blanket trait implementations ("auto impl trait for type that implements other trait") are an important convenience for conversions. If somebody wrote a From implementation then you get the analogous Into, TryFrom and even TryInto all provided because of this feature. You could write them, but it'd be tedious and error prone, so the machine does it for you.

              • Like you said it is possible to not use this feature and it arguably creates better code.

                It is the right tradeoff to write those structs for libraries that absolutely have to avoid dynamic dispatch. In other cases it is better to give a trait object.

                A lambda is essentially a struct with a method so it is the same.

                I understand about auto trait impl and agree but it is still annoying to me

                • > It is the right tradeoff to write those structs for libraries that absolutely have to avoid dynamic dispatch. In other cases it is better to give a trait object.

                  IMO it is a hack to use dynamic dispatch (a runtime behaviour with honestly quite limited use cases, like plugin functionality) to get existential types (a type system feature). If you are okay with parametric polymorphism/generics (universal types) you should also be okay with RPIT (existential types), which is the same semantic feature with a different syntax, e.g. you can get the same effect by CPS-encoding except that the syntax makes it untenable.

                  Because dynamic dispatch is a runtime behaviour it inherits a bunch of limitations that aren't inherent to existential types, a.k.a. Rust's ‘`dyn` safety’ requirements. For example, you can't have (abstract) associated types or functions associated with the type that don't take a magic ‘receiver’ pointer that can be used to look up the vtable.

                  • It takes less time to compile and that is a huge upside for me personally. I am also not OK with parametric polymorphism except for containers like HashMap.
            • Returning impl trait is useful when you can't name the type you're trying to return (e.g. a closure), types which are annoyingly long (e.g. a long iterator chain), and avoids the heap overhead of returning a `Box<dyn Trait>`.

              Async/await is just fundamental to making efficient programs, I'm not sure what to mention here. Reading a file from disk, waiting for network I/O, etc are all catastrophically slow in CPU time and having a mechanism to keep a thread doing useful other work is important.

              Actively writing code for the others you mentioned generally isn't required in the average program (e.g. you don't need to create your own proc macros, but it can help cut down boilerplate). To be fair though, I'm not sure how someone would know that if they weren't already used to the features. I imagine it must be what I feel like when I see probably average modern C++ and go "wtf is going on here"

              • > Reading a file from disk, waiting for network I/O, etc are all catastrophically slow in CPU time and having a mechanism to keep a thread doing useful other work is important.

                Curious if you have benchmarks backing "catastrophically slow".

                Also, on Linux, the mainstream implementation translates async calls into blocking logic on a kernel-level thread pool anyway.

              • Impl trait is just an enabler for bad code that explodes compile times, imo. I've never seen a piece of code that really needs it.

                I exclusively wrote Rust for many years, so I do understand most of the features fairly deeply. But I don't think it is worth it in hindsight.

      • > Programming languages used for correct-by-design software (Ada, C++, Rust) ...

        A shoutout to Eiffel, the first "modern" (circa 1985) language to incorporate Design by Contract. Well done Bertrand Meyer!

      • Problem is contracts mean different things to different people, and that leads standard contracts support being a compromise that makes nobody happy. To some people contracts are something checked at runtime in debug mode and ignored in release mode. To others they’re something rigorous enough to be usable in formal verification. But the latter essentially requires a completely new C++ dialect for writing contract assertions that has no UB, no side effects, and so on. And that’s still not enough as long as C++ itself is completely underspecified.
        • This contracts design was intended to be a minimum viable product that does a little for a few people, but more importantly provides a framework that the people who want everything else can start building on.
      • The people who did contracts are aware of ada/spark and some have experience using it. Only time will tell if it works in c++ but they at least did all they could to give it a chance.

        Note that this is not the end of contracts. This is a minimum viable start that they intend to add to, but the missing parts are more complex.

        • Might be the case that the Ada folks successfully got a bad version of contracts, one not amenable to compile-time checking, into C++ to undermine the competition. Time might tell.
          • I strongly doubt that C++ is what's standing in the way of Ada being popular.
            • Ada used to be mandated in the US defense industry, but lots of developers and companies preferred C++ and other languages, and for a variety of reasons, the mandate ended, and Ada faded from the spotlight.
              • >the mandate ended, and Ada faded from the spotlight

                Exactly. People stopped using Ada as soon as they were no longer forced to use it.

                In other words on its own merits people don't choose it.

                • On their own merits, people choose SMS-based 2FA, "2FA" which lets you into an account without a password, perf-critical CLI tools written in Python, externalizing the cost of hacks to random people who aren't even your own customers, eating an extra 100 calories per day, and a whole host of other problematic behaviors.

                  Maybe Ada's bad, but programmer preference isn't a strong enough argument. It's just as likely that newer software is buggier and more unsafe or that this otherwise isn't an apples-to-apples comparison.

                  • I made no judgement about whether Ada is subjectively "bad" or not. I used it for a single side project many years ago, and didn't like it.

                    But my anecdotal experience aside, it is plain to see that developers had the opportunity to continue with Ada and largely did not once they were no longer required to use it.

                    So, it is exceedingly unlikely that some conspiracy against C++, motivated by mustache-twirling Ada gurus, is afoot. And even if that were true, knocking C++ down several pegs will not make people go back to Ada.

                    C#, Rust, and Go all exist and are all immensely more popular than Ada. If there were to be a sudden exodus of C++ developers, these languages would likely be the main beneficiaries.

                    My original point, that C++ isn't what's standing in the way of Ada being popular, still stands.

                • Ada is a very well-designed language, and I mean this. The problem Ada has is highly proprietary compilers.
                  • Not having experience myself, how is GNAT?
          • This is some pretty major conspiracy thinking, and would need some serious evidence. Do you have any?
            • [flagged]
              • Okay, on one hand, I'm very curious, but on the other hand, not really on topic for this forum. So I'll just leave a "wut".
      • The devil is in the details, because standardization work is all about details.

        From my outside vantage point, there seems to be a few different camps about what is desired for contracts to even be. The conflict between those groups is why this feature has been contentious for... a decade now?

        Some of the pushback against this form of contracts is from people who desire contracts, but don't think that this design is the one that they want.

      • Right, I think the tension here is that we would like contracts to exist in the language, but the current design isn't what it needs to be, and once it's standardized, it's extremely hard to fix.
      • C++ needs to give itself up and make way for other, newer, more modern languages that have far, far less baggage. It should be working with other languages to provide tools for interop and migration.

        C++ will never, ever be modern and comprehensible for one reason and one reason alone: backward compatibility.

        It does not matter what version of C++ you are using; you are still using C with classes.

        • Why should C++ stop improving? Other languages don't need C++ to die to beat it.
          • Half-serious reason: because with each C++ version, we seem to get less and less what we want and more and more inefficiency. In terms of language design and compiler implementation. Are we even at feature-completeness for C++20 on major compilers yet? (In an actually usable bug-free way, not an on-paper "completion".)
            • The compiler design is definitely becoming more complicated but the language design has become progressively more efficient and nicer to use. I’ve been using C++20 for a long time in production; it has been problem-free for years at this point. It is not strictly complete, e.g. modules still aren’t usable, but you don’t need to wait for that to use it.

              Even C++23 is largely usable at this point, though there are still gaps for some features.

            • gcc seems to have full C++20, almost everything in C++23, and has implemented reflection for C++26, which is probably the only thing anyone cares about in 26.

              https://en.cppreference.com/w/cpp/compiler_support.html

              Funny how gcc seems to be the top dog now; what happened to clang? I thought their codebase was supposed to be easier and more pleasant to work with. Or maybe the more hardcore compiler devs just work on gcc?

            • Reflection was a desperate need. A useful and difficult-to-design feature.

              There are also things like `template for` or `inplace_vector`. I think it has useful things; just not all of them are useful to everyone.

        • C++ isn't great but so far I haven't seen anything I'd rather use.
          • I think you need to spend more time with literally any other tool -- "Haven't seen anything I'd rather use" reads like "Haven't gotten over the initial learning curve with any other tool".

            C++ is sub-optimal for almost any task. For low-level stuff, plain C or maybe Rust; for higher-level, Python, Lua, or some Lisp. C++ is a weird in-between language that's impossible to hold correctly.

            • > For low level stuff plain C

              The nice thing about C++ is that you can more or less turn it into C, if you want. My C++ code is closer to C than idiomatic, modern C++, but I wouldn't want to miss the nice parts that C++ adds, such as lambda functions and the occasional template for generalization. Pretty much the only thing I'm missing from C are order-independent designated initializers, which became order-dependent in C++, and thus useless.

              > "Haven't seen anything I'd rather used" reads like "Haven't gotten over the initial learning curve with any other tool"

              What an odd thing to say. I simply don't like certain design decisions in other languages that I've checked out and tried, and therefore do not see any reason to switch. E.g. I tried Rust, but it's absolutely terrible for quick&dirty prototyping, which is my main job.

        • Some other language need to step up and rewrite/replace LLVM then, because no language that relies on a ~30 million loc backend written in C++ can ever hope to replace it.
          • Languages don't write code, people do. No one has rewritten LLVM because it already exists, and such a project would be insanely expensive for little benefit.
        • A bureaucratic call from the top is not the way to do it.

          Just beat it. Ah, not so easy huh? Libraries, ecosystem, real use, continuous improvements.

          Even if it does not look so "clean".

          Just beat it, I will move to the next language. I am still waiting.

        • C with classes is a very simplistic view of C++.

          I for one can write C++ but I cannot write a single program in C. If the overlap was so vast, I would be able to write good C but I cannot.

          I've done things with templates to express my ideas in C++ that I cannot do in other languages, and the behaviour of deterministic destructors is what sets it apart from C. It is comprehensible and readable to me.

          I would argue that C++ is modern, since it is in use today. Perhaps your definition of "modern" is too narrow?

        • I mean the Carbon project exists
      • But why? You can do everything contracts do in your own code, yes? Why make it a language feature? I'm not against growing the language, but I don't see the necessity of this specific feature having new syntax.
        • Pre- and postconditions are actually part of the function signature, i.e. they are visible to the caller. For example, static analyzers could detect contract violations just by looking at the callsite, without needing access to the actual function implementation. The pre- and postconditions can also be shown in IDE tooltips. You can't do this with your own contracts implementation.

          Finally, it certainly helps to have a standardized mechanism instead of everyone rolling their own, especially when multiple libraries are involved.

          • Is a pointer parameter an input, output, or both?
            • Input.

              You are passing in a memory location that can be read or written to.

              That’s it.

              • In terms of contract in a function, you might be passing the pointer to the function so that the function can write to the provided pointer address. Input/output isn't specifying calling convention (there's fastcall for that) - it is specifying the intent of the function. Otherwise every single parameter to a function would be an input because the function takes it and uses it...

                I worked on a massive codebase where we used Microsoft SAL to annotate all parameters to specify intent. The compiler could throw errors based on these annotations to indicate misuse.

                This seems like an extension of that.
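                Roughly what that looks like (the `_In_`/`_Out_` annotations come from MSVC's <sal.h>; here they're defined as empty macros so the sketch compiles anywhere, and `copy_n` is a hypothetical example function):

                ```cpp
                #include <cstddef>
                #include <iostream>

                // On MSVC these annotations come from <sal.h>; no-op stand-ins here.
                #define _In_
                #define _Out_

                // The annotations state intent: `src` is only read, `dst` is only written.
                void copy_n(_In_ const int* src, _Out_ int* dst, std::size_t n) {
                    for (std::size_t i = 0; i < n; ++i) dst[i] = src[i];
                }

                int main() {
                    int a[3] = {1, 2, 3};
                    int b[3] = {};
                    copy_n(a, b, 3);
                    std::cout << b[0] + b[1] + b[2] << '\n';  // prints 6
                }
                ```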

              • A pointer doesn't necessarily point to memory.
                • A nitpick to your nitpick: they said "memory location". And yes, a pointer always points to a memory location. Notwithstanding that each particular region of memory locations could be mapped either to real physical memory or any other assortment of hardware.
                  • No. Neither in the language (NULL exists) nor necessarily on real CPUs.
                    • NULL exists on real CPUs. Maybe you meant nullptr which is a very different thing, don't confuse the two.
                      • I don't agree. Null is an artefact of the type system and the type system evaporates at runtime. Even C's NULL macro just expands to zero which is defined in the type system as the null pointer.

                        Address zero exists in the CPU, but that's not the null pointer, that's an embarrassment if you happen to need to talk about address zero in a language where that has the same spelling as a null pointer because you can't say what you meant.

                        • Null doesn't expand to zero on some weird systems. These days zero is special on most hardware, so having zero and nullptr be the same is important - even though on some of them zero is also a legal address.
                  • You can point to a register which is certainly not memory.
        • Contracts are about specifying static properties of the system, not dynamic properties. Features like assert /check/ (if enabled) static properties, at runtime. static_assert comes closer, but it’s still an awkward way of expressing Hoare triples; and the main property I’m looking for is the ability to easily extract and consider Hoare triples from build-time tooling. There are hacky ways to do this today, but they’re not unique hacky ways, so they don’t compose across different tools and across code written to different hacks.
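          For instance (with a hypothetical `clamp01` function), a static_assert can only spot-check a postcondition at specific compile-time inputs, rather than state it for all inputs the way a Hoare triple would:

          ```cpp
          #include <iostream>

          // Hypothetical function; the intended postcondition is 0 <= result <= 1.
          constexpr int clamp01(int v) { return v < 0 ? 0 : (v > 1 ? 1 : v); }

          // static_assert can only check the postcondition point-wise:
          static_assert(clamp01(-5) == 0);
          static_assert(clamp01(7) == 1);

          int main() { std::cout << clamp01(3) << '\n'; }  // prints 1
          ```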
        • The common argument for a language feature is for standardization of how you express invariants and pre/post conditions so that tools (mostly static tooling and optimizers) can be designed around them.

          But like modules and concepts the committee has opted for staggered implementation. What we have now is effectively syntax sugar over what could already be done with asserts, well designed types and exceptions.

        • DIY contracts don't compose when mixing code using different DIY implementations. Some aspects of contracts have global semantics.
    • C++ contracts standardizes what people already do in C++. Where is the complexity in that? It removes the need to write your own implementation because the language provides a standard interoperable one.

      An argument can be made that C++26 features like reflection add complexity but I don't follow that argument for contracts.

      • The quote of Bjarne is a bit out of context. It was made after an hour long talk about the pitfalls and problems of contracts in c++26: https://youtu.be/tzXu5KZGMJk

        This should also clarify the complexity issue.

    • Is there any good documentation about contracts? https://en.cppreference.com/w/cpp/language/contracts.html is incredibly confusing - its first displayed example seems to be an edge case where the assertion itself causes a mutation?

      https://en.cppreference.com/w/cpp/language/function.html#Fun... is vaguely better, but still quite dense.

      IMO the syntax makes things hard for a newcomer to understand, which works against what I see as core to any programming language's goals: building a community.

          double square_root(double num) asserts_pre(num >= 0)
      
      would have been far more self-evident than just

          double square_root(double num) pre(num >= 0)
      
      But I suppose brevity won out.
      • I believe that https://isocpp.org/files/papers/P2900R14.pdf is the paper, which doesn't mean it's good documentation, as it's meant for modifying the standard. However, in its early sections it does link to other papers with more information, and the "proposed wording" section should be where the standardese lives, with the rest providing context.
    • C++ isn't the first language to do things, but was/is often the first mainstream language to do things.

      And then people complain about C++ for doing it wrong, or its complexity, and show language 'X' that does it better/right, but only because they saw C++ do it first, and 'not quite right'.

      I expect contracts to be similar - other languages will watch, learn, and do version two, and then complain about c++, etc.

      It took 'quite a while' to get rid of auto_ptr, for example.

      If it wasn't for the fact this is a language feature, it would be better off in boost where it can be tested in the wild.

    • That's a genius idea, keep adding broken stuff into the standard until there's no choice but to break compatibility to fix it.
      • No no no, you add new stuff that will totally fix those problems!
    • Contracts are already informally a thing: most functions have preconditions, and if you break those preconditions, the function doesn't make any guarantees of what it does.

      We already have some primitive ways to define preconditions, notably the assert macro and the 'restrict' qualifier.

      I don't mind a more structured way to define preconditions which can automatically serve as both documentation and debug invariant checks. Though you could argue that a simpler approach would be to "standardize" a convention to use assert() more liberally in the beginning of functions as precondition checks; that a sequence of 'assert's before non-'assert' code should semantically be treated as the function's preconditions by documentation generators etc.
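      A minimal sketch of that convention, using a hypothetical `checked_sqrt`:

      ```cpp
      #include <cassert>
      #include <cmath>
      #include <iostream>

      // Convention: the run of asserts before any other code is the
      // function's precondition list, which tooling could extract.
      double checked_sqrt(double num) {
          assert(num >= 0.0);   // precondition: no negative inputs
          return std::sqrt(num);
      }

      int main() { std::cout << checked_sqrt(9.0) << '\n'; }  // prints 3
      ```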

      I haven't looked too deep into the design of the actual final contracts feature, maybe it's bad for reasons which have nothing to do with the fundamental idea.

    • Just because Bjarne thinks the feature is bad doesn't mean it is bad. He can be wrong. The point is, most people disagree with him, and so a lot of people do think it is good.
      • There have been several talks about contracts and the somewhat hidden complexities in them. C++ contracts are not like what you'd initially expect. Compiler switches can totally alter how contracts behave, from getting omitted, to reporting failures, to aborting the program. There is also an optional global callback for when a contract check fails.

        Different TUs can be compiled with different settings for the contract behavior. But can they be binary compatible? In general, no. If a function is declared inline in a header, the compiler may have generated two different versions with different contract behaviors, which violates the ODR.

        What happens if the contract check calls a helper function that throws an exception?

        The whole thing is really, really complex and I don't assume that I understand it properly. But I can see that there are some valid concerns against the feature as standardized, and that makes me side with the opposition here: this was not baked enough yet.

        • That sounds like the worst kind of misfeature.

          It sounds like it should solve your problem. At first it seems to work. Then you keep on finding the footguns after it is too late to change the design.

          • Contracts are designed as a minimal thing that can work. The different groups who want different - conflicting - things out of contracts now have a common place and syntax to start adding what they want, without coming up with something that either breaks someone else or, worse, each group doing things in a non-uniform way, thus causing footguns.

            Contracts as they are today won't solve every problem. However they can expand over time to solve more problems. (or at least that is the hope, time will tell - there is already a lot of discussion on what the others should be)

            • I think that a "minimal viable baseline" type implementation should not break the ODR.

              In Rust these types of proposals are common, in C++ less so. The incredibly tedious release process encourages everyone to put in just as much complexity as they can safely get away with.

        • Coroutines went through the same cycle. Standardized in C++20, and I still hit compiler-specific differences in how symmetric transfer gets lowered.
    • I wonder if C++ already has so much complexity, that it would actually be a good idea to ignore feature creep, and implement any feature with even the most remote use-case.

      It sounds (and probably is) insane. But if a feature doesn't break backwards compatibility, and can be implemented in a way that doesn't noticeably affect compiler/IDE performance for codebases that ignore it, what's the issue? Specifically, what significant new issues would it cause that C++’s existing bloat hasn’t?

      C++20 isn't fully implemented in any one compiler (https://en.cppreference.com/w/cpp/compiler_support.html#C.2B...).

      • GCC and MSVC are pretty close. fyi, the tables on cppreference are rather outdated at this point. I made a more up-to-date, community-maintained site: https://cppstat.dev/?conformance=cpp20
        • wow, that's weird. One would think that updating the reference table is something a team or individual - who just spent a lot of time and effort on implementing a feature - would also do.
          • For a while now cppreference.com has been in "temporary read-only mode" in which it isn't updated. Eventually I expect a "temporary" replacement will dominate and eventually it won't be "temporary" after all. Remember when some of Britain's North American colonies announced they were declaring independence? Yeah me either, but at the time I expect some people figured hey, we send a bunch of troops, burn down some stuff, by Xmas we'll have our colonies back.
        • Why does this need access to all my repositories just to generate a PR?
    • >to a language which has already surpassed its complexity budget

      I've been thinking that way for many years now, but clearly I've been wrong. Perhaps C++ is the one language to which the issue of excess complexity does not apply.

      • In essence, standards committees think like bureaucrats. They have little to no incentive to get rid of cruft, and only piling on new stuff is rewarded.
        • In D, we are implementing editions so features that didn't prove effective can be removed.
          • Yeah dude but you've really marketed D poorly. I remember looking at D what must be 15 years back or so? And I loved the language and was blown away by its beauty and cool features. But having no FOSS compiler and the looming threat of someone claiming a patent (back then it was unclear that Mono/C# was "legal" and even Java hung in the balance) was too scary for me to touch it.

            Now I'm old and I believe D has missed its opportunity. Kinda sad.

          • I don't know what you mean by effective - I can come up with several different/conflicting definitions in this context.

            I think what you meant to say is popular. If a feature is popular it doesn't matter how bad it turns out in hindsight: you can't remove it without breaking too much code (you can slowly deprecate it over time; I'm not sure how you handle deprecation in D, so perhaps that is what editions give you). However, if a great feature turns out not to be used, you can remove it (presumably to replace it with a better version that you hope people will use this time, possibly reusing the old syntax in a slightly incompatible way).

        • The scheme folks managed to shed complexity between R6RS and R7RS, I believe.

          So perhaps I think the issue is not committees per se, but how the committees are put together and what are the driving values.

          • Notably they didn't fully shed it, they compartmentalized it. They proposed to split the standard into two parts: r7rs-small, the more minimal subset closer in spirit to r5rs and missing a lot of stuff from r6rs, and r7rs-large, which would contain all of r6rs plus everyone's wildest feature dreams as well as the kitchen sink.

            It worked remarkably well. r7rs-small was done in 2013 and is enjoyed by many. The large variant is still not done and may never be done. That's no problem though, the important point was that it created a place to point people with ideas to instead of outright telling them "no".

    • Can you share what aspects of the design you (and Stroustrup) aren't happy with? Stroustrup has a tendency of being proven right, with a one-to-three decade lag.
      • Certainly we can say that Bjarne will insist he was right decades later. We can't necessarily guess - at the time - what it is he will have "always" believed decades later though.
        • You made me laugh!...Bjarne indeed can't be accused of being a modest man. And by some accounts, he's quite a political animal.

          But in fairness, when was D&E first published? He argued for auto there, long before its acceptance. He argued for implicit template instantiation - thank god the "everything-must-be-explicit" curmudgeons were vanquished there, too.

          He's got a pretty good batting average - certainly better than Herb Sutter.

      • Well, that's not always true. Initializer lists are a glaring example. So are integer promotions and some other things.
        • Integer promotion? - Stroustrup pleaded C source compatibility, else the language would have been stillborn.

          Initializer lists suck mainly because of C source compat constraints, too. In fact, most things that suck in C++ came from B via C.

    • I mean... it's C++. The complexity budget is like the US government's debt ceiling.
    • Has any project ever tried to quantify a “complexity budget” and stick to it?

      I’m fascinated by the concept of deciding how much complexity (to a human) a feature has. And then the political process of deciding what to remove when everyone agrees something new needs to be accepted.

    • Geez if Bjarne thinks it's

      > bloated committee design and also incomplete

      That's truly in that backdoor alley catching fire

    • > I am somewhat dismayed that contracts were accepted. It feels like piling on ever more complexity to a language which has already surpassed its complexity budget, and given that the feature comes with its own set of footguns I'm not sure that it is justified.

      I don't think this opinion is well informed. Contracts are a killer feature that allows implementing static code analysis that covers error handling and verifiable correct state. This comes for free in components you consume in your code.

      https://herbsutter.com/2018/07/02/trip-report-summer-iso-c-s...

      Asserting that no one wants their code to correctly handle errors is a bold claim.

      • Contracts aren't for handling errors. That blog post is extremely out of date, and doesn't reflect the current state of contracts

        Modern C++ contracts are being sold as being purely for debugging. You can't rely on contracts like an assert to catch problems, which is an intentional part of the design of contracts

    • > So go back about one year, and we could vote about it before it got into the standard, and some of us voted no. Now we have a much harder problem. This is part of the standard proposal.

      Offtopic, but this is a problem in the web world, too. Once something is on a standards track, there are almost no mechanisms to vote "no, this is bad, we don't need this". The only way is to "champion" a proposal and add fixes to it until people are somewhat reasonably happy and a consensus is reached. (see https://x.com/Rich_Harris/status/1841605646128460111)

    • Without a significant amount of needed context that quote just sounds like some awkward rambling.

      Also almost every feature added to C++ adds a great deal of complexity, everything from modules, concepts, ranges, coroutines... I mean it's been 6 years since these have been standardized and all the main compilers still have major issues in terms of bugs and quality of implementation issues.

      I can hardly think of any major feature added to the language that didn't introduce a great deal of footguns, unintended consequences, significant compilation performance issues... to single out contracts is unusual to say the least.

  • As more of a C# and Java guy, I'm curious to understand something - what sort of apps do folks here build? I am very interested to hear what problems get solved with these languages today. I know there must be many use-cases, but I don't hear about them too much.
  • The "erroneous behavior" redefinition for reads of uninitialized variables is really interesting: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p27...

    It does have a runtime cost. There's an attribute to force undefined behavior on read again and avoid the cost:

        int x [[indeterminate]];
        std::cin >> x;
    • D initializes all variables. If you don't provide an initializer, the compiler inserts the default initializer for it.

      But if you really, really want to leave it uninitialized, write:

          int x = void;
      
      where you're not writing that by accident.
      • > If you don't provide an initializer, the compiler inserts the default initializer for it.

        This requires that there is a default. Several modern languages (such as Go) insist on this, it means now your types don't even model reality in this very fundamental way. Who is a person's default spouse? Even where you can imagine a default it's sometimes undesirable to have one, for example we already live in a society where somebody decided default gender is male - and it might look too much like real data, default birthday being 1 January also matches hundreds of thousands of Americans...

        The most likely place you go after "Everything has a default" is the billion dollar mistake because you're inclined to just incorporate "or it's default invalid" into the type definition to get your default, and when you do that everywhere needs to have "check it's valid" code added, even if we already just checked that a moment ago.

        • I don’t care that much about everything having a default (although it’s nice), but if a language insists on a default value for every type for safety, can’t you just use std::optional?
          • I can't tell if you imagine std::optional is a value (it is not) or if you know it's a templated type but you imagine that somehow it would be OK to redefine all programs so that every type is std::optional<T> of that type instead so as to simplify initialization.

            Either way no, that can't work.

        • I often see arguments like yours. I reject them wholeheartedly. Your argument is pro-poor-design. I tell you: design your software better. Design your software so that you can't have undefined behavior. It's harder, yes. LLMs suck at it, yes. But building well-designed software is a significant part of being a better engineer.
          • It is easier to design the software so that you don't have confusing behavior when you're not required to include behaviors you don't want. Most things do not need to be nullable. Requiring all things to have a zero value, even when they do not have one, makes it harder to be correct by construction, not easier.
      • That is a way better syntax. I wonder why C++ didn't adopt it.
        • Because you can't adopt that syntax after the fact. There are 30 years of C++ in the real world; initializing everything by default unless you opt out will break some performance-critical code that should not initialize everything (until it is updated manually - it has to be manual because tools are not smart enough to know where something was intentionally left uninitialized 100% of the time).

          Thus the current "erroneous behavior". It means this is no longer undefined (compilers used to optimize out code paths where an uninitialized value is read, and this did cause real-world bugs even when it didn't matter what value was read). It also means the compiler is free to put whatever value it wants there - one of the goals was that the various sanitizers that check for use of uninitialized values need to still work, since the vast majority of the time a read of an uninitialized value is a bug in the code.

          There are a lot of situations where a compiler cannot tell if a variable would be used uninitialized, so we can't rely on compiler warnings (it sometimes needs solving the halting problem).

          • > There are a lot of situations where a compiler cannot tell if a variable would be used uninitialized, so we can't rely on compiler warnings (it sometimes needs solving the halting problem).

            It's an explicit choice in C++ to always accept correct programs (the alternative being to always reject incorrect programs†). The committee does not have to stick by this bad decision in each C++ version; of course they aren't likely to stop making the same bad choice, but it is possible to do so.

            If you're allowed to take the other side, you can of course (Rust and several other languages do this) reject programs where the compiler isn't satisfied that you definitely always initialize the variable before its value is needed. Most obviously (but it's pretty annoying, so Rust does not do this) you could insist on the initialization as part of the variable definition in the actual syntax.

            † You can't have both, by Rice's Theorem, Henry Rice got his PhD for figuring out how to prove this, last century, long before C++ was conceived. So you must pick, one or the other.

          • > there is 30 years of C++ in the real world, initializing everything by default unless you opt-in will break some performance critical code that should not initialize everything

            ...But the change to EB in this case does initialize everything by default?

            • No it doesn't. It says the value is unspecified but it exists. Sometimes some compilers did initialize everything before (this was common in debug builds). Some of them will in the future, but most won't do anything different.

              The only difference is that some optimizers used to eliminate code paths where they could prove that path would read an uninitialized variable - causing a lot of weird bugs in the real world.

              • > It says the value is unspecified but it exists.

                The precise value is not specified, but whatever value is picked also has to be something that isn't tied to the state of the program so some kind of initialization needs to take place.

                Furthermore, the proposal explicitly states that (some) variables are initialized by default:

                > Default-initialization of an automatic-storage object initializes the object with a fixed value defined by the implementation

                > The automatic storage for an automatic variable is always fully initialized, which has potential performance implications.

          • I don't understand its claim of a "self-documentation trap".

            I'm surprised the "= void;" wasn't discussed. People liked it immediately in D, and other alternatives were not proposed.

            • The syntax is probably fine but I feel that the default kind of sucks; default initialization has mostly fallen out of favor these days.
    • On a quick read of the paper, I see two surprising things:

      1. If there’s no initializer and various conditions are met, then “the bytes have erroneous values, where each value is determined by the implementation independently of the state of the program.”

      What does “independently” mean? Are we talking about all zeros? Is the implementation not permitted to use whatever arbitrary value was in memory? Why not?

      2. What’s up with [[indeterminate]]? I would expect “indeterminate” to mean that the variable has a value that happens to be arbitrary (and may contain sensitive data, etc), not that it turns back into actual UB.

      • > What does “independently” mean?

        It can pick whatever value it wants and doesn't have to care what the program is doing.

        Also the value has to stay the same until it's 'replaced'.

        > Are we talking about all zeros?

        It might be, but probably won't be. What makes you bring up all zeroes?

        > Is the implementation not permitted to use whatever arbitrary value was in memory? Why not?

        (Edit: probably wrong, also affects other things I said) It can. What suggests it wouldn't be able to?

        > 2. What’s up with [[indeterminate]]? I would expect “indeterminate” to mean that the variable has a value that happens to be arbitrary (and may contain sensitive data, etc), not that it turns back into actual UB.

        "has a value that happens to be arbitary" would be the default without [[indeterminate]]. Well, it can also error out if the compiler wants to do that.

        • > It can. What suggests it wouldn't be able to?

          "Whatever value was in memory" would be depending on the (former?) state of the program, wouldn't it?

          • If that's what they're going for, it's way too much weight to hang on a single vague word like that. Trying to define "state of the program" in a detailed way sounds nightmarish. Let's say I'm the implementation. If I go get fresh (but not zeroed) memory from the OS to put my stack on, the garbage in there isn't state of the program, right? If I then run a function and the function exits, is the garbage now state of the program, or is it outside the state of the program? If I want a fixed init value per address, is that allowed as a hardening feature or disallowed as being based on allocation patterns? Does the as-if rule apply, so I'm fine if the program can't know for sure where I got my arbitrary byte values from?

            And would that mean there's still no way to say "Don't waste time initializing it, but don't do any UB shenanigans either. (Basically, pretend it was initialized by a random number generator.)"

            • > Let's say I'm the implementation. If I go get fresh (but not zeroed) memory from the OS to put my stack on, the garbage in there isn't state of the program, right?

              I'd argue that once you get the memory it's now part of the state of your program, which precludes it from being involved in whatever value you end up reading from the variable(s) corresponding to that memory.

              > If I want a fixed init value per address, is that allowed as a hardening feature or disallowed as being based on allocation patterns?

              I'd guess that that specific implementation would be disallowed, but as I'm an internet nobody I'd take that with an appropriately-sized grain of salt.

              > And would that mean there's still no way to say "Don't waste time initializing it, but don't do any UB shenanigans either. (Basically, pretend it was initialized by a random number generator.)"

              I feel like you'd need something like LLVM's `freeze` intrinsic for that kind of functionality.

      • > What does “independently” mean?

        It means what it says on the tin. Whatever value ends up being used must not depend on the state of the program.

        > Are we talking about all zeros?

        All zeros is an option, but the intent is to allow the implementation to pick other values as it sees fit:

        > Note that we do not want to mandate that the specific value actually be zero (like P2723R1 does), since we consider it valuable to allow implementations to use different “poison” values in different build modes. Different choices are conceivable here. A fixed value is more predictable, but also prevents useful debugging hints, and poses a greater risk of being deliberately relied upon by programmers.

        > Is the implementation not permitted to use whatever arbitrary value was in memory?

        No, because the value in such a case can depend on the state of the program.

        > Why not?

        Doing so would defeat the purpose of the change, which is to turn nasal-demons-on-mistake into something with less dire consequences:

        > In other words, it is still "wrong" to read an uninitialized value, but if you do read it and the implementation does not otherwise stop you, you get some specific value. In general, implementations must exhibit the defined behaviour, at least up until a diagnostic is issued (if ever). There is no risk of running into the consequences associated with undefined behaviour (e.g. executing instructions not reflected in the source code, time-travel optimisations) when executing erroneous behaviour.

        > What’s up with [[indeterminate]]?

        The idea is to provide a way to opt into the old full-UB behavior if you can't afford the cost of the new behavior.

        > I would expect “indeterminate” to mean that the variable has a value that happens to be arbitrary (and may contain sensitive data, etc), not that it turns back into actual UB.

        I believe the spelling matches how the term was used in previous standards. For example, from the C++23 standard [0] (italics in original):

        > When storage for an object with automatic or dynamic storage duration is obtained, the object has an indeterminate value, and if no initialization is performed for the object, that object retains an indeterminate value until that value is replaced.

        [0]: https://open-std.org/JTC1/SC22/WG21/docs/papers/2023/n4950.p...

    • Hm, I wonder if this will be a compiler flag too. Probably yes, since some projects would prefer to init all variables by hand anyway.
    • I pity the person who will walk into a C++ codebase and see "[[indeterminate]]" in some place. They will need to absolutely waste their time searching for and reading up on what "[[indeterminate]]" means. Or over time they will just learn to ignore this crap and mentally filter it out when looking at code.

      Just like when I was learning rust and trying to read some http code but it was impossible because each function had 5 generics and 2 traits.

      • What is non-obvious about “[[indeterminate]]”? That terminology has been used throughout the standards in exactly this context for ages. This just makes it explicit instead of implicit so that the compiler can know your intent.
        • I know roughly what "indeterminate" means in English, but it is not obvious to me what it means when I see something like this in code.

          So I would have to look it up and be very careful about it since I can break something easily in C++.

          This just makes things more difficult from the perspective of using/learning the language.

          Similar problem with the "unsequenced" and "reproducible" attributes added in C. It sounded cool after I took the time to learn exactly (/s) what it means. But it is not worth the time to learn it. And it is not worth the cognitive load it will put on the people who read the code later, imo.

          • I wonder if you're fine with const, constexpr and volatile also being things. I mean, "const" really doesn't mean what one would naively think (that's what "constexpr" is actually for) and the semantics of "volatile" are also widely misunderstood.
            • Nope, not only is C++ const not a constant, C++ constexpr isn't a constant either, and C++ constinit isn't a constant, C++ consteval is closest, but it's only available for functions.

                  const int a = 10; // Just an immutable variable named a
                  constexpr int b = 20; // Still an immutable variable named b
                  static constinit int c = 30; // Now it isn't even immutable
              
              For functions, const says the function promises not to change things; constexpr says the function is shiny and modern and has no other real meaning (hence the "constexpr all the things" memes, you might as well); but consteval does mean we're promising this must always be evaluated at compile time, so evaluation is finished before runtime. However, only a function can have this label.
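
              A small sketch of the difference for functions (my own example; needs C++20 for consteval):

              ```cpp
              #include <cstdio>

              constexpr int twice(int x) { return x * 2; }   // compile time OR run time
              consteval int thrice(int x) { return x * 3; }  // compile time only

              int main() {
                  int runtime_value = 5;
                  int r = twice(runtime_value);  // constexpr falls back to run time
                  constexpr int a = twice(21);   // forced compile-time evaluation
                  static_assert(a == 42);
                  constexpr int b = thrice(10);  // consteval: no run-time fallback
                  static_assert(b == 30);
                  // thrice(runtime_value);      // would not compile: not a constant
                  std::printf("%d %d %d\n", r, a, b);
                  return 0;
              }
              ```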

              Volatile is a mess because what you actually want are volatile intrinsics, and indeed you might want more (or fewer) depending on the target. If your target can do single-bit hardware writes, it'd be nice to provide an intrinsic for that, rather than hoping you can write `REG |= 0x40` in code and have it emit a single-bit write; on platforms without that feature, it compiles to an unsynchronized read-modify-write, which may cause problems. However, instead of having intrinsics, C's volatile was hacked into the type system, and C++ tries to keep that.
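
              A sketch of that read-modify-write hazard (my own example; `set_bit` and the fake register are stand-ins, not real intrinsics or hardware):

              ```cpp
              #include <cstdint>
              #include <cstdio>

              // What `REG |= 0x40` really is on a memory-mapped register: a
              // volatile load, a plain OR, and a volatile store. Three separate
              // steps, not one atomic bit-set, unless the target offers a
              // dedicated single-bit-write instruction.
              void set_bit(volatile std::uint32_t* reg) {
                  std::uint32_t v = *reg;  // volatile load
                  v |= 0x40;               // modify in an ordinary register
                  *reg = v;                // volatile store; an interrupt firing
                                           // between load and store can have its
                                           // own update silently overwritten
              }

              int main() {
                  std::uint32_t fake_reg = 0;  // plain variable standing in for hardware
                  set_bit(&fake_reg);
                  std::printf("%#x\n", fake_reg);  // 0x40
                  return 0;
              }
              ```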

              • groans See, and that's why I'm personally fine with [[indeterminate]], etc.: all of this is already a finely split hairy mess, and I'd rather not see even more keywords introduced if we can just use attributes instead.

                And yeah, it would probably be nice to also have some sane intrinsics to provide memory_order_consume semantics... but what can you do.

                • Consume is dead. Long live acquire!
                • Const, constexpr etc. Are mandatory to understand at this point. That situation doesn’t justify adding more things imo
              • spot on... that difference between evaluation and storage is exactly why C++ is so hard to keep in my head

                I thought constexpr was a hard physical constant, but in reality it's a weird hybrid

                this visualisation helped me to wrap my head around it - https://vectree.io/c/c-constness-and-evaluation-qualifiers

        • I mean there's a sibling comment that literally says that the word was chosen to be mysterious and make people look up what it means
  • This is awesome. I was a dev on the C++ team at MS in the 90s and was sure that RTTI was the closest the language would ever get to having a true reflection system.
  • Hosting the meeting in Croydon and not letting people leave until the thing is signed-off is definitely a cunning strategy. Never want to work down there again, ever.
  • > Second, conforming compiler and standard library implementations are coming quickly. Throughout the development of C++26, at any given point both GCC and Clang had already implemented two-thirds of C++26 features. Today, GCC already has reflection and contracts merged in trunk, awaiting release.

    How far is Clang on reflection and contracts?

  • Biggest open question is whether the small changes to the module system in this standard will actually lead to more widespread adoption
    • The best thing the C++ WG could do is to spend an entire release cycle working on modules and packaging.

      It's nice to have new features, but what is really killing C++ is Cargo. I don't think a new generation of developers are going to be inspired to learn a language where you can't simply `cargo add` whatever you need and instead have to go through hell to use a dependency.

      • To me, the most important feature of Cargo isn't even the dependency management but that I don't ever need to tell it which files to compile or where to find them. The fact that it knows to look for lib.rs or main.rs in src and then recursively find all my other modules, without me needing to specify targets or anything like that, is a killer feature on its own IMO.

        Over the past couple of years I've tried to clone and build a number of dotnet packages for various things, but for an ecosystem that's supposedly cross-platform, almost none of them seem to just work by default when I run `dotnet build`, and instead require at least some fixes in the various project files. I don't think I've ever had an issue with a Rust project, and it's hard not to feel like a big part of that is because there's not really much configuration to be done.

        The list of dependencies is just about the only thing in there that affects the default build; if there's any other configuration beyond that and the basic metadata (the name, the repo link, the license, etc.), it almost always ends up being specifically for alternate builds (like extra options for release builds, alternate features that can be compiled in, etc.).
        • > The fact that it knows to look for lib.rs or main.rs in src and then recursively find all my other modules without me needing to specify targets or anything like that is a killer feature on its own IMO.

          In the interest of pedantry, locating source files relative to the crate root is a language-level Rust feature, not something specific to Cargo. You can pass any single Rust source file directly to rustc (bypassing Cargo altogether) and it will treat it as a crate root and locate additional files as needed based on the normal lookup rules.

          • Interesting, I didn't realize this! I know that a "crate" is specifically the unit of compilation for rustc, but I assumed there was some magic in cargo that glued the modules together into a single AST rather than it being in rustc itself.

            That being said, I'd argue that the fact that this happens so transparently that people don't really need to know this to use Cargo correctly is somewhat the point I was making. Compared to something like cmake, the amount of effort to use it is at least an order of magnitude lower.

        • > I don't think I've ever had an issue with a Rust project, and it's hard not to feel like a big part of that is because there's not really much configuration to be done.

          For most crates, yes. But you might be surprised how many crates have a build.rs that is doing more complex stuff under the hood (generating code, setting environment variables, calling a C compiler, make or some other build system, etc). It just also almost always works flawlessly (and the script itself has a standardised name), so you don't notice most of the time.

            • True, but if anything, a build.rs is a lot easier for me to read and understand (or even modify) if needed, because I already know Rust. With something like cmake, the build configuration is an entirely separate language from the one I'm actually working in, and I haven't seen a project that doesn't have at least some amount of custom configuration in it. Starting up a cargo project literally doesn't require putting anything in the Cargo.toml that doesn't exist after you run `cargo new`.
            • Oh sure, build.rs is (typically) a great experience. My favourite example is Skia which is notoriously difficult to build, but relatively easy to build with the Rust bindings. My point was just that this isn't always because there's nothing complex going on, but because it still works well even though there sometimes are complex things going on!
        • Yep ... go/zig pkg management has the same benefit compared to c/c++.
        • But you are specifying source files, although indirectly, aren't you? That's what all those `mod blah` with a corresponding `blah.rs` file present in the correct location are.
        • For me the lack of dependency hell until I hit a c/c++ component somewhere in the build is the real winner.
      • I’m still surprised how people ignore Meson. Please test it :)

        https://mesonbuild.com/

        And Meson's awesome dependency handling:

        https://mesonbuild.com/Dependencies.html

        https://mesonbuild.com/Using-the-WrapDB.html#using-the-wrapd...

        https://nibblestew.blogspot.com/2026/02/c-and-c-dependencies...

        I suffered with Java through Ant, Maven and Gradle (the oldest is the best). After reading about GNU Autotools, I wondered why the C/C++ folks still suffer. Right at that time Meson appeared, and I skipped the suffering.

            * No XML
            * Simple to read and understand
            * Simple to manage dependencies
            * Simple to use options
        
        
        Feel free to extend WrapDB.
        • Meson is indeed nice, but has very poor support for GPU compilation compared to CMake. I've had a lot of success adopting the practices described in this talk, https://www.youtube.com/watch?v=K5Kg8TOTKjU. I thought I knew a lot of CMake, but file sets definitely make things a lot simpler.
        • It lacks the first party support cmake enjoys.
        • Meson merges the crappy state of C/C++ tooling with something like Cargo in the worst way possible: by forcing you to handle the complexity of both. Nothing about Meson is simple, unless you're using it in Rust, in which case you're better off with Cargo.

          In C++ you don't get lockfiles, you don't get automatic dependency install, you don't get local dependencies, there's no package registry, no version support, no dependency-wide feature flags (this is an incoherent mess in Meson), no notion of workspaces, etc.

          Compared to Cargo, Meson isn't even in the same galaxy. And even compared to CMake, Meson is yet another incompatible incremental "improvement" that offers basically nothing other than cute syntax (which in an era when AI writes all of your build system anyway, doesn't even matter). I'd much rather just pick CMake and move on.

        • Build system generators (like Meson, autotools, CMake or any other one) can't solve programming language module and packaging problems, even in principle. So, it's not clear what your argument is here.

          > I’m still surprised how people ignore Meson. Please test it :)

          I did just that a few years ago and found it rather inconvenient and inflexible, so I went back to ignoring it. But YMMV I suppose.

          > After reading about GNU Autotools

          Consider Kitware's CMake.

      • Agreed, arcane CMake configs and/or Bash build scripts are genuinely off-putting. Also, the C++ "equivalents" of Cargo, which AFAIK are Conan and vcpkg, are not defaults and required much more configuring in comparison with Cargo. At least this was my experience a few years ago.
        • It's fundamentally different; Rust entirely rejects the notion of a stable ABI, and simply builds everything from source.

          C and C++ are usually stuck in that antiquated thinking that you should build a module, package it into some libraries, install/export the library binaries and associated assets, then import those in other projects. That makes everything slow, inefficient, and wildly dangerous.

          There are of course good ways of building C++, but those are the exception rather than the standard.

          • "Stable ABI" is a joke in C++ because you can't keep ABI and change the implementation of a templated function, which blocks improvements to the standard library.

            In C, ABI = API because the declaration of a function contains the name and arguments, which is all the info needed to use it. You can swap out the definition without affecting callers.

            That's why Rust allows a stable C-style ABI; the definition of a function declared in C doesn't have to be in C!

            But in a C++-style templated function, the caller needs access to the definition to do template substitution. If you change the definition, you need to recompile calling code i.e. ABI breakage.

            If you don't recompile calling code and link with other libraries that are using the new definition, you'll violate the one-definition rule (ODR).

            This is bad because duplicate template functions are pruned at link-time for size reasons. So it's a mystery as to what definition you'll get. Your code will break in mysterious ways.

            This means the C++ committee can never change the implementation of a standardized templated class or function. The only time they did was a minor optimization to std::string in 2011 and it was such a catastrophe they never did it again.

            That is why Rust will not support stable ABIs for any of its features relying on generic types. It is impossible to keep the ABI stable and optimize an implementation.
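
            A sketch of why callers get baked in (hypothetical header and function name, not a real library):

            ```cpp
            #include <cstdio>

            // widget.hpp (inlined here): a templated function shipped in a
            // library header. Every caller that instantiates it compiles this
            // body into its own translation unit; the linker later deduplicates
            // the resulting copies.
            template <typename T>
            T clamp_to_byte(T v) {
                return v < T(0) ? T(0) : (v > T(255) ? T(255) : v);
            }

            int main() {
                // This TU now carries its own instantiation of clamp_to_byte<int>.
                // If a second TU were compiled against a *changed* version of the
                // header and linked in, the program would violate the ODR: the
                // linker keeps one arbitrary copy of the duplicate instantiations
                // and discards the rest.
                std::printf("%d %d %d\n",
                            clamp_to_byte(-5), clamp_to_byte(100), clamp_to_byte(999));
                return 0;
            }
            ```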

          • It's not true that Rust rejects "the notion of a stable ABI". Rust rejects the C++ solution of "freeze everything and hope" because it's a disaster: it's less stable than some customers hoped, and yet it's frozen in practice, so it disappoints others. Rust says an ABI should be a promise by a developer, the way its existing C ABI is, that you can explicitly make or not make.

            Rust is interested in having a properly thought out ABI that's nicer than the C ABI which it supports today. It'd be nice to have say, ABI for slices for example. But "freeze everything and hope" isn't that, it means every user of your language into the unforeseeable future has to pay for every mistake made by the language designers, and that's already a sizeable price for C++ to pay, "ABI: Now or never" spells some of that out and we don't want to join them.

            • > It'd be nice to have say, ABI for slices for example.

              The de-facto ABI for slices involves passing/storing pointer and length separately and rebuilding the slice locally. It's hard to do better than that other than by somehow standardizing a "slice" binary representation across C and C-like languages. And then you'll still have to deal with existing legacy code that doesn't agree with that strict representation.
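
              The pattern looks roughly like this (my own sketch; `sum_i32` is a hypothetical function, not part of any standardized ABI):

              ```cpp
              #include <cstddef>
              #include <cstdio>

              // The de-facto "slice" ABI: no slice type crosses the boundary at
              // all. The caller decomposes its slice into a raw pointer plus an
              // element count, and the callee rebuilds whatever span/slice/view
              // type it uses internally.
              extern "C" long sum_i32(const int* data, std::size_t len) {
                  long total = 0;
                  for (std::size_t i = 0; i < len; ++i)
                      total += data[i];
                  return total;
              }

              int main() {
                  int xs[] = {1, 2, 3, 4};
                  std::printf("%ld\n", sum_i32(xs, 4));  // 10
                  return 0;
              }
              ```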

            • If Rust makes no progress towards choosing an ABI and decides that freezing things is bad, then Rust is de facto rejecting the notion of a stable ABI.
              • Rust is just a bit less than 11 years old; C++ was 13 years old when it screwed up the std::string ABI, so I think Rust has a few years yet to do less badly.

                Obviously it's easier to provide a stable ABI for, say, &'static [T] (a reference which lives forever to an immutable slice of T) or Option<NonZeroU32> (either a positive 32-bit unsigned integer, or nothing) than for String (amortized growable UTF-8 text) or File (an open file somewhere on the filesystem, whatever that means), and it will never be practical to provide some sort of "stable ABI" for arbitrary things like IntoIterator. But that's exactly why the C++ choice was a bad idea.

                In practice, of course, the internal guts of things in C++ are not frozen; that would be a nightmare for maintenance teams. But in theory there should be no observable effect from such changes, so that discrepancy leads to endless bugs where a user found some obscure way to depend on what you'd hidden inside some implementation detail. The letter of the ISO document says your change is fine, but the practice of C++ development says it is a breaking change, and the resulting engineering overhead at C++ vendors is made even worse by all the UB in real C++ software.

                This is the real reason libc++ still shipped Quicksort as its unstable sort when Biden was President, many years after this was in theory prohibited by the ISO standard.† Fixing the sort breaks people's code, and they'd rather it was technically faulty and practically slower than have their crap code stop working.

                † Tony's Quicksort algorithm on its own is worse than O(n log n) for some inputs, you should use an introspective comparison sort aka introsort here, those existed almost 30 years ago but C++ only began to require them in 2011.

          • > C and C++ are usually stuck in that antiquated thinking that you should build a module, package it into some libraries, install/export the library binaries and associated assets, then import those in other projects. That makes everything slow, inefficient, and widely dangerous.

            It seems to me the "convenient" options are the dangerous ones.

            The traditional method is for third party code to have a stable API. Newer versions add functions or fix bugs but existing functions continue to work as before. API mistakes get deprecated and alternatives offered but newly-deprecated functions remain available for 10+ years. With the result that you can link all applications against any sufficiently recent version of the library, e.g. the latest stable release, which can then be installed via the system package manager and have a manageable maintenance burden because only one version needs to be maintained.

            Language package managers have a tendency to facilitate breaking changes. You "don't have to worry" about removing functions without deprecating them because anyone can just pull in the older version of the code. Except the older version is no longer maintained.

            Then you're using a version of the code from a few years ago because you didn't need any of the newer features and it hadn't had any problems, until it picks up a CVE. Suddenly you have vulnerable code running in production but fixing it isn't just a matter of "apt upgrade" because no one else is going to patch the version only you were using, and the current version has several breaking changes so you can't switch to it until you integrate them into your code.

            • This is all wishful thinking disconnected from practicalities.

              First you confuse API and ABI.

              Second there is no practical difference between first and third-party for any sufficiently complex project.

              Third you cannot have multiple versions of the same thing in the same program without very careful isolation and engineering. It's a bad idea and a recipe for ODR violations.

              In any non-trivial project there will be complex dependency webs across different files and subprojects, and humans are notoriously bad at packaging pieces of code into sensible modules, libraries or packages, with well-defined and maintained boundaries. Being able to maintain ABI compatibility, deprecating things while introducing replacement etc. is a massive engineering work and simply makes people much less likely to change the way things are done, even if they are broken or not ideal. That's an effort you'll do for a kernel (and only on specific boundaries) but not for the average program.

              • > First you confuse API and ABI.

                I'm not confusing API with ABI. If you don't have a stable ABI then you essentially forfeit the traditional method of having every program on the system use the same copy (and therefore version) of that library, which in turn encourages them to each use a different version and facilitates API instability by making the bad thing easier.

                > Second there is no practical difference between first and third-party for any sufficiently complex project.

                Even when you have a large project, making use of curl or sqlite or openssl does not imply that you would like to start maintaining a private fork.

                There are also many projects that are not large enough to absorb the maintenance burden of all of their external dependencies.

                > Third you cannot have multiple versions of the same thing in the same program without very careful isolation and engineering.

                Which is all the more reason to encourage every program on the system to use the same copy by maintaining a stable ABI. What do you do after you've encouraged everyone to include their own copy of their dependencies and therefore not care if there are many other incompatible versions, and then two of your dependencies each require a different version of a third?

                > In any non-trivial project there will be complex dependency webs across different files and subprojects, and humans are notoriously bad at packaging pieces of code into sensible modules, libraries or packages, with well-defined and maintained boundaries.

                This feels like arguing that people are bad at writing documentation so we should reduce their incentive to write it, instead of coming up with ways to make doing the good thing easier.

          • I would suggest importing binaries and metadata is going to be faster than compiling all the source for that.
            • You'd be wrong. If the build system has full knowledge on how to build the whole thing, it can do a much better job. Caching the outputs of the build is trivial.

              If you import some ready made binaries, you have no way to guarantee they are compatible with the rest of your build or contain the features you need. If anything needs updating and you actually bother to do it for correctness (most would just hope it's compatible) your only option is usually to rebuild the whole thing, even if your usage only needed one file.

          • "That makes everything slow, inefficient, and widely dangerous."

            There is nothing faster or more efficient than building C programs. I'm also not sure what is dangerous about having libraries. C++ is quite different, though.

            • Of course there is. Raw machine code is the gold standard, and everything else is an attempt to achieve _something_ at the cost of performance, C included, and that's even when considering whole-program optimization and ignoring the overhead introduced by libraries. Other languages with better semantics frequently outperform C (slightly) because the compiler is able to assume more things about the data and instructions being manipulated, generating tighter optimizations.
              • I was talking about building code, not run-time. But regarding run-time: no other language outperforms C in practice. Your argument about "better semantics" has a grain of truth in it, but it does not apply to any existing language I know of; at least not to Rust, which in practice is for the most part still slower than C.
            • ODR violations are very easy to trigger unless you build the whole thing from source, and are ill-formed, no diagnostic required (worse than UB).
              • Neither "ODR violations" nor IFNDR exist in C. Incompatibility across translation units can cause undefined behavior in C, but this can easily be avoided.
                • C simply has less wording for it because less work has been put into it.

                  The same problems exist.

                  • The ODR problem is much more benign in C. Undefined behavior at translation time (~ IFNDR) still exists in C but for C2y we have removed most of it already.
          • >There are of course good ways of building C++, but those are the exception rather than the standard.

            What are the good ways?

            • "Do not do it" looks like the winning one nowadays.
            • Build everything from source within a single unified workspace, cache whatever artifacts were already built with content-addressable storage so that you don't need to build them again.

                You should also avoid libraries, as they reduce granularity and needlessly complicate the logic.

              I'd also argue you shouldn't have any kind of declaration of dependencies and simply deduce them transparently based on what the code includes, with some logic to map header to implementation files.

              • The problem is doing this requires a team to support it that is realistically as large as your average product team. I know Bazel is the solution here but as someone who has used C++, modified build systems and maintained CI for teams for years, I have never gotten it to work for anything more than a toy project.
                • I have several times built my own system to do just that when it wasn't even my main job. Doesn't take more than a couple of days.

                  Bazel is certainly not the solution; it's arguably closer to being the problem. The worst build system I have ever seen was Bazel-based.

                  • > I have several times built my own system to do just that when it wasn't even my main job. Doesn't take more than a couple of days.

                    Really? I'd love a link to even something that works as a toy project

                    > Bazel is certainly not the solution; it's arguably closer to being the problem. The worst build system I have ever seen was Bazel-based.

                    I agree

                    • It usually ends up somewhat non-generic, with project-specific decisions hardcoded rather than specified in a config file.

                      I usually make it so that it's fully integrated with wherever we store artifacts (for CAS), source (to download specific revisions as needed), remote running (which depending on the shop can be local, docker, ssh, kubernetes, ...), GDB, IDEs... All that stuff takes more work for a truly generic solution, and it's generally more valuable to have tight integration for the one workflow you actually use.

                      Since I also control the build image and toolchain (that I build from source) it also ends up specifically tied to that too.

                      In practice, I find that regardless of what generic tool you use like cmake or bazel, you end up layering your own build system and workflow scripts on top of those tools anyway. At some point I decided the complexity and overhead of building on top of bazel was more trouble than it was worth, while building it from scratch is actually quite easy and gives you all the control you could possibly need.

                      • This is all great, but it doesn’t sound simple or like 200 lines of code.
              • >Build everything from source within a single unified workspace, cache whatever artifacts were already built with content-addressable storage so that you don't need to build them again.

                Which tool do you use for content-addressable storage in your builds?

                >You should also avoid libraries, as they reduce granularity and needlessly complexify the logic.

                This isn't always feasible though.

                What's the best practice when one cannot avoid a library?

                • You can use S3 or equivalent; a normal filesystem (networked or not) also works well.

                  You hash all the inputs that go into building foo.cpp, and then that gives you /objs/<hash>.o. If it exists, you use it; if not, you build it first. Then if any other .cpp file ever includes foo.hpp (directly or indirectly), you mark that it needs to link /objs/<hash>.o.

                  You expand the link requirements transitively, and you have a build system. 200 lines of code. Your code is self-describing and you never need to write any build logic again, and your build system is reliable, strictly builds only what it needs while sharing artifacts across the team, and never leads to ODR violations.
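
                  A toy sketch of that hashing step (hypothetical `/objs/` store and names; std::hash stands in for a real digest such as SHA-256 over the preprocessed source, flags, and toolchain):

                  ```cpp
                  #include <functional>
                  #include <iostream>
                  #include <sstream>
                  #include <string>

                  // Map "everything that affects compiling foo.cpp" to a
                  // content-addressed object path in a shared artifact store.
                  std::string object_path(const std::string& preprocessed_source,
                                          const std::string& flags,
                                          const std::string& toolchain_id) {
                      std::size_t h = std::hash<std::string>{}(
                          preprocessed_source + '\x1f' + flags + '\x1f' + toolchain_id);
                      std::ostringstream path;
                      path << "/objs/" << std::hex << h << ".o";
                      return path.str();  // if this file exists, skip the compile
                  }

                  int main() {
                      std::string src = "int add(int a, int b) { return a + b; }\n";
                      bool hit  = object_path(src, "-O2", "gcc-15")
                               == object_path(src, "-O2", "gcc-15");
                      bool miss = object_path(src, "-O2", "gcc-15")
                               != object_path(src, "-O0", "gcc-15");
                      // identical inputs reuse the artifact; changed flags don't
                      std::cout << hit << miss << "\n";
                      return 0;
                  }
                  ```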

      • I may be in the minority but I like that C++ has multiple package managers, as you can use whichever one best fits your use case, or none at all if your code is simple enough.

        It's the same with compilers, there's not one single implementation which is the compiler, and the ecosystem of compilers makes things more interesting.

        • lmm
          Multiple package managers is fine, what's needed is a common repository standard (or even any repository functionality at all). Look at how it works in Java land, where if you don't want to use Maven you can use Gradle or Bazel or what have you, or if you hate yourself you can use Ant+Ivy, but all of them share the same concept of what a dependency is and can use the same repositories.
          • Also, having one standard packaging format and registry doesn't preclude having alternatives for special use cases.

            There should be a happy path for the majority of C++ use cases so that I can make a package, publish it and consume other people's packages. Anyone who wants to leave that happy path can do so freely at their own risk.

            The important thing is to get one system blessed as The C++ Package Format by the standard to avoid xkcd 927 issues.

            • In the Linux world and even Haiku, there is a standard package dependency format, so dependencies aren’t really a problem. Even OSX has Homebrew. Windows is the odd man out.
              • Are you talking about system/application dependencies for installed applications or programming dependencies like compiled libraries and header files?
            • That would actually be pretty cool. Though I think there might have been papers written on this a few years ago. Does anyone know of these or have any updates about them?
              • CPS[1] is where all the effort is currently going for a C++ packaging standard. CMake shipped it in 4.3 and Meson is working on it. The pkgconf maintainer said they have vague plans to support it at some point.

                There's no current effort to standardize what a package registry is or how build frontends and backends communicate (a la PEP 517/518), though it's a constant topic of discussion.

                [1]: https://github.com/cps-org/cps

      • In my experience, no one does build systems right; Cargo included.

        The standard was initially meant to standardize existing practice. There is no good existing practice. Very large institutions depending heavily on C++ systematically fail to manage the build properly despite large amounts of third party licenses and dedicated build teams.

        With AI, how you build and integrate together fragmented code bases is even more important, but someone has yet to design a real industry-wide solution.

        • Speedy convenience beats absolute correctness any day. Humans are not immortal and have a finite amount of time for life and work. If convenience didn't matter, we would all still be coding in assembly or toggling hardware switches.
          • C++ builds are extremely slow because they are not correct.

            I'm doing a migration of a large codebase from local builds to remote execution and I constantly have bugs with mystery shared library dependencies implicitly pulled from the environment.

            This is extremely tricky because if you run an executable without its shared library, you get "file not found" with no explanation. Even AI doesn't understand this error.

            • The dynamic linker can clearly tell you where it looks for files and in which order, and where it finds them if it does.

              You can also very easily harden this if you somehow don't want to capture libraries from outside certain paths.

              You can even build the compiler in such a way that every binary it produces has a built-in RPATH if you want to force certain locations.

              • That is what I'm doing so I can get distributed builds working. It sucks and has taken me days of work.
                • It's pretty simple and works reliably as specified.

                  I can only infer that your lack of familiarity was what made it take so long.

                  Rebuilding GCC with specs does take forever, and building GCC is in general quite painful, but you could also use patchelf to modify the binary after the fact (which is what a lot of build systems do).

                  • > I can only infer that your lack of familiarity was what made it take so long

                    Pretty much.

                    Trying to convert an existing build that doesn't explicitly declare object dependencies is painful. Rust does it properly by default.

                    For example, I'm discovering our clang toolchain has a transitive dependency on a gcc toolchain.

          • The Mars Polar Lander and Mars Climate Orbiter missions would beg to differ.

            (And "absolute" or other adjectives don't qualify "correctness"... it simply is or isn't.)

      • 100% agree this is something that would have immediate, high value impact.

        The fact that building C++ is this opaque process defined in 15 different ways via make, autoconf, automake, cmake, ninja, with 50 other toolchains is something that continues to create a barrier to entry.

        I still remember the horrors of trying to compile c++ in 2004 on windows without anything besides borland...

        Standardizing the build system and toolchain needs to happen. It's a hard problem that needs to be solved.

        • > Standardizing the build system and toolchain needs to happen. It's a hard problem that needs to be solved.

          I agree, and I also think it's never happening. It requires agreeing on so many things that are subjective and likely to change behaviour. C++ couldn't even manage to require module names to match the file name. That was for a new feature that would have allowed us to figure out exports without actually opening the file…

      • It is already there, with vcpkg and conan, alongside cmake.

        You cannot cargo add Unreal, LLVM, GCC, CUDA,...

      • I didn’t think header only was that bad - now we have a nightmare of incompatible standards and compilers.
    • No, because most major compilers don't support header units, much less standard library header units from C++26.

      What'll spur adoption is cmake adopting Clang's two-step compilation model that increases performance.

      At that point every project will migrate overnight for the huge build time impact since it'll avoid redundant preprocessing. Right now, the loss of parallelism ruins adoption too much.

    • No. Modules are a failed idea. Really really hard for me to see them becoming mainstream at this point.
      • The idea is great, the execution is terrible. In JS, modules were instantly popular because they were easy to use, added a lot of benefit, and support in browsers and the ecosystem was fairly good after a couple of years. In C++, support is still bad, six years after they were introduced.
        • The idea is great in the same way the idea of a perpetual motion machine is great: I'd love to have a perpetual motion machine (or C++ modules), but it's just not realistic.

          IMO, the modules standard should have aimed to only support headers with no inline code (including no templates). That would be a severe limitation, but at least maybe it might have solved the problem posed by protobuf soup (AFAIK the original motivation for modules) and had a chance of being a real thing.

        • Exactly. C++ is still waiting for its "uv" moment, so until then modules aren't even close to solved.
          • And uv required some ground work, where the PEP process streamlined how you define a python project, and then uv could be built on top.
      • No idea if modules themselves are failed or no, but if c++ wants to keep fighting for developer mindshare, it must make something resembling modules work and figure out package management.

        Yes, you have CPM, vcpkg and conan, but those are not really standard and there is friction involved in getting them to work.

        • Much like contracts--yes, C++ needs something modules-like, but the actual design as standardized is not usable.

          Once big companies like Google started pulling out of the committee, they lost their connection to reality and now they're standardizing things that either can't be implemented or no one wants as specced.

          • Usable enough for Office, and the initial proposal was done by Microsoft.
        • I emphatically agree. C++ needs a standard build system that doesn’t suck ass. Most people would agree it needs a package manager although I think that is actually debatable.

          Neither of those things require modules as currently defined.

          • That is not even half realistic. Are you going to port all that code out there (autotools, cmake, scons,meson, bazel, waf...) to a "true" build system?

            The very idea is crazy. What Conan does is much more sensible: give a layer independent of the build system (plus a way to consume packages and, if you want, some predefined "profiles" such as debug, etc.), leave it half-open for extensions, and let existing tools talk with that communication protocol.

            That is much more realistic and you have way more chances of having a full ecosystem to consume.

            Also, no one needs to port a full build system or move away from perfectly working build systems.

        • It has the developer mindshare of game engines, games and VFX industry standards, CUDA, SYCL, ROCm, HIP, Khronos APIs, game console SDKs, HFT, HPC, research labs like CERN, Fermilab,...

          Ah, and the two major compiler frameworks that all those C++ wannabe replacements use as their backend.

      • Can you explain why you think modules are a failed idea? Because not that many use them right now?

        Personally I use them in new projects using XMake and it just works.

        • I'm not the parent commenter, but I think you miss most of the pain points because you work on personal projects.

          There's not a compatible format between different compilers, or even different versions of the same compiler, or even the same versions of the same compiler with different flags.

          This seems immediately to create too many permutations of builds for them to be distributable artifacts as we'd use them in other languages. More like a glorified object file cache. So what problem does it even solve?

          • BMIs are not considered distributable artifacts and were never designed to be. Same as PCHs and clang-modules which preceded them. Redistribution of interface artifacts was not a design goal of C++ modules, same as redistribution of CPython byte code is not a design goal for Python's module system.

            Modules solve the problems of text substitution (headers) as interface description. It's why we call the importable module units "interface units". The goals were to fix all the problems with headers (macro leakage, uncontrolled export semantics, Static Initialization Order Fiasco, etc) and improve build performance.

            They succeeded at this rather wonderfully as a design. Implementation proved more difficult but we're almost there.
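
              A minimal sketch of a named module interface unit (the module and function names are invented for illustration; this needs a modules-aware build with C++20 or later):

              ```cpp
              // math.cppm -- a named module interface unit (the file suffix
              // varies by compiler). Only entities marked `export` are visible
              // to importers; macros and non-exported helpers cannot leak out.
              export module math;

              namespace detail {
                  // Not exported: importers cannot see or depend on this.
                  constexpr int square(int x) { return x * x; }
              }

              export constexpr int square_plus_one(int x) {
                  return detail::square(x) + 1;
              }
              ```

              A consumer just writes `import math;` and gets `square_plus_one`, with no preprocessor involvement.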

        • Because as a percentage of global C++ builds they’re used in probably 0.0001% of builds with no line of sight to that improving.

          They have effectively zero use outside of hobby projects. I don’t know that any open source C++ library I have ever interacted with even pretends that modules exist.

      • "Failed idea" gives modules too much credit. Outside of old codebases, almost no one but C++ diehards has the patience for the build and tooling circus they create, and if you need fast iteration plus sane integration with existing deps, modules are like trading your shoes for roller skates in a gravel lot. Adopting them now feels like volunteering to do tax forms in assembly.
    • I frankly wish we'd stop developing C++. It's so hard to keep track of all the new unnecessary toys they're adding to it. I thought I knew C++ until I read some recent C++ code. That's how bad it is.

      Meanwhile C++ build system is an abomination. Header files should be unnecessary.

      • You don't have to keep up with or use any of the new features. I still pay my bills writing C++98 and have no desire to use a higher version.
  • Quite unrelated to the main topic, but shouldn't it be Croydon, London? I have never heard anyone call it London Croydon before. Generally addresses/places go from most specific to least, and given that Croydon is an area of London it should go first.
    • Yes, I noticed that too -- why "London Croydon" rather than "Croydon, London" ?

      Date in Europe: 30/03/2026

      Date in China: 2026/03/30

      Then you have Little Endian and you have Big Endian.

      TL;DR: Some humans like to talk about the specific and then the general and others vice versa.

      But here is really why I think the author referred to it as "London, Croydon"

      "London, Croydon" communicates: "Hey, we had this C++ standards meeting in London, one of the coolest cities in the world. (Be jealous!) We were helping add more complexity to the most complex language in the world in the lovely environment of London, England. Croydon is an irrelevant detail... the meeting was in London, remember that!"

      "Croydon, London" communicates: "Hey, we had this C++ standards meeting in gritty Croydon... it was in London so I guess it was OK?? Sorry our budget could not put us up in Westminster, London."

      [End of Joke]

      • Generously - specifying Croydon does help travellers figure out where they need to be more specifically than just London. I'd like to hope if they met in New York City it'd say e.g. "New York - Riverdale" or something rather than leaving you to guess where in the city exactly.

        Most things "in" London aren't in the centre unless they're tourist destinations or they're extremely old. The most surprising thing I ran into right in the centre was the International Maritime Organisation's headquarters, which is right on the Thames because historically that makes sense in a way that arguably it already didn't when that was built, and certainly not today.

    • Like London Gatwick Airport?

      Addresses are one thing, but the inverse has its own logic. In terms of (mental) planning you want to know that you need to go to the UK then London then Croydon, otherwise there's an element of "where's that?" as you read left to right.

  • Just in time for language deprecation
  • If C++29 was exclusively about quality-of-life improvements, improving what exists, I'd bet the community wouldn't mind too much.
    • All I want from C++29 is a single-line random() function.
    • That depends on what else comes. There are a lot of ideas, some of which will get the community excited.
  • As long as programmers still have to deal with header files, all of this is lipstick on a pig.
    • You don't on new projects. CMake + ninja has support for modules on gcc, clang, and MSVC.

      This should be your default stack on any small-to-medium sized C++ project.

      Bazel, the default pick for very large codebases, also has support for C++20 modules.

      • Thanks. It's been a long time since I started a C++ project, and I've never set up any build chain in Visual Studio or Xcode other than the default.
      • I have yet to see modules in the wild. What I have seen extensively are header-only projects.
        • Modules need a lot of tooling. The tool vendors have been working hard on this for years. They have only just now said this is ready for early adopters. Most people are waiting for the early adopters to write the books on what best practices are - this needs a few more years of experience.
          • if something so simple needs years of experience it's poorly designed
            • Modules are not simple. They sound simple only to people who have never dug into them.
              • I've worked extensively on module/import semantics for multiple products in my life. It is complex. However this complexity is on the implementer and not the user.

                If "best practices" need to be refined over years, it is poorly designed. This is not untrodden ground, other languages and ecosystems do sane things.

                • This was considered during standardization. The feeling among tool developers at the time was it was "close enough" to Fortran modules to be mostly solvable.

                  This was wrong, mostly because C++ compiler flag semantics are far more complicated than in Fortran; you live and you learn. The bones of most implementations are identical to Fortran though, so we got a ~3 year head start on the work because of that.

                  Ninja already had the dyndep patch ready to go from Fortran, CMake knew basically how to use scanners in build steps. However, it took longer than expected to get scanner support into the compilers, which then delayed everything downstream. Understanding when BMIs need to be rebuilt is still tricky. Packaging formats needed to be updated to understand module maps, etc, etc.

                  Each step took a little longer than was initially hoped, and delays snowballed a bit. We'll get there.

        • It's the fault of build systems. CMake still doesn't officially support `import std`, and undocumented things are done in the ecosystem [1].

          But once it works and you set up the new stuff - having started a new C++26 project with modules now - it's kinda awesome. I'm certainly never going back. The big compilers are also retroactively adding `import std` to C++20, so support is widening.

          [1] https://gitlab.kitware.com/cmake/cmake/-/work_items/27706

          • I wanted to ship import std in 4.3 but there are some major disagreements over where the std.o symbols are supposed to come from.

            Clang says "we don't need them", GCC says "we'll ship them in libstdc++", and MSVC says "you are supposed to provide them".

            I didn't know about that when I was working on finishing import std for CMake and accidentally broke a lot of code in the move to a native implementation of the module manifest format, so everything got reverted and put back into experimental.

          • weird to blame build systems for a problem caused by the language
        • You're not supposed to distribute the precompiled module file. You are supposed to distribute the source code of the module.

          Header-only projects are the best to convert to modules because you can put the implementation of a module in a "private module fragment" in that same file and make it invisible to users.

          That prevents the compile-time bloat many header-only dependencies add. It also avoids distributing a `.cpp` file that has to be compiled and linked separately, which is why so many projects are header-only.
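
          As a sketch of that conversion (names invented; requires a modules-enabled toolchain), the private module fragment keeps the implementation in the same file while hiding it from importers:

          ```cpp
          // greeting.cppm -- a single-file module: interface above, the
          // implementation below the `module :private;` marker. Changes below
          // the marker do not require recompiling importers, unlike edits to
          // a header-only library.
          export module greeting;

          export const char* greet();

          module :private;

          const char* greet() {
              return "hello";
          }
          ```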

          • What I mean is, I have yet to see projects in the wild _use modules at all_.
            • Plenty of examples on GitHub; Microsoft has talks on how Office has migrated to modules, and the updated Vulkan tutorials from Khronos have an optional learning path with modules.
      • sgt
        How about using Zig to build C++ projects?
        • I haven't used it.

          That being said, while it looks better than CMake, for anything professional I need remote execution support to deviate from the industry standard. Zig doesn't have that.

          This is because large C++ projects reach a point where they cannot be compiled locally if they use the full language. e.g. Multi-hour Chromium builds.

          • Surely Zig can also be invoked using any CI/CD flow running on a remote machine too.
            • I'm referring to this:

              https://github.com/bazelbuild/remote-apis

              Once you get a very large C++ project with several thousand compilation jobs over hundreds of devs, you need to distribute the build across multiple computers and have a shared cache for object files.

              Zig doesn't seem to support that.

    • I've been using modules in all my private projects for the last two years.
    • I don't understand this at all. There are modules.

      But headers are perfectly fine to deal with and have been for decades and decades! Next you'll be arguing that contents pages in all books should be removed.

  • Finally, reflection has arrived, five years after I last touched a line of C++. I wonder how long it would take the committee, if ever, to introduce destructive moves.
    • C++26 adds destructive moves. They are called relocatable types.

      There are edge cases where destructive moves are not safe and it is impossible for the compiler to know they aren't safe. C++ uses non-destructive moves when it can't prove the safety of destructive moves, even if destructive moves may in fact be safe. C++26 adds a type annotation that lets the programmer guarantee destructive moves are safe in cases where the compiler can't prove it.

      The concept of relocatable types is actually a bit broader in scope than just destructive moves, but destructive moves are one of the things it enables. It is a welcome change.

      • From the proposal, I see a bunch of new keywords and rules - alright given the language's heritage. But what happens if I "relocate" a variable value - would a "shell" remain or how exactly C++ is supposed to handle this:

          auto value = create_value();
          if (some_cond) {
            consume_value(std::move(value)); // not sure whether it's move here, but I guess my point is clear
          }
        
          use_value(value);
      • > C++26 adds destructive moves. They are called relocatable types.

        I thought those were removed? For example, see Herb's 2025-11/Kona trip report [0]:

        > For trivial relocatability, we found a showstopper bug that the group decided could not be fixed in time for C++26, so the strong consensus was to remove this feature from C++26.

        [0]: https://herbsutter.com/2025/11/10/trip-report-november-2025-...

    • What do you mean by a destructing move? Are you trying to avoid use of a moved object after you've moved it?

      eg. B = std::move(A); // You are worried about touching A when it's in this indeterminate state?

      • Destructive moves are required to make moves zero-cost.

        Currently move semantics in C++ requires that A is left in a 'moved from, but valid state' which means that:

        1. The compiler must still generate code that calls the destructor.

        2. Every destructor has to have some flag and a test in it, like: if (moved_from) { /* do nothing */ } else { free_resources(); }

        (Granted, for some simple types the compiler might inline and remove redundant checks so it ends up with no extra code, but that is not guaranteed.)

        With destructive moves the compiler can just forget about the object completely: there's no need to call the destructor, and destructors can be written as normal, caring only about the invariants established in the constructor.
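
        A deliberately naive sketch of point 2 (the type is invented; real types usually fold the "moved-from" flag into a null pointer, as here, rather than a separate boolean):

        ```cpp
        #include <cassert>
        #include <utility>

        // Non-destructive move, as C++ requires today: the moved-from object
        // must remain valid and destructible, so its destructor still runs
        // and must tolerate the moved-from state.
        struct Buffer {
            int* data;
            Buffer() : data(new int[16]) {}
            Buffer(Buffer&& other) noexcept : data(other.data) {
                other.data = nullptr;   // leave `other` in a valid, empty state
            }
            ~Buffer() {
                delete[] data;          // still called for moved-from objects;
            }                           // delete[] on nullptr is a no-op check
            Buffer(const Buffer&) = delete;
            Buffer& operator=(const Buffer&) = delete;
        };
        ```

        With a destructive move, the compiler would simply not run `~Buffer()` on the moved-from object, so the null state and the runtime check would disappear.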

    • Yeah I feel the same way. Lots of nice features that would have been helpful 5 years ago before I switched to Rust.
  • C++ is so tantalizingly close to being an amazing embedded language if they could JUST support first-class polymorphism.

    Embedded is such a perfect fit for interface-based programming, but because the compiler can't resolve calls outside of a single source file, EVERYTHING gets vtable'd, which ruins downstream optimizations.

    There are some ugly workarounds... CRTP, C-style (common header + different source files). To the person who says "use templates!"... no. I don't like templates. They are verbose, complex, and every time I try to use them I end up foot-gunning myself. Maybe it's a skill issue, but if you designed something that most people can't figure out, I'd argue the design is wrong.

    C++ is SOOO close to doing compile-time polymorphism. It just needs a way to determine type across source files, which LTO sorta-kinda-but-not-really does.

    I've seen some examples of C++ contracts replacing CRTP, but it used templates, which again, not a fan of.

    • > I've seen some examples of C++ contracts replacing CRTP, but it used templates, which again, not a fan of.

      I think you meant concepts.

      C++ Concepts are the right answer in my opinion, if you want compile time polymorphism against an interface.

      I don't think there is a way around templates; they are C++'s mechanism for compile-time polymorphism. Other languages that allow compile-time polymorphism have similar mechanisms with similar constraints. I get where you're coming from when you say you're not a fan of templates, though. At least concepts give clearer error messages for templates.

      One advantage that concepts have over CRTP is that only consumers of your interface, not implementers, need to know about your concept.
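
      A minimal sketch of that (names invented): the consumer states the concept it needs, any type satisfying it works, and the call is resolved at compile time with no vtable.

      ```cpp
      #include <cassert>
      #include <concepts>
      #include <string>

      // The consumer's requirement, expressed as a concept. Implementers
      // never have to mention it -- any type with a suitable name() matches.
      template <typename T>
      concept Named = requires(const T& t) {
          { t.name() } -> std::convertible_to<std::string>;
      };

      // An implementer: an ordinary type, no base class, no virtual calls.
      struct Sensor {
          std::string name() const { return "sensor"; }
      };

      // Statically dispatched against the interface the concept describes.
      std::string describe(const Named auto& x) {
          return "device: " + x.name();
      }
      ```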

    • Rust’s trait system and the embedded HAL say “hi there.”
    • use templates.
  • > C++26 is done

    Now do C++27. Why do we need a new standard every year? CADT?

  • I am actually excited for post and pre conditions. I think they are an underused feature in most languages.
    • Postconditions are in conflict with programmers' love of early returns.
      • Formally, it doesn't appear to be so (the checks can be added at each early return); in practice, encouraging the reorganization of messy early returns would be zero-cost developer reeducation.
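
        Since C++26 contract syntax is not yet widely implemented, an assert-based sketch of the tension (the function is invented): a declared postcondition would be checked automatically at every `return`, while the manual version must repeat the check on each exit path.

        ```cpp
        #include <cassert>

        // Postcondition: the result is never negative. With hand-written
        // asserts every early return has to restate it; a language-level
        // postcondition would be attached once and checked at each exit.
        int clamp_to_zero(int x) {
            if (x < 0) {
                int result = 0;
                assert(result >= 0);   // repeated at this early return
                return result;
            }
            assert(x >= 0);            // ...and again at the normal return
            return x;
        }
        ```
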
  • std::execution is very interesting, but will be difficult to get started with, as cautioned by Sutter. This HPC Wire article demonstrates how to use standard C++ to benefit from asynchronously parallel computation on both CUDA and MPI:

    https://www.hpcwire.com/2022/12/05/new-c-sender-library-enab...

    Overlapping communication and computation has been a common technique for decades in high-performance computing to "hide latency", which leads to better scaling. Now standard C++ can be used to express parallel algorithms without tying to a specific scheduler.

    • NVidia is the main sponsor of this kind of work, and a few key figures are nowadays on their payroll.
  • It looks like they didn't even add _BitInt types yet. Adding concepts but not adding _BitInt types sounds insane considering how simple _BitInt types are as a programmer (not sure about implementation but it already works in clang).
    • _BitInt types probably aren’t a priority because they are more or less trivial to implement yourself in C++.

      Also, some of the implementation details and semantics do matter in an application dependent way, which makes it a bit of an opinionated feature. I would guess there is a lot of arguing over the set of tradeoffs suitable for a standard, since C++ tends to avoid opinionated designs if it can.
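
      For illustration, a hypothetical user-level sketch (the name `uint_n` and its behaviour are my assumptions, modelled on C23's unsigned `_BitInt(N)` wrapping modulo 2^N, limited here to N <= 64):

      ```cpp
      #include <cassert>
      #include <cstdint>

      // Hypothetical N-bit unsigned integer wrapping modulo 2^N, sketching
      // what "implement it yourself" might look like. Not a standard type.
      template <unsigned N>
      struct uint_n {
          static_assert(N >= 1 && N <= 64, "sketch only handles N <= 64");
          static constexpr std::uint64_t mask =
              (N == 64) ? ~0ull : ((1ull << N) - 1);

          std::uint64_t v = 0;
          constexpr uint_n(std::uint64_t x = 0) : v(x & mask) {}

          friend constexpr uint_n operator+(uint_n a, uint_n b) {
              return uint_n{(a.v + b.v) & mask};   // wrap at 2^N
          }
      };
      ```

      The application-dependent choices mentioned above show up immediately: wrapping vs. trapping on overflow, signed representation, and whether mixed-width arithmetic should promote.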

    • Just like restrict never made it.

      Someone has to write a proposal, bring it to the various meetings, and get it through the feature-selection votes of all the parties involved.

      Also WG21 tends to disregard C features that can already be implemented within C++'s type system.

  • I don't care until they stop pretending Unicode doesn't exist.
    • What are you talking about? There is actually too much Unicode awareness in C++. Unicode is not the same thing as UTF-8. And, frankly, no language does it right; I'm not even sure "right" exists with Unicode.
      • Too much Unicode in standard C++? Where?
        • c++20's u8strings took a giant steaming dump on a number of existing projects, to the point that compiler flags had to be introduced to disable the feature just so c++20 would work with existing codebases. Granted that's utf-8 (not the same thing as unicode, as mentioned) but it's there.
          • And yet, unicode support is still abysmal throughout the standard library. I don't disagree though.
        • Things like char32_t, std::u32string for storing UTF-32 characters.
      • And yet, none of them work with std::regex etc.
  • Great. C++20 has been my favorite and I wasn't sure what the standard says since it's been a while. I'll be reading the C++26 standard soon.
  • Dammit, it's been 28 years and they still haven't implemented my favorite C++ extension proposal, and its birthday is coming in a couple days -- it would be so much better now with all the emojis in unicode:

    Generalizing Overloading for C++2000

    https://www.stroustrup.com/whitespace98.pdf

  • Sadly, transparent hash strings for unordered_map are out.
    • It is annoying that they didn't just apply this to all containers
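
      For context, assuming this is the feature in question: since C++20, `find()` on unordered containers already supports transparent ("heterogeneous") lookup when both the hash and the equality functor opt in:

      ```cpp
      #include <cassert>
      #include <cstddef>
      #include <functional>
      #include <string>
      #include <string_view>
      #include <unordered_map>

      // A transparent hash: the is_transparent tag lets unordered_map::find
      // accept a string_view directly, avoiding a temporary std::string per
      // lookup. std::equal_to<> is already transparent.
      struct string_hash {
          using is_transparent = void;
          std::size_t operator()(std::string_view sv) const {
              return std::hash<std::string_view>{}(sv);
          }
      };

      using StringMap =
          std::unordered_map<std::string, int, string_hash, std::equal_to<>>;
      ```

      operator[] and insertion still require a std::string key, which is the kind of remaining gap being lamented here.
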
  • If you ask me (and why wouldn't you? :-)...) I really wish the C++ WG would do several things:

    1. Standardize a `restrict` keyword and semantics for it (tricky for struct/class fields, but should be done).

    2. Uniform Function Call Syntax! That is, make the syntax `obj.f(arg)` mean simply `f(obj, arg)`. That would make my life much easier, both as a user of classes and as their author - particularly in my library authoring work. And while we're at it, let us use a class's name as a namespace for static methods, so that the static method Obj::f is simply the method f in namespace Obj.

    3. Get compiler makers to have an ABI break, so that we can do things like passing wrapped values in registers rather than going through memory. See: https://stackoverflow.com/q/58339165/1593077

    4. Get rid of the current allocators in the standard library, which are type-specific (ridiculous) and return pointers rather than regions of memory. And speaking of memory regions (i.e. with address and size but no element type) - that should be standardized too.

    • 2 sounds good, but it will break a lot of existing code that suddenly does something different. At least so far, every version of the rules someone has come up with has had a real-world example of code that would be seriously broken if it were in place.
    • The C++ WG is like any other open source project, even when it doesn't look like it.

      Someone has to bring a written spec to WG21 meetings and push it through.

      And like in every open source project that doesn't go the way we like, the work is only done by those that show up.

      • > The C++ WG is like any other open source project, even when it doesn't look like it.

        In many ways, it isn't.

        > Someone has to bring a written spec to WG21 meetings and push it through.

        That is one way it is not like (most) other FOSS projects. In a typical FOSS project, there are bug reports and feature/change requests that people file. They don't have to write a full article merely for their idea to be given the time of day. Certainly not have to appear physically at meetings held elsewhere in the world. Of course, the question of the extent to which ideas and requests from the public are considered seriously and fairly is a spectrum - some FOSS projects give them more attention and consider them seriously, others do not. vis-a-vis WG21 the "public" is, to some extent: Compiler author teams, standard library author teams, national bodies, and large corporations using C++. This is perhaps not entirely avoidable, since there are millions of C++ users, but still.

        Anyway, what I described isn't just some personal ideas of mine, it is for the most part ideas which have been put forward before the committee, either directly in submitted papers or indirectly via public discussion in fora the committee is aware of.

        • They kind of do, otherwise those RFC, PIP, TIP, PEP, JSR,... die out.

          A pull request isn't enough, even if online collaboration is simpler than with ISO related meetings.

          • Those are examples of standardization processes, not FOSS projects.
            • Used by programming languages FOSS projects.
    • Re 3: Clang has [[trivial_abi]] (and I believe GCC is also implementing it). But it won't be applied to standard types by default, because of course that is ABI-breaking. You'll have to derive your own.
    • I don't think you can imagine in how many ways 2 can break things...
    • 1. This seems like it'd be far too tricky and make C++ even more footgunny, especially with references, move constructors, etc.

      2. Name lookup and overload resolution are already so complex! This will likely never be added because it's so core to C++ and would break so much. IMO, it also blurs the line between what's your interface vs. what I've defined.

      3. This is every junior C++ engineer's suggestion. Having ABI breaks would probably kill C++, even though it would improve the language long term.

      4. Again, you make solid points and I think a lot of the committee would agree with you. However, the committee's job is to adapt C++ in a backwards-compatible way, not to disrupt its users and APIs with each new release.

      There are definitely things to fix in C++, and every graduate engineer I've managed has had the same opinions about patching the standard without considering the full picture.

      • Re (1.): Not-having-footguns is not a basic design principle of C++. But principles which it is supposed to adhere to include:

        * Don't pay for what you don't use;

        * Leave no room for another language between C++ and assembly (or, to phrase it differently: "when you use an abstraction mechanism appropriately, you get at least as good performance as if you had hand-coded using lower-level constructs")

        and the lack of `restrict` breaks both of these, significantly, because compilers are forced to implement even simple functions with repeated re-reading of data - due to the possibility of aliasing - which the developer knows is entirely unnecessary and would have avoided had they been writing the same function in, say, C (or compiler IR or assembly).
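
        A concrete sketch of that re-reading problem, using the non-standard `__restrict__` extension that GCC and Clang already provide (the function name is invented):

        ```cpp
        // Without restrict, the compiler must assume *sum may alias a[i], so
        // a conforming implementation reloads *sum from memory on every
        // iteration. __restrict__ (a GCC/Clang extension) promises no
        // aliasing, letting the accumulator live in a register.
        void accumulate(const float* __restrict__ a, float* __restrict__ sum,
                        int n) {
            for (int i = 0; i < n; ++i)
                *sum += a[i];
        }
        ```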

        Re (2): It's not really "core C++": it would not make any existing program ill-formed, nor change its semantics at all. But it's true that this would have an impact on how we design classes - and that's the exact intent. And it does far more than "blur the line between what's your interface vs what I've defined" - it deletes most of this line, and that is exactly the point. The line we should have is the line of access restriction: does a method have access to the class's private data, or doesn't it? If it doesn't, then there are simply functions which take an object of the class; and it doesn't matter if the class author defined them or if someone else defined them.

        Re (3.): I didn't say lack of backwards compatibility, just that going forwards, ABIs would allow some things which are currently prevented [1]. I am not an ABI expert in the least, but IIUC, use of a new ABI can be marked, so that nothing gets mixed up.

        I would also claim that ABI stability should cede to the design principles I mentioned above.

        [1]: https://cor3ntin.github.io/posts/abi/

  • I look forward to getting to make use of this in 2040!

    Proper reflection is exciting.

    • GCC has it marked as 'RESOLVED FIXED' as of about a week and a half ago. So, it's coming.

      Also, useful: https://gcc.gnu.org/projects/cxx-status.html

      • Support in GCC isn't what limits my usage of latest C++ at work.
        • Clang also isn't too far off of GCC on support, so if you're not using either of those, my condolences. And if it's management mandate, god help us all.
  • I am curious what their strategy is to get the language to the stage where the US government will consider it kosher for new projects
    • No such strategy is necessary. That discourse was about not using C++ for applications where Java would work just as well.

      The US government still uses C++ widely for new projects. For some types of applications it is actually the language of choice and will remain so for the foreseeable future.

      • >"For some types of applications it is actually the language of choice..."

        Can you give an example please? And how does that square with the government's ONCD report and other government docs "recommending" "safe" languages like Rust (noted for its ability to prevent memory-unsafe code), Go, Java, Swift, C#, Ruby, and Ada?

        Among other things I design and implement high-performance C++ backends; for some I got SOC 2 Type II certification, but I am curious about the future. I do not give a flying fuck about what the criteria for military projects are, as I would not ever touch one even if given a chance.

        • It is the high-performance/high-scale data processing and storage engines for data-intensive applications, some of which are used in high-assurance environments. These are used outside of defense/intel (the data models are generic) but defense/intel tends to set the development standards for government since theirs are the strictest and most rigorous.

          An increasingly common requirement is the ability to robustly reject adversarial workloads in addition to being statically correct. Combined with the high-performance/high-scale efficiency requirements, this dictates what the software architecture can look like.

          There are a few practical reasons Rust is not currently considered an ideal fit for this type of development. The required architecture largely disables Rust's memory-safety advantages. Modern C++ has significantly better features and capabilities under these constraints, yielding a smaller, more maintainable code base. People worry about supply chain attacks but I don't think that is a major factor here.

          Less obvious, C++ has strong compile-time metaprogramming and execution features that can be used to extensively automate verification of code properties with minimal effort. This allows you to trivially verify many correctness properties of the code that Rust cannot. It ends up being a comparison of unsafe Rust versus verification maximalist C++20, which tilts the safety/reliability aspects pretty hard toward C++. Code designed to this standard of reliability has extremely low defect rates regardless of language but it is much easier in some languages than others. I even shipped Python once.
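          To give a flavor of what I mean (a minimal, hypothetical example - not from any real codebase): constexpr evaluation lets you run a check over generated data during compilation, so a violated property is a build error rather than a latent runtime bug.

```cpp
#include <array>
#include <cstddef>

// Hypothetical sketch: generate a lookup table at compile time...
constexpr std::array<int, 5> make_table() {
    std::array<int, 5> t{};
    for (std::size_t i = 0; i < t.size(); ++i)
        t[i] = static_cast<int>(i * i);
    return t;
}

// ...and verify a property of that data, also at compile time.
constexpr bool is_monotonic(const std::array<int, 5>& t) {
    for (std::size_t i = 1; i < t.size(); ++i)
        if (t[i] < t[i - 1]) return false;
    return true;
}

// If the property is ever violated, the program fails to compile.
static_assert(is_monotonic(make_table()), "table must be non-decreasing");
```

The same mechanism scales up to checking invariants of much more elaborate generated structures; the point is that the verification runs on every build, for free.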

          A lot of casual C++ code doesn't bother with this level of verification, though they really should. It has been possible for years now. More casual applications also have more exposure to memory safety issues but those mostly use Java in my experience, at least in government.

          • > Less obvious, C++ has strong compile-time metaprogramming and execution features that can be used to extensively automate verification of code properties with minimal effort

            Would you be willing to share some more information about this? Interested in learning more since this sort of thing rarely seems to come up in typical situations I work in.

  • [flagged]
    • Please don't use Hacker News as a religious or ideological battleground. It tramples curiosity. Please don't pick the most religiously/ideologically provocative thing in an article or post to complain about in the thread. Find something interesting to respond to instead.
      • This is interesting! People ignoring this, I think, is also interesting on its own. I respect it if other people disagree, but that's my 2c. I think our Overton windows may not agree here, but I think this is part of the value of discussions with other humans.

        Are you a moderator? The directive tone of this post is as if from an authority figure, but I do not believe you are one.

        I do not believe there is anything about a religious or ideological background here. Could you please clarify?

        I also believe it is your post that could more accurately be described as trampling curiosity; there is a role reversal here, in that your comment is a better fit for that description than the post you are responding to. I'm not trying to be snarky - I'm curious how you came to those conclusions.

        • The tone of the GP is such because it's a quote from the rules/guidelines. However, applying that rule to what you said makes no sense to me, fwiw.
  • "Japanese soldier who kept fighting 29 years after World War 2"
    • I watched a talk from Bjarne Stroustrup at CppCon about safety and it was pretty second hand embarrassing watching him try to pretend C++ has always been safe and safety mattered all along to them before Rust came along.
      • Well, there has been a long campaign against manual memory management - well before Rust was a thing. And along with that, a push for less use of raw pointers, fewer index loops, etc. - all measures which, when adopted, reduce memory-safety hazards significantly. Following the Core Guidelines also helps, as does using spans. Compiler warnings have improved, as has static analysis, also in a long process preceding Rust.

        Of course, this is not completely guaranteed safety - but safety has certainly mattered.

        • >Following the Core Guideliness also helps

          Yes, this is what Stroustrup said and it makes me laugh. IIRC he phrased it with more of a 'we had safety before Rust' attitude. It also misses the point: safety shouldn't be opt-in or require memorising a rulebook. If safety is that easy in C++, why is everyone still sticking their hand in the shredder?

          • You're "moving the goal posts" of this thread. Safety has mattered - in C++ and in other languages as well, e.g. with MISRA C.

            As for the Core Guidelines - most of them are not about safety; and - they are not to be memorized, but a resource to consult when relevant, and something to base static analysis on.

  • I switched from C++ to Java/Python 20 years ago. I never really fit in; I just don't understand when people talk about the complicated frameworks to avoid multithreading/mutexes etc., when basic C++ multithreading is much simpler than RxJava or async/await or whatever is flavor of the month.

    But C++ projects are usually really boring. I want to go back but glad I left. Has anyone found a place where C++ style programming is in fashion but isn't quite C++? I hope that makes sense.
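    For what it's worth, this is roughly the kind of "basic C++ multithreading" I mean (a minimal, illustrative sketch): threads, a mutex, a join - no framework required.

```cpp
#include <mutex>
#include <thread>
#include <vector>

// Minimal sketch: several threads increment a shared counter under a
// mutex, then the main thread joins them. (Illustrative only.)
int count_to(int per_thread, int n_threads) {
    int counter = 0;
    std::mutex m;
    std::vector<std::thread> workers;
    for (int t = 0; t < n_threads; ++t)
        workers.emplace_back([&] {
            for (int i = 0; i < per_thread; ++i) {
                std::lock_guard<std::mutex> lock(m);  // scoped locking
                ++counter;
            }
        });
    for (auto& w : workers)
        w.join();
    return counter;
}
```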

  • Contracts feel like the right direction but the wrong execution timeline. The Ada/SPARK model shows how powerful contracts become when they feed into static verification — but that took decades of iteration on a language with far cleaner semantics. Bolting that onto C++ where UB is load-bearing infrastructure is a different beast entirely. The real risk isn't complexity for complexity's sake — it's that a "minimum viable" contracts spec gets locked in, and then the things that would actually make it useful for proof assistants become impossible to retrofit because they'd break the v1 semantics. Bjarne's concern about "incomplete" is more worrying to me than "bloated."
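    To make the comparison concrete (a hypothetical illustration): under the P2900 contracts design, pre- and postconditions become declarative annotations that tools can consume; since no shipping compiler implements that syntax yet, the runnable version below emulates the same intent with imperative assertions, which is roughly what a "minimum viable" contracts feature replaces.

```cpp
#include <cassert>

// Hypothetical illustration. Under the C++26 contracts proposal (P2900),
// the declaration might read:
//     int isqrt(int x) pre(x >= 0) post(r : r * r <= x);
// Current compilers don't accept that yet, so the same intent is
// written imperatively here:
int isqrt(int x) {
    assert(x >= 0);                // precondition
    int r = 0;
    while ((r + 1) * (r + 1) <= x)
        ++r;
    assert(r * r <= x);            // postcondition
    return r;
}
```

The gap the parent describes is visible even in this toy: the assert version is invisible to static tools, while the declarative form could in principle feed a verifier - if the locked-in semantics permit it.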
    • Nice try, clanker slop