• Whenever I start to feel like a real programmer making games and webapps and AI-enhanced ETL pipelines, I inevitably come across the blog post of a C++ expert and am reminded that I am basically playing with legos and play-doh.
    • It's the other way around. You are the real programmer, and the committee and the "modern C++" crowd are more interested in playing with legos than in shipping actual software.

      No way anything std::meta gets into serious production; too flexible in some ways, too inflexible in others, too much unpredictability, too high an impact on compilation times - just like always with newer additions to the C++ standard. It takes only one look at the coding standards of real-world projects to see how irrelevant this stuff is.

      And like always, the problem std::meta is purported to solve has been solved for years.

      • The stream of modern C++ features has been a godsend for anyone who cares about high-performance, high-reliability software. Maybe that doesn't apply to your use case, but C++ is widely used in critical data infrastructure. For anyone who does care about things like performance and reliability, the changes in modern C++ have largely been obvious and immediately useful improvements. Almost all C++ projects I know in the high-performance data infrastructure space live as close to the bleeding edge of new C++ features as the compiler implementations make feasible.

        And no, reflection hasn’t “been solved for years” unless you have a very misleading definition of “solved”. A lot of the C++ code I work with is heavily codegen-ed via metaprogramming. Despite the relative expressiveness and flexibility of C++ metaprogramming, proper reflection will dramatically improve what is practical in a strict and type-safe way at compile-time.

        • I still have to learn C++20 concepts and now we have a full-fledged reflection system?

          Good, but I think what happens is there are people on the bleeding edge of C++, usually writing libraries that ship with new code. Each new feature is a godsend for them -- it's the reason why the features are proposed in the first place. It allows you to write libraries more simply, more generally, more safely, and more efficiently.

          The rest of us are dealing with old code that is a hodgepodge of older standards and toolchains, that has to run in multiple environments, mostly old ones. It's like yeah, this C++26 feature will come in handy for me someday, but if that day comes then it will be in 2036, and I might not be writing C++ by then.

        • You sound like you have rose-tinted glasses on. I think your glass is half full if you recheck the actual versions and features. Mine is half empty, over in gamedev.

          Anecdata: a year or so ago I was in a discussion about whether the beta features of C++20 on game platforms were good to use at large scale. Partial implementations make it not a sum but an intersection of features. Anyway, it looked positive until we needed a pilot project to try it. One of the projects came back with 'just flipping C++20 switch with no changes causes significant regression on build times'. After confirming that it was indeed not an error on our side, it was kinda obvious: a proportional increase in remote compilation cloud costs for a few minor features is a 'no'. A year later the support is no longer beta, but it is still partial across platforms, and there have been no improvements in build times from the community. YMMV of course, because gamedev mostly targets closed-source platforms with a closed set of build tools.

          • > One of the projects came back with 'just flipping C++20 switch with no changes causes significant regression on build times'.

            I think this just proves that your team is highly inexperienced with C++ projects, which you implicitly attest to by admitting this was the first C++ upgrade you had to go through.

            Let me be very clear: there is never an upgrade of the C++ version targeted by a project that does not require full regression tests and a few bugs to squash. Why? Because even if the C++ side of things is perfectly fine, libraries often introduce all sorts of unexpected issues.

            For example, I once had to migrate a legacy project to C++14, and flipping the compiler flag to c++14 caused a wall of compiler errors. It turned out the C++ was perfectly fine, but a single library behaved very poorly due to a constexpr constructor it enabled conditionally under C++14.

            You should understand that upgrades to the core language and standard libraries are exceptionally stable, and a clear focus of the standardization committee. But they only have a say in how the core language and standard libs should be. The bulk of the code any relatively complex project consumes is not core language + stdlib but third-party libraries and frameworks. These are often riddled with flags that toggle whole components only in specific versions of the C++ language, mainly for backwards compatibility. Once you target a new version of C++, that often means you replace whole components of upstream dependencies, which in turn requires fixing your code. This happens very frequently, even with the likes of Boost.

            So, what you're complaining about is not C++ but your inexperience in software engineering in general. I mean, what is the rule of thumb about major version upgrades?

            • I am sorry for the confusion. It's fine to have some downvotes if it's not what people like to see. I was not complaining. The message was purely informational, from a single point of view: a) game platforms have only partial C++20 support in 2025; b) there are features in the C++ standard that do not fit the description 'godsend'.
          • > One of the projects came back with 'just flipping C++20 switch with no changes causes significant regression on build times'

            Given that C++20 introduced modules, which are intended to make builds faster, just flipping the C++20 switch with no changes and checking build times should not be the end of evaluating whether C++20 is worth it for your setup.

            • > Given that C++20 introduced modules, which are intended to make builds faster

              Turning on modules effectively requires that all of your project dependencies themselves have turned on modules. Fail to do so, and a lot of the benefits start to become hindrances (Clang is currently debating going to 64-bit source locations because modularizing in this manner tends to exhaust the current 32-bit source locations).

        • I am interested; could you provide some links, articles, etc?
        • I am sorry, but this reads like GPT.

          Which projects? Which features? What exactly was the impact on performance and reliability, and how and why? How did critical projects adopt the stream of features, considering nobody sane touches anything in a new C++ standard for a decade, waiting for DRs to settle and for codegen to stop sucking?

          Reflection has been solved for years with custom codegen, including dimensions that std::meta cannot even touch, such as a stable cross-platform, cross-compiler ABI.

          • You sound like you subscribe to "Orthodox C++".

            Speaking seriously, I agree there's definitely a lot of bloat in the new C++ standards. E.g. I'm not a fan of the C++26 linalg stuff. But most performance-focused trading firms still use the latest standard with the latest compiler. Just a small example of new C++ features that are used every day in those firms:

            Smart pointers (C++11), Constexpr and consteval (all improvements since C++11), Concepts (C++20), Spans (C++20), Optional (C++17), String views (C++17)
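
            To make that concrete, a toy sketch - everything below is invented for illustration, but it should compile as standard C++20:

                #include <concepts>
                #include <optional>
                #include <span>

                // span (C++20): a non-owning view over contiguous prices
                std::optional<double> best_bid(std::span<const double> bids) {
                    std::optional<double> best;               // optional (C++17)
                    for (double b : bids)
                        if (!best || b > *best) best = b;
                    return best;
                }

                // concepts (C++20) constraining a constexpr (C++11 and later) helper
                template <std::floating_point T>
                constexpr T mid(T bid, T ask) { return (bid + ask) / 2; }

                static_assert(mid(99.0, 101.0) == 100.0);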

            • > I'm not a fan of the C++26 linalg stuff.

              I don't agree at all. For most, linear algebra is the primary reason they pick up C++. Up until now, the best option C++ newbies had was to go through arcane processes to onboard a high-performance BLAS implementation, which then required even more arcane steps such as tuning.

              With C++26, anyone can simply jump into implementing algorithms.

              If anything, BLAS support was conspicuously missing from C++ (and also C).

              This blend of comments is all the more perplexing given that a frequent criticism of C++ is its spartan standard lib, and that a selling point of some commercial software packages such as Matlab is that, unlike in C++, linear algebra work is trivial.
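
              For what it's worth, this is roughly what P1673-style code is supposed to look like - a sketch per the proposal, with header and function names unverified since no shipping toolchain implements <linalg> yet:

                  #include <linalg>   // assumed header, per P1673
                  #include <mdspan>
                  #include <vector>

                  int main() {
                      std::vector<double> A_data{1, 2, 3, 4};   // 2x2, row-major
                      std::vector<double> x_data{1, 1}, y_data{0, 0};

                      std::mdspan A(A_data.data(), 2, 2);       // matrix view
                      std::mdspan x(x_data.data(), 2);          // vector views
                      std::mdspan y(y_data.data(), 2);

                      std::linalg::matrix_vector_product(A, x, y);   // y = A * x
                  }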

      • Prediction: it will be used heavily for things like command-line arg parsing, configuration files, deserialization, and reflection into other languages. It will probably be somewhat of a pain to use, but better than the current mashup of macros/codegen/template metaprogramming we rely on for some of these solutions. It will likely mostly be used in library code, where someone defines some nice utilities that do something useful so that you don't have to worry about it (a sketch of what that might look like follows this comment). I don't think for the most part it has to hurt compile times - it might even be faster than the current mess, with less use of templates.

        I don't think the "legos" vs "shipping" debate here is really valid. One can write any type of code in any language. I'm a freak about C++, but if someone wants to ship in Python or JS, more power to them - one can write code that's fast enough for the difference not to matter while taking advantage of those languages' special features.
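
        For a flavor of what such library code might look like: a sketch of reflection-driven enum parsing for CLI flags, using P2996 (^^ and [: :]) plus P1306 expansion statements. The exact std::meta signatures may differ from what finally ships.

            #include <meta>           // assumed header for std::meta
            #include <optional>
            #include <string_view>

            enum class Mode { fast, safe, debug };

            // Turn "--mode=safe" style strings into enumerators, no macro table needed.
            template <typename E>
            constexpr std::optional<E> parse_enum(std::string_view name) {
                template for (constexpr auto e : std::meta::enumerators_of(^^E)) {
                    if (name == std::meta::identifier_of(e))
                        return [:e:];   // splice the enumerator back into code
                }
                return std::nullopt;
            }
            // parse_enum<Mode>("debug") == Mode::debug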

      • I know the trading firm I work at will be making heavy use of reflection the second it lands… we had a literal party when it made it into the standard.
        • sure, but instagram was created by a handful of people with python and got a billion dollar exit in 2012.
          • What does that have to do with the topic? Warren Buffett made billions without any knowledge of programming or any deeper knowledge of computers.
          • > sure, but instagram was created by a handful of people with python and got a billion dollar exit in 2012.

            Facebook famously felt compelled to hire eminent C++ experts to help them migrate away from their PHP backend. I still recall reading posts on the Instagram Engineering blog on how and where they used C++.

          • And Youtube used Python almost exclusively at the start AFAIK.

            Then again Scott Meyers said he's never written a C++ program professionally.

            • > Then again Scott Meyers said he's never written a C++ program professionally.

              I think you're inadvertently misrepresenting Scott Meyers' claim.

              Cited from somewhere else:

              > I'll begin with what many of you will find an unredeemably damning confession: I have not written production software in over 20 years, and I have never written production software in C++. Nope, not ever.

              He went on to clarify that he made a living out of consultancy, not writing software. He famously retired from C++ in 2015, too.

          • What is this culture of judging everything by the amount of money?

            No one needs a billion dollars; it is practically irrelevant unless you are running on greed.

            • Money is a proxy for value. The post I was responding to seemed to be pointing out how little value there is in something else.
      • I embrace Modern C++, but more slowly than the committee - once the big three compilers have the feature.

        I really think reflection + annotations will give us much better serialization and probably something similar to Python decorators (rough sketch below).

        That will be plenty useful, and it is going to transform part of the C++ ecosystem - for example, I am thinking of editors that need to reflect on data structures, web frameworks such as Crow or Drogon, database access libraries...
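
        The rough sketch I have in mind: the [[=expr]] syntax and std::meta::annotations_of come from the proposals (P2996/P3394); the json:: markers are invented for illustration.

            #include <string>
            #include <string_view>

            // Hypothetical serialization markers; any library could define its own.
            namespace json {
                struct skip_t {};
                inline constexpr skip_t skip{};
                struct rename { std::string_view name; };
            }

            struct User {
                [[=json::rename{"user_id"}]] int id;
                [[=json::skip]]              std::string cached_name;  // not serialized
                std::string                  email;
            };
            // A serializer would walk std::meta::nonstatic_data_members_of(^^User)
            // and consult std::meta::annotations_of(member) to honor the markers.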

      • > And like always, the problem std::meta is purported to solve has been solved for years.

        It is rare to read something more moronic than that

        The Rust equivalent of std::meta (procedural macros) is heavily used everywhere, including in serialization frameworks, debugging tools and tracers.

        And that's not surprising at all: compile-time introspection is much more powerful and lightweight than codegen for exactly the same use cases.

        • > It is rare to read something more moronic than that

          It's not actually wrong though, is it? Real codebases have been implementing reflection and introspection through macro magic etc. for decades at this point.

          I guess it's cool they want to fix it in the language, but, as always, the approach is to make the language even more complex than it already is - e.g. two new operators (!) in the linked article.

          • > been implementing reflection and introspection through macro magic etc. for decades at this point.

            Having a flaky pile of junk as an alternative has never been an excuse not to fix the problem properly.

            Every proper modern language (Rust, Kotlin, Zig, Swift, even freaking Golang) has a form of runtime reflection or static introspection.

            Only C++ does not. Historically it was done with a mess of macros or a pre-compiler (qt-moc), all of which come with an entire pile of issues (see the X-macro sketch at the end of this comment).

            > the approach is to make the language even more complex than it already is - e.g. two new operators

            The problem of rampant complexity in C++ is not so much the new features, when they bring something useful and make sense.

            It is about the inability to remove the old stuff even when there is consensus that it is garbage (e.g. iostreams).
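
            The promised X-macro sketch, for anyone who hasn't suffered that mess - working C++, but stringly-typed and hostile to tooling:

                // One field list, expanded twice: once for the struct, once for metadata.
                #define ORDER_FIELDS(X) X(int, id) X(double, price) X(bool, live)

                struct Order {
                    #define DECLARE(type, name) type name;
                    ORDER_FIELDS(DECLARE)
                    #undef DECLARE
                };

                inline constexpr const char* order_field_names[] = {
                    #define NAME(type, name) #name,
                    ORDER_FIELDS(NAME)
                    #undef NAME
                };
                // Every new use case means yet another macro expansion site.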

      • I bet CERN might eventually replace their Python based code generators with C++26 reflection.
        • Which problem would this solve for them?
          • It would standardize something they've done in an ad-hoc way for decades. They have a library called "reflex" [1] which adds some reflection, and which was (re)written by cannibalizing a lot of llvm code. They actually use the reflection to serialize a lot of the LHC data.

            It's kind of neat that it works. It's also a bit fidgety: the cannibalized code can cause issues (which, e.g. prevented C++11 adoption for a while in some experiments), and now CERN depends on bits of an old C++ compiler to read their data. Some may question the wisdom of making a multi-billion dollar dataset without a spec and dependent on internals of C++ classes (indeed experiments are slowly moving to formats with a clear spec), but for sure having a standard for reflection is better than the home-grown solution they rely on now.

            [1]: https://indico.cern.ch/event/408139/contributions/979831/att...

              • The library you refer to has not been in use for a long time now. The document you pointed to is from 2006 (you can check the creation date).

                Since then a lot has changed, and now it is all based on cling ( https://root.cern/cling/ ), which originates from clang and llvm. cling is responsible for generating the serialization/reflection of the classes needed within the ROOT framework.

              • Good catch: I'm confusing reflex and the cling code that came later. All the issues I mentioned are still there in (or caused by) cling though. Either way standardization in reflection would help.
          • The two-language problem, a well-known issue in engineering tradeoffs.
            • As an example, most of the big JS/TS ecosystem's expansion to the server (RSC/Next/RR7/Expo/...) over the last few years has been driven by the wish to have everything under one roof, in one language.

              People just don't want to maintain two completely different stacks (one on the server, one on the client).

      • > No way anything std::meta gets into serious production

        Rust proc macros get used in serious production, even though they're quite slow to compile. Sure, std::meta is probably a bit clunkier, but that's expected from new C++ features as you say.

        • Sadly, Rust proc macros operate on tokens, and any serious macro implementation needs third-party crates.

          Compile-time reflection with a good, built-in API, akin to C#'s Roslyn, would be a real boon.

          • Any serious anything needs third-party crates. Coming from C++, this has been the most uncomfortable aspect of Rust for me, but I am acclimating.
      • > the problem std::meta is purported to solve has been solved for years.

        What solution is that? A Python script that spits out C++ code?

      • Every problem is solved. We should stop making anything. Especially CRUD apps, because how is that even programming? What does it solve that hasn't been solved?

        This line of thinking is not productive. It is a mistake to see yourself as what you do, because then you're cornering yourself into defending it, no matter what.

      • Yeah, wait till you find out what's behind the curtain in your web engine and AI.

        Hint: it's C++, and yes, it will eventually use stuff like std::meta heavily.

        • If you checked my comments, you would see I am quite aware. And no, it will not, just as it went with streams, ranges and whatever else.
      • What's the solution that's been around for years?

        > ... just like always with newer additions to the C++ standard.

        This is objectively laughable.

        • I was literally running into something a couple of days ago on my toy C++ project where basic compile-time reflection would have been nice to have for some sanity checking.

          And even if it's true that some things can be done already with specific compilers and implementation-specific hacks, it would be really nice to be able to do those things more straightforwardly.

            My experience with recent C++ changes has been that the additions to compile-time metaprogramming improve compile times rather than make them worse, because you don't have to resort to std::enable_if<> hacks and recursive templates for things that a simple generic lambda or a constexpr conditional can do - which are easier on both you and the compiler.
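
            A minimal before/after of what I mean (plain standard C++, nothing invented):

                #include <iostream>
                #include <type_traits>

                // Pre-C++17: two overloads selected through std::enable_if SFINAE
                template <typename T>
                typename std::enable_if<std::is_arithmetic<T>::value>::type
                describe(T) { std::cout << "number\n"; }

                template <typename T>
                typename std::enable_if<!std::is_arithmetic<T>::value>::type
                describe(T) { std::cout << "something else\n"; }

                // C++17: one function, one readable branch, less work for the compiler
                template <typename T>
                void describe17(T) {
                    if constexpr (std::is_arithmetic_v<T>) std::cout << "number\n";
                    else                                    std::cout << "something else\n";
                }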

          • Constexpr if and fold expressions have been a godsend!
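
            Agreed. For anyone who hasn't met folds yet, a before/after (plain standard C++):

                // Pre-C++17: summing a pack needs a recursive overload pair
                template <typename T>
                constexpr auto sum(T t) { return t; }
                template <typename T, typename... Rest>
                constexpr auto sum(T t, Rest... rest) { return t + sum(rest...); }

                // C++17 fold expression: one line, nothing to recurse on
                template <typename... Ts>
                constexpr auto sum17(Ts... ts) { return (ts + ... + 0); }

                static_assert(sum17(1, 2, 3) == 6);
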
          • The history of C++ has been one long loop of:

            1. So many necessary common practices of C++ are far too complicated!

            2. Std committee adds features to make those practices simpler.

            3. C++ keeps adding features. It’s too big. They should cut out the old stuff!

            4. The std committee points at the decade-long Python 3 fiasco.

            5. Repeat.

            • Do they point at Python 3? They were committed to backward compatibility long before Python 3 happened.

              To me it feels like they have fleshed out the key paradigms, so it is not a mess anymore. They are not there yet with compile-time evaluation (constexpr, consteval, ...), at least as of C++20; I'm not sure if it's mostly finished with C++23/26.

              The language itself and the std lib are quite bloated, but writing modern C++ isn't that complicated anymore, in my experience.

            • It's pure Stockholm syndrome. There's even a nice C++ committee paper that summarizes this as "Remember the Vasa!" https://open-std.org/JTC1/SC22/WG21/docs/papers/2018/p0977r0...
        • > What's the solution that's been around for years?

          Build tools that generate C++ code from some other source. Interface description languages, for example, or (going back decades here) even lex and yacc.

          • Great. But you can do anything you want by generating code. Why not have a standard solution, instead of everyone doing their own, possibly buggy thing and complicating their build process even more?
            • Reframe it as "you can do precisely what you need by generating code" and there is your answer.

              Which is far better than relying on a party that, as I said, has precisely nothing to do with what anyone needs - and that will inevitably produce solutions which can only partially (I am being generous here) be used in any particular situation.

              As for "possibly buggy" - look, I can whip up a solid *DL parser complete with a C++ code generator in what, a week? And then polish it from that.

              The committee will work for several years and settle on a barely working design; it will then take some years to land in major compilers, and then it will turn out it is unusable because someone forgot a key API or it was unfeasible on VAX or something like that.

              And my build process is not complicated, and never will be. It can always accommodate another step. Mainly because I don't use CMake.

              • My perception is that C++XY features are widely used in general. Of course there are some nobody uses, but that's not generally true. So your basic assumption is wrong.

                We are at C++20 and I wouldn't like to work for a company that uses an earlier standard.

                • Well, either you carefully vet which C++ features you use, and my assumption still stands, or you don't - in which case I would rather not work at your company.
              • You can write a parser for an IDL, but you can’t reasonably write a parser for C++. So you have to move the definition of whatever types of methods or fields you want to reflect on into the IDL, instead of defining them natively in C++. (Or worse, have separate definitions in both the IDL and C++.) Which tends to be cumbersome – especially if you want to define a lot of generic types (since then the code generator can’t statically determine the full list of types). It can work, but there’s a reason I rarely see anyone using this approach.
                • Why would I want to write a C++ parser?

                  IDL/DDL is the source of truth; moving the type definitions there is the whole point. There is only one definition for each type, in the *DL; the corresponding C++ headers are generated, and everything is statically known.
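
                  To illustrate the flow (the DDL syntax below is invented; the generated C++ is ordinary code):

                      // order.ddl (source of truth, parsed by the in-house generator):
                      //
                      //     struct Order { id: u64; price: f64; }
                      //
                      // Generated C++ header -- plain data plus whatever metadata you need:
                      #include <cstdint>

                      struct Order {
                          std::uint64_t id;
                          double        price;
                      };
                      inline constexpr const char* Order_field_names[] = { "id", "price" };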

          • Debugging or modifying code generated by someone's undocumented C++ code generator is pretty close to the top of my list of unpleasant things to do. Yes, you can eventually figure out what to do by looking at the generated code, taking apart the code generator and figuring out how it all works, but I'll take built-in language features any day.
          • I've been down this road. I ended up with a config YAML (basically an IDL) that goes into a pile of jinja files and C++ templates - and it always ended up better and easier to read when I minimized the amount of jinja (broken syntax highlighting, the fact that you are writing meta-meta code; it's a hot mess). I'd much prefer to generate some bare structs with some minimal additional inline metadata than to generate both those structs and an entire separate set of structs describing the first ones. std::meta lets me do the former; the latter is all that's possible right now.
        • For example, the Boost library's "describe" and similar macro-based solutions. I've been using those for many years.
        • Whip up some kind of in-house IDL/DDL parser, codegen from that.

          Which additions, precisely, do not fit my points?

          • Completely inadequate for many use cases. IDL/DDL is one of the least interesting things you could do with reflection in C++. You can already do a lot of that kind of thing with existing metaprogramming facilities.
            • Which use cases? What exactly can you do with "existing metaprogramming facilities"?
          • Most of the time, I will prefer standard C++ over a full hand-made layer of complexity that needs maintenance.
      • > It's the other way around. You are the real programmer and the committee and the "modern C++" crowd are more interested playing with legos instead of shipping actual software.

        I think this is the most clueless comment I have ever read on HN. I hope the site is not being hit with its blend of September.

        I was going to explain to you how fundamentally wrong your comment is, but it's better to just kindly ask you to post on Reddit instead.

    • I would argue that C++ expertise doesn't necessarily correlate with the complexity of the software being developed. Although I do try to learn the fancy new features, I know many developers who, even though they are still only using C++11 features, are creating some very complex and impactful pieces of software.
      • I definitely think that’s not a coincidence. C++11 is where you get the most useful feature tradeoffs with reasonable costs.

        Smart pointers are a great example. shared_ptr has its issues - it isn't the most performant choice in most cases - but it reduces far more footguns than it introduces.

        Compare that to something like std::variant in the C++17 standard, which comes with bad enough performance issues that it's rarely a good fit.

        • C++11 was for me the first version of C++ where the expressiveness justified the extra complexity relative to C. It was when I finally committed to using C++ instead of C for systems code. In the same sense, C++20 is qualitatively better than C++11 in every way and dramatically reduces the complexity of C++11 while adding many features C++11 needed.
        • Just because someone didn't bother to learn anything past C++11 doesn't mean C++11 is some sort of performance sweet spot.
    • I'm not a C++ developer at all, but unless I'm missing something this didn't seem terribly difficult?

      This isn't meant to make myself seem smart or to make you seem dumb; I'm just curious what was confusing about this, even from a high-level perspective. It felt like a clever but not too atypical metaprogramming thing.

      Maybe I've just done too much Clojure.

    • Library development and application development are activities of a different kind entirely.
    • I know your comment was meant as a tongue-in-cheek funny one, but people should not be intimidated or overawed by the size of the C++ feature set. You don't need to know or use all of it; you can pick and choose based on your needs and how you model your problem. Also, much of the complexity is perceived rather than real, since it takes time to understand and assimilate new concepts. You can program very effectively and productively using just C++98 features (along with C if needed) with no hint of "Modern C++" (never mind the fanbois :-) What this gives you is the ability to use a single language to tackle everything from small, constrained microcontrollers with very limited toolchain support all the way to the latest and greatest toolchain on top-of-the-line processors.
      • Much of the complexity may be perceived, but much is also real, because of the commitment to backwards compatibility and non-breakage, plus the poor default behavior of many things - often due to the C legacy, sometimes due to inopportune choices in earlier versions of the standard. Just think of variable initialization with () and/or {}; the various kinds of implicit casts; the hoops you need to jump through to work with variants; etc. (examples below).

        But I agree that one doesn't have to learn everything, or nearly-everything, to write decent-to-good modern-C++ code.
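
        The promised examples - all of this is legal C++ today:

            #include <vector>

            struct Widget {};

            int main() {
                Widget w();                 // most vexing parse: declares a function!
                Widget w2{};                // actually default-constructs a Widget

                std::vector<int> a(3, 1);   // three ones: {1, 1, 1}
                std::vector<int> b{3, 1};   // two elements: {3, 1}

                int n = 7.9;                // implicit conversion silently yields 7
                // int m{7.9};              // brace-init rejects the narrowing outright
            }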

        • The problem is that many confuse C++ language expertise (often snarkily called being a "language lawyer") with C++ programming expertise. A famous example is Scott Meyers, who is squarely in the first camp and who has publicly stated that he has not written any sizeable C++ programs. Given that C++ is quite a baroque language, it is important for programmers to focus on the second aspect and slowly build up their knowledge of the first over time (most experienced programmers tend to do this in any language).
  • I had to do a UML thing for the first time in years for a class a few weeks ago[2].

    I'm not 100% convinced that UML is actually useful at all. Obviously if you find value in it, don't let me take that from you - by all means keep doing it - but all it seemed to provide was boxes pointing to other boxes for stuff that really wasn't unclear from looking directly at the code anyway. It's really not that hard to look directly at the class and look directly at the "extends" keyword (or the equivalent for whatever language you're using) and then follow from there. Maybe if you had like ten layers of inheritance it could be valuable, but if you're doing ten layers of inheritance there's a good chance that your code will be incomprehensible regardless.

    I'm not against visual diagrams for code, I draw logic out with Draw.io all the time and I've been hacking on the RoboTool [1] toolkit a bit in my free time, but what UML offers always felt more masturbatory than useful.

    Maybe I'm wrong, it certainly wouldn't be the first time, but every time I've tried to convince myself to like it I've left a little disappointed. It always kind of felt like stuff the enterprise world does to look like they're working hard and creating value.

    [1] https://robostar.cs.york.ac.uk/robotool/

    ETA:

    [2] By "class", I meant like an education class, not a Java class.*

    • For many of us UML has been completely irrelevant for decades. If you're deep down the OOP rabbit hole, then UML can have its place in helping you keep track of your hierarchies. If you use it, then I'd assume that getting your process for keeping it updated as automated as possible would be a high priority, unless you want it to rot in some ivory tower.

      Personally, I consider architecture drawn in UML, ArchiMate or draw.io, rather than built with something like icepanel.io, to be a complete waste of my time. But that's just me.

    • UML diagrams are the only pictures that DON’T paint a thousand words
    • You have to rethink your view and understanding of UML - https://en.wikipedia.org/wiki/Unified_Modeling_Language

      It is not just drawing boxes but a visual modeling language providing both static/structural and dynamic/behavioural views of a complete system. You will only understand its value when you actually deal with large systems consisting of many interconnected modules with dependencies. In such large codebases it is almost impossible to understand all the structural/behavioural aspects by browsing code, whereas a tool like Doxygen generating UML diagrams from code becomes a godsend. You can map from UML to code or from code to UML. As with any language, you don't have to know all of it but can focus only on what you need; e.g. class, activity, and state machine diagrams are the ones I have found most useful.

      Finally, UML is now being used as a modeling/specification language frontend to Formal Methods which is the ultimate proof of its usefulness.

      • In wider practice, UML (class diagrams) is never used by working software developers as a frontend to formal methods.

        It got pushed on everyone, so there could be a layer of "software architects" who didn't have to know how to code and could have endless meetings where the final product was a Bayeux Tapestry of UML.

        UML captures inheritance and composition well, but a program is more than the sum of its schema. Also, real programming languages all have their idioms, and using UML as the design space creates a significant impedance mismatch.

  • Reflection really was the missing piece; it's one of the things that are so nice in Java. Being able to serialize/deserialize a struct to JSON fully dynamically saves a lot of code.
    • Nb. This is fully static reflection, not runtime reflection like in Java.
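
      Roughly, the member walk gets unrolled at compile time - a sketch assuming P2996 plus P1306 expansion statements, with approximate std::meta signatures:

          #include <meta>    // assumed header for std::meta
          #include <string>

          // No RTTI, no runtime field lookup: the loop below is expanded
          // per-member during compilation.
          template <typename T>
          std::string to_json(const T& v) {
              std::string out = "{";
              bool first = true;
              template for (constexpr auto m : std::meta::nonstatic_data_members_of(
                                ^^T, std::meta::access_context::current())) {
                  if (!first) out += ',';
                  first = false;
                  out += '"' + std::string(std::meta::identifier_of(m)) + "\":";
                  out += std::to_string(v.[:m:]);   // numeric members only, for brevity
              }
              return out + "}";
          }
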
  • Oh man, some of the code in the linked proposal:

    Old:

        template<class...> struct list {};
    
        using types = list<int, float, double>;
    
        constexpr auto sizes = []<template<class...> class L, class... T>(L<T...>) {
          return std::array<std::size_t, sizeof...(T)>{{ sizeof(T)... }};
        }(types{});
    
    New:

        constexpr std::array types = {^^int, ^^float, ^^double};
        constexpr std::array sizes = []{
          std::array<std::size_t, types.size()> r;
          std::ranges::transform(types, r.begin(), std::meta::size_of);
          return r;
        }();
    
    I'm so tired of parameter packs, as useful as they are. Just give me a regular range-based for loop or something similar, like this. Thank you; this can't come soon enough.
    • This is when I switch to a programming language that doesn't block me from compiling and running just because I forgot some intricate detail. Ironically, I often find assembly programming much friendlier.

      BTW, I continue to maintain some C++ software, and I like cryptopp [1]. I know people now use libsodium.

      [1] https://github.com/weidai11/cryptopp

      • I continue to maintain robotics software that nobody uses, such is life. :)
    • But the standard library should have had the pieces so that we could write:

          constexpr std::array types = {^^int, ^^float, ^^double};
          auto sizes = std::whatever::transform(types, std::meta::size_of);
      
      which would have been even nicer.
      • It would have been, but it looks like the difficulty is that the type of `sizes` must be known at compile time, while `std::transform` and friends don't really know about fixed sizes. Depending on the context, one can do `auto sizes = types | std::views::transform(std::meta::size_of);`; the difficulty comes in materializing the result at the end.
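
        One way to materialize is a small hand-rolled helper (hypothetical; C++23's std::ranges::to targets resizable containers, so the fixed extent still has to be spelled out by the caller):

            #include <algorithm>
            #include <array>
            #include <cstddef>
            #include <ranges>

            // Copy a compile-time-sized range into a std::array of extent N.
            template <std::size_t N, std::ranges::input_range R>
            constexpr auto to_array(R&& r) {
                std::array<std::ranges::range_value_t<R>, N> out{};
                std::ranges::copy(r, out.begin());
                return out;
            }

            // constexpr auto sizes =
            //     to_array<types.size()>(types | std::views::transform(std::meta::size_of));
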
  • If you’re like me and haven’t read much about this feature, here’s a link to the committee’s paper:

    https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2025/p29...

    The examples section was pretty helpful for me.

  • Still waiting for IBMi to support C++11.
  • Meta: why does c++ feel almost political on this forum?
  • This is interesting because it interacts with consteval. It would be cool if the standards committee could somehow figure out how to do codegen from consteval. Then we'd be kinda close to the promised land of procedural macros written in real C++.
    • A lot of the stuff they are working on for C++29 is exactly what you are wishing for (me too, by the way).