• pron
    Yes!

    To me, the uniqueness of Zig's comptime is a combination of two things:

    1. comptime replaces many other features that would be specialised features in other languages, whether or not those languages have rich compile-time (or runtime) metaprogramming, and

    2. comptime is referentially transparent [1], which makes it strictly "weaker" than AST macros but simpler to understand; what's surprising is just how much you can do with a comptime mechanism that has access to introspection yet lacks the referentially opaque power of macros.

    These two give Zig a unique combination of simplicity and power. We're used to seeing things like that in Scheme and other Lisps, but the approach in Zig is very different. The outcome isn't as general as in Lisp, but it's powerful enough while keeping code easier to understand.

    You can like it or not, but it is very interesting and very novel (the novelty isn't in the feature itself, but in the place it has in the language). Languages with a novel design and approach that you can learn in a couple of days are quite rare.

    [1]: In short, this means that you get no access to names or expressions, only the values they yield.
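
    For example (a minimal sketch of my own, not from the article): a comptime parameter arrives as a plain value, so the callee cannot tell how the caller spelled it.

        fn scaled(comptime x: u32) u32 {
            // x is just the value 2 here, whether the call site wrote scaled(2)
            // or scaled(1 + 1); the expression that produced it is never visible
            return x * 10;
        }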

    • I was a bit confused by the remark that comptime is referentially transparent. I'm familiar with the term as it's used in functional programming to mean that an expression can be replaced by its value (stemming from it having no side-effects). However, from a quick search I found an old related comment by you [1] that clarified this for me.

      If I understand correctly you're using the term in a different (perhaps more correct/original?) sense where it roughly means that two expressions with the same meaning/denotation can be substituted for each other without changing the meaning/denotation of the surrounding program. This property is broken by macros. A macro in Rust, for instance, can distinguish between `1 + 1` and `2`. The comptime system in Zig in contrast does not break this property as it only allows one to inspect values and not un-evaluated ASTs.

      [1]: https://news.ycombinator.com/item?id=36154447

      • Yes, I am using the term more correctly (or at least more generally), although the way it's used in functional programming is a special case. A referentially transparent term is one whose sub-terms can be replaced by their references without changing the reference of the term as a whole. A functional programming language is simply one where all references are values or "objects" in the programming language itself.

        The expression `i++` in C is not a value in C (although it is a "value" in some semantic descriptions of C), yet a C expression that contains `i++`, and cannot distinguish between `i++` and any other C operation that increments i by 1, is referentially transparent; that covers pretty much all C expressions except those involving C macros.

        Macros are not referentially transparent because they can distinguish between, say, a variable whose name is `foo` and is equal to 3 and a variable whose name is `bar` and is equal to 3. In other words, their outcome may differ not just by what is being referenced (3) but also by how it's referenced (`foo` or `bar`), hence they're referentially opaque.

      • Those are equivalent, I think. If you can replace an expression by its value, any two expressions with the same value are indistinguishable (and conversely a value is an expression which is its own value).
    • It's not novel. D pioneered compile time function execution (CTFE) back around 2007. The idea has since been adopted in many other languages, like C++.

      One thing it is used for is generating string literals, which then can be fed to the compiler. This takes the place of macros.

      CTFE is one of D's most popular and loved features.

      • pron
        It is novel to the point of being revolutionary. As I wrote in my comment, "the novelty isn't in the feature itself, but in the place it has in the language". It's one thing to come up with a feature. It's a whole other thing to position it within the language. Various compile-time evaluations are not even remotely positioned in D, Nim, or C++ as they are in Zig. The point of Zig's comptime is not that it allows you to do certain computations at compile-time, but that it replaces more specialised features such as templates/generics, interfaces, macros, and conditional compilation. That creates a completely novel simplicity/power balance.

        If the presence of features is how we judge design, then the product with the most features would be considered the best design. Of course, often the opposite is the case. The absence of features is just as crucial for design as their presence. It's like saying that a device with a touchscreen and a physical keyboard has essentially the same properties as a device with only a touchscreen.

        If a language has a mechanism that can do exactly what Zig's comptime does but it also has generics or templates, macros, and/or conditional compilation, then it doesn't have anything resembling Zig's comptime.

        • > Various compile-time evaluations are not even remotely positioned in D, Nim, or C++ as they are in Zig.

          See my other reply. I don't understand your comment.

          https://news.ycombinator.com/item?id=43748490

          • pron
            The revolution in Zig isn't in what the comptime mechanism is able to do, but in how it allows the language to not have other features, which is what gives the language its power-to-simplicity ratio.

            Let me put it like this: Zig's comptime is a general compilation time computation mechanism that has introspection capabilities and replaces generics/templates, interfaces/typeclasses, macros, and conditional compilation.
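
            To sketch what I mean (a toy example of my own, not from the article):

                // one mechanism acting as a generic container constructor with an
                // interface-like constraint, using ordinary compile-time control flow
                // rather than a separate template/metafunction language
                fn Stack(comptime T: type) type {
                    if (@sizeOf(T) == 0) @compileError("zero-sized element types are not supported");
                    return struct {
                        items: []T,
                        len: usize = 0,
                    };
                }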

            It's like how the main design feature of some devices is that they have a touchscreen but not a keyboard. The novelty isn't the touchscreen; it's in the touchscreen eliminating the keyboard. The touchscreen itself doesn't have to be novel; the novelty is how it's used to eliminate the keyboard. If your device has a touchscreen and a keyboard, then it does not have the same design feature.

            Zig's novel comptime is a mechanism that eliminates other specialised features, and if these features are still present, then your language doesn't have Zig's comptime. It has a touchscreen and a keyboard, whereas Zig's novelty is a touchscreen without a keyboard.

            • The example of a comptime parameter to a function is a template, whether you call it that or not :-/ A function template is a function with compile time parameters.

              The irony here is back in the 2000's, many programmers were put off by C++ templates, and found them to be confusing. Myself included. But when I (belatedly) realized that function templates were functions with compile time parameters, I had an epiphany:

              Don't call them templates! Call them functions with compile time parameters. The people who were confused by templates understood that immediately. Then later, after realizing that they had been using templates all along, they became comfortable with templates.

              BTW, I wholeheartedly agree that it is better to have a small set of features that can do the same thing as a larger set of features. But I'm not seeing how comptime is accomplishing that.

              • pron
                > But I'm not seeing how comptime is accomplishing that.

                Because Zig does the work of C++'s templates, macros, conditional compilation, constexprs, and concepts with one relatively simple feature.

                • From the article:

                      fn print(comptime T: type, value: T) void {
                  
                  That's a template. In D it looks like:

                      void print(T)(T value) {
                  
                  which is also a template.
                  • I think another way to put it is that the fact that Zig reuses the keyword "comptime" to denote type-level parameters and to denote compile-time evaluation doesn't mean that there's only one feature. There are still two features (templates and CTFE), just two features that happen to use the same keyword.
                    • pron
                      Maybe you can insist that these are two features (although I disagree), but calling one of them templates really misses the mark. That's because, at least in C++, templates have their own template-level language (of "metafunctions"), whereas that's not the case in Zig. E.g., the fact that C++'s `std::enable_if` is just the regular `if` in Zig makes all the difference (and also shows why there may not really be two features here, only one).
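
                      A small hypothetical sketch of what I mean; the branch on the type is ordinary Zig evaluated at compile time, where C++ would reach for a metafunction:

                          fn Widen(comptime T: type) type {
                              // a plain `if`, not std::enable_if or a specialised metafunction
                              if (T == u8 or T == u16) return u32;
                              return T;
                          }
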
                      • std::enable_if is not the correct comparison, I think you mean "if constexpr"

                        enable_if is mostly deprecated, and was used for overloading not branching, you can use concepts now instead

                      • Agreed. Zig's approach re-uses the existing machinery of the language far more than C++ templates do. Another example of this is that Zig has almost no restrictions on what kinds of values can be `comptime` parameters. In C++, "non-type template parameters" are restricted to a small subset of types (integers, enums, and a few others). Rust's "const generics" are even more restrictive: only integers for now.

                        In Zig I can pass an entire struct instance full of config values as a single comptime parameter and thread it anywhere in my program. The big difference here is that when you treat compile-time programming as a "special" thing that is supported completely differently in the language, you need to add these features in a painfully piecemeal way. Whereas if it's just re-using the machinery already in place in your language, these restrictions don't exist and your users don't need to look up what values can be comptime values...they're just another kind of thing I pass to functions, so "of course" I can pass a struct instance.
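
                        A rough sketch of the kind of thing I mean (hypothetical names, not a real API):

                            const std = @import("std");

                            const Config = struct { verbose: bool, buf_size: usize };

                            fn Pipeline(comptime cfg: Config) type {
                                return struct {
                                    buf: [cfg.buf_size]u8 = undefined, // array length taken from the comptime struct
                                    fn log(_: @This(), msg: []const u8) void {
                                        if (cfg.verbose) std.debug.print("{s}\n", .{msg});
                                    }
                                };
                            }

                            // usage: var p = Pipeline(.{ .verbose = true, .buf_size = 4096 }){};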

                        • > Zig has almost no restrictions on what kinds of values can be `comptime` parameters.

                          Neither does D. The main restriction is that the CTFE code needs to be pure. I.e. you cannot call the operating system in CTFE (this is a deliberate restriction, mainly to avoid clever malware).

                          CTFE isn't "special" in D, either. CTFE is triggered for any instance of a "constant expression" in the grammar, and doesn't require a keyword.

                      • std::enable_if exists to disable certain overloads during overload resolution. Zig has no overloading, so it has no equivalent.
                        • I'd flip it over and say that C++ has overloading & SFINAE to enable polymorphism which it otherwise can't express.
                          • Such as? The basic property of overloading is it's open. Any closed set of overloads can be converted to a single function which does the same dispatch logic with ifs and type traits (it may not be very readable).
                    • They are the same thing though. Conceptually there's a partial evaluation pass whose goal is to eliminate all the comptimes by lowering them to regular runtime values. The apparent different "features" just arise from its operation on the different kinds of program constructs. To eliminate a expression, it evaluates the expression and replaces it with its value. To eliminate a loop, it unrolls it. To eliminate a call to a function with comptime arguments, it generates a specialized function for those arguments and replaces it with a call to the specialized function.
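
                      For example (a toy sketch): specialize on the comptime argument, unroll the loop, and what remains is ordinary runtime code.

                          fn pow(comptime n: u32, x: u32) u32 {
                              var acc: u32 = 1;
                              comptime var i: u32 = 0;
                              // unrolled during partial evaluation; pow(3, x) compiles as if written x * x * x
                              inline while (i < n) : (i += 1) acc *= x;
                              return acc;
                          }
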
      • If I understand TFA correctly, the author claims that D’s approach is actually different: https://matklad.github.io/2025/04/19/things-zig-comptime-won...

        “In contrast, there’s absolutely no facility for dynamic source code generation in Zig. You just can’t do that, the feature isn’t! [sic]

        Zig has a completely different feature, partial evaluation/specialization, which, none the less, is enough to cover most of use-cases for dynamic code generation.”

        • The partial evaluation/specialization is accomplished in D using a template. The example from the link:

              fn f(comptime x: u32, y: u32) u32 {
                  if (x == 0) return y + 1;
                  if (x == 1) return y * 2;
                  return y;
              }
          
          and in D:

              uint f(uint x)(uint y) {
                  if (x == 0) return y + 1;
                  if (x == 1) return y * 2;
                  return y;
              }
          
          The two parameter lists make it a function template, the first set of parameters are the template parameters, which are compile time. The second set are the runtime parameters. The compile time parameters can also be types, and aliased symbols.
          • Here is, I think, an interesting example of the kind of thing TFA is talking about. In case you’re not already familiar, there’s an issue that game devs sometimes struggle with, where, in C/C++, an array of structs (AoS) has a nice syntactic representation in the language and is easy to work with/avoid leaks, but a struct of arrays (SoA) has a more compact layout in memory and better performance.

            Zig has a library that allows you to have an AoS that is laid out in memory like a SoA: https://zig.news/kristoff/struct-of-arrays-soa-in-zig-easy-i... . If you read the implementation (https://github.com/ziglang/zig/blob/master/lib/std/multi_arr...) the SoA is an elaborately specialized type, parameterized on a struct type that it introspects at compile time.

            It’s neat because one might reach for macros for this sort of the thing (and I’d expect the implementation to be quite complex, if it’s even possible) but the details of Zig’s comptime—you can inspect the fields of the type parameter struct, and the SoA can be highly flexible about its own fields—mean that you don’t need a macro system, and the Zig implementation is actually simpler than a macro approach probably would be.
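
            To give a flavour of the introspection involved (a toy sketch of mine, nothing like the real MultiArrayList code):

                const std = @import("std");

                // total byte size of a struct's fields, computed from its type info at
                // compile time; MultiArrayList goes further and reifies a new type whose
                // fields are arrays, one per field of the original struct
                fn bytesPerElement(comptime T: type) usize {
                    comptime var total: usize = 0;
                    inline for (std.meta.fields(T)) |field| {
                        total += @sizeOf(field.type);
                    }
                    return total;
                }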

            • D doesn't have a macro system, either, so I don't understand what you mean.
              • IIUC, it does have code generation—the ability to generate strings at compile-time and feed them back into the compiler.

                The argument that the author of TFA is making is that Zig’s comptime is a very limited feature (which, they argue, is good. It restricts users from introducing architecture dependencies/cross-compilation bugs, is more amenable to optimization, etc), and yet it allows users to do most of the things that more general alternatives (such as code generation or a macro system) are often used for.

                In other words, while Zig of course didn’t invent compile-time functions (see lisp macros), it is notable and useful from a PL perspective if Zig users are doing things that seem to require macros or code generation without actually having those features. D users, in contrast, do have code generation.

                Or, alternatively: while many languages support metaprogramming of some kind, Zig’s metaprogramming language is at a unique maximum of safety (which macros and code generation lack) and utility (which e.g. Java/Go runtime reflection, which couldn’t do the AoS/SoA thing, lacks)

                Edit Ok, I think Zig comptime expressions are just like D templates, like you said. The syntax is nicer than C++ templates. Zig’s “No host leakage” (to guarantee cross-compile-ability) looks like the one possibly substantively different thing.

                • > Zig’s “No host leakage” (to guarantee cross-compile-ability) looks like the one possibly substantively different thing.

                  That is a good idea, but could be problematic if one relies on size_t, which changes in size from 32 to 64 bit. D's CTFE adds checks for undefined behavior, such as shifting by more bits than are in the type being shifted. These checks are not done at runtime for performance reasons.

                  D's CTFE also does not allow calling the operating system, and only works on functions that are "pure".

                  • Because Zig supports cross-compilation, what you care about isn't the host -- the machine that runs the compiler -- but the target, which is not (necessarily) the same as the host. While information about the host isn't made available, information about the compilation target is: https://ziglang.org/documentation/master/#Compile-Variables
          • Using a different type vs. a different syntax can be an important usability consideration, particularly since D also has templates and other features, where Zig provides only the comptime type for all of them. Homogeneity can also be a nice usability win, though there are downsides as well.
            • Zig's use of comptime in a function argument makes it a template :-/

              I bet if you use such a function with different comptime arguments, compile it, and dump the assembler you'll see that function appearing multiple times, each with somewhat different code generated for it.

              • > Zig's use of comptime in a function argument makes it a template :-/

                That you can draw an isomorphism between two things does not mean they are ergonomically identical.

                • When we're responding to quite valid points about other languages having essentially the same features as Zig with subjective claims about ergonomics, the idea that Zig comptime is "revolutionary" is looking awfully flimsy. I agree with Walter: Zig isn't doing anything novel. Picking some features while leaving others out is something that every language does; if doing that is enough to make a language "revolutionary", then every language is revolutionary. The reality is a lot simpler and more boring: for Zig enthusiasts, the set of features that Zig has appeals to them. Just like enthusiasts of every programming language.
                  • > Picking some features while leaving others out is something that every language does; if doing that is enough to make a language "revolutionary", then every language is revolutionary.

                    Picking a set of well motivated and orthogonal features that combine well in flexible ways is definitely enough to be revolutionary if that combination permits expressive programming in ways that used to be unwieldy, error-prone or redundant, eg. "redundant" in the sense that you have multiple ways of expressing the same thing in overlapping but possibly incompatible ways. It doesn't follow that every language must be revolutionary just because they pick features too, there are conditions to qualify.

                    For systems programming, I think Zig is revolutionary. I don't think any other language matches Zig's cross-compilation, cross-platform and metaprogramming story in such a simple package. And I don't even use Zig, I'm just a programming language theory enthusiast.

                    > I agree with Walter: Zig isn't doing anything novel.

                    "Novel" is relative. Anyone familiar with MetaOCaml wouldn't have seen Zig as particularly novel in a theoretical sense, as comptime is effectively a restricted multistage language. It's definitely revolutionary for an industry language though. I think D has too much baggage to qualify, even if many Zig expressions have translations into D.

                  • > Picking some features while leaving others out is something that every language does; if doing that is enough to make a language "revolutionary", then every language is revolutionary.

                    You can say that about the design of any product. Yet, once in a while, we get revolutionary designs (even if every feature in isolation is not completely novel) when the choice of what to include and what to leave out is radically different from other products in the same category in a way that creates a unique experience.

                  • >for Zig enthusiasts, the set of features that Zig has appeals to them. Just like enthusiasts of every programming language.

                    I find it rather amusing that it's a Java and a Rust enthusiast who are extolling the Zig approach here! I am not particularly well read with respect to programming languages, but I don't recall many languages which define a generic pair as

                        fn Pair(A: type, B: type) type {
                            return struct { fst: A, snd: B };
                        }
                    
                    The only one that comes to mind is 1ML, and I'd argue that it is also revolutionary.
                    • Well, if you strip away the curly braces and return statement, that's just a regular type definition. Modeling generic types as functions from types to types is just System F, which goes back to 1975. Turing-complete type-level programming is common in tons of languages, from TypeScript to Scala to Haskell.

                      I think the innovation here is imperative type-level programming--languages that support type-level programming are typically functional languages, or functional languages at the type level. Certainly interesting, but not revolutionary IMO.

                      • The thing is, this is not type-level programming, this is term-level programming. That there's no separate language of types is the feature. Functional/imperative is orthogonal. You can imagine functional Zig which writes

                            Pair :: type -> type -> type
                            let Pair a b = product a b 
                        
                        This is one half of the innovation, dependent-types lite.

                        The second half is how every other major feature is expressed _directly_ via comptime/partial evaluation, not even syntax sugar is necessary. Generic, macros, and conditional compilation are the three big ones.
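
                        For instance, conditional compilation is just an `if` over a comptime-known value describing the target (a minimal sketch of my own):

                            const builtin = @import("builtin");

                            fn pathSep() u8 {
                                // the dead branch is discarded at compile time, per compilation target
                                return if (builtin.os.tag == .windows) '\\' else '/';
                            }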

                        • > This is one half of the innovation, dependent-types lite.

                          But that's not dependent types. Dependent types are types that depend on values. If all the arguments to a function are either types or values, then you don't have dependent types: you have kind polymorphism, as implemented for example in GHC extensions [1].

                          > The second half is how every other major feature is expressed _directly_ via comptime/partial evaluation, not even syntax sugar is necessary. Generic, macros, and conditional compilation are the three big ones.

                          I'd argue that not having syntactic sugar is pretty minor, but reasonable people can differ I suppose.

                          [1]: https://ghc.gitlab.haskell.org/ghc/doc/users_guide/exts/poly...

                          • > Dependent types are types that depend on values.

                            Like this?

                                fn f(comptime x: bool) if (x) u32 else bool {
                                    return if (x) 0 else false;
                                }
                            • That's still just a function of type ∀K∀L.K → L with a bound on K. From a type theory perspective, a comptime argument, when the function is used in such a way as to return a type, is not a value, even though it looks like one. Rather, true or false in this context is a type. (Yes, really. This is a good example of why Zig reusing the keyword "comptime" obscures the semantics.) If comptime true or comptime false were actually values, then you could put runtime values in there too.
                            • No, dependent types depend on runtime values.
                              • Yeah, that one Zig can not do, hence "-lite".
                                • The point is that comptime isn't dependent types at all. If your types can't depend on runtime values, they aren't dependent types. It's something more like kind polymorphism in GHC (except more dynamically typed), something which GHC explicitly calls out as not dependent types. (Also it's 12 years old [1]).

                                  [1]: https://www.seas.upenn.edu/~sweirich/papers/fckinds.pdf

                    • I might be misunderstanding something, but this is how it works in D:

                          struct Pair(A, B) { A fst; B snd; }
                      
                          Pair!(int, float) p; // declaration of p as instance of Pair
                      
                      It's just a struct with the addition of type parameters.
                  • pron
                    I'm sorry, but not being able to see that a design that uses a touchscreen to eliminate the keyboard is novel despite the touchscreen itself having been used elsewhere alongside a keyboard, shows a misunderstanding of what design is.

                    Show me the language that used a general-purpose compile-time mechanism to avoid specialised features such as generics/templates, interfaces/typeclasses, macros, and conditional compilation before Zig, then I'll say that language was revolutionary.

                    I also find it hard to believe that you can't see how replacing all these features with a single one (that isn't AST macros) is novel. I'm not saying you have to think it's a good idea -- that's a matter of personal taste (at least until we can collect more objective data) -- but it's clearly novel.

                    I don't know all the languages in the world and it's possible there was a language that did that before Zig, but none of the languages mentioned here did. Of course, it's possible that no other language did that because it's stupid, but that doesn't mean it's not novel (especially as the outcome does not appear stupid on the face of it).

                    • But Zig's comptime only approximates the features you mentioned; it doesn't fully implement them. Which is what the original article is saying. To use your analogy, using a touchscreen to eliminate a keyboard isn't very impressive if your touchscreen keyboard is missing keys.

                      If you say that incomplete implementations count, then I could argue that the C preprocessor subsumes generics/templates, interfaces/typeclasses†, macros, and conditional compilation.

                      †Exercise for the reader: build a generics system in the C preprocessor that #error's out if the wrong type is passed using the trick in [1].

                      [1]: https://stackoverflow.com/a/45450646

                      • pron
                        > But Zig's comptime only approximates the features you mentioned; it doesn't fully implement them

                        That's like saying that a touchscreen device without a keyboard only approximates a keyboard but doesn't fully implement one. The important thing is that the feature performs the duty of those other features.

                        > If you say that incomplete implementations count, then I could argue that the C preprocessor subsumes generics/templates, interfaces/typeclasses†, macros, and conditional compilation.

                        There are two problems with this, even if we assumed that the power of C's preprocessor is completely equivalent to Zig's comptime:

                        First, C's preprocessor is a distinct meta-language; one major point of Zig's comptime is that the metalanguage is the same language as the object language.

                        Second, it's unsurprising that macros -- whether they're more sophisticated or less -- can do the role of all those other features. As I wrote in my original comment (https://news.ycombinator.com/item?id=43745438) one of the exciting things about Zig is that a feature that isn't macros (and is strictly weaker than macros, as it's referentially transparent) can replace them for the most part, while enjoying a greater ease of understanding.

                        I remember that one of my first impressions of Zig was that it evoked the magic of Lisp (at least that was my gut feeling), but in a completely different way, one that doesn't involve AST manipulation and doesn't suffer from many of the problems that make Lisp macros problematic (i.e. creating DSLs with their own rules). I'm not saying it may not have other problems, but that is very novel.

                        I hadn't seen any such fresh designs in well over a decade. Now, it could be that I simply don't know enough languages, but you also haven't named other languages that work on this design principle, so I think my excitement was warranted. I'll let you know if I think that's not only a fresh and exciting design but also a good one in ten years.

                        BTW, I have no problem with you finding Zig's comptime unappealing to your tastes or even believing it suffers from fundamental issues that may prove problematic in practice (although personally I think that, when considering both pros and cons of this design versus the alternatives, there's some promise here). I just don't understand how you can say that the design isn't novel while not naming one other language with a similar core design: a mechanism for partial evaluation of the object language (with access to additional reflective operations) that replaces those other features I mentioned (by performing their duty, if not exactly their mode of operation).

                        For example, I've looked at Terra, but it makes a distinction between the meta language and the object (or "runtime") language.

                        • > The important thing is that the feature performs the duty of those other features.

                          Zig's comptime doesn't do everything that Rust (or Java, or C#, or Swift, etc.) generics do, and I know you know this given your background in type theory. Zig doesn't allow for the inference and type-directed method resolution that Rust or the above languages do, because the "generics" that you create using Zig comptime aren't typechecked until they're instantiated. You can improve the error messages using "comptime if" or whatever Zig calls it (at the cost of a lot of ergonomics), but the compiler still can't reliably typecheck the bodies of generic functions before the compiler does the comptime evaluation.

                          Now I imagine you think that this feature doesn't matter, or at least doesn't matter enough to be worth the complexity it adds to the compiler. (I disagree, of course, because I find reliable IDE autocomplete and inline error messages to be enormously useful when writing generic Rust functions.) But that's the entire point: Zig comptime is not performing the duty of generics; it's approximating generics in a way that offers a tradeoff.

                          When I first looked at Zig comptime, it didn't evoke the "magic of Lisp" at all in me (and I do share an appreciation of simplicity in programming languages, though I feel like Scheme offers more of that than Lisp). Rather, my reaction was "oh, this is basically just what D does", having played with D a decent amount in years prior. Nothing I've seen in the intervening years has changed that impression. Zig's metaprogramming features are a spin on metaprogramming facilities that D thoroughly explored over a decade before Zig came on the scene.

                          Edit: Here's an experiment. Start with D and start removing features: GC, the class system, exceptions, etc. etc. Do you get to something that's more or less Zig modulo syntax? From what I can tell, you do. That's what I mean by "not revolutionary".

                          • pron
                            > Zig doesn't allow for the inference and type-directed method resolution that Rust or the above languages do

                            Well, but Zig also doesn't allow for overloads and always opts for explicitness regardless of comptime, so I would say that's consonant with the rest of the design.

                            > Now I imagine you think that this feature doesn't matter, or at least doesn't matter enough to be worth the complexity it adds to the compiler.

                            I don't care too much about the complexity of the compiler (except in how compilation times are affected), but I do care about the complexity of the language. And yes, there are obviously tradeoffs here, but they're not the same tradeoffs as C++ templates and I think it's refreshing. I can't yet tell how "good" the tradeoff is.

                            > Here's an experiment. Start with D and start removing features: GC, the class system, exceptions, etc. etc. Do you get to something that's more or less Zig modulo syntax?

                            I don't know D well enough to tell. I'd probably start by looking at how D would do this [1]: https://ziglang.org/documentation/master/#Case-Study-print-i...

                            For instance, the notion of a comptime variable (for which I couldn't find an analogue in D) is essential to the point that the "metalanguage" and the object language are pretty much the same language.

                            Interestingly, in Zig, the "metalanguage" is closer to being a superset of the object language whereas in other languages with compile-time phases, the metalanguage, if not distinct, is closer to being a subset. I think Terra is an interesting point of comparison, because there, while distinct, the metalanguage is also very rich.

                            [1] which, to me, gives the "magical Lisp feeling" except without macros.

                            • > the notion of a comptime variable (for which I couldn't find an analogue in D)

                              A comptime variable in D would look like:

                                  enum v = foo(3);
                              
                              Since an enum initialization is a ConstExpression, its initialization must be evaluated at compile time.

                              A comptime function parameter in D looks like:

                                  int mars(int x)(int y) { ... }
                              
                              where the first parameter list consists of compile time parameters, and the second the runtime parameters.

                              D does not have a switch-over-types statement, but the equivalent can be done with a sequence of static-if statements:

                                  static if (is(T == int)) { ... }
                                  else static if (is(T == float)) { ... }
                              
                              Static If is always evaluated at compile time. The IsExpression does pattern matching on types.
                              • A comptime variable in Zig isn't a constant whose value is computed at compile time (that would just be a Zig constant) but rather variable that's potentially mutable by comptime: https://ziglang.org/documentation/master/#Compile-Time-Varia...

                                This is one of the things that allow the "comptime language" to just be Zig, as in this example: https://ziglang.org/documentation/master/#Case-Study-print-i...
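
                                A small sketch of the difference (my own toy example): a comptime var mutated by an unrolled loop, sitting right next to ordinary runtime code.

                                    fn sumBelow(comptime n: usize, base: usize) usize {
                                        comptime var i: usize = 0; // mutable at compile time
                                        var total: usize = base;   // ordinary runtime variable
                                        inline while (i < n) : (i += 1) {
                                            total += i; // unrolled: total += 0; total += 1; ...
                                        }
                                        return total;
                                    }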

                                • You can mutate variables at compile time in D. See the compile time Newton's method example: https://tour.dlang.org/tour/en/gems/compile-time-function-ev...
                                  • I don't think that's the same thing (rather, it's more like ordinary Zig variables in code that's evaluated at compile-time), as there's no arbitrary mixing of compile-time and runtime computation. Again, compare with https://ziglang.org/documentation/master/#Case-Study-print-i...

                                    Anyway, I found this article that concludes that D's compile time evaluation is equivalent in power to Zig's, although it also doesn't cover how comptime variables can be used in Zig: https://renato.athaydes.com/posts/comptime-programming

                                    However, as I've said many times, knowing about the theoretical power of partial evaluation, what excites me in Zig isn't what comptime can do (although I am impressed with the syntactic elegance of the mechanism) but how it is used to avoid adding other features.

                                    A phone with a touchscreen is evolutionary; a phone without a keypad is revolutionary. The revolution is in the unique experience of using "just comptime" for many things.

                                    It is, of course, a tradeoff, and whether or not that tradeoff is "good" remains to be seen, but I think this design is one of the most novel designs in programming languages in many, many years.

                        • >I'm not saying it may not have other problems, but that is very novel.

                          Just to explicitly acknowledge this, it inherits the C++ problem that you don't get type errors inside a function until you call the function and, when that happens, it's not always immediately obvious whether the problem is in the caller or in the callee.

        • that's a comically archaic way of using the verb 'to be', not a grammatical error. you see it in phrases like "to be or not to be", or "i think, therefore i am". "the feature isn't" just means it doesn't exist.
        • Sure, CTFE can be used to generate strings that are later "mixed in" as source code, but it can also be used to execute normal functions whose result is then stored in a compile-time constant (in D that's the `enum` storage class), for example generating an array using a function literal called at compile time:

             enum arr = { return iota(5).map!(i => i * 10).array; }();
             static assert(arr == [0,10,20,30,40]);
        • > the feature isn’t! [sic]

          To be, or not to be... The feature is not.

          (IOW, English may not be the author's native language. I'm fairly sure it means "The feature doesn't exist".)

      • A little bit out of context, I just want to thank you and all the contributors for the D programming language.
      • > D pioneered compile time function execution (CTFE) back around 2007

        Pioneered? Forth had that in the 1970s, and Lisp somewhere in the 1960s (I’m not sure whether the first versions of either had it, so I won’t say 1970 and 1960, respectively), and there may be other or even older examples.

        • True, but consider that Forth and Lisp started out as interpreted languages, meaning the whole thing can be done at compile time. I haven't seen this feature before in a language that was designed to be compiled to machine code, such as C, Pascal, Fortran, etc.

          BTW, D's ImportC C compiler does CTFE, too!! CTFE is a natural fit for C, and works like a champ. Standard C should embrace it.

          • Nitpick: Lisp didn’t start out as an interpreted language. It started as an idea from a theoretical computer scientist, and wasn’t supposed to be implemented. https://en.wikipedia.org/wiki/Lisp_(programming_language)#Hi...:

            "Steve Russell said, look, why don't I program this eval ... and I said to him, ho, ho, you're confusing theory with practice, this eval is intended for reading, not for computing. But he went ahead and did it. That is, he compiled the eval in my paper into IBM 704 machine code, fixing bugs, and then advertised this as a Lisp interpreter, which it certainly was. So at that point Lisp had essentially the form that it has today”

      • You're missing the point. If anything D is littered with features and feature bloat (CTFE included). Zig (as the author of the blog mentions) is more than somewhat defined by what it can't do.
        • I fully agree that the difference is a matter of taste.

          All living languages accrete features over time. D started out as a much more modest language. It originally eschewed templates and operator overloading, for example.

          Some features were abandoned, too, like complex numbers and the "bit" data type.

    • Comptime is often pushed as being something extraordinarily special, when it's not. Many other languages have something similar: Jai, Vlang, Dlang, etc.

      What could be argued is whether Zig's version of it is comparatively better, but that is a very difficult argument to make. Not only because of how differently the languages are used, but because something like an overall comparison of features would be needed to make any kind of convincing case, beyond hyping a particular feature.

      • You didn't read the article, because that's exactly the argument being made (whether or not you think these points have merit):

        > My understanding is that Jai, for example, doesn’t do this, and runs comptime code on the host.

        > Many powerful compile-time meta programming systems work by allowing you to inject arbitrary strings into compilation, sort of like #include whose argument is a shell-script that generates the text to include dynamically. For example, D mixins work that way:

        > And Rust macros, while technically producing a token-tree rather than a string, are more or less the same

        • My comment is a reply to another reader, not to the article directly. The pushback was on the nature of their comment.

          > the uniqueness of Zig's comptime... > You can like it or not, but it is very interesting and very novel...

          While it's true that such features in Zig can be interesting, they are not particularly novel (as other highly knowledgeable readers have pointed out). Zig's comptime is often marketed or hyped as being special, while overlooking that other languages often do something similar but have their own perspectives and reasoning on how metaprogramming and those types of features fit into their language. Not to mention, metaprogramming has its downsides too. It's not all roses.

          The article does seek to make comparisons with other languages, but arguably out of context, as to what those languages are trying to achieve with their feature sets. Comptime should not be looked at in a bubble, but as part of the language as a whole.

          A language creator with an interesting take on metaprogramming in general is Ginger Bill (of Odin), who often has enthusiasts attempt to pressure him into making more extensive use of it in his language, but he pushes back because of the various problems it can cause and has argued that he often comes up with optimal solutions without it. There are different sides to the story, in regards to usage and goals, relative to the various languages being considered.

    • Regarding 2. How are comptime values restricted to total computations? Is it just by the fact that the compiler actually finished, or are there any restrictions on comptime evaluations?
      • Yes, comptime evaluation is restricted to a configurable number of back-branches. 1000 by default, I think.
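
        The quota can be raised explicitly when a comptime computation needs more back-branches (a minimal sketch):

            comptime {
                @setEvalBranchQuota(100_000); // the default is 1000
                // ... a long-running comptime computation goes here ...
            }
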
      • They don't need to be restricted to total computation to be referentially transparent. Non-termination is also a reference.
    • I’ve never managed to understand your years-long[1] manic praise of this feature. Given that you’re a language implementer.

      It’s very cool to be able to just say “Y is just X”. You know in a museum. Or at a distance. Not necessarily as something you have to work with daily. Because I would rather take something ranging from Java’s interface to Haskell’s typeclasses since once implemented, they’ll just work. With comptime types, according to what I’ve read, you’ll have to bring your T to the comptime and find out right then and there if it will work. Without enough foresight it might not.

      That’s not something I want. I just want generics or parametric polymorphism or whatever it is to work once it compiles. If there’s a <T> I want to slot in T without any surprises. And whether Y is just X is a very distant priority at that point. Another distant priority is if generics and whatever else is all just X undernea... I mean just let me use the language declaratively.

      I felt like I was on the idealistic end of the spectrum when I saw you criticizing other languages that are not installed on 3 billion devices as too academic.[2] Now I’m not so sure?

      [1] https://news.ycombinator.com/item?id=24292760

      [2] But does Scala technically count since it’s on the JVM though?

      • pron
        My "manic praise" extends to the novelty of the feature as Zig's design is revolutionary. It is exciting because it's very rare to see completely novel designs in programming languages, especially in a language that is both easy to learn and intended for low-level programming.

        I wait 10-15 years before judging if a feature is "good"; determining that a feature is bad is usually quicker.

        > With comptime types, according to what I’ve read, you’ll have to bring your T to the comptime and find out right then and there if it will work. Without enough foresight it might not.

        But the point is that all that is done at compile time, which is also the time when all more specialised features are checked.

        > That’s not something I want. I just want generics or parametric polymorphism or whatever it is to work once it compiles.

        Again, everything is checked at compile-time. Once it compiles it will work just like generics.

        > I mean just let me use the language declaratively.

        That's fine and expected. I believe that most language preferences are aesthetic, and there have been few objective reasons to prefer some designs over others, and usually it's a matter of personal preference or "extra-linguistic" concerns, such as availability of developers and libraries, maturity, etc..

        > Now I’m not so sure?

        Personally, I wouldn't dream of using Zig or Rust for important software because they're so unproven. But I do find novel designs fascinating. Some even match my own aesthetic preferences.

        • > But the point is that all that is done at compile time, which is also the time when all more specialised features are checked.

          > ...

          > Again, everything is checked at compile-time. Once it compiles it will work just like generics.

          No. My compile-time when using a library with a comptime type in Zig is not guaranteed to work because my user experience could depend on if the library writer tested with the types (or compile-time input) that I am using.[1] That’s not a problem in Java or Haskell: if the library works for Mary it will work for John no matter what the type-inputs are.

          > That's fine and expected. I believe that most language preferences are aesthetic, and there have been few objective reasons to prefer some designs over others, and usually it's a matter of personal preference or "extra-linguistic" concerns, such as availability of developers and libraries, maturity, etc..

          Please don’t retreat to aesthetics. What I brought up is a concrete and objective user experience tradeoff.

          [1] based on https://strongly-typed-thoughts.net/blog/zig-2025#comptime-i...

          • pron
            > No. My compile-time when using a library with a comptime type in Zig is not guaranteed to work because my user experience could depend on if the library writer tested with the types (or compile-time input) that I am using.[1] That’s not a problem in Java or Haskell: if the library works for Mary it will work for John no matter what the type-inputs are.

            What you're saying isn't very meaningful. Even generics may impose restrictions on their type parameters (e.g. typeclasses in Zig or type bounds in Java) and don't necessarily work for all types. In both cases you know at compile-time whether your types fit the bounds or not.

            It is true that the restrictions in Haskell/Java are more declarative, but the distinction is more a matter of personal aesthetic preference, which is exactly what's expressed in that blog post (although comptime is about as different from C++ templates as it is from Haskell/Java generics). Like anything, and especially truly novel approaches, it's not for everyone's tastes, but neither are Java, Haskell, or Rust, for that matter. That doesn't make Zig's approach any less novel or interesting, even if you don't like it. I find Rust's design unpalatable, but that doesn't mean it's not interesting or impressive, and Zig's approach -- again, like it or not -- is even more novel.

            • > What you're saying isn't very meaningful. Even generics may impose restrictions on their type parameters (e.g. typeclasses in Zig or type bounds in Java) and don't necessarily work for all types. In both cases you know at compile-time whether your types fit the bounds or not.

              Java type-bounds is what I mean with declarative. The library author wrote them, I know them, I have to follow them. It’s all spelled out. According to the link that’s not the case with the Zig comptime machinery. It’s effectively duck-typed from the point of view of the client (declaration).

              I also had another source in mind which explicitly described how Zig comptime is “duck typed” but I can’t seem to find it. Really annoying.

              > It is true that the restrictions in Haskell/Java are more declarative, but the distinction is more a matter of personal aesthetic preference, which is exactly what's expressed in that blog post (although comptime is about as different from C++ templates as it is from Haskell/Java generics).

              It’s about as aesthetic as having spelled out reasons (usability) for preferring static typing over dynamic typing or vice versa. It’s really not. At all.

              > , but that doesn't mean it's not interesting or impressive, and Zig's approach -- again, like it or not -- is even more novel.

              I prefer meaningful leaps forward in programming language usability over supposed most-streamlined and clever approaches (comptime all the way down). I guess I’m just a pragmatist in that very narrow area.

              • > According to the link that’s not the case with the Zig comptime machinery. It’s effectively duck-typed from the point of view of the client (declaration).

                It is "duck-typed", but it is checked at compile time. Unlike ducktyping in JS, you know whether or not your type is a valid argument just as you would for Java type bounds -- the compiler lets you know. Everything is also all spelled out, just in a different way.

                > It’s about as aesthetic as having spelled out reasons (usability) for preferring static typing over dynamic typing or vice versa. It’s really not. At all.

                But everything is checked statically, so all the arguments of failing fast apply here, too.

                > I prefer meaningful leaps forward in programming language usability over supposed most-streamlined and clever approaches (comptime all the way down). I guess I’m just a pragmatist in that very narrow area.

                We haven't had "meaningful leaps forward in programming language usability" in a very long time (and there are fundamental reasons for that, and indeed the situation was predicted decades ago). But if we were to have a meaningful leap forward, first we'd need some leap forward and then we could try learning how meaningful it is (which usually takes a very long time). I don't know that Zig's comptime is a meaningful leap forward or not, but as one of the most novel innovations in programming languages in a very long time, at least it's something that's worth a look.

                • > It is "duck-typed", but it is checked at compile time. Unlike ducktyping in JS, you know whether or not your type is a valid argument just as you would for Java type bounds -- the compiler lets you know. Everything is also all spelled out, just in a different way.

                  At this point I will have to defer to Zig users.

                  But the wider point stands whether I am correct about Zig usability or not (mostly leaning on the aforelinked URLs). Plenty of things can be compile-time and yet have widely different usability. Something that relies on unconstrained build-time code generation can be much harder to use than macros, which in turn can be harder to use than something like “constant expressions”, and so on.

      • > Because I would rather take something ranging from Java’s interface to Haskell’s typeclasses since once implemented, they’ll just work. With comptime types, according to what I’ve read, you’ll have to bring your T to the comptime and find out right then and there if it will work. Without enough foresight it might not.

        This was perhaps a bad comparison and I should have compared e.g. Java generics to Zig’s comptime T.

      • Do you have a source for "criticizing other languages not installed on 3 billion devices as too academic" ?

        Without more context, this comment sounds like rehashing old (personal?) drama.

        • pron has been posting about programming languages for years and years, here, in public, for all to see. I guess reading them makes it personal? (We don’t know each other)

          The usual persona is the hard-nosed pragmatist[1] who thinks language choice doesn’t matter and that PL preference is mostly about “programmer enjoyment”.

          [1] https://news.ycombinator.com/item?id=16889706

          Edit: The original claim might have been skewed. Due to occupation the PL discussions often end up being about Java related things, and the JVM language which is criticized has often been Scala specifically. Here he recommends Kotlin over Scala (not Java): https://news.ycombinator.com/item?id=9948798

      • I'm sorry, but I don't understand your complaint about comptime. All the stuff you said you wanted to work (generics, parametric polymorphism, slotting in <T>, etc.) just works with comptime. People praise comptime because it's a simple mechanism that replaces what in many other languages requires separate language features. Comptime is very simple and natural to use. It can just flow with your day-to-day programming without much fuss.
        • comptime can’t outright replace many language features because it chooses different tradeoffs to get to where it wants. You get a “one thing to rule them all” at the expense of less declarative use.

          Which I already said in my original comment. But here’s a source that I didn’t find last time: https://strongly-typed-thoughts.net/blog/zig-2025#comptime-i...

          Academics have thought about evaluating things at compile time (or any time) for decades. No, you can’t just slot in eval at a weird place that no one ever thought of (they did) and immediately solve a suite of problems that other languages use multiple discrete features for (there’s a reason they do that).

          • > comptime can’t outright replace many language features because it chooses different tradeoffs to get to where it wants.

            You're missing the point. I don't have any theory to qualify this, but:

            I've worked in a language with lisp-ey macros, and I absolutely hate hate hate when people build too-clever DSLs that hide a lot of weird shit like creating variable names or pluralizing database tables for me, swapping camel-case and snake case, creating a ton of logic under the hood that's hard to chase.

            Zig's comptime for the most part shies you away from those sorts of things. So yes, it doesn't have full feature parity in the language-theory sense, but it really blocks you or discourages you from shit you don't need to do, please for the love of god don't. Hard to justify theoretically. It's real though.

            It's just something you notice after working with it for while.

            • No, you are clearly missing the point because I laid out concrete critiques about how Zig doesn’t replace certain concrete language features with One Thing to Rule Them All. All in reply to someone complimenting Zig on that same subject.

              That you want to make a completely different point about macros gone wild is not my problem.

    • Has anyone grafted Zig style macros into Common Lisp?
      • That wouldn't be very meaningful. The semantics of Zig's comptime is more like that of subroutines in a dynamic language - say, JavaScript functions - than that of macros. The point is that it's executed, and yields errors, at a different phase, i.e. compile time.
      • The Scopes language might be similar to what you're asking about. Its notion of "spices" which complement the "sugars" feature is a similar kind of constant evaluation. It's not a Common Lisp dialect, though, but it is sexp based.
      • Isn’t this kind of thing sort of the default thing in Lisp? Code is data so you can transform it.
        • There are no limitations on the transformations in lisp. That can make macros very hard to understand. And hard for later program transformers to deal with.

          The innovation in Zig is the restrictions that limit the power of macros.

        • Lisp is so powerful, but without static types you can't even do basic stuff like overloading, and you have to invent a way to even check the type (for custom types) so you can branch on type.
          • > Lisp is so powerful, but <tired old shit from someone who's never used Lisp>.

            You use defmethod for overloading. Types check themselves.

            • And a modern compiler will jmp past the type checks if the inferencer OKs it!
          • > but without static types

            So add static types.

            https://github.com/coalton-lang/coalton

          • No need for overloading when you have CLOS and multi-method dispatch.
      • There isn't really as clear of a distinction between "runtime" and "compile time" in Lisp. The comptime keyword is essentially just the opposite of quote in Lisp. Instead of using comptime to say what should be evaluated early, you use quote to say what should be evaluated later. Adding comptime to Lisp would be weird (though obviously not impossible, because it's Lisp), because that is essentially the default for expressions.
        • The truth of this varies between Lisp based languages.
  • Zig has a completely different feature, partial evaluation/specialization, which is nonetheless enough to cover most of the use-cases for dynamic code generation.

    These kinds of insights are what I love about Zig. Andrew Kelley just might be the patron saint of the KISS principle.

    A long time ago I had an enlightenment experience where I was doing something clever with macros in F#, and it wasn't until I had more-or-less finished the whole thing that I realized I could implement it in a lot less (and more readable) code by doing some really basic stuff with partial application and higher order functions. And it would still be performant because the compiler would take care of the clever bits for me.

    Not too long after that, macros largely disappeared from my Lisp code, too.

  • zig's comptime has some (objectively: debatable? subjectively: definite) shortcomings that the zig community then overcomes with zig build to generate code-as-strings to be later @imported and compiled.

    Practically, it's "zig build"-time eval. As such there's another 'comptime' stage with more freedom: unlimited run time (no @setEvalBranchQuota), it can do IO (DB schema, network lookups, etc.), but you lose the freedom to generate Zig types as values in the current compilation; instead you of course have the freedom to reduce/project from the target's compiled semantics back to input syntax, down to a string, to enter your future compilation context again.

    Back in the day, when I had to glue Perl and Tcl together via C, passing strings generated by Perl through Tcl is what this whole thing reminds me of. Sure it works. I'm not happy about it. There's _another_ "macro" stage that you can't even see in your code (it's just @import).

    The zig community bewilders me at times with their love for lashing themselves. The discussions about which new sort of self-harm they'd love to enforce on everybody are borderline disturbing.

    • > The zig community bewilders me at times with their love for lashing themselves. The discussions about which new sort of self-harm they'd love to enforce on everybody are borderline disturbing.

      Personally, I find the idea that a compiler might be able to reach outside itself completely terrifying (Access the network or a database? Are you nuts?).

      That should be 100% the job of a build system.

      Now, you can certainly argue that generating a text file may or may not be the best way to reify the result back into the compiler. However, what the compiler gets and generates should be completely deterministic.

      • > Personally, I find the idea that a compiler might be able to reach outside itself completely terrifying (Access the network or a database? Are you nuts?).

        What is "itself" here, please? Access a static 'external' source? Access a dynamically generated 'external' source? If that file is generated in the build system / build process as derived information, would you put it under version control? If not, are you as nuts as I am?

        Some processes require sharp tools, and you can't always be afraid to handle one. If all you have is a blunt tool, well, you know how the saying goes for C++.

        > However, what the compiler gets and generates should be completely deterministic.

        The zig community treats 'zig build' as "the compile step", ergo what "the compiler" gets ultimately is decided "at compile, er, zig build time". What the compiler gets, i.e., what zig build generates within the same user-facing process, is not deterministic.

        Why would it be? Generating an interface is something that you want to be part of a streamlined process. Appeasing C interfaces will be moving to a zig build-time multi-step process involving zig's 'translate-c' whose output you then import into your zig file. You think anybody is going to treat that output differently than from what you'd get from doing this invisibly at comptime (which, btw, is what practically happens now)?

        • > The zig community treats 'zig build' as "the compile step", ergo what "the compiler" gets ultimately is decided "at compile, er, zig build time". What the compiler gets, i.e., what zig build generates within the same user-facing process, is not deterministic.

          I know of no build system that is completely deterministic unless you go through the process of very explicitly pinning things. Whereas practically every compiler is deterministic (gcc, for example, would rebuild itself 3 times and compare the last two to make sure they were byte identical). Perhaps there needs to be "zigmeson" (work out and generate dependencies) and "zigninja" (just call compiler on static resources) to set things apart, but it doesn't change the fact that "zig build" dispatches to a "build system" and "zig"/"zig cc" dispatches to a "compiler".

          > Appeasing C interfaces will be moving to a zig build-time multi-step process involving zig's 'translate-c' whose output you then import into your zig file. You think anybody is going to treat that output differently than from what you'd get from doing this invisibly at comptime (which, btw, is what practically happens now)?

          That's a completely different issue, but it illustrates the problem perfectly.

          The problem is that @cImport() can be called from two different modules on the same file. What if there are three? What if they need different versions? What happens when a previous @cImport modifies how that file translates? How do you do link-time optimization on that?

          This is exactly why your compiler needs to run on static resources that have already been resolved. I'm fine with my build system calling a SAT solver to work out a Gordian Knot of dependencies. I am not fine with my compiler needing to do that resolution.

        • > What is "itself"

          If I understand correctly the zig compiler is sandboxed to the local directory of the project's build file. Except for possibly c headers.

          The builder and linker can reach out a bit.

          • at "build time", the default language's build tool, a zig program, can reach anywhere and everywhere. To build a zig project, you'd use a zig program to create dependencies and invoke the compiler, cache the results, create output binaries, link them, etc.

            Distinguishing between `comptime` and `build time` is a distinction from the ivory tower. 'zig build' can happily reach anywhere, and generate anything.

            • It's not just academic, because if you try to @import something from outside the path in your code, you'll not be happy. Moreover, 'zig build' is not the only tool in the zig suite; there are individual compilation commands too. So there are real implications to this.

              It is also helpful for code/security review to have a one-stop place to look to see if anything outside of the git tree/submodule system can affect what's run.

      • > Personally, I find the idea that a compiler might be able to reach outside itself completely terrifying (Access the network or a database? Are you nuts?).

        It’s not the compiler per se.

        Let’s say you want a build system that is capable of generating code. Ok we can all agree that’s super common and not crazy.

        Wouldn’t it be great if the code that generated Zig code were also written in Zig? Why should codegen code be written in some completely unrelated language? Why should developers have to learn a brand new language to do compile-time codegen? Why yes, Rust macros, I’m staring angrily at you!

      • > Personally, I find the idea that a compiler might be able to reach outside itself completely terrifying (Access the network or a database? Are you nuts?).

        Why though? F# has this feature called TypeProviders where you can emit types to the compiler. For example, you can do:

           type DbSchema = PostgresTypeProvider<"postgresql://postgres:...">
           type WikipediaArticle = WikipediaTypeProvider<"https://wikipedia.org/wiki/Hello">
        
        
        and now you have a type that references that Article or that DB. You can treat it as if you had manually written all those types. You can fully inspect it in the IDE, debugger or logger. It's a full type that's autogenerated in a temp directory.

        When I first saw it, I thought it was really strange. Then I thought about it a bit, played with it, and decided it was brilliant. Literally one of the smartest ideas ever. It's a first-class codegen framework. There were some limitations, but still.

        After using it in a real project, you figure out why it didn't catch on. It's so close, but it's missing something. Just one thing is out of place. The interaction is painful for anything that's not a file source, like a CsvTypeProvider or a public internet URL. It also creates this odd dependency in your code that can't be source-controlled or reproduced. There were hacks and workarounds, but nothing felt right for me.

        It was, however, the best attempt at a statically typed language imitating Python or JavaScript scripting ergonomics, where you just put in a DB URI and start assuming types.

      • >Personally, I find the idea that a compiler might be able to reach outside itself completely terrifying (Access the network or a database? Are you nuts?).

        In gamedev, code is a small part of the end product. "Data-driven" is the term if you want to look it up. Doing an optimization pass that partially evaluates data+code together as part of the build is normal. Code has a 'development version' that supports data modifications and a 'shipping version' that can assume the data is known.

        The more traditional example of PGO+LTO is just another example how code can be specialized for existing data. I don't know a toolchain that survives change of PGO profiling data between builds without drastic changes in the resulting binary.

        • Is the PGO data not a static file which is then fed into the compiler? That still gives you a deterministic compiler, no?
      • > Personally, I find the idea that a compiler might be able to reach outside itself completely terrifying (Access the network or a database? Are you nuts?).

        Yeah, although so can build.rs or whatever you call in your Makefile. If something like cargo would have built-in sandboxing, that would be interesting.

        • You can run cargo in a sandbox.
          • Yeah, but I want cargo to do that for me. And tell me if any build.rs does something it shouldn't.
      • > That should be 100% the job of a build system.

        What is the primary difference between build system and compiler in your mind? Why not have the compiler know how to build things, and so compile-time codegen you want to put in the build system, happens during compilation?

      • They are not advocating for IO in the compiler, but for everything else that other languages can do with macros: run commands at compile time, generate code, read code, modify code. It's proven to be very useful.
        • I'm going to make you defend the statement that they are "useful". I would counter that macros are "powerful".

          However, "macros" are a disaster to debug in every language that they appear. "comptime" sidesteps that because you can generally force it to run at runtime where your normal debugging mechanisms work just fine (returning a type being an exception).

          "Macros" generally impose extremely large cognitive overhead and making them hygienic has spawned the careers of countless CS professors. In addition, macros often impose significant compiler overhead (how many crates do Rust's proc-macros pull in?).

          It is not at all clear that the full power of general macros is worth the downstream grief that they cause (I also hold this position for a lot of compiler optimizations, but that's a rant for a different day).

          • > However, "macros" are a disaster to debug in every language that they appear.

            I have only used proper macros in Common Lisp, but at least there they are developed and debugged just like any other function. You call `macroexpand` in the repl to see the output of the macro and if there's an error you automatically get thrown in the same debugger that you use to debug other functions.

            • So, for debugging, we're already in the REPL--which means an interactive environment and the very significant amount of overhead baggage that goes with that (heap allocation, garbage collection, tty, interactive prompt, overhead of macroexpand, etc.).

              At the very least, that places you outside the boundary of a lot of the types of system programming that languages like C, C++, Rust, and Zig are meant to do.

      • Personally, I find the idea of needing something called a "build system" completely terrifying.
    • I actually like build-time code generation MUCH MORE than, let's say, run-time JVM bytecode patching. Using an ORM in Java is like playing with magic, you never know what works or how. Using an ORM with code generation is much nicer, suddenly my IDE can show me what each function does, I can debug them and reason about them.
    • You're complaining about generating code...

      While I agree that's typically a bad idea, this seems to have nothing to do specifically with zig.

      I get how you start with the idea that there's something deficient in zig's comptime causing this, but... what?

      I also have some doubts about how commonly used free-form code generation is with zig.

    • I consider it a feature, as the similar feature in C# requires me to dabble in MSBuild props and targets, which are very unfriendly. Moreover, this kind of support is what makes JS special and the JS ecosystem innovative.
    • Learning XS (maybe with Swig?) was a great way to actually understand Perl.
    • The zig community cares about compilation speed. Unrestricted comptime would be quite disastrous for that.
      • I feel that's such a red herring.

        You can set @setEvalBranchQuota essentially as big as you want, @embedFile an XML file, comptime parse it and generate types based on that (BTDT; see the sketch at the end of this comment). You can already slow down compilation as much as you want. Unrestricting the expressiveness of comptime has about as much to do with compile times as the current restrictions do, or as the perceived entanglement of zig build and build.zig does.

        The knife about unrestricted / restricted comptime cuts both ways. Have you considered stopping using comptime and generating strings for cacheable consumption of portable zig code for all the currently supported comptime use-cases right now? Why wouldn't you? What is it that you feel is more apt to be done at comptime? Can you accept that others see other use-cases that don't align with andrewrk's (current) vision? If I need to update a slow generation step at 'project buildtime', your 'compilation speed' argument tanks as well. It's the problem space that dictates the minimal/optimal solution, not the programming language designer's headspace.
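
        A much-simplified sketch of that @embedFile-plus-comptime-parsing idea (the file name "schema.txt" and the quota value are made up): the embedded file's line count is computed at comptime and baked into an array length, with no zig build step involved.

            // "schema.txt" is a hypothetical file embedded into the compilation.
            const schema = @embedFile("schema.txt");

            // Count its lines entirely at compile time.
            const line_count = blk: {
                @setEvalBranchQuota(100_000); // bigger embedded files need a bigger quota
                var n: usize = 0;
                for (schema) |c| {
                    if (c == '\n') n += 1;
                }
                break :blk n;
            };

            // The embedded file's contents now shape a type: an array sized per line.
            var per_line_counters: [line_count]u32 = [_]u32{0} ** line_count;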

    • It does share a lot of that with other communities like Odin, Go, Jai, ...

      I don't really get it; it's a "let's go back to the old days because it is cool" kind of vibe.

      Ironically, none of this matters in the long term, as eventually LLMs will be producing binaries directly.

  • The quote in Spanish about a Norse god is from a story by Jorge Luis Borges, here's an English translation: https://biblioklept.org/2019/04/02/the-disk-a-very-short-sto...
    • If you have read the story and, like me, are still wondering which part of the story is the quote at the top of the post:

      "It's Odin's Disc. It has only one side. Nothing else on Earth has only one side."

      • A mobius strip does!
        • A mobius strip made out of paper has 2 sides, the usual one and the edge.
          • How about things like Klein bottles that have no edges? (Although I guess that unlike a Mobius strip it's not possible to make a real one here on Earth so the quote from OP still holds)
    • And in Spanish here: https://www.poeticous.com/borges/el-disco?locale=es

      (Not having much Spanish, I at first thought "Odin's disco(teque)" and then "no, that doesn't make sense about sides", but then, surely primed by English "disco", thought "it must mean Odin's record/lp/album".)

      • Odin's records have no B-sides, because everything Odin writes is fire!
        • Back when things really had A and B sides, it was moderately common for big artists to release a "Double A" in which both titles were heavily promoted, e.g. Nirvana's "All Apologies" and "Rape Me" are a double A, the Beatles "Penny Lane" and "Strawberry Fields Forever" likewise.
    • The story is indeed very short, but hits hard. Odin reveals himself and his mystical disc, which he states makes him king as long as he holds it. The Christian (by circumstance) hermit who had previously received him told him he didn't worship Him, that he worshiped Christ instead, and then murdered him for the disc in the hopes he could sell it for a bunch of money. He dumped Odin's body in the river and never found the disc. The man hates Odin to this day for not just handing the disc over to him.

      I wonder if there's some message in here. As a modern American reader, if I believed the story was contemporary, I'd think it's making a point about Christianity substituting honor for destructive greed. That a descendant of the wolves of Odin would worship a Hebrew instead and kill him for a bit of money is quite sad, but I don't think it an inaccurate characterization. There's also the element of resentment towards Odin for not just handing over monetary blessings. That's sad to me as well. Part of me hopes that one day Odin isn't held in such contempt.

  • What makes comptime really interesting is how fluid it is as you work.

    At some point you realize you need type information, so you just add it to your func params.

    That bubbles all the way up and you are done. Or you realize that in a certain situation it is not possible to provide the type and you need to solve an arch/design issue.

    • If the type that you're passing as an argument is the type of another argument, you can keep the API simpler by just using @TypeOf(arg) internally in the function instead.
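
      Something like this, as a toy sketch (the function is made up):

          // Instead of `fn addSat(comptime T: type, a: T, b: T) T`, derive the
          // type from the argument itself:
          fn addSat(a: anytype, b: @TypeOf(a)) @TypeOf(a) {
              return a +| b; // saturating add, just to give the sketch a body
          }

          // Call site: addSat(@as(u8, 250), 10) == 255, with no explicit type argument.
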
  • > When you execute code at compile time, on which machine does it execute? The natural answer is “on your machine”, but it is wrong!

    I don’t understand this.

    If I am cross-compiling a program is it not true that comptime code literally executes on my local host machine? Like, isn’t that literally the definition of “compile-time”?

    If there is an endian architecture change I could see Zig choosing to emulate the target machine on the host machine.

    This feels so wrong to me. HostPlatform and TargetPlatform can be different. That’s fine! Hiding the host platform seems wrong. Can someone explain why you want to hide this seemingly critical fact?

    Don’t get me wrong, I’m 100% on board the cross-compile train. And Zig does it literally better than any other compiled language that I know. So what am I missing?

    Or wait. I guess the key is that, unlike Jai, comptime Zig code does NOT run at compile time. It merely refers to things that are KNOWN at compile time? Wait that’s not right either. I’m confused.

    • The point is that something like sizeof(pointer) should have the same value in comptime code that it has at runtime for a given app. Which, yes, means that the comptime interpreter emulates the target machine.

      The reason is fairly simple: you want comptime code to be able to compute correct values for use at runtime. At the same time, there's zero benefit to not hiding the host platform in comptime, because, well, what use case is there for knowing e.g. the size of pointer in the arch on which the compiler is running?
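
      As a tiny sketch: cross-compile the following for a 32-bit target from a 64-bit host, and the comptime-evaluated constant is 4, not 8, because comptime models the target.

          const std = @import("std");

          // Evaluated by the comptime interpreter, but it reports the *target's*
          // pointer size, so runtime code can safely rely on it.
          const ptr_size = @sizeOf(usize);

          pub fn main() void {
              std.debug.print("pointer size on the target: {d} bytes\n", .{ptr_size});
          }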

      • > Which, yes, means that the comptime interpreter emulates the target machine.

        Reasonable if that’s how it works. I had absolutely no idea that Zig comptime worked this way!

        > there's zero benefit to not hiding the host platform in comptime

        I don’t think this is clear. It is possibly good to hide host platform given Zig’s more limited comptime capabilities.

        However in my $DayJob an extremely common and painful source of issues is trying to hide host platform when it can not in fact be hidden.

        • Can you give an example of a use case where you wouldn't want comptime behavior to match runtime, but instead expose host/target differences?
          • Let’s pretend I was writing some compile-time code that generates code. For example maybe I’m generating serde code. Or maybe I’m generating bindings for C, Python, etc.

            My generation code is probably going to allocate some memory and have some pointers and do some stuff. Why on earth would I want this compile-time code to run on an emulated version of the target platform? If I’m on a 64-bit platform then pointers are 8-bytes why would I pretend they aren’t? Even if the target is 32-bit?

            Does that make sense? If the compiletime code ONLY runs on the host platform then you plausibly need to expose both host and target.

            I’m pretty sure I’m thinking about zig comptime all wrong. Something isn’t clicking.

            • It sounds like the sort of compile-time code that you're talking about is closer to "buildtime" code in Zig, that is Zig code compiled for the host platform and executed by the build system to generate code (or data) to be used when compiling for the target system. As it stands now, there's absolutely nothing special about buildtime code in Zig other than Zig's build system providing good integration.

              On the other hand, "comptime" is actually executed within the compiler similar to C++'s `consteval`. There's no actual "emulation" going on. The "emulation" is just ensuring that any observable characteristic of the platform matches the target, but it's all smoke and mirrors. You can create pointers to memory locations, but these memory locations and pointers are not real. They're all implemented using the same internal mechanisms that power the rest of the compilation process. The compiler's logic to calculate the value of a global constant (`const a: i32 = 1 + 2;`) is the "comptime" that allows generic functions, ORMs, and all these other neat use cases.
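
              A rough sketch of that point (names made up): the function below is evaluated inside the compiler exactly like the `1 + 2` initializer above, it just happens to return a type.

                  const std = @import("std");

                  // A comptime function that returns a type; the compiler evaluates
                  // the body the same way it folds `const a: i32 = 1 + 2;`.
                  fn Pair(comptime T: type) type {
                      return struct {
                          first: T,
                          second: T,
                      };
                  }

                  test "Pair is an ordinary type at runtime" {
                      const p = Pair(u8){ .first = 1, .second = 2 };
                      try std.testing.expect(p.first + p.second == 3);
                  }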

            • > Why on earth would I want this compile-time code to run on an emulated version of the target platform? If I’m on a 64-bit platform then pointers are 8-bytes why would I pretend they aren’t? Even if the target is 32-bit? Does that make sense?

              Nope, sorry, to me it doesn't. If you're cross-compiling for some other platform, then yes, I'd think you want the generated binary to be compatible with the target platform. And in order to verify that that binary code is correct for that target platform, you need to "allocate some memory and have some pointers and do some stuff" as you do on that platform.

              So why on Earth would you want stuff -- like pointer sizes and whatnot -- to not be compatible with the target platform, but with whatever you happen to be compiling on? What good is pointer size compatibility with your compiling platform to a user of your end-result binary on the target platform? Looks like the mother of all it-worked-on-my-machine statements: "Whaddaya mean it has memory allocation errors on your machine? I ran it as if for my totally-different machine at compile-time, so of course it works on yours!"

  • > Zig’s comptime feature is most famous for what it can do: generics!, conditional compilation!, subtyping!, serialization!, ORM! That’s fascinating, but, to be fair, there’s a bunch of languages with quite powerful compile time evaluation capabilities that can do equivalent things.

    I'm curious what are these other languages that can do these things? I read HN regularly but don't recall them. Or maybe that's including things like Java's annotation processing which is so clunky that I wouldn't classify them to be equivalent.

    • Yeah, I'm not a big fan of annotation processing either. It's simultaneously heavyweight and unwieldy, and yet doesn't do enough. You get all the annoyance of working with a full-blown AST, and none of the power that comes with being able to manipulate an AST.

      Annotations themselves are pretty great, and AFAIK, they are most widely used with reflection or bytecode rewriting instead. I get that the maintainers dislike macro-like capabilities, but the reality is that many of the nice libraries/facilities Java has (e.g. transparent spans), just aren't possible without AST-like modifications. So, the maintainers don't provide 1st class support for rewriting, and they hold their noses as popular libraries do it.

      Closely related, I'm pretty excited to muck with the new class file API that just went GA in 24 (https://openjdk.org/jeps/484). I don't have experience with it yet, but I have high hopes.

      • pron
        Java's annotation processing is intentionally limited so that compiling with them cannot change the semantics of the Java language as defined by the Java Language Specification (JLS).

        Note that more intrusive changes -- including not only bytecode-rewriting agents, but also the use of those AST-modifying "libraries" (really, languages) -- require command-line flags that tell you that the semantics of code may be impacted by some other code that is identified in those flags. This is part of "integrity by default": https://openjdk.org/jeps/8305968

        • Just because something mucks with a program's AST doesn't mean that it's introducing a new "language". You wouldn't call using reflection, "creating a new language", either, and many of these libraries can be implemented either way. (Usually a choice between adding an additional build step, runtime overhead, and ease of implementation). It just really depends upon the details of the transform.

          The integrity by default JEPs are really about trying to reduce developers depending upon JDK/JRE implementation details, for example, sun.misc.Unsafe. From the JEP:

          "In short: The use of JDK-internal APIs caused serious migration issues, there was no practical mechanism that enabled robust security in the current landscape, and new requirements could not be met. Despite the value that the unsafe APIs offer to libraries, frameworks, and tools, the ongoing lack of integrity is untenable. Strong encapsulation and the restriction of the unsafe APIs — by default — are the solution."

          If you're dependent on something like ClassFileTransformer, -javaagent, or setAccessible, you'll just set a command-line flag. If you're not, it's because you're already doing this through other means like a custom ClassLoader or a build step.

          • pron
            > Just because something mucks with a program's AST doesn't mean that it's introducing a new "language".

            That depends on the language specification. The Java spec dictates what code a Java compiler must accept and must reject. Any "mucking with AST" that changes that is, by definition, not Java. For example, many Lombok programs are clearly not written in Java because the Java spec dictates that a Java compiler (with or without annotation processors) must reject them.

            In Scheme or Clojure, user-defined AST transformations are very much part of the language.

            > The integrity by default JEPs are really about trying to reduce developers depending upon JDK/JRE implementation details

            I'm one of the JEP's authors, and it concerns multiple things. In general, it concerns being able to make guarantees about certain invariants.

            > If you're not, it's because you're already doing this through other means like a custom ClassLoader or a build step.

            Custom class loaders fall within integrity by default, as their impact is localised. Build step transforms also require an explicit run of some executable. The point of integrity by default is that any possibility of breaking invariants that the spec wishes to enforce must require some visible, auditable step. This is to specifically exclude invariant-breaking operations by code that appears to be a regular library.

            • Thanks for clarifying your role in the JEP.

              I feel like we're talking right past one another. The ultimate reality is that annotation processors are pretty terrible for implementing functionality that a lot of Java developers depend upon. You could say annotation processors "weren't designed for that", but then you're just agreeing with me. This is sad, because arguably something quite similar to annotation processors could make the jobs of all of these developers a lot easier, instead of having them falling back to other mechanisms.

              If your concern is integrity by default, why not just add yet another flag for can-muck-with-the-ast-annotation-processors? Or we can continue with the status quo.

              • pron
                > If your concern is integrity by default, why not just add yet another flag for can-muck-with-the-ast-annotation-processors?

                There is such a flag (or, rather, a set of flags), and that's exactly what the Lombok compiler uses to change javac to compile Lombok sources rather than Java sources.

                However, we think there are much better solutions to the problem those languages try to solve than allowing AST manipulation.

                • You've referenced Lombok a lot here, and some Google searches later, I can see that you're in conversations all over the internet re: Lombok (and similar projects like Manifold). Their purpose is to extend the Java language. The class of code I'm referring to is more like those you already mention in your JEP: logging, tracing, profiling, serialization, authn/authz, mocking, ffi, and so on. I would describe all of those as fitting under the umbrella of "cross-cutting" and needing a "meta-programming" facility.

                  > However, we think there are much better solutions

                  I'd like to hear more. Can I discuss this further with you in a more appropriate venue than this forever thread?

                  • > The class of code I'm referring to is more like those you already mention in your JEP: logging, tracing, profiling, serialization, authn/authz, mocking, ffi, and so on. I would describe all of those as fitting under the umbrella of "cross-cutting" and needing a "meta-programming" facility.

                    Those are traditionally offered in Java in the form of bytecode transformation rather than AST transformations, as the notion of "compile time" in Java is not as clear as it is in, say, Zig; Project Leyden will make it even more vague, as it will allow caching JIT output from one run to the next.

                    > Can I discuss this further with you in a more appropriate venue than this forever thread?

                    Sure, you can email me at the email address I use on the JDK mailing lists (e.g. loom-dev).

                    • > Those are traditionally offered in Java in the form of bytecode transformation

                      And we've come full circle. I think they're traditionally written as bytecode transformations, because the entire pipeline for both writing and using many kinds of program transformations in bytecode is far simpler, more accessible, and more performant than implementing and executing a source-to-source compiler that feeds into another java compiler.

                      That said, there are also times you wish to perform transforms on programs for which you don't have access to source, in which case your hand is forced. Ideally, you would be able to write many classes of transforms agnostic to that context.

                      > Sure

                      Thanks!

    • Rust, D, Nim, Crystal, Julia
      • Definitely, you can do most of those things in Nim without macros using templates and compile time stuff. It’s preferable to macros when possible. Julia has fantastic compile time abilities as well.

        It’s beautiful to implement an incredibly fast serde in like 10 lines without requiring other devs to annotate their packages.

        I wouldn’t include Rust on that list if we’re speaking of compile time and compile time type abilities.

        Last time I tried it, Rust’s const expression system was pretty limited. Rust’s macro system likewise is also very weak.

        Primarily you can only get type info by directly passing the type definition to a macro, which is how derive and all work.

        • Rust has two macro systems, the proc macros are allowed to do absolutely whatever they please because they're actually executing in the compiler.

          Now, should they do anything they please? Definitely not, but they can. That's why there's a (serious) macro which runs your Python code, and a (joke, in the sense that you should never use it, not that it wouldn't work) macro which replaces your running compiler with a different one so that code which is otherwise invalid will compile anyway...

        • > Rust’s macro system likewise is also very weak.

          How so? Rust procedural macros operate on token stream level while being able to tap into the parser, so I struggle to think of what they can't do, aside from limitations on the syntax of the macro.

          • Rust macros don't really understand the types involved.

            If you have a derive macro for

                #[derive(MyTrait)]
                struct Foo {
                    bar: Bar,
                    baz: Baz,
                }
            
            then your macro can see that it references Bar and Baz, but it can't know anything about how those types are defined. Usually, the way to get around it is to define some trait on both Bar and Baz, which your Foo struct depends on, but that still only gives you access to that information at runtime, not when evaluating your macro.

            Another case would be something like

                #[my_macro]
                fn do_stuff() -> Bar {
                    let x = foo();
                    x.bar()
                }
            
            Your macro would be able to see that you call the functions foo() and Something::bar(), but it wouldn't have the context to know the type of x.

            And even if you did have the context to be able to see the scope, you probably still aren't going to reimplement rustc's type inference rules just for your one macro.

            Scala (for example) is different: any AST node is tagged with its corresponding type that you can just ask for, along with any context to expand on that (what fields does it have? does it implement this supertype? are there any relevant implicit conversions in scope?). There are both up- and downsides to that (personally, I do quite like the locality that Rust macros enforce, for example), but Rust macros are unquestionably weaker.

            • Thanks, that’s exactly what I was referencing. In lisp the type doesn’t matter as much, just the structure, as maps or other dynamic pieces will be used. However in typed languages it matters a lot.
          • Rust macros are a mutant foreign language.

            A much much better system would be one that lets you write vanilla Rust code to manipulate either the token stream or the parsed AST.

            • ...? Proc macros _are_ vanilla Rust code written to manipulate a token stream.
              • You’re right. I should have said I want vanilla Rust code for vanilla macros and I want to manipulate the AST not token streams.

                Token manipulation code is frequently full of syn! macro hell. So even token manipulation is only kind of normal Rust code.

          • It doesn't have access to the type system, for example. It just sees its input as what you typed in the code. It wouldn't be able to see through aliases.
      • Perl BEGIN blocks
        • PPR + keyword::declare (shame that Damian didn't actually call it keyword::keyword).
    • well, the lisp family of languages surely can do all of that, and more. Check out, for example, clojure's version of zig's dropped 'async'. It's a macro.
  • This is a very educational blog post. I knew ‘comptime for’ and ‘inline for’ were comptime related, but didn’t know the difference. The post explains the inline version only knows the length at comptime. I guess it’s for loop unrolling.
    • The normal use case for `inline for` is when you have to close over something only known at compile time (like when iterating over the fields of a struct), but when your behavior depends on runtime information (like conditionally assigning data to those fields).

      Unrolling as a performance optimization is usually slightly different, typically working in batches rather than unrolling the entire thing, even when the length is known at compile time.

      The docs suggest not using `inline` for performance without evidence it helps in your specific usage, largely because the bloated binary is likely to be slower unless you have a good reason to believe your case is special, and also because `inline` _removes_ optimization potential from the compiler rather than adding it (its inlining passes are very, very good, and despite having an extremely good grasp on which things should be inlined I rarely outperform the compiler -- I'm never worse, but the ability to not have to even think about it unless/until I get to the microoptimization phase of a project is liberating).
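
      A small sketch of that first use case (the struct and function names are made up): the field names are comptime-only, while the assigned value is ordinary runtime data.

          const std = @import("std");

          // `inline for` unrolls over the comptime-known field list, so
          // `field.name` can feed @field, while `value` is runtime data.
          fn fillAll(out: anytype, value: anytype) void {
              const T = @TypeOf(out.*);
              inline for (std.meta.fields(T)) |field| {
                  @field(out, field.name) = value;
              }
          }

          test "fillAll assigns runtime data to every field" {
              const Point = struct { x: i32, y: i32 };
              var p: Point = undefined;
              fillAll(&p, 7);
              try std.testing.expect(p.x == 7 and p.y == 7);
          }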

  • I like the Zig language and tooling. I do wish there was a safety mode that give the same guarantees as Rust, but it’s a huge step above C/C++. I am also extremely impressed with the Zig compiler.

    Perhaps the safety is the tradeoff with the comparative ease of using the language compared to Rust, but I’d love the best of both worlds if it were possible

    • ksec
      >but I’d love the best of both worlds if it were possible

      I am just going to quote what pcwalton said the other day that perhaps answer your question.

      >> I’d be much more excited about that promise [memory safety in Rust] if the compiler provided that safety, rather than asking the programmer to do an extraordinary amount of extra work to conform to syntactically enforced safety rules. Put the complexity in the compiler, dudes.

      > That exists; it's called garbage collection.

      >If you don't want the performance characteristics of garbage collection, something has to give. Either you sacrifice memory safety or you accept a more restrictive paradigm than GC'd languages give you. For some reason, programming language enthusiasts think that if you think really hard, every issue has some solution out there without any drawbacks at all just waiting to be found. But in fact, creating a system that has zero runtime overhead and unlimited aliasing with a mutable heap is as impossible as finding two even numbers whose sum is odd.

      [1] https://news.ycombinator.com/item?id=43726315

      • Maybe this is a bad place to ask, but: Those experienced in manual-memory langs: What in particular do you find cumbersome about the borrow system? I've hit some annoyances like when splitting up struct fields into params where more than one is mutable, but that's the only friction point that comes to mind.

        I ask because I am obviously blind to other cases - that's what I'm curious about! I generally find the &s to be a net help even without mem safety ... They make it easier to reason about structure, and when things mutate.

        • I imagine a large part is just how one is used to doing stuff. Not being forced to be explicit about mutability and lifetimes allows a bunch of neat stuff that does not translate well to Rust, even if the desired thing in question might not be hard to do in another way. (but that other way might involve more copies / indirections, which users of manually-memory langs would (perhaps rightfully, perhaps pointlessly) desire to avoid if possible, but Rust users might just be comfortable with)

          This separation is also why it is basically impossible to make apples-to-apples comparisons between languages.

          Messy things I've hit (from ~5KLoC of Rust; I'm a Rust beginner, I primarily do C) are: cyclical references; a large structure that needs efficient single-threaded mutation while referenced from multiple places (i.e. must use some form of cell) at first, but needs to be sharable multithreaded after all the mutating is done; self-referential structures are roughly impossible to move around (namely, an object holding &-s to objects allocated by a bump allocator, movable around as a pair, but that's not a thing (without libraries that I couldn't figure out at least)); and refactoring mutability/lifetimes is also rather messy.

        • Lifetime annotations can be burdensome when trying to avoid extraneous copies and they feel contagious (when you add a lifetime annotation to a frequently used type, it bubbles out to anything that uses that type unless you're willing to use unsafe to extend lifetimes). The solutions to this problem (tracking indices instead of references) lose a lot of benefits that the borrow checker provides.

          The aliasing rules in Rust are also pretty strict. There are plenty of single-threaded programs where I want to be able to occasionally read a piece of information through an immutable reference, but that information can be modified by a different piece of code. This usually indicates a design issue in your program but sometimes you just want to throw together some code to solve an immediate problem. The extra friction from the borrow checker makes it less attractive to use Rust for these kinds of programs.

          • >There are plenty of single-threaded programs where I want to be able to occasionally read a piece of information through an immutable reference, but that information can be modified by a different piece of code.

            You could do that using Cell or RefCell. I agree that it makes it more cumbersome.

        • rc00
          > What in particular do you find cumbersome about the borrow system?

          The refusal to accept code that the developer knows is correct, simply because it does not fit how the borrow checker wants to see it implemented. That kind of heavy-handed and opinionated supervision is overhead to productivity. (In recent times, others have taken to saying that Rust is less "fun.")

          When the purpose of writing code is to solve a problem and not engage in some pedantic or academic exercise, there are much better tools for the job. There are also times when memory safety is not a paramount concern. That makes the overhead of Rust not only unnecessary but also unwelcome.

          • Isn't the persistent failure of developers to "know" that their code is correct the entire point? Unless you have mechanical proof, in the aggregate and working on any project of non-trivial size "knowing" is really just "assuming." This isn't academic or pedantic, it's a basic epistemological claim with regard to what writing software actually looks like in practice. You, in fact, do not know, and your insistence that you do is precisely the reason that you are at greater risk of creating memory safety vulnerabilities.
          • Ygg2
            > The refusal to accept code that the developer knows is correct,

            How do you know it is correct? Did you prove it with preconditions, invariants, and postconditions? Or did you assume so based on prior experience?

            • One example is a function call that doesn't compile, but will if you inline the function body. Compilation is prevented only by the insufficient expressiveness of the function signature.
            • Writing correct code did not start after the introduction of the rust programming language
              • Nope, but claims of knowing how to write correct code (especially C code) without a borrow checker sure did spike with its introduction. Hence my question.

                How do you know you haven't been writing unsafe code for years, when C's list of undefined behaviors alone has around 200 entries[1]?

                [1]https://www.dii.uchile.cl/~daespino/files/Iso_C_1999_definit... (Annex J.2 page 490)

                • It's not difficult to write a provably correct implementation of doubly linked list in C, but it is very painful to do in Rust because the borrow checker really hates this kind of mutually referential objects.
                  • Hard part of writing actually provable code isn't the code. It's the proof. What are invariants of double linked list that guarantee safety?

                    Writing provable anything is hard because it forces you to think carefully about that. You can no longer reason by going into flow mode, letting fast and incorrect part of the brain take over.

            • Rust prevents classes of bugs by preventing specific patterns.

              This means it rejects, by definition alone, bug-free code because that bug free code uses a pattern that is not acceptable.

              IOW, while Rust rejects code with bugs, it also rejects code without bugs.

              It's part of the deal when choosing Rust, and people who choose Rust know this upfront and are okay with it.

              • > This means it rejects, by definition alone, bug-free code because that bug free code uses a pattern that is not acceptable.

                That is not true by definition alone. It is only true if you add the corollary that the patterns which rustc prevents are sometimes bug-free code.

                • > That is not true by definition alone. It is only true if you add the corollary that the patterns which rustc prevents are sometimes bug-free code.

                  That corollary is only required in the cases that a pattern is unable to produce bug-free code.

                  In practice, there isn't a pattern that reliably, 100% of the time and deterministically produces a bug.

          • Thank you for the answer! Do you have an example? I'm having a fish-doesn't-know-water problem.
            • Basically anything that involves objects mutually referencing each other.
              • Oh, that does sound tough in rust! I'm not even sure how to approach it; good to know it's a useful pattern in other langs.
                • Well, one can always write unsafe Rust.

                  Although the more usual pattern here is to ditch pointers and instead have a giant array of objects referring to each other via indices into said array. But this is effectively working around the borrow checker - those indices are semantically unchecked references, and although out-of-bounds checks will prevent memory corruption, it is possible to store index to some object only for that object to be replaced with something else entirely later.

                  • > it is possible to store index to some object only for that object to be replaced with something else entirely later.

                    That's what generational arenas are for, at the cost of having to check for index validity on every access. But that cost is only in comparison to "keep a pointer in a field" with no additional logic, which is bug-prone.

                  • > unsafe Rust

                    Which is worse than C.
        • Lifetimes add an impending sense of doom to writing any sort of deeply nested code. You get this deep without writing a lifetime... uh oh, this struct needs a reference, and now you need to add a generic parameter to everything everywhere you've ever written and it feels miserable. Doubly so when you've accidentally omitted a lifetime generic somewhere and it compiles now but then you do some refactoring and it won't work anymore and you need to go back and re-add the generic parameter everywhere.
          • There is a stark contrast in usability of self-contained/owning types vs types that are temporary views bound by a lifetime of the place they are borrowing from. But this is an inherent problem for all non-GC languages that allow saving pointers to data on the stack (Rust doesn't need lifetimes for by-reference heap types). In languages without lifetimes you just don't get any compiler help in finding places that may be affected by dangling pointers.

            This is similar to creating a broadly-used data structure and realizing that some field has to be optional. Option<T> will require you to change everything touching it, and virally spread through all the code that wanted to use that field unconditionally. However, that's not the fault of the Option syntax, it's the fault of semantics of optionality. In languages that don't make this "miserable" at compile time, this problem manifests with a whack-a-mole of NullPointerExceptions at run time.

            With experience, I don't get this "oh no, now there's a lifetime popping up everywhere" surprise in Rust any more. Whether something is going to be a temporary view or permanent storage can be known ahead of time, and if it can be both, it can be designed with Cow-like types.

            I also got a sense for when using a temporary loan is a premature optimization. All data has to be stored somewhere (you can't have a reference to data that hasn't been stored). Designs that try to be ultra-efficient by allowing only temporary references often force data to be stored in a temporary location first, and then borrowed, which doesn't avoid any allocations, only adds dependencies on external storage. Instead, the design can support moving or collecting data into owned (non-temporary) storage directly. It can then keep it for an arbitrary lifetime without lifetime annotations, and hand out temporary references to it whenever needed. The run-time cost can be the same, but the semantics are much easier to work with.

            • I guess the dodge on this one is not using refs in structs. This opens you up to index errors though, because it presumably means indexing arrays etc. Is this the tradeoff? (I write loads of Rust in a variety of domains, and rarely need a manual lifetime.)
            • And those index values are just pointers by another name!
                • It's not "just pointers", because they can have additional semantics and assurances beyond "give me the bits at this address". The index value can be tied to a specific container (using new types for indexing so that you can't make the mistake of getting value 1 from container A when it represents an index from container B), can prevent use after free (by embedding data about the value's "generation" in the key), and makes the index resistant to relocation of the values (because of the additional level of indirection of the index to the value's location).
                • Yes, but like raw pointers, they lack lifetime guarantees and invite use after free vulnerabilities
      • Yes, but I’m not hoping for that. I’m hoping for something like a scripting language with simpler lifetime annotations. Is Rust going to be the last popular language to be invented that explores that space? I hope not.
        • I was quite impressed with Austral[0], which used Linear Types and avoids the whole Rust-like implementation in favour of a more easily understandable system, albeit slightly more verbose.

          [0]https://borretti.me/article/introducing-austral

          • Austral's concepts are interesting, but the introduction doesn't show how to correctly handle errors in this language.
            • Austral's specification is one of the most beautiful and well-written pieces of documentation I have ever found. Its section on error handling in Austral[0] covers everything from rationale and alternatives to concrete examples of how exceptions should be handled in conjunction with linear types.

              [0] https://austral-lang.org/spec/spec.html#rationale-errors

        • > Is Rust going to be the last popular language to be invented that explores that space? I hope not.

          Seeing how most people hate the lifetime annotations, yes. For the foreseeable future.

          People want unlimited freedom. Unlimited freedom rhymes with unlimited footguns.

          • There is Mojo and Vale (which was created by a now Mojo core contributor)
        • You may be interested in https://dada-lang.org/, which is not ready for public consumption, but is a language by one of Rust's designers that aims to be higher-level while still keeping much of the goodness from Rust.
          • The first and last blog post was in 2021. Looks like it’s still active on Github, though?
      • With Java ZGC the performance aspect has been fixed (<1ms pause times and real world throughput improvement). Memory usage though will always be strictly worse with no obvious way to improve it without sacrificing the performance gained.
        • IMO the best chance Java has to close the gap on memory utilisation is Project Valhalla[1] which brings value types to the JVM, but the specifics will matter. If it requires backwards incompatible opt-in ceremony, the adoption in the Java ecosystem is going to be an uphill battle, so the wins will remain theoretical and be unrealised. If it is transparent, then it might reduce the memory pressure of Java applications overnight. Last I heard was that the project was ongoing, but production readiness remained far in the future. I hope they pull it off.

          1: https://openjdk.org/projects/valhalla/

          • Agree, been waiting for it for almost a decade.
      • I have zero issue with needing runtime GC or equivalent like ARC.

        My issue is with ergonomics and performance. In my experience with a range of languages, the most performant way of writing the code is not the way you would idiomatically write it. They make good performance more complicated than it should be.

        This holds true to me for my work with Java, Python, C# and JavaScript.

        What I suppose I’m looking for is a better compromise between having some form of managed runtime vs non managed

        And yes, I’ve also tried Go, and its DX is its own type of pain for me. I should try it again now that it has generics.

        • Using spans, structs, object and array pools is considered fairly idiomatic C# if you care about performance (and many methods now default to just spans even outside that).

          What kind of idiomatic or unidiomatic C# do you have in mind?

          I’d say if you are okay with GC side effects, achieving good performance targets is way easier than if you care about P99/999.

    • I like Zig as a replacement for C, but not C++ due to its lack of RAII. Rust on the other hand is a great replacement for C++. I see Zig as filling a small niche where allocation failures are paramount - very constrained embedded devices, etc... Otherwise, I think you just get a lot more with Rust.
      • Compile times and a painful-to-refactor codebase are Rust's main drawbacks for me, though.

        It's totally subjective, but I find the language boring to use. For side projects I like having fun, thus I picked Zig.

        To each his own of course.

        • > a painful-to-refactor codebase [is one of] Rust's main drawbacks

          Hard disagree about refactoring. Rust is one of the few languages where you can actually refactor rather safely without tons of tests that exist only to catch regressions when code changes.

          • Lifetimes and generics tend to leak, though, so you have to modify code all over the place when you touch them.
      • Even better than RAII would be linear types, but that would require a borrow checker to track the lifetimes of objects. Then you would get a compiler error if you forgot to call a `.destroy()` method.
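
        For contrast, a rough sketch of how this looks in Zig today (the `Buffer` type below is made up for illustration): cleanup is paired with `defer` by convention, and forgetting the `defer` is not a compile error; that is exactly the gap linear types would close.

            const std = @import("std");

            const Buffer = struct {
                data: []u8,

                fn create(allocator: std.mem.Allocator, n: usize) !Buffer {
                    return .{ .data = try allocator.alloc(u8, n) };
                }

                fn destroy(self: Buffer, allocator: std.mem.Allocator) void {
                    allocator.free(self.data);
                }
            };

            pub fn main() !void {
                var gpa = std.heap.GeneralPurposeAllocator(.{}){};
                defer _ = gpa.deinit();
                const allocator = gpa.allocator();

                const buf = try Buffer.create(allocator, 64);
                // Convention, not enforcement: remove this line and the program
                // still compiles, it just leaks.
                defer buf.destroy(allocator);

                std.debug.print("buffer of {d} bytes\n", .{buf.data.len});
            }
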
    • In principle it should be doable, though possibly not in the language/compiler itself; there was this POC a few months ago:

      https://github.com/ityonemo/clr

    • I wish for “strict” mode as well. My current thinking:

      TypeScript is to JavaScript

      as

      Zig is to C

      I am a huge TS fan.

      • rc00
        Is Zig aiming to extend C or extinguish it? The embrace story is well-established at this point but the remainder is often unclear in the messaging from the community.
        • It's improved C.

          C interop is very important, and very valuable. However, by removing undefined behaviours, replacing macros that do weird things with well-thought-through comptime, and making sure that the Zig compiler is also a C compiler, you get a nice balance across lots of factors.

          It's a great language, I encourage people to dig into it.
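
          To give a flavour of the macro point, a rough sketch (Zig also ships a `@max` builtin; this only shows the mechanism): where C reaches for a textual `#define MAX(a, b) ((a) > (b) ? (a) : (b))` with its double-evaluation pitfalls, Zig writes an ordinary generic function.

              const std = @import("std");

              /// An ordinary function: arguments are evaluated once and type-checked,
              /// unlike the textual substitution a C macro performs.
              fn max(a: anytype, b: @TypeOf(a)) @TypeOf(a) {
                  return if (a > b) a else b;
              }

              test "max works across types without a macro" {
                  try std.testing.expect(max(@as(u32, 3), 5) == 5);
                  try std.testing.expect(max(2.5, 1.0) == 2.5);
              }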

        • Zig is open source, so the analogy to Microsoft's EEE [0] seems misplaced.

          [0] https://en.m.wikipedia.org/wiki/Embrace,_extend,_and_extingu...

          • rc00
            Open source or not isn't the point. The point is the mission and the ecosystem. Some of the Zig proponents laud the C compatibility. Others are seeking out the "pure Zig" ecosystem. Curious onlookers want to know if the Zig ecosystem and community will be as hostile to the decades of C libraries as the Rust zealots have been.

            To be fair, I don't believe there is a centralized and stated mission with Zig but it does feel like the story has moved beyond the "Incrementally improve your C/C++/Zig codebase" moniker.

            • > Curious onlookers want to know if the Zig ecosystem and community will be as hostile to the decades of C libraries as the Rust zealots have been.

              Definitely not the case in Zig. From my experience, the relationship with C libraries amounts to "if it works, use it".
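
              As a minimal sketch of what that looks like in practice (build with `zig build-exe hello.zig -lc` so libc gets linked):

                  const c = @cImport({
                      // The header is translated at compile time; no bindings to write.
                      @cInclude("stdio.h");
                  });

                  pub fn main() void {
                      // printf here is C's printf, called directly from Zig.
                      _ = c.printf("hello from C, via Zig\n");
                  }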

              • Are you referring to static linking? Dynamic linking? Importing/inclusion? How does this translate (no pun intended) when the LLVM backend work is completed? Does this extend to reproducible builds? Hermetic builds?

                And the relationship with C libraries certainly feels like a placeholder, akin to before the compiler was self-hosted. While I have seen some novel projects in Zig, there are certainly more than a few "pure Zig" rewrites of C libraries. Ultimately, this is free will. I just wonder if the Zig community is teeing up for a repeat of Rust's actix-web drama, except driven by the use of C libraries instead of all-Zig counterparts (assuming some level of maturity with the latter) rather than by the use of unsafe. While Zig's community appears healthier and more pragmatic, hype and ego have a way of ruining everything.

                • > static linking?

                  Yes

                  > Dynamic linking?

                  Yes

                  > Importing/inclusion?

                  Yes

                  > How does this translate (no pun intended) when the LLVM backend work is completed?

                  I'm not sure what you mean. It sounds like you think they're working on being able to use LLVM as a backend, but that has long been supported; what they're working on now is removing the hard dependency on LLVM.

                  > Does this extend to reproducible builds?

                  My hunch would be yes, but I'm not certain.

                  > Hermetic builds?

                  I have never heard of this, but I would guess the same as reproducible.

                  > While I have seen some novel projects in Zig, there are certainly more than a few "pure Zig" rewrites of C libraries.

                  It's a nice exercise, especially considering how close C and Zig are semantically. It's helpful for learning to see how C things are done in Zig, and rewriting things lets you isolate that experience without also having to create something novel.

                  For plenty of non-rewrites, check out https://github.com/allyourcodebase, a group that repackages existing C libraries with the Zig package manager / build system.
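
                  Roughly what consuming such a repackaged library looks like, as a sketch only: the build API shifts between Zig releases, and the `zlib` dependency and `z` artifact names below are placeholders for whatever a given package actually exposes.

                      const std = @import("std");

                      pub fn build(b: *std.Build) void {
                          const target = b.standardTargetOptions(.{});
                          const optimize = b.standardOptimizeOption(.{});

                          const exe = b.addExecutable(.{
                              .name = "app",
                              .root_source_file = b.path("src/main.zig"),
                              .target = target,
                              .optimize = optimize,
                          });

                          // Placeholder names: the dependency is declared in build.zig.zon,
                          // and the artifact name is whatever the package's build.zig exports.
                          const zlib_dep = b.dependency("zlib", .{ .target = target, .optimize = optimize });
                          exe.linkLibrary(zlib_dep.artifact("z"));

                          b.installArtifact(exe);
                      }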

            • Zig's C compat is being lowered from comptime-equivalent status to 'zig build'-time-equivalent status. Once you need to put 'extern "C"'-style annotations on any import/export to C, it will have come full circle to C++'s level of C compat, and be no better off.

              andrewrk's wording towards C and its main ecosystem (POSIX) is very hostile, if that is something you'd like to go by.

        • The goal rather explicitly seems to be to extinguish it - the idea being that if you've got Zig, there should be no reason to write new code in C, because anything possible in C should be possible (and ideally done better) in Zig.

          Whether that ends up happening is obviously yet to be seen; as it stands there are plenty of Zig codebases with C in the mix. The idea, though, is that there shouldn't be anything stopping a programmer from replacing that C with Zig, and the two languages only coexist for the purpose of allowing that replacement to be gradual.

    • Most of Zig's safety was already available in 1978's Modula-2, but apparently languages have to come in curly brackets for adoption.
      • > languages have to come in curly brackets for adoption

        Python and Ruby are two very popular counterexamples.

        • Not really, Ruby has plenty of curly brackets, e.g. `5.times { puts "hello!" }`.

          In both cases it wasn't curly brackets that drove their adoption, it was unavoidable frameworks.

          Most people only use Ruby when they have Rails projects, and what made Python originally interesting was Zope CMS.

          And nowadays it's the AI/ML frameworks, which are actually written in C, C++ and Fortran, that keep Python relevant, because scientists happened to pick Python for their library bindings; it could just as well have been Tcl, as choices go.

          So yeah, maybe not always curly brackets, but definitely something that makes the language unavoidable. Sadly, Modula-2 lacked that: an OS vendor pushing it no matter what, FAANG-style.

          • Which AI/ML frameworks are written in Fortran?
            • Probably none; it was more a figure of speech, given the tradition of "Python" libraries that are actually bindings to C, C++, or Fortran libraries.
  • This is honestly really cool! I've heard praise for Zig's comptime without really understanding what makes it tick. It initially sounds like Rust's constant evaluation, which is not particularly capable. The ability to have types represented as values at compile time, and _only_ at compile time, is clearly very powerful. It approximates dynamic languages or run-time reflection without any of the run-time overhead and without opening the Pandora's box that is full-blown macros, as in Lisp or Rust's procedural macros.
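
    A small sketch of the "types as compile-time values" idea (my own illustration; `Pair` is just a made-up example): a plain function can take a type and return a new type, which is how Zig spells generics.

        const std = @import("std");

        // A function from type to type, evaluated entirely at compile time.
        fn Pair(comptime T: type) type {
            return struct {
                first: T,
                second: T,

                fn swapped(self: @This()) @This() {
                    return .{ .first = self.second, .second = self.first };
                }
            };
        }

        test "types are ordinary comptime values" {
            const p = Pair(u8){ .first = 1, .second = 2 };
            try std.testing.expectEqual(@as(u8, 2), p.swapped().first);
            // Calling Pair(u8) twice yields the same type; types can be compared
            // like any other value at compile time.
            comptime std.debug.assert(Pair(u8) == Pair(u8));
        }
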
  • Cool!