• I'm using oxc_traverse and friends to implement on-the-fly JS instrumentation for https://github.com/antithesishq/bombadil and it has been awesome. That in combination with boa_engine lets me build a statically linked executable rather than a hodgepodge of Node tools to shell out to. Respect to the tools that came before but this is way nicer for distribution. Good times for web tech IMO.
  • All the Void Zero projects are super cool although I still wonder how they’re going to monetize all this.
    • rk06
      they are going to use vite plus for monetization
      • The vite plus idea is that you'll pay for visual tools. What's odd to me is it makes their paid product kind of a bet against their open product. If their open platform were as powerful as it should be, it would be easy to use it to recreate the kinds of experiences they propose to sell.

        The paradox gains another layer when you consider that their whole mission is to build tools for the JavaScript ecosystem, yet by moving to Rust they are betting that JS-the-language is so broken that it cannot even host its own tools. And because JS is still a stronger language for building UIs in than Rust, their business strategy now makes them hard-committed to their bet that JS tools in JS are a dead end.

        • > it cannot even host its own tools

          You say this like this is the basic requirement for a language. But languages make tradeoffs that make them more appropriate for some domains and not others. There's no shade if a language isn't ideal for developer tools, just like there's no shade if a language isn't perfect for web frontends, web backends, embedded development, safety critical code (think pacemakers), mobile development, neural networks and on and on.

          Seriously, go to https://astral.sh and scroll down to "Linting the CPython code base from scratch". It would be easy to look at that and conclude that Python's best days are behind it because it's so slow. In reality Python is an even better language at its core domains now that its developer tools have been rewritten in Rust. It's the same excellent language, but now developers can iterate faster.

          It's the same with JavaScript. Just because it's not the best language for linters and formatters doesn't mean it's broken.

        • > The vite plus idea is that you'll pay for visual tools.

          From what I understand, Vite+ seems like an all-in-one toolchain. Instead of maintaining multiple configurations with various degrees of intercompatibility, you maintain only one.

          This has the added benefit that linters and such can share information about your dependency graph, and even ASTs, so your tools don't have to compute them individually. That has real potential to improve your overall pre-merge pipeline. Then, on top of that, caching.

          The focus here is of course enterprise customers and looks like it is supposed to compete with the likes of Nx/Moonrepo/Turborepo/Rush. Nx and Rush are big beasts and can be somewhat unwieldy and quirky. Nx lost some trust with its community by retracting some open-source features and took a very long time to (partially) address the backlash.

          Vite+ has a good chance to be a contender on the market with clearer positioning if it manages to nail monorepo support.

        • rk06
          > they are betting that JS-the-language is so broken that it cannot even host its own tools.

          Evan Wallace proved it by building esbuild. This is no longer a bet.

          > If their open platform were as powerful as it should be, it would be easy to use it to recreate the kinds of experiences they propose to sell.

          You would be surprised to know that tech companies may find it cheaper to pay money than to spend developer bandwidth on stuff beyond their core competency.

          Dropbox was also considered trivially implementable, but end users rarely try to re-invent it.

          • > esbuild

            Another example is the TypeScript compiler being rewritten in Go instead of self-hosting. It's an admission that the language is not performant enough, and more, it can never be enough for building its own tooling. It might be that the tooling situation is the problem, not the language itself, though. I do see hopeful signs that JavaScript ecosystem is continuing to evolve, like the recent release of MicroQuickJS by Bellard, or Bun which is fast(er) and really fun to use.

            • I don't think that's necessarily a bad thing, though. JavaScript isn't performant enough for its own tooling, but that's just one class of program that can be written. There are plenty of other classes of program where JavaScript is perfectly fast enough, and the ease of e.g. writing plugins or having a fast feedback loop outweighs the benefits of other languages.

              I quite like Roc's philosophy here: https://www.roc-lang.org/faq#self-hosted-compiler. The developers of the language want to build a language that has a high performance compiler, but they don't want to build a language that one would use to build a high performance compiler (because that imposes a whole bunch of constraints when it comes to things like handling memory). In my head, JavaScript is very similar. If you need a high performance compiler, maybe look elsewhere? If you need the sort of fast development loop you can get by having a high performance compiler, then JS is just the right thing.

              • True, I agree. It's a good thing to accept a language's limitations and areas of suitability, without any judgement about whether the language is good for all purposes - which is likely not a good goal for a language to have anyway. I like that example of Roc, how it's explicitly planned to be not self-hosting. It makes sense to use different languages to suit the context, as all tools have particular strengths and weaknesses.

                Off topic but I wonder if this applies to human languages, whether some are more suited for particular purposes - like German to express rigorous scientific thinking with compound words created just-in-time; Spanish for romantic lyrical situations; or Chinese for dense ideographs. People say languages can expand or limit not only what you can express but what you can think. That's certainly true of programming languages.

            • Which also proves the point that not everything needs to be Rust.
              • I agree and foresee a future, maybe a decade from now, when the trend shifts to everyone rewriting all the Rust written or generated in the meantime to something else, a newer hopefully simpler language that accomplishes the same thing.
                • Just wait when Zig reaches 1.0
        • I don't see the idea as being visual tools; I've never even heard somebody talk about it like that. The plan is to target enterprise customers with advanced features. I feel like you should just go and watch some interviews where they talk about their plan; Evan You was recently on a few podcasts mentioning it.

          Also, the paradox is not really even there. The JS ecosystem largely gave up on JS tools a long time ago. Pretty much all major build tools are migrating to native or have already migrated, at least partially. This has been going on for the last 4 years or so.

          But the key to all of this is that most of these tools still support JS plugins. Rolldown/Vite is compatible with Rollup JS plugins and OXLint has an ESLint-compatible API (it's in preview atm). So it's not really even a bet at all.

      • Yes but is that going to be enough?

        Doesn’t look super interesting to me tbh.

      • There will be more
    • Why should it be monetized?
    • attention is money
      • in the beginning yes, but VCs want to cash out eventually. Look at MongoDB, Redis and whatnot that did everything to get money at a certain point. For VCs, open source is a vehicle to get relevant in a space you would never be relevant in otherwise.
  • I thought oxfmt would just be a faster drop-in replacement for "biome format"... It wasn't.

    Let this be a warning: running oxfmt without any arguments recursively scans directory tree from the current directory for all *.js and *.ts files and silently reformats them.

    Thanks to that, I got a few of my Allman-formatted JavaScript files I care about messed up with no option to format them back from K&R style.

    • > running oxfmt without any arguments recursively scans directory tree from the current directory for all .js and .ts files and silently reformats them

      I've got to say this is what I would have expected and wanted to happen. I'd say it is wise to not run tools designed to edit files on files you don't have a backup for (like Git) without doing a dry-run or a small scope experiment first.

      • While I can get behind things such as "use version control," "use backups", etc. this is definitely not what I'd expect from a program run without arguments, especially when it will go and change stuff.
        • What? The very first page of documentation tells you this. The help screen clearly shows a `--check` argument. This is a formatter and uses the same arguments as many others - in particular Prettier, the most popular formatter in the ecosystem.

          How were you not expecting this? Did you not bother to read anything before installing and running this command on a sensitive codebase?

          • I do usually run new tools from somewhere harmless, like ~/tmp, just in case they do something unexpected.

            But most formatters I'm used to absolutely don't do this. For example, `rustfmt` will read input from stdin if no argument is given. It can traverse modules in a project, but it won't start modifying everything under your CWD.

            Most unix tools will either wait for some stdin or dump some kind of help when no argument is given. Hell, according to this tool's docs, even `prettier` seems to expect an argument:

                > Running oxfmt without arguments formats the current directory (*equivalent to prettier --write .*)
            
            I'm not familiar with prettier, so I may be wrong, but from the above, I understand that prettier doesn't start rewriting files if no argument is given?

            Looking up prettier's docs, they have this to say:

                > --write
                This rewrites all processed files in place. *This is comparable to the eslint --fix* workflow.
            
            So eslint also doesn't automatically overwrite everything?

            So yeah, I can't say this is expected behaviour, even if it's documented.

            • a more related tool would be prettier, which also has a --write option
    • > with no option to format them back

      Try git reset --hard, that should work.

    • These files are under version control, right? Or backed up. Right?
    • This is user error. oxfmt did what you asked it to do.
      • rk06
        I don't think so. If someone runs a tool without args, the tool should do the equivalent of "tool --help".

        It is bad ux.

        • I expect a file formatter to format the files when I call it. Anything else would be surprising to me.
          • rk06
            a new user should not be expected to know whether to use "--info", "--help", "-info" or "/info"

            A power user can just pass the right params. Besides, it is not that hard to support a "--yolo" parameter for that use case

            • Would you enjoy writing `rm --yolo file` instead of `rm file` every time?
              • No, but we're not talking about `oxfmt file` here, but `oxfmt` with no argument.

                I don't expect `rm` with no argument to trash everything in my CWD. Which it doesn't, see sibling's comment.

              • In this case, "file" is the arg, not --yolo. `rm` without any args returns `rm: missing operand. Try 'rm --help' for more information.`

                `oxfmt` should have done the same and `oxfmt .`, with the desired dir ".", should have been the required usage.

                • I expect invoking a command-line tool without any arguments to perform the most common action. Displaying the help should only be a fallback if there is no most common action. For example, `git init` modifies the current directory instead of asking you, because that’s what you want to do most of the time.
              • Not taking a position but the design of rm strengthens the position that recursive by default without flags isn’t ok. rm makes you confirm when you want changes to recurse dirs.
            • I know feels aren't the objective truth but I feel like most people would default to running "new-cli-tool --help" first thing as a learned (defensive) habit. After all, quite a bit of stuff that runs in a terminal emulator does something when run without arguments or flags.
    • I assume you mean what’s more properly called Java style [1], where the first curly brace is on the same line as the function declaration (or class declaration, but if you’re using Allman style you’re probably not using classes; no shade, I’m a JS class hater myself) [2] or control statement [3], the elses (etc) are cuddled, and single statement blocks are enclosed in curly braces. Except I also assume that oxfmt’s default indentation is 2 spaces, following Prettier [4], whereas Java style specified 4.

      So maybe we should call it JavaScript style? Modern JS style? Do we have a good name for it?

      Also, does anyone know when and why “K&R style” [5] started being used to refer to Java style? Meaning K&R statement block style (“Egyptian braces” [6]) being used for all braces and single statement blocks getting treated the same as multi-statement blocks. Setting aside the eternal indentation question.

      1: https://en.wikipedia.org/wiki/Indentation_style#Java

      2: https://www.oracle.com/java/technologies/javase/codeconventi...

      3: https://www.oracle.com/java/technologies/javase/codeconventi...

      4: https://prettier.io/docs/options#tab-width

      5: https://ia903407.us.archive.org/35/items/the-ansi-c-programm...

      6: https://en.wikipedia.org/wiki/Indentation_style#Egyptian_bra...

    • You couldn't waterboard this outta me
    • Well, one way to boost this command to fame is to open an issue on the repo and crash out.[0]

      [0] https://github.com/microsoft/vscode/issues/32405

    • Do you not use a VCS?
    • Git undo?
      • If only. But `jj undo`?
  • It always comes as a surprise to me how the same group of people who go out of their way to shave off the last milliseconds or microseconds in their tooling care so little about the performance of the code they ship to browsers.

    Not to discredit OP's work of course.

    • People shaving off the last milliseconds or microseconds in their tooling aren't the same people shipping slow code to browsers. Say thanks to POs, PMs, stakeholders, etc.
      • Sometimes they are the same person.

        It just takes someone with poor empathy towards their users to ship slow software that they don't use themselves.

        • I've never met a single person obsessed with performance who goes half the way. You either have a performance junkie or a slob who will be fine with 20-minute compile times.
          • I have. They cared a lot about performance for them because they hated waiting, but gave literally no shit about anyone else.
    • TBH I don't know how to do that work. If I'm in the backend it's very easy for me. I can think about allocations, I can think about threading, concurrency, etc, so easily. In browser land I'm probably picking up some confusing framework, I don't have any of the straightforward ways to reason about performance at the language level, etc.

      Maybe one day we can use wasm or whatever and I can write fast code for the frontend, but not today, and it's a bit unsurprising that others face similar issues.

      Also, if I'm building a CLI, maybe I think that 1ms matters. But someone browsing my webpage one time ever? That might matter a lot less to me, you're not "browsing in a hot loop".

      • It's not too difficult in the browser either. Consider how often you're making copies of your data and try to reduce it. For example:

        - for loops over map/filter

        - maps over objects

        - .sort() over .toSorted()

        - mutable over immutable data

        - inline over callbacks

        - function over const = () => {}

        Pretty much, as if you wrote in ES3 (instead of ES5/6)
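        A concrete (if contrived) sketch of the first point in that list: the chained version allocates two intermediate arrays that the loop version never creates.

```javascript
// Same computation, two allocation profiles.
const nums = [1, 2, 3, 4, 5];

// Chained methods: filter() and map() each allocate an intermediate array.
const sumChained = nums
  .filter((n) => n % 2 === 1)
  .map((n) => n * n)
  .reduce((acc, n) => acc + n, 0);

// Single for loop: no intermediate arrays at all.
let sumLoop = 0;
for (const n of nums) {
  if (n % 2 === 1) sumLoop += n * n;
}

console.log(sumChained, sumLoop); // both 35 (1 + 9 + 25)
```

        For a five-element array it's irrelevant, but inside a render loop or over tens of thousands of items the intermediate copies become GC pressure.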

        • Yes but it's not really fair to expect me to know how to do that. Just because I know how to do it for backend code, where it's often a lot easier to see those copies, doesn't mean I'm just a negligent asshole for not doing it on the frontend. I don't know how, it's a different skillset.
      • The work is largely the same.

        You think about allocations: JS is a garbage collected language and allocations are "cheap", so they're extremely common. The GC is powerful and in most JS engines quite fast, but it's not omniscient and sometimes needs a hand (just like reasoning in any GC language). The easiest intervention is to remove allocations entirely: just because it's cheap to over-allocate, and the GC will mostly smooth out the flaws of such approaches, doesn't mean you can ignore the memory complexity of the chosen algorithms. Most browser dev tools today have allocation profilers equal to or better than their backend cousins.

        You think about threading, concurrency, etc: JS is even a little easier than many backend languages because it is (almost excessively) single-threaded. A lot of concurrency issues cannot exist in current JS designs unless you add in explicit IPC channels to explicitly "named" other threads (Service Workers and Web Workers). On the flipside, JS is a little harder to reason about threading than many backend languages because it is extensively cooperatively threaded. Code has to yield to other code frequently and regularly. Shaving milliseconds off a routine yields more time to other things that need to happen (browser events, user input, etc). That starts to add up. JS encourages you to do things in short, tight "bursts" rather than long-running algorithms. Here again, most browser dev tools today have strong stack trace/flame chart profilers that equal or exceed backend cousins. Often in JS "tall" flames are fine but "wide" flames are things to avoid/try to improve. (That's a bit reversed from some backend languages where shallow is overall less overhead and long-running tasks are sometimes better amortized than lots of short ones.)
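        To illustrate the "short, tight bursts" point, here is a generic sketch (not any particular library's API) of splitting a long loop into time-boxed slices that yield back to the event loop:

```javascript
// Process a large array in ~8 ms slices, yielding between slices so the
// browser (or Node's event loop) can handle input, rendering, timers, etc.
function processInChunks(items, handle, sliceMs = 8) {
  let i = 0;
  return new Promise((resolve) => {
    function runSlice() {
      const deadline = Date.now() + sliceMs;
      while (i < items.length && Date.now() < deadline) {
        handle(items[i++]);
      }
      if (i < items.length) {
        setTimeout(runSlice, 0); // yield, then continue with the next slice
      } else {
        resolve();
      }
    }
    runSlice();
  });
}
```

        In a browser you could use `requestIdleCallback` instead of `setTimeout`, but the shape is the same: many short bursts ("tall" flames) instead of one long-running task ("wide" flame).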

        > But someone browsing my webpage one time ever? That might matter a lot less to me, you're not "browsing in a hot loop".

        The heavily event-driven architecture of the browser often means that just sitting on a webpage is "browsing in a hot loop". Browsers have gotten better and better at sleeping inactive tabs and multi-threading tabs to not interfere with each other, but things are still a bit of a "tragedy of the commons" that the average performance of a website still directly and indirectly drags everyone else down. It might not matter to you that your webpage is slow because you only expect a user to visit it once, but you also aren't taking into account that is probably not the only website that user is browsing at that moment. Smart users do directly and indirectly notice when the bad performance of one webpage impacts their experiences of other web pages or crashes their browser. Depending on your business model and what the purpose of that webpage is for, that can be a bad impression that leads to things like lost sales/customers.

        • I don't think it's the same tbh. In Rust I can often just `rg '\.clone'` and immediately see wins. Allocations are far easier to track statically. I don't have a good sense for "seeing" allocations when I look at JS, it feels like it's unfair to expect me to have that tbh. As for profilers, yes I could see things like "this code is allocating a lot" but JS hardly feels like a language where it's smooth to then fix that, and again, frameworks are so common that I doubt I'd be in a position to do so. This is really in contrast to systems languages again where I also have profilers but fixing the problem is often trivial.

          > You think about threading, concurrency, etc: JS is even a little easier than many backend languages because it is (almost excessively) single-threaded. A lot of concurrency issues cannot exist in current JS designs unless you add in explicit IPC channels to explicitly "named" other threads (Service Workers and Web Workers).

          My issue isn't with being able to write concurrent code that has no bugs, my issue is having access to primitives where I have tight control over concurrency and parallelism. The primitives in JS do not provide that control and are often very heavy in and of themselves.

          I think it's perhaps worth noting that I am not saying "it's impossible to write fast code for the browser", I'm saying it is not surprising that people who have developed skillsets for optimizing backend code in languages designed to be fast are not in a great position to do the same for a website.

    • I personally met a lot of folks who care about both quite a bit.

      But to be fair, besides the usual patterns like tree-shaking and DCE, "runtime performance" is really tricky to measure or optimize for

    • While using Electron in the process.
  • I wrote a simple multi threaded transpiler to transpile TypeScript to JavaScript using oxc in Rust. It could transpile 100k files in 3 seconds.

    It's blisteringly fast

    • sounds impossible to even index and classify files so fast. What hardware?
      • Let's say 100k files is 300k syscalls, at ~1-2us per syscall. That's 300ms of syscalls. Then assume 10kb per file, that's 1GB of file, easily done in a fraction of a second when the cache is warm (it'll be from scanning the dir). That's like 600ms used up and plenty left to just parse and analyze 100k things in 2s.
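        Sanity-checking that arithmetic (using the same assumptions as above: ~3 syscalls per file and ~10kb average file size):

```javascript
// Back-of-envelope: reproduce the estimate from the comment above.
const files = 100_000;
const syscallsPerFile = 3;    // e.g. open + read + close
const syscallCostSec = 1e-6;  // ~1 us per syscall (optimistic end of 1-2 us)
const bytesPerFile = 10_000;  // ~10 kb average source file

const syscallSeconds = files * syscallsPerFile * syscallCostSec;
const totalGigabytes = (files * bytesPerFile) / 1e9;

console.log(syscallSeconds); // 0.3 -> ~300 ms of syscall overhead
console.log(totalGigabytes); // 1   -> ~1 GB to read, fast from page cache
```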
      • I’m assuming they meant 100kloc rather than 100,000 files of arbitrary size (how could we even tell how impressive that is without knowing how big the files are?)
  • I'm surprised to see it's that much faster than SWC. Does anyone have any general details on how that performance is achieved?
    • arena allocation is a big part of it, but also oxc benefits from not having to support the same breadth of legacy transforms that swc accumulated over time. swc has a lot of surface area from being the go-to babel replacement -- oxc could design the AST shape from scratch with allocation patterns in mind. the self-hosting trap (writing js tooling in js) set a performance ceiling for so long that when you finally drop down to Rust and rethink the data layout, the gains feel almost unfair
    • One thing worth noting: beyond raw parse speed, oxc's AST is designed to be allocation-friendly with arena allocation. SWC uses a more traditional approach. In practice this means oxc scales better when you're doing multiple passes (lint + transform + codegen) on the same file because you avoid a ton of intermediate allocations.

      We switched a CI pipeline from babel to SWC last year and got roughly 8x improvement. Tried oxc's transformer more recently on the same codebase and it shaved off another 30-40% on top of SWC. The wins compound when you have thousands of files and the GC pressure from all those AST nodes starts to matter.

    • They wrote a post (https://oxc.rs/docs/learn/performance) but it doesn't include direct comparisons to SWC.
      • Their main page says 3x faster than SWC
        • Yeah, but not how their implementation techniques differ from SWC's to produce those results.
  • I wonder why it took so long for someone to make something(s) this fast when this much performance was always available on the table. Crazy accomplishment!
    • Because Rust makes developers excited in a way that C/C++ just doesn't.
      • Yeah, it is as if there were never any other compiled languages in which to rewrite JavaScript tooling.
        • The word 'excited' in GP's post isn't decorative.
          • I am fully aware of it; there have been many 'excited' posts in HN history about various programming languages, with the related "rewrite X in Y". The remark still stands.
        • Why do people get so mad that other people enjoy a language? If I’m more likely to rewrite some tooling because of the existence of a programming language and it’s more performant, isn’t that good for everyone?

          We are programmers we are supposed to like programming. These rust haters are intolerable.

          • Because it gets tiring to have all those Rewrite X in Y, as if X was the very first language where that is possible.
            • Is anyone forcing you to rewrite anything in rust?
      • C++ is pure trash

        C is fine but old

        • You don't need C(++) to build performant software.
      • We had many faster languages that are not C/C++.

        Compare Go (esbuild) to webpack (JS): it's easily over 100x faster.

        For a dev, time matters, but it's relative: waiting 50 sec for a webpack build compared to 50 ms with a Go toolchain is life-changing.

        But for a dev waiting 50ms or 20ms does not matter. At all.

        So the conclusion is JavaScript devs like hype, flooded into Rust, and built tooling for JS in Rust. They could have used any other compiled language and gotten near the same performance computer-time-wise, or exactly the same human-time-wise.

    • I believe it goes back a few years to originally being just oxlint, and then recently Void Zero was created to fund the project. One of the big obstacles I can imagine is that it needs extensive plugin support to support all the modern flavours of TypeScript like React, Vue, Svelte, and backwards compatibility with old linting rules (in the case of oxlint, as opposed to oxc which I imagine was a by-product).
    • For a couple of reasons:

      * You need to have a clean architecture, so you're starting "almost from scratch"

      * Knowledge about performance (for Rust and for build tools in general) is necessary

      * Enough reason to do so: lack of perf in the competition and users feeling friction

      * Time and money (still have to pay bills, right?)

    • Fractured ecosystem. Low barrier to entry, so loads of tooling.
    • It takes a good programmer to write it, and most good programmers avoid JavaScript unless forced to use it for their day job. In that case, there is no incentive to speed up the part of the job that isn't writing JavaScript.
      • Some of us already have all the speed we need with Java and .NET tooling, don't waste our time rewriting stuff, and don't need to bother with the borrow checker, even if it isn't a big deal to write affine-types-compliant code.

        And we can always reach for Scala or F# if we feel like getting creative with type systems.

      • > It takes a good programmer to write it, and most good programmers avoid JavaScript, unless forced to use it for their day job.

        Nonsense.

  • - seeing this oxlint and oxfmt a lot lately

    - how does it compare to biome?

    - also biome does all 3: linting, formatting and sorting, so why would you want 3 libraries to do the job biome does alone?

  • Does oxc-parser make it easy to remove comments from JavaScript?

    In other words, does it treat comments as syntactic units, or as something that can be ignored since they are not needed by the "next stage"?

    The reason to find out what the comments are is of course to make it easy to remove them.

    • Should be easy with any standard parser. See astexplorer.net
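      Comments really are trivia that most parsers let you collect or drop. For a sense of why you want a real parser rather than string munging, here's a deliberately naive stripper: it handles strings and both comment styles but ignores regex literals, the classic `/` ambiguity (division vs. regex start) that breaks ad-hoc approaches.

```javascript
// Naive comment stripper: treats comments as droppable trivia.
// Handles string/template literals; does NOT handle regex literals or
// expressions inside `${...}`, which is why real tools use a parser.
function stripComments(src) {
  let out = "";
  let i = 0;
  while (i < src.length) {
    const c = src[i];
    const n = src[i + 1];
    if (c === '"' || c === "'" || c === "`") {
      // copy string/template literal verbatim, honoring backslash escapes
      out += c;
      i++;
      while (i < src.length && src[i] !== c) {
        if (src[i] === "\\") out += src[i++];
        out += src[i++];
      }
      out += src[i++] ?? "";
    } else if (c === "/" && n === "/") {
      while (i < src.length && src[i] !== "\n") i++; // drop to end of line
    } else if (c === "/" && n === "*") {
      i += 2;
      while (i < src.length && !(src[i] === "*" && src[i + 1] === "/")) i++;
      i += 2; // skip the closing */
    } else {
      out += src[i++];
    }
  }
  return out;
}

console.log(stripComments("let x = 1; // count\n/* block */ let y = '//ok';"));
```

      A parser like oxc-parser or acorn sidesteps all of this because comments come back with exact source spans, so removal is just slicing them out.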
  • So uv for JavaScript? Nice.
    • No, that would probably be pnpm, even though it's not nearly as fast because it's written in JS.
      • I thought it's mainly written in Rust https://github.com/oxc-project/oxc . Which oxc project is written in JS ?
        • They are talking about pnpm (which they said would be the uv equivalent for node, though I disagree given that what pnpm brings on top of npm is way less than the difference between uv and the status quo in Python).
  • I expected a comparison to `bun build` in the transformer TS -> JS part.

    But I guess it wouldn't be an apples-to-apples comparison because Bun can also run TypeScript directly.

    • You can find a comparison with `bun build` on Bun's homepage. It hasn't been updated in a little while, but I haven't heard that the relative difference between Bun and Rolldown has changed much in the time since (both have gotten faster).

      In text form:

      Bundling 10,000 React components (Linux x64, Hetzner)

        Bundler                    Version                  Time
        ─────────────────────────────────────────────────────────
        Bun                        v1.3.0               269.1 ms
        Rolldown                   v1.0.0-beta.42       494.9 ms
        esbuild                    v0.25.10             571.9 ms
        Farm                       v1.0.5             1,608.0 ms
        Rspack                     v1.5.8             2,137.0 ms
  • zdw
    This compiles to native binaries, as opposed to deno which is also in rust but is more an interpreter for sandboxed environments?
    • Oxc is not a JavaScript runtime environment; it's a collection of build tools for JavaScript. The tools output JavaScript code, not native binaries. You separately need a runtime environment like Deno (or a browser, depending on what kind of code it is) to actually run that code.
    • Deno is a native implementation of a standard library, it doesn't have language implementation of its own, it just bundles the one from Safari (javascriptcore).

      This is a set of linting tools and a type-stripper: a program that removes the type annotations from TypeScript to turn it into pure JavaScript (and turns JSX into document.whateverMakeElement calls). It still doesn't have anything to actually run the program.

      • Deno uses V8, which is from Chrome. Bun uses JavaScriptCore.
      • I'm going to call it: a Rust implementation of JavaScript runtime (and TypeScript compiler) will eventually overtake the official TypeScript compiler now being rewritten in Go.
        • ? Most JavaScript runtimes are already C++ and are already very fast. What would rewriting in Rust get us?
          • Nothing, but it will happen anyway. Maybe improved memory safety and security, at least as a plausible excuse to get funding for it. Perhaps also improved enthusiasm of developers, since they seem to enjoy the newness of Rust over working with an existing C++ codebase. Well there are probably many actual advantages to "rewrite it in Rust". I'm not in support or against it, just making an observation that the cultural trend seems to be moving that way.
        • In popularity or actually take over control of the language?
          • Eventually I imagine a JS/TS runtime written in Rust will be mainstream and what everyone uses.
    • If you want native binaries from typescript, check my project: https://tsonic.org/

      Currently it uses .Net and NativeAOT, but adding support for the Rust backend/ecosystem over the next couple of months. TypeScript for GPU kernels, soon. :)

    • No, it is a suite of tools to handle TypeScript (and JavaScript as its subset). So far it's a parser, a tool to strip TypeScript declarations and produce JS (like SWC), a linter, and a set of code transformation tools / interfaces, as far as I can tell.
  • whats the point of writing rust memory safe for js if js is already memory safe, cant u just write it in js???
    • Too slow. Different people implemented the linter, bundler, and TS compiler in JS. That means three different parsers and ASTs, which is inefficient. These guys want a grand unified compiler to rule them all.
  • Thought this was something related to Oxide Computer - they might want to be careful with that branding.
    • There are like 50 Rust projects named with oxidation puns. This is hardly the first.
  • I've played with all of these various formatters/linters in my workflow. I tend to save often and then have them format my code as I type.

    I hate to say it, but Biome just works better for me. I found the oxc stuff did weird things to my code when it was in weird edge-case states as I was writing: I'd move something around while it was only partially correct, hit save to format, and then everything would come out weird. Biome isn't perfect, but it has fewer of those issues. I suspect this is hard to even test for, because it's mostly unintended side effects.

    ultracite makes it easy to try these projects out and switch between them.

    • oxc formatter is still alpha, give it some time
      • sure, but biome just works today... ¯\_(ツ)_/¯... i don't understand why we need 10 (or even 2) different rust based formatters... people need to just work together a bit more imho.
  • For the love of god, please stop naming Rust projects with "corrosion" and "oxidation" and cute Rust-related puns, because they are overplayed at this point.
    • what's next, stop using the py prefix?
      • I said nothing about the rs prefix. But making oxide, ferrous, Fe2O3, or whatever your whole shtick tells me nothing about your package, and the pun space is so, so very crowded at this point that it just makes for a bad naming scheme.
  • [dead]
    • Oxc is not the first Rust-based product on the market that handles JS; there is also SWC, which is now reasonably mature. I maintain a reasonably large frontend project (tens of thousands of components) and SWC has been our default for years. SWC has ensured there is actually very decent support for JS in the Rust ecosystem.

      I'd say my biggest concern is that the engineers who use JS as their main language are usually not as adept with Rust and may have difficulty maintaining and extending their toolchain, e.g. writing custom linting rules. But most engineers seem interested in learning, so I haven't seen that concern materialize.

      • It's not like JS isn't already implemented in a language that's a lot more similar to Rust anyhow though. When the browser or Node or whatever other runtime you're using is already in a different language out of necessity, is it really that weird for the tooling to also optimize for the out-of-the-box experience rather than people hacking on them?

        Even as someone who writes Rust professionally, I also wouldn't necessarily expect every Rust engineer to be super comfortable jumping into the codebase of the compiler or linter or whatever to be able to hack on it easily because there's a lot of domain knowledge in compilers and interpreters and language tooling, and most people won't end up needing experience with implementing them. Honestly, I'd be pretty strongly against a project I work on switching to a custom fork of a linting tool because a teammate decided they wanted to add extra rules for it or something, so I don't see it as a huge loss that it might end up being something people will need to spend personal time on if they want to explore.

    • The goal is for Vite to transition to tooling built on Oxc. They've been experimenting with Rolldown for a while now (also by VoidZero and built on Oxc) - https://vite.dev/guide/rolldown
    • Depends on how conservative their minifier is. The more aggressive it is, the more likely the bugs. esbuild still hits minifier bugs regularly.
      • Over-minifying is kind of pointless, just do a basic minify and then gzip and call it a day.
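        A quick illustration of why: gzip already removes most of the textual redundancy on the wire, so past basic minification the extra savings shrink fast. A sketch using Node's built-in zlib (the input here is a toy, deliberately repetitive string):

        ```javascript
        const zlib = require("node:zlib");

        // 100 copies of the same small function: highly redundant text,
        // much like real-world unminified JS.
        const source = "function add(a, b) { return a + b; }\n".repeat(100);
        const gzipped = zlib.gzipSync(source);

        // gzip alone collapses the redundancy to well under a tenth
        console.log(gzipped.length < source.length / 10); // true
        ```

        An aggressive minifier still has to win its bytes *after* this compression, which is why the marginal gain rarely justifies the correctness risk.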
  • oxidation is a chemical process where a substance loses electrons, often by reacting with oxygen, causing it to change. What does it have to do with JavaScript?
    • Oxidation of iron produces rust, and Rust is the implementation language of that compiler and of the entire Oxc suite.
      • But rust is named after a mushroom?
        • Rust is the layer in immediate contact with the metal :) That's what the official version says, at least.
        • :shush:
    • It is written in Rust…