- Setting aside the question of whether this would be a useful addition to Node.js core, it must be noted that this 19k LoC PR was mostly generated by Claude Code and manually reviewed by the submitter, which in my opinion is against the spirit of the project and directly violates the terms of the Developer's Certificate of Origin set out in the project's CONTRIBUTING.md
- Fully disagree with this take. Not allowing AI assistance on PRs will likely decimate the project in the future, as it will not allow fast iteration speeds compared to other alternatives.
As a side note, the OpenJS executive director mentioned it's OK to use AI assistance on Node.js contributions:
[1]: https://github.com/nodejs/node/pull/61478#issuecomment-40772...
> I checked with legal and the foundation is fine with the DCO on AI-assisted contributions. We’ll work on getting this documented.
- I appreciate hearing your point of view on this. In my opinion the future of Open Source and AI assisted coding is a much bigger issue, and different people have different levels of confidence in both positive and negative outcomes of LLM impact on our industry.
It is great to have a legal perspective on compliance of LLM generated code with DCO terms, and I feel safer knowing that at least it doesn't expose Node.js to legal risk. However it doesn't address the well known unresolved ethical concerns over the sourcing of the code produced by LLM tooling.
- > Not allowing AI assistance on PRs will likely decimate the project in the future, as it will not allow fast iteration speeds compared to other alternatives.
It's not an AI issue. Node.js itself is lots of legacy code and many projects depend on that code. When Deno and Bun were in early development, AI wasn't involved.
Yes, you can speed up the development a bit but it will never reach the quality of newer runtimes.
It's like comparing C to C++. Those languages are from different eras (relative to each other).
- Large PRs could follow the practices that the Linux kernel dev lists follow. Sometimes large subsystem changes could be carried separately for a while by the submitter, for testing and maintenance, before being accepted in principle, reviewed, and, if ready, merged.
While the large code changes were maintained, they were often split up into a set of semantically meaningful commits for purposes of review and maintenance.
With AI blowing up the line counts on PRs, it's a skill set that more developers need to mature. It's good for their own review to take the mass of changes, ask themselves how they would want to systematically review it in parts, then split the PR up into meaningful commits: e.g. interfaces, docs, subsets of changed implementations, etc.
- Nobody wants to review AI-generated code (unless we are paid for doing so). Open source is fun, that's why people do it for free... adding AI to the mix is just insulting to some, and boring to others.
Like, why on earth would I spend hours reviewing your PR that you/Claude took 5 minutes to write? I couldn't care less if it improves (best-case scenario) my open source codebase; I simply don't enjoy the imbalance.
- > With AI blowing up the line counts on PRs,
Well, the process you’re describing is mature and intentionally slows things down. The LLM push has almost the opposite philosophy. Everyone talks about going faster and no one believes it is about higher quality.
- Go slow to go fast. Breaking up the PR this way also allows later humans and AI alike to understand the codebase. Slowing down the PR process with standards lets the project move faster overall.
If there is some bug that slips by review, having the PR broken down semantically allows quicker analysis and recovery later, for one. Even if you have AI reviewing new Node.js releases to decide whether you want to take in the new version, the commit log will be more analyzable by the AI with semantic commits.
Treating the code as throwaway is valid in a few small contexts, but that is not the case for PRs going into maintained projects like Node.js.
- TBF, most of the AI code I've reviewed isn't significantly different than code I've seen from people... in fact, I've seen significantly worse from real people.
The fact is, it's useful as a tool, but you still should review what's going on/in. That isn't always easy though, and I get that. I've been working on a TS/JS driver for MS-SQL so I can use some features not in other libraries, mostly bridging a Rust driver (first Tiberius, then mssql-client); the clean abstraction made the switch pretty quick... a fairly thorough test suite for Deno/Node/Bun kept the sanity in check. Rust C-style library with FFI access in a TS/JS server environment.
My hardest part is actually having to set up a Windows Server to test the passwordless auth path (basically a connection string with integrated Windows auth). I've got about 80 hours of real time into this project so far. And I'll probably be doing 2 follow-ups... one will be a generic ODBC adapter with a similar set of interfaces, and a final third adapter that will provide the same methods but use native SQLite underneath, smoothing over the differences.
I'm leveraging using/dispose (async) instead of explicit close/rollback patterns, similar to .NET, as well as Dapper-like methods for "typed" results, though with no actual type validation... I'd considered trying to adapt Zod to check at least the first record, or all records, and may still add the option.
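For concreteness, a minimal sketch of that using/dispose pattern, assuming a hypothetical driver module (the connect, beginTransaction, and query names here are invented for illustration, not the commenter's actual API) and a runtime with explicit resource management (recent Node, or TypeScript 5.2+ down-leveling):

    // Sketch only: './driver.js' and its API are hypothetical stand-ins.
    // Requires `using` / `await using` support (explicit resource management).
    import { connect } from './driver.js';

    async function getActiveUsers() {
      // The connection and transaction expose [Symbol.asyncDispose], so both
      // are cleaned up automatically when the block exits, even on a throw.
      await using conn = await connect('Server=localhost;Database=app');
      await using tx = await conn.beginTransaction(); // rolls back unless committed
      const rows = await tx.query('SELECT id, name FROM users WHERE active = 1');
      await tx.commit();
      return rows; // no explicit close()/rollback() calls needed
    }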
All said though, I wouldn't have been able to do so much with so relatively little time without the use of AI. You don't have to sacrifice quality to gain efficiency with AI, but you do need to take the time to do it.
> Everyone talks about going faster and no one believes it is about higher quality.
Go Fast And Break Things was considered a virtue in the JavaScript community long before LLMs became widely available.
- Do as I say, not as I do.
On a more serious note, I think that this will be thoroughly reviewed before it gets merged, and Node has an entire security team that oversees these.
- As someone who was a part of the aforementioned security team, I'm not sure I'd be interested in reviewing such a volume of machine-generated code, expecting a trap around every corner. The implicit assumption that I observed at many OSS projects I've been involved with is that first-time contributions are rarely accepted if they are too large in volume, and the "core contributor" designation exists to signal "I put effort into this code, stand by it, and respect everyone's time in reviewing it". The PR in the post violates this social contract.
- For free, you can decide to do what you want; if it's your job, it's a bit different and you may have to do so, especially considering Collina is one of the largest contributors to the project and a member of the technical committee.
- > if it's your job, it's a bit different and you may have to do so
Oh I'd use an llm to generate large amounts of feedback and request changes!
- How exactly does it violate the Developer's Certificate of Origin clause?
- The submitted code must satisfy at least one of clauses (a), (b), or (c), and separately clause (d), of: https://github.com/nodejs/node/blob/main/CONTRIBUTING.md#dev...
If the submitter picks (a), they assert that they wrote the code themselves and have the right to submit it under the project's license. If (b), the code was taken from elsewhere with clear license terms compatible with the project's license. If (c), the contribution was written by someone else who asserted (a) or (b) and is submitted without changes.
Since LLM-generated output is based on public code but lacks attribution and the license of the original, it is not possible to pick (b). (a) and (c) cannot be picked, based on the submitter's disclaimer in the PR body.
- Not sure if you are intentionally misrepresenting (a), but here is the full text
(a) The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or
- If there's a "the original" the LLM is copying then there's a problem.
If there isn't, then (b) works fine, the code is taken from the LLM with no preexisting license. And it would be very strange if a mix of (a) and (b) is a problem; almost any (b) code will need some (a) code to adapt it.
- To many, it qualifies under either A or B, and therefore C as well. Under A, you can think of the LLM as augmenting your own intelligence. Under B, the license terms of LLM output are essentially that you can do whatever you want with it. The alternative is avoiding use of AI because of copyright or plagiarism concerns.
- It would be considered (a) since the author would own the copyright on the code.
- Owning copyright of something and writing it are very different things
- Citation needed.
Whether AI output can fall under copyright at all is still up for debate - with some early rulings indicating that the fact that you prompted the AI does not automatically grant you authorship.
Even if it does, it hasn't been settled yet what the impact of your AI having been trained on copyrighted material is on its output. You can make a not-completely-unreasonable argument that AI inference output is a derivative work of AI training input.
Fact is, the matter isn't settled yet, which means any open-source project should assume the worst possible outcome - which in practice means a massive AI-generated PR like this should be treated like a nuke which could go off at any moment.
- Why write open-source software at all, when the government could outlaw open-source entirely? What if an asteroid destroys Earth and there are no humans left to enjoy your work? At some point, you have to agree that a risk isn't worth worrying about. And your "worst possible outcome" is just an arbitrary outcome that crosses your own subjective risk threshold, and it's certainly not one I agree with. Furthermore, calling it a "nuke" is a bad analogy because that implies it can't be put back in the bottle once opened. In reality, we're dealing with legal definitions, which can be redefined as easily as defined.
- The two main points are that:
1. Copyright cannot be assigned to an AI agent.
2. Copyrighted works require human creativity to be applied in order to be copyrighted.
For point 2, this would apply to cases where AI one-shots a generic prompt. But for these large PRs, where multiple prompts are used and a human has decided what the design should be and how the API should look, you get the human creativity required for copyright.
In regards to being a derivative work I think it would be hard to argue that an LLM is copying or modifying an existing original work. Even if it came up with an exact duplicate of a piece of code it would be hard to prove that it was a copy and not an independent recreation from scratch.
>the worst possible outcome
The worst possible outcome is they get sued and Anthropic defends them from the copyright infringement claim due to Anthropic's indemnity clause when using Claude Code.
- That indemnity clause is only for Team, Enterprise and API users. Do you know what was used here?
Also the commercial version is limited to “…Customer and its personnel, successors, and assigns…”. I am very much not a lawyer and couldn’t find definitions of these in the agreement but I am not sure how transferable this indemnity would be to an open source project.
- I reviewed it and it looks like personal Claude Code subscriptions are not covered, so it's riskier than I claimed.
- I'm not convinced that allowing Node to import "code generated at runtime" is actually a good thing. I think it should have to go through the hoops to get loaded, for security reasons.
I like the idea of it mocking the file system for tests, but I feel like that should probably be part of the test suite, not Node.
The example towards the end that stores data in a sqlite provider and then saves it as a JSON file is mind-boggling to me. Especially for a system that's supposed to be about not saving to the disk. Perhaps it's just a bad example, but I'm really trying to figure out how this isn't just adding complexity.
or more to the point:

    node -e "new Function('console.log(\"hi\")')()"
that one is particularly bad, because umd messes with the global object - so this works:

    node -e "fetch('https://unpkg.com/cowsay/build/cowsay.umd.js').then((r) => r.text()).then(c => new Function(c + 'console.log(exports.say({ text: \"like this\"}))')())"

    node -e "fetch('https://unpkg.com/cowsay/build/cowsay.umd.js').then((r) => r.text()).then(c => new Function(c)()).then(() => console.log(exports.say({ text: 'oh no'})))"

- Well there you have it.
I had to laugh, because the post you're replying to STRONGLY reminds me of this story, https://news.ycombinator.com/item?id=31778490 , in which some people on the GNOME project objected to thumbnails in the file-open dialog box because it might be a "Security issue" (even though thumbnails were available in the normal file browser, something those commenters probably should have known about, but didn't, but they just had to chime in anyway).
- But then you go "hang on, doesn't ESM exist?" and you realize that argument 4 isn't even true. You can literally do what this argument says you can't, by creating a blob instead of "writing a temp file" and then importing that using the same dynamic import we've had available since <checks his watch> 2020.
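For reference, a minimal sketch of that: Node's ESM loader accepts data: URLs, so a module that exists only in memory can be imported without a temp file (run as an ES module; blob: URL support varies by runtime):

    // Import a module that only exists in memory via a data: URL (ESM context).
    const source = 'export const greet = (name) => `hi ${name}`;';
    const url = 'data:text/javascript,' + encodeURIComponent(source);
    const mod = await import(url);
    console.log(mod.greet('node')); // "hi node"

The catch, as the reply below notes, is that anything the in-memory module itself statically imports still has to resolve somewhere real, which is where a virtual filesystem would help.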
- A virtual filesystem makes it possible for the ESM you import to statically import other files in the virtual filesystem, which isn't possible by just dynamically importing a blob. Anything your blob module imports has to be updated to dynamically import its dependencies via blobs.
- There's also a module expression proposal, that would remove the need to use blob imports.
- Using Claude for code you use yourself or at your own company internally is one thing, but when you start injecting it into widely-shared projects like this (or, the linux kernel, or Debian, etc) there will always be a lingering feeling of the project being tainted.
Just my opinion, probably not a popular one. But I will be avoiding an upgrade to Node.js after 24.14 for a while if this is becoming an acceptable precedent.
- one of the reasons I prefer deno is the availability of indexeddb (and all the other great stuff that comes with it out of the box)
- How about trying to reduce dependencies? 11ty is going in the right direction, dropping a significant chunk of its various dependencies, replacing them with packages that have no dependencies, or using platform features as they become readily available.
- Would be nice if node packages could be packed up in ZIP files so as to avoid the security/metadata tax for small-file access on Windows.
- The number of files in the node modules folder is crazy, any amount of organization that can tame that chaos is welcomed.
- And if you thought malware hiding in a mess of files was bad, just wait till you see it in two layers of container files.
- Or worse yet, the performance load of anti-malware software that has to look inside ZIP files.
Look, most of us realized around 2004 or so that if you had a choice between Norton and the virus you would pick the virus. In the Windows world we standardized around Defender because there is some bound on how much Defender degrades the performance of your machine which was not the case with competitive antivirus software.
I've done a few projects which involved working with container file formats like ZIP and PDF (you know it's a graph of resources in which some of those resources are containers that contain more resources, right?), and now that I think of it you ought to be able to virus scan ZIP files quickly and intelligently, but the whole problem with the antivirus industry is that nobody ever considers the cost.
- Now we'll have to encrypt the files to prevent the performance hit of antivirus peeking inside.
Oh, wait...
- There are alternative package managers like Yarn that use zip files as a way to store each Node package.[0]
- Strong recommendation to use PNPM instead of yarn or npm. IME (webdev since 1998) it's the only sane tool for stewardship of an npm dependency graph.
See https://pnpm.io/motivation
Also, while popularity isn't necessarily a great indicator of quality, a quick comparison shows that the community has decided on pnpm:
- yarn with zero-installs removes an awful lot of pain present in npm and pnpm. It's practically the whole point of yarn berry.
Firstly - with yarn pnp zero-installs, you don't have to run an `install` every time you switch branch, just in case a dep changed. So much dev time is wasted due to this.
Secondly - "it worked on my machine" is eliminated. CI and deploy use the exact same files - this is particularly important for deeply nested range satisfied dependencies.
Thirdly - packages committed to the repo allows for meaningful retrospectives and automated security reviews. When working in ops, packages changing is hell.
All of this is facilitated by the zip files that the comment you replied to was discussing, that you tangented away from.
The graph you have linked is fundamentally odd. Firstly - there is no good explanation of what it is actually showing. I've had claude spin on it and it reckons it's npm download counts. This leads to it being a completely flawed graph! Yarn berry is typically installed either via corepack or bootstrapped via package.json and the system yarn binary. Yarn even saves itself into your repo. pnpm is never (I believe) bundled with the system node, whereas yarn and npm typically are.
Your graph doesn't show what you claim it does.
- ... and of course JAR files in Java are just ZIP files with a little extra metadata and the JVM can unpack them in realtime just fine.
- When npm decided to have per-project node_modules (rather than shared like ruby and others) and human readable configs and library files I think the goal was to be a developer friendly and highly configurable, which it is. And package.json became a lot more than that as a result, it’s been a great system IMO.
Combined with a hackable IDE like Atom (Pulsar) made with the same tech it’s a pretty great dev exp for web devs
- I remember when Firefox started putting everything into jars for similar reasons.
https://web.archive.org/web/20161003115800/https://blog.mozi...
- Would accessing deps directly from a zip really be faster? I'd be a little surprised but not terribly, given that it's readonly on an fs designed for RW. If not, maybe just tar?
- You just cat the exe with the zip file, then it is all loaded into memory at the same time on process init. This is how e.g. LÖVE does game code packaging. (It can't be tar, because this trick only works because the PKZIP descriptor is at the end of the file.)
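A rough sketch of that trick in Node (file names here are made up); it works because zip readers locate the central directory from the end of the file, so the executable bytes in front don't interfere:

    // Append a zip archive to an executable; hypothetical file names.
    const fs = require('node:fs');
    const exe = fs.readFileSync('runtime.exe');   // the launcher binary
    const zip = fs.readFileSync('assets.zip');    // packaged code + assets
    fs.writeFileSync('bundled.exe', Buffer.concat([exe, zip]));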
- You can always use virtualized Linux to avoid the NTFS penalty (WSL2, VS Code dev containers, etc.)
- Moving your whole workflow into WSL or nested containers just to dodge NTFS is a band-aid. Then you get flaky file watchers, odd perms, and a dev setup that feels like a workaround piled on top of another workaround. A fast Node VFS would remove a lot of this nonsense.
- Oh it's a workaround for sure, didn't mean to suggest otherwise.
- It’s insane to me that node works how it does. Zip files make so much more sense, I really liked that about Yarn.
- Would it work to run a bundler over your code, so all (static) imports are inlined and tree shaken?
> You can’t import or require() a module that only exists in memory.
You can convert it into a data url and import that, can't you?
- What happens to relative imports?
- Yeah but Claude didn't suggest that when it wrote this blog post and did all the work so...
- Most of the 4 justifications mentioned sound like mitigations of otherwise bad design decisions. JavaScript in the browser went down this path for the longest time where new standards were introduced only to solve for stupid people instead of actually introducing new capabilities that were otherwise unachievable.
I do see some original benefits to a VFS though, bad application decisions aside, but they are exceedingly minor.
As an aside I think JavaScript would benefit from an in-memory database. This would be more of language enhancement than a Node.js enhancement. Imagine the extended application capabilities of an object/array store native to the language that takes queries using JS logic to return one or more objects/records. No SQL language and no third party databases for stuff that you don't want to keep in offline storage on a disk.
- > I think JavaScript would benefit from an in-memory database.
Why would you want a language enhancement for that, rather than just writing it in JS code? (or perhaps WASM)
That database would probably look a lot like a JSON object. What are you suggesting that a global JSON object does not solve?
- Whether it is an object, array, something else, or a combination thereof is a design decision. It is not so much about the design of the structure, which should be determined by execution performance considerations, but about how information is added, removed, and retrieved. Gathering one or more records from a JSON object or array index, by the value of some child property somewhere in a descendant structure of the instance, always feels like a one-off based upon the shape of the data. That could instead be a query, which is more elegant to read and yet still achieves superior execution performance compared to a bunch of nested loops or a chain of array methods.
The more structures you have in a given application and the larger those structures become in their schemas the more valuable a uniform storage and retrieval solution becomes.
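For concreteness, a rough sketch of the kind of uniform store being described; nothing like this exists in the language today, and the API here is invented purely for illustration:

    // Hypothetical in-memory store: uniform add/retrieve plus "queries using
    // JS logic" (plain predicates), with an index so keyed lookups avoid scans.
    class MemStore {
      #records = [];
      #index = new Map(); // key value -> array of records

      constructor(keyPath) { this.keyPath = keyPath; }

      add(record) {
        this.#records.push(record);
        const key = record[this.keyPath];
        if (!this.#index.has(key)) this.#index.set(key, []);
        this.#index.get(key).push(record);
      }

      byKey(value) { return this.#index.get(value) ?? []; }        // indexed, no scan
      where(predicate) { return this.#records.filter(predicate); } // arbitrary JS logic
    }

    const users = new MemStore('country');
    users.add({ id: 1, name: 'Ada', country: 'UK' });
    users.add({ id: 2, name: 'Bob', country: 'US' });
    users.byKey('UK');                          // [{ id: 1, ... }]
    users.where((u) => u.name.startsWith('B')); // [{ id: 2, ... }]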
- sorted maps with log(n) access.
- > As an aside I think JavaScript would benefit from an in-memory database.
isn't that just global state, or do you mean you want that to be persistent?
- Funnily enough, we just released Edge.js, which uses Wasmer under the hood for sandboxing Node.js apps.
With it, you have a virtual fs automatically, just by using the `node:fs` package (or any other filesystem calls!)
We wrote about this in depth here: https://wasmer.io/posts/edgejs-safe-nodejs-using-wasm-sandbo...
- Yarn, pnpm, webpack all have solutions for this. Great to see this becoming a standard. I have a project that is severely handicapped due to FS. Running 13k tests takes 40 minutes, where a virtual file system that Node would just work with would cut the run time to 3 minutes. I experimented with some hacks and decided to stay with the slow but native FS solution.
What I really want is a way of swapping FS with VFS in a Node.js program harness. Something like
    node --use-vfs --vfs-cache=BIG_JSON_FILE

So basically Node never touches the disk and loads everything from memory.
- The way to do this today is to do it outside of node: use an overlay fs with the overlay being a ramfs. You can even chroot into it if you can't scope the paths you need to be just downstream from some directory. Or just use docker.
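An in-process option that exists today is the third-party memfs package (not Node core; it backs webpack's in-memory builds). A sketch, assuming the code under test can be handed an fs-compatible object instead of importing node:fs directly:

    // Sketch using the third-party memfs package.
    const { Volume, createFsFromVolume } = require('memfs');

    const vol = Volume.fromJSON({
      '/app/config.json': '{"mode":"test"}',
      '/app/fixtures/data.txt': 'hello',
    });
    const memFs = createFsFromVolume(vol); // fs-like API backed entirely by memory

    console.log(memFs.readFileSync('/app/config.json', 'utf8'));
    // Pass memFs into the code under test instead of require('node:fs').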
- making that work cross platform is pure pain
- yes and no. Waiting 40mins for every test run is pure pain, platform specific ramfs type mounting is quite scriptable. Yes some devs might need to install a dependency, but its not a complex script.
- Why do people keep reinventing OS features?
There's Docker, OverlayFS, FUSE, ZFS or Btrfs snapshots?
Do you not trust your OS to do this correctly, or do you think you can do better?
A lot of this stuff existed 5, 10, 15 years ago...
Somehow there's been a trend for every effing program to grow and absorb the features and responsibilities of every other program.
Actually, I have a brilliant idea, what if we used nodejs, and added html display capabilities, and browser features? After all Cursor has already proven you can vibecode a browser, why not just do it?
I'm just tired at this point
- This exact thing solves a huge problem with SEA binaries as he points out in his post. You can include complicated assets easily and skip an ugly unpack step entirely. This is very useful.
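For context, the current mechanism is a flat key/value asset store declared in the SEA config and read back through node:sea; a rough sketch (file names invented, keys per the SEA docs), which a mounted VFS would generalize:

    // sea-config.json:
    //   {
    //     "main": "app.js",
    //     "output": "sea-prep.blob",
    //     "assets": { "template.html": "./dist/template.html" }
    //   }
    //
    // app.js:
    const { isSea, getAsset } = require('node:sea');

    if (isSea()) {
      // The asset was baked into the binary at build time; no unpack step.
      const html = getAsset('template.html', 'utf8');
      console.log(html.length);
    }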
- One of the worst is media players that all insist on grafting their own "library" on top of my already-working OS filesystem. So I can't just run the media player and play files. No, that would be too simple. I have to first "import" my media into a "library" abstraction and then store that library somewhere else on my filesystem. Terrible!
- Don't all projects eventually grow to encompass service discovery?
- yarn pnp is currently broken on Node v25.7+;
- https://github.com/yarnpkg/berry/issues/7065
- https://github.com/nodejs/node/issues/62012
This is because yarn patches fs in order to introduce virtual file path resolution of modules in the yarn cache (which are zips), which is quite brittle and was broken by a seemingly unrelated change in 25.7.
The discussion in issue 62012 is notable - it was suggested yarn just wait for vfs to land. This is interesting to me in two ways: firstly, the node team seems quite happy for non-trivial amounts of the ecosystem to just be broken, and suggests relying on what I'm assuming will be an experimental API when it does land; secondly, it implies a lot of confidence that this feature will land before LTS.
- Strong rec to choose PNPM over yarn. I just posted this in a peer comment: https://news.ycombinator.com/item?id=47415173
Not spamming, not affiliated, just trying to help others avoid so much needless suffering.
- This is quite spammy; you could mitigate it by explaining what you think the "needless suffering" is. Having been using npm, pnpm, and yarn for many years the only benefit I find with pnpm is a little bit of speed when using the cli, but not enough that I notice; I've outlined the major yarn benefit to me 'in a peer comment' (which I didn't realise was you when I answered) https://news.ycombinator.com/item?id=47415660
I expect yarn to have a real competitor sooner rather than later that will replace it; and I do wonder if it is this vfs module that will enable it.
- I just use npm because I like to stay as vanilla as possible. Glad that alternatives exist though.
- This can't be overstated. The main benefit with yarn berry (v4+) is being able to commit the dependencies to the repo - I have yarn-based tools that I wrote years ago that just work, whereas I frequently find npm and python tools are broken due to version changes. However, this benefit comes at a setup cost and a lot more on-disk complexity - one-off tools are just npm and done.
- I could see something like this being useful if it could be passed to workers to replace any fs access inside the worker.
- Can you dynamically load code via eval?
(I know, I know, it's ugly and has its own set of problems)
- I'm not convinced this needs to be in core Node, but being able to have serverless functions access a file system without providing storage would definitely have some use cases. Had some fun with video processing recently that this would be perfect for.
- How does electron do this with its packaged files? I suppose it does not work with module resolution?
- Separate from the valid critiques in other comments, Go's io.FS interface is really nice for making these sorts of things. Is there something like this in Node already? (with base implementations like host and in-memory)
- > You can’t import or require() a module that only exists in memory.
Sure you can. Function() exists and require.cache exists. This is _intentionally_ exploitable.
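A sketch of the kind of thing being alluded to, using the underscored (internal) CommonJS surfaces, so treat it as a hack rather than a supported API:

    // Run a CommonJS module whose source exists only in memory.
    const Module = require('node:module');

    const filename = '/virtual/answer.js'; // never written to disk
    const source = 'module.exports = { answer: 42 };';

    const m = new Module(filename, module);
    m._compile(source, filename);     // compile and execute the in-memory source
    require.cache[filename] = m;      // prime the CJS cache with it

    console.log(m.exports.answer); // 42
    // Note: a bare require('/virtual/answer.js') still fails at path resolution;
    // tools like Yarn PnP additionally patch Module._resolveFilename for that.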
- Yeah. That’s what we need. More Node.
- Is node::vfs the new solution for JupyterLite filesystems?
From https://github.com/jupyterlite/jupyterlite/issues/949#issuec... :
> Ideally, the virtual filesystem of JupyterLite would be shared with the one from the virtual terminal.
emscripten-core/emscripten > "New File System Implementation": https://github.com/emscripten-core/emscripten/issues/15041#i... :
> [ BrowserFS, isomorphic-git/lightningfs, ]
pyodide/pyodide: "Native file system API" #738: https://github.com/pyodide/pyodide/issues/738 re: [Chrome,] Filesystem API :
> jupyterlab-git [should work with the same VFS as Jupyter kernels and Terminals]
pyodide/pyodide: "ENH Add API for mounting native file system" #2987: https://github.com/pyodide/pyodide/pull/2987
- > Let me be honest: a PR that size would normally take months of full-time work. This one happened because I built it with Claude Code.
The node.js codebase and standard library has a very high standard of quality, hope that doesn't get washed out by sloppy AI-generated code.
OTOH, Matteo is an excellent engineer and the community owes a lot to him. So I guess the code is solid :).
- Are people still building new projects on Node.js? I would have thought the ecosystem was moving to deno or bun now
- I don't really understand what the value proposition of Bun and Deno is. And I see huge problems with their governance and long-term sustainability.
Node.js on the other hand is not owned or controlled by one entity. It is not beholden to the whims of investors or a large corporation. I have contributed to Node.js in the past and I was really impressed by its rock-solid governance model and processes. I think this an under-appreciated feature when evaluating tech options.
- Deno has some pretty nice unique features like sandboxing that, afaik, don't exist in other runtimes (yet). It's enough of a draw that it's the recommended runtime for projects like yt-dlp: https://github.com/yt-dlp/yt-dlp/issues/14404
- Node has sandboxing these days: https://nodejs.org/api/permissions.html
- No it doesn't, unfortunately.
> The permission model implements a "seat belt" approach, which prevents trusted code from unintentionally changing files or using resources that access has not explicitly been granted to. It does not provide security guarantees in the presence of malicious code. Malicious code can bypass the permission model and execute arbitrary code without the restrictions imposed by the permission model.
Deno's permissions model is actually a very nice feature. But it is not very granular so I think you end up just allowing everything a lot of the time. I also think sandboxing is a responsibility of the OS. And lastly, a lot of use cases do not really benefit from it (e.g. server applications).
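For reference, Node's seat-belt model is opt-in per process and scoped by path, roughly like this (server.js is a placeholder; older releases spell the main flag --experimental-permission):

    node --permission --allow-fs-read=/app/ --allow-fs-write=/tmp/ server.js

Anything not granted (other paths, child processes, workers) is denied, which is the "seat belt" against accidents rather than a sandbox against malicious code, per the docs quoted above.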
- If one gets nothing from them directly, they've at least been a good kick to get several features into Node. It's almost like neovim was to vim, perhaps to a lesser extent.
- Note that Bun was recently acquired by Anthropic.
- Faster, no transpilation, dev-ex sugar.
- I agree about the governance and long-term sustainability points but if you don't see any value in Bun or Deno is probably because (no offense) you are not paying attention.
- loud people on twitter are always switching to the new hotness. i personally can't see myself using bun until its reputation for segfaults goes away after a few more years of stabilizing. deno seems neat and has been around for longer, but its node compatibility story is still evolving; i'm also giving it another year before i try it.
- Wow, I thought you were exaggerating, but no: https://github.com/oven-sh/bun/issues?q=is%3Aissue%20state%3...
Open 80, closed 492.
- That's basically just Zig, right? Re-invented C but only fixed the syntax, not the problems.
- Yes people are using Node.js, most likely the majority.
- Why?
- The delusion in this comment is insane.
- The Node team has lost the plot IMO.
By far the most critical issue is the over reliance on third party NPM packages for even fundamental needs like connecting to a database.
- What would a Node-native database connection layer look like? What other platforms have that?
Databases are third party tech, I don’t think it’s unreasonable to use a third party NPM module to connect to them.
- Most obviously, Java has JDBC. I think .NET has an equivalent. Drivers are needed but they're often first party, coming directly from the DB vendor itself.
Java also has a JIT compiling JS engine that can be sandboxed and given a VFS:
https://www.graalvm.org/latest/security-guide/sandboxing/
N.B. there's a NodeJS compatible mode, but you can't use VFS+sandboxing and NodeJS compatibility together because the NodeJS mode actually uses the real NodeJS codebase, just swapping out V8. For combining it all together you'd want something like https://elide.dev which reimplemented some of the Node APIs on top of the JVM, so it's sandboxable and virtualizable.
- > Most obviously, Java has JDBC. I think .NET has an equivalent. Drivers are needed but they're often first party, coming directly from the DB vendor itself.
So it's an external dependency that is not part of Java. It doesn't really matter if the code comes from the vendor or not. Especially for OpenSource databases.
- DBMS vendor providing the client is nice. At least if you're using pg-native in Node, that's just a wrapper around the Postgres-owned libpq, but I've run into small breaking updates before that I don't feel would've happened if Postgres maintained both.
- Well in the case of Oracle you can get the language, runtime, DB and driver all from the same organization under unified support contracts.
If you don't value that, why would you want your programming language implementors to also implement database drivers?
- Well that's only because Oracle happens to own both Java and Oracle DB. Suppose you're not using that DB.
- Bun provides native MySQL, SQlite, and Postgres drivers.
I'm not saying Node should support every db in existence but the ones I listed are critical infrastructure at this point.
When using Postgres in Node you either rely on the old pg which pulls 13 dependencies[1] or postgres[2] which is much better and has zero deps but mostly depends on a single guy.
- Outside of sqlite, what runtimes natively include database drivers?
- Bun, .NET, PHP, Java
- For .NET, only the old legacy .NET Framework; SqlClient was moved to a separate package with the rewrite (from System.Data.SqlClient to Microsoft.Data.SqlClient). They realized that it was a rather bad idea to have that baked into your main runtime, as it complicates your updates.
- It's still provided by Microsoft. They are responsible for those first party drivers.