- > Fundamental in the dependency cooldown plan is the hope that other people - those who weren't smart enough to configure a cooldown - serve as unpaid, inadvertent beta testers for newly released packages.
This is wrong to an extent.
This plan works by letting software supply chain companies find security issues in new releases. Many security companies have automated scanners for popular and less popular libraries, with manual triggers for those libraries which are not in the top N.
Their incentive is to be the first to publish a blog post about a cool new attack that they discovered and that their solution can prevent.
- Sure, but the alternative the author proposes not only allows for time for those scanners to run but explicitly models that time as a formal part of the release process.
Status quo (at least in most languages' package managers) + cooldowns basically means that running those checks happens in parallel with the new version becoming the implicit default version shipped to the public. Isn't it better to run the safety and security checks before making it the default?
- Agreed that the upload queue solves this problem, but, one thing about the current system is it lets people choose where on the continuum they want to be depending on their risk/reward profile.
- FTA, "even make the queued releases available for intentional, explicitly volunteering beta testers to try out." Under the proposed system, you have to opt in to the insecure early releases rather than opting out of them - which seems like a more secure default!
- > insecure early releases
This is the wrong framing.
There's no free lunch here. Delays in publishing not only slow down attacks, they also slow down critical security patches. There's no one-size-fits-all policy here, you're at risk either way.
- I would suggest the current system fails to efficiently choose (as you have to align multiple pathways, like updates, "manual" installs, adding new packages), and so effectively there's only the illusion of choice. Switching instead to a queue not only means that there's time for QA/security scans, but it's much easier to make the choice to speed up than slow down.
- Or: make the client side automatically pick the previous version if the latest is too new.
That's a lot less work than putting an extra validation step into the publishing pipeline. And with sane defaults it lets the user make an informed decision when special circumstances arise.
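A minimal sketch of that client-side rule, assuming a PyPI-style metadata shape where each version lists its files with upload timestamps (function and field handling here are illustrative, not any real resolver's implementation):

```python
from datetime import datetime, timedelta, timezone

def cooled_versions(releases: dict, now: datetime, cooldown: timedelta) -> list:
    """Return versions whose newest file upload is older than `cooldown`.

    `releases` mimics PyPI's JSON API shape:
    {version: [{"upload_time_iso_8601": "..."}, ...]}.
    A real resolver would also handle yanked files, pre-releases, and
    proper version ordering (e.g. packaging.version.Version).
    """
    cutoff = now - cooldown
    ok = []
    for version, files in releases.items():
        if not files:
            continue  # no artifacts uploaded for this version
        newest = max(
            datetime.fromisoformat(f["upload_time_iso_8601"]) for f in files
        )
        if newest <= cutoff:
            ok.append(version)
    return ok

# Example: with a 7-day cooldown, a release from yesterday is skipped.
now = datetime(2024, 6, 10, tzinfo=timezone.utc)
releases = {
    "1.0.0": [{"upload_time_iso_8601": "2024-05-01T00:00:00+00:00"}],
    "1.1.0": [{"upload_time_iso_8601": "2024-06-09T00:00:00+00:00"}],
}
print(cooled_versions(releases, now, timedelta(days=7)))  # ['1.0.0']
```

The point being: all of this lives in the client, with a user-overridable default, rather than in the registry's publishing pipeline.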
- Linux distributions have done this in the past. It can work and can provide a good revenue source.
- >Sure, but the alternative the author proposes not only allows for time for those scanners to run but explicitly models that time as a formal part of the release process.
This is true but that doesn't make "Dependency cooldowns turn you into a free-rider", the title of the article and the subject of the first part, true.
- Security people should love a delay in distribution as packages wait in the queue. Then they have an opportunity to report before anyone else.
- And it doesn't have to be separate companies. You can have cooldowns on most machines but reserve a few with no cooldowns that run vulnerability scanners & act like honeypots. Check for new activity after updates of the honeypot machines, e.g. connections to new domains, and flag what updated for review.
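The honeypot check could be as simple as diffing observed egress before and after an update batch; a sketch under that assumption (the helper and its data shapes are invented, and real egress capture would come from DNS logs or a proxy):

```python
def flag_updates_for_review(baseline_domains, observed_domains, updated_packages):
    """Sketch of the honeypot check: after a no-cooldown machine takes an
    update batch, compare outbound domains against the pre-update baseline
    and flag the whole batch for human review if anything new appears."""
    novel = set(observed_domains) - set(baseline_domains)
    if not novel:
        return None  # nothing unusual observed after this batch
    return {
        "new_domains": sorted(novel),
        "suspect_packages": sorted(updated_packages),
    }

# Example: a fresh connection to an unknown host flags the updated packages.
report = flag_updates_for_review(
    baseline_domains={"registry.npmjs.org", "api.github.com"},
    observed_domains={"registry.npmjs.org", "api.github.com", "evil.example"},
    updated_packages=["left-pad", "chalk"],
)
print(report)
```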
- I feel like this is false. These companies mostly seem to monitor social media and security mailing lists with an army of LLMs and then republish someone else's free labor as an LLM slop summary as fast as possible whilst using dodgy SEO practices to get picked up quickly.
They do do original work sometimes. But most of it feels like reposted stuff from the open source community or even other vendors
- "This plan works by letting software supply chain companies find security issues in new releases."
If it was that easy we'd simply find all vulnerabilities before the release. If the supply chain companies can run the scanners you can (and should) run them too. Even if we assume there is more to it, it would make sense to let those companies do the work before GA.
But it is not that easy. The true value comes from many eyeballs, and then we are back at cooldowns being some eyeballs grifting off others.
- Consumers of dependencies aren't necessarily - or, I would argue, even typically - eyeballing them. The eyeballers in practice seem to mostly be hackers. Skipping the cooldown doesn't mean you're contributing eyes; it means you're volunteering to make the attack's victim count bigger.
No-one is hurt by having the cooldown. Hackers could choose to also have a cooldown, but must balance the risk of competing groups exploiting vulnerabilities first against the reward of a bigger pool of victims to exploit, and without collusion that still favours early exploits over held ones.
- "Consumers of dependencies aren't necessarily - or, I would argue, even typically - eyeballing them."
No, but they are the reason software supply chain companies look into the releases. Cooldowns may well shift the priorities and therefore hurt the ones not doing them, or doing shorter periods.
- [flagged]
- Okay sure, but what happens when a high-severity CVE is discovered that requires immediate patching – does that get around the Upload Queue? If so, it's possible one could opportunistically co-author the patch and shuttle in a vulnerability, circumventing the Upload Queue.
If you instead decide that the Upload Queue can't be circumvented, now you're increasing the duration a patch for a CVE is visible. Even if the CVE disclosure is not made public, the patch sitting in the Upload Queue makes it far more discoverable.
Best as I can tell, neither of these fairly obvious issues is covered in this blog post, but they clearly need to be addressed for Upload Queues to be a good alternative.
--
Separately, at least with NPM, you can define a cooldown in your global .npmrc, so the argument that cooldowns need to be implemented per project is, for at least one (very) common package manager, patently untrue.
# Wait 7 days before installing
npm config set min-release-age 7
- This literal example is actually addressed by the Debian example - the security team has powers to shuttle critical CVEs through but it’s a manual review process.
There’s a bunch of other improvements they call out like automated scanners before distribution and exactly what changed between two distributed versions.
The only oversight I think in the proposal is staggered distributions so that projects declare a UUID and the distribution queue progressively makes it available rather than all or nothing
- > The only oversight I think in the proposal is staggered distributions so that projects declare a UUID and the distribution queue progressively makes it available rather than all or nothing
That is indeed an oversight - I wish I had thought of that idea!
- But the whole point of using pypi and npm is because distributions are a thing that only old graybeard boomers use.
- > Okay sure, but what happens when a high CVE is discovered that requires immediate patching
I'm pretty sure, once cooldowns are widely implemented, the first priority of attackers will become to convince people to make an exception for their update because "this is really really urgent" etc.
- At least it’s a bit harder because you need to finesse the manual review somehow; and it’ll leave a bigger paper trail. It’s not a perfect defence but it’s an improvement.
- This doesn’t solve the problem either, which is that of the Confused Deputy [1]. An arbitrary piece of code I’m downloading shouldn’t be able to run as Ryan by default with access to everything Ryan has.
We need to revitalize research into capabilities-based security on consumer OSs, which AFAIK is the only thing that solves this problem. (Web browsers - literally user “agents” - solve this problem with capabilities too: webapps get explicit access to resources, no ambient authority to files, etc.)
Solving this problem will only become more pressing as we have more agents acting on our behalf.
- I’ve never seen code that is downloaded run itself. Why not be the change you want to see in the world and run sudo or spawn your browser in a jail. Or download as another user.
- Welcome to npm post-install scripts... https://docs.npmjs.com/cli/v11/using-npm/scripts
- And Rust build scripts: https://doc.rust-lang.org/cargo/reference/build-scripts.html
- glad pnpm disables those by default!
- PSA: if you're using (a newish release of) npm you should have something like this as a default, unless you've got good reasons not to:
min-release-age=7 # days
ignore-scripts=true
- This article is a category error.
Dependency cooldowns are how you can improve your security on an individual level. Using them does not make you a free rider any more than using Debian instead of Ubuntu instead of Arch does. Different people/companies/machines have different levels of acceptable risk - cooldowns let you tune that to your use case. Using open source software does not come with a contract or responsibility for free, implicit pentesting.
Upload queues are how a package manager/registry can collectively improve security for its users. I cannot implement an upload queue for just me - the value comes from it being done in a centralized way.
I'm in favor of both, though hopefully with upload queues the broader practice of long dependency cooldowns would become more limited to security-focused applications.
- The people who will benefit from a cooldown weren’t reviewing updates anyway. Without the cooldown they would just be one more malware victim. If you don’t review code before you update, it just makes sense to wait until others have. Despite what the article says, the only people who benefit from a rush to update are the malware spreaders.
- > Despite what the article says, the only people who benefit from a rush to update are the malware spreaders.
And, you know, all the downstream users trying to install fixes for zero-days.
- Having skimmed the article I understand the title. While I agree on some level I wholly disagree on another: to me "dependency cooldown" is a way to automate something as old as time: the late-adopter laggard. Although I am a tech nerd and like the latest stuff, I have almost always let other people try it out first. I've missed out on some things because of it, but if you are more conservative in your actions it just happens naturally. I think it is OK to have a dependency cooldown; in fact, not everybody should update to the newest stuff right away. It's good to have cascaded updates - see the CrowdStrike incident in 2024. If some people want to be later in the chain, so be it. They will also miss out on important security updates by their cooldown time. I'd advocate for the feature despite never having used it. So "collectively rational" in my mind.
- I take issue with the expectation that a business should take on the additional liability and risk of immediate adoption, just because this person on the internet thinks so. I’m doubtful they are going to pay the millions in liabilities that could result when something gets exploited, so it makes it hard to care what they think about it.
- The problem is making it a default (or even popular). If everyone tries to move themselves later in the chain, you just moved detection later in the chain as well
- Yes. But also infection with a malicious package. I don't want anybody to be hacked and also don't want everybody to be hacked at the same time. If I am managing multiple software components with different levels of reliability requirements I certainly would stagger updates and updates to dependencies using "dependency cooldowns". I don't fault anybody for using them. As it stands I am very conservative with dependencies/updates in general and not using "dependency cooldowns" yet.
- Not everyone has the same update cycle. That's not free-riding. The framing around not being on the latest version as irresponsible doesn't hold up.
- Yeah this. If I don't buy the new iPhone XX.0 but instead wait for XX.1, which could include software and hardware fixes, does that make me a free rider?
- > If I don't buy the new iPhone XX.0 but instead wait for XX.1, which could include software and hardware fixes, does that make me a free rider?
Yes, that's what free-riding is.
And the major problem, which the article touches on but doesn't do much to explore, is that if you characterize this as "responsible behavior", it will automatically cause itself to fail, because all of the benefits come from free-riding. The only benefit of waiting is that other people might not do it, and those people will drive improvements. If everyone waits, the only thing that happens is that (1) improvements will take longer to be developed, and (2) everyone experiences exactly the same problems as they would have if no one waited. There's no benefit, but increased cost.
Imagine you and everyone you know are inside a minefield. You need to leave, because you have no water.
Does waiting until enough people have killed themselves to establish the outline of a safe path out make you a free-rider?
What is there to be gained by instituting a waiting period before any attempt to leave?
- It feels like the argument being made is you're a freerider if you don't adhere to the same million miles per hour frenzy that got us into this problem in the first place. The author probably also feels deploying from private repos with OS dependencies is wrong because that's the domain of the ultra-rich 1%
- Right.
Not to mention the (apparently not obvious?) option of decoupling review versions from release versions. We still look at the diff of the latest versions of dependencies before they reach our codebase. That seems like the most responsible approach.
Besides, why stop there? Everyone installing packaged builds from NPM is already free-riding on those installing sources straight from Github releases. smh
- This all looks like game theory, the more people delay the more likely a compromise will slip through. The same reason why I don't use LTS/ESR releases either.
- A central package cooldown is not really any different to individual cooldowns.
The main reason for the cooldown is so security companies can find the issues, not that unwitting victims will find them.
One problem of the central cooldown is that it restricts the choice to be able to consume a package immediately, and some people might think that a problem.
- They are categorically different.
I can implement a dependency cooldown for my org and benefit from it immediately. An upload queue gets its value from being done centrally and allowing security researchers early access and the ability to coordinate.
- I can't help but wonder why security reviews aren't standard practice. Surely enterprises would be willing to pay for that? You get the default releases as they are today, then a second line that get a "security reviewed" certification released at most a few weeks later.
Of course the problem there is that security audits are fallible. Some issues are so subtle that they are only revealed years after they're introduced, despite them being open source and subject to potentially all the tools and eyes.
- > One problem of the central cooldown is that it restricts the choice to be able to consume a package immediately
Huh? The article specifically suggests there could be an opt-in to early releases, and that the published revisions are available (e.g. for researchers) just not distributed by default.
- Then I sincerely hope my bank and doctor and government offices are all free-riders.
Dependency cooldowns, like staged update rollouts, mean less brittleness / more robustness in that not every part of society is hit at once. And the fact that cooldowns are not evenly distributed is a good thing. Early adopters and vibe coders take more chances, banks should take less.
But yeah, upload queues also make sense. We should have both!
- It keeps me thinking that every company loves "those guys" who create open source but won't give them a red cent, nor support them in any other way.
Servants! Just do your open source magic, we're impatient! Ah, and thanks for all the code, our hungry hungry LLMs were starving.
- Which is why those guys should really stick to using copyleft licenses only, possibly just AGPL.
- As much as I think what you say in general holds, there's at least something against it here:
>And the PSF even recently took in $1.5m from Anthropic for, among other things: supply-chain security.
- Thank you for this example. It's always heartwarming to see such a case. However, I have this, maybe defeatist, feeling that companies take more than they give - in general. I remember working in companies where giving away my source code to the public would require a ton of approvals and effort, which was heavily discouraging. On the other hand, the companies want the open source community to take care of everything...
Maybe it's only my feeling, so I hope you guys have had a different experience.
- I agree with you. I guess that's what people sign up for whether they understand it or not when using licenses like MIT, BSD, etc.
- Yes, they took money that they have to spend on AI to evaluate the new uploads.
Basically they got some free tokens, not actual "money".
Also I got a 2 week ban on the python discuss for suggesting that people who contribute on behalf of companies (such as microsoft) should be disclosing it. So PSF is as corporate as it gets in my eyes.
- The core point is of course solid. By not updating on day 0, maybe somebody else spent the effort to discover the issue and you didn't. But there are plenty of other benefits to not rolling with the newest and greatest versions.
I'd argue for intentional dependency updates. It just so happens that it's identified in one sprint and planned for the next one, giving the team a delay.
First of all, sometimes you can reject the dependency update. Maybe there is no benefit in updating. Maybe there are no important security fixes brought by an update. Maybe it breaks the app in one way or another (and yes, even minor versions do that).
After you know why you want to update the dependency, you can start testing. In an ideal world, somebody would look at the diff before applying this to production. I know how this works in the real world, don't worry. But you have the option of catching this. If you automatically update to newest you don't have this option.
And again, all these rituals give you time - maybe someone will identify attacks faster. If you perform these rituals, maybe that someone will be you. Of course, it is better for the business to skip this effort because it saves time and money.
- One of the biggest issues I see with Upload Queues here that is not talked about is the added complexity on the package managers themselves (PyPI, NPM, crates.io ...).
They are already complex beasts of software, extremely important for the ecosystems, and not always well funded. Adding all this extra complexity, with official bypasses (for security reasons), monitoring APIs (for security review while a new version is in the queue), and others is not cheap.
And if somehow, they get the funding to do this, will they also get the funding for the maintenance in the long term?
I don't think the benefits here (which amount only to explicitly modelling the cooldown) are enough to offset the downsides.
- This is as useless as the circular view that releasing dependencies for others to test makes you a free-rider on them using your stuff.
Which, honestly, I think it is fair to say that a lot of supply chains are lulling people into a false sense of what they do. Your supply chain for groceries puts a lot of effort into making itself safe. Your supply chain for software dependencies is run more like a playground.
- I don't think this is wrong, but I don't think it will be a problem in practice. One alternative to cooldowns is commercial repackagers, like Chainguard. As long as there are commercial clients who want a validated source of packages, there'll be a market for providing a security wrapper around private package repositories. It's in their interests to a) be quick to get new package versions through, and b) share any fixes they make or any problems they find with the upstream, because it's always going to be cheaper to do that than maintain a long tail of proprietary security patches (not to mention the risk of the clients complaining about either licence problems or drift from the original projects).
That means there's an incentivised slot in the ecosystem for a group of package consumers who are motivated to find security problems quickly. It's not all on the wider development community.
- It’s hard to piece together what the actual proposal is around all of the hyperbole, strawmanned arguments, and emotional language. It more or less claims Upload Queues solve all of the problems without explaining any of the how… Then it suddenly shifts to “executing markdown” because LLMs?
Is the idea I’d point my security scanner at preview.registry.npmjs.org/ and npmjs.org would wait 7 days before the package would publish on the main registry?
- I think what you actually want is audit sharing as the cooldown criterion. No audit shared with the community yet? The package is still in cooldown. Or you can risk it and run unaudited dependencies, or audit it yourself and potentially share that.
It seems to me that many organizations are relying on other companies to do their auditing in any case, why not just admit that and explicitly rely on that? Choose who you trust, accept their audits. Organizations can perform or even outsource their own auditing and publish that.
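As a sketch of that trust model (all names and data shapes here are hypothetical; tools like cargo-vet and crev formalize this kind of shared-audit record): a version stays in cooldown until an auditor you trust has vouched for it.

```python
def still_in_cooldown(version_key, shared_audits, trusted_auditors):
    """Sketch of audit-gated cooldown: a package version becomes installable
    once at least one trusted party has published a passing audit for it.

    `shared_audits` maps "pkg@version" -> list of {"auditor": ..., "ok": ...}.
    """
    audits = shared_audits.get(version_key, [])
    return not any(a["auditor"] in trusted_auditors and a["ok"] for a in audits)

# Example: a version with a passing audit from a trusted org is cleared;
# an unaudited version stays held.
audits = {"lodash@4.17.21": [{"auditor": "acme-sec", "ok": True}]}
print(still_in_cooldown("lodash@4.17.21", audits, {"acme-sec"}))  # False
print(still_in_cooldown("lodash@4.17.22", audits, {"acme-sec"}))  # True
```

The appeal is that "who do I trust" becomes an explicit, configurable set instead of an implicit bet on anonymous early adopters.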
- I prefer crev-dev for the review sharing thing:
- While an upload queue does sound like a better solution overall, the suggestion of cooldowns as immoral is absurd.
Ever decided to not buy some new technology or video game or product right away and to wait and see if it’s worth it? You’re an immoral freeloader benefiting from the suffering of others who bought it right away.
- Would staying at an LTS version instead of running my production workloads on the bleeding edge also be free-riding, because I am depriving the community of my testing?
- > Dependency cooldowns turn you into a free-rider
Avg tech company: "that's perfect, we love to be free riders."
- One thing I don't understand about cooldowns is that if everybody uses cooldowns, then there is no effective cooldown. You'll have to keep increasing the cooldown period to get the advantage...
- The primary benefit of cooldowns isn't other people upgrading first, it's vulnerability scanning tools and similar getting a chance to see the package before you do.
- Those tools aren't floating in the ether: someone has to go download it and run it in some way, automated or otherwise. I think the suggestion is to make that a step before publication as the post suggests.
- There are parties that don't want that cooldown: library or software writers. The xz-utils backdoor was found by Microsoft engineer and PostgreSQL developer Andres Freund due to high CPU usage (or latency? CMIIW) during SSH tests; those are the people who will keep the same workflow.
- The admins of the hacked project are likely to notice the hack in a day or two. Malicious actors are a separate concern, but hacks can be mitigated with cooldowns even if everyone was using them
- You can do this everywhere. Not just libraries. I take great pleasure in using the old 2022 LTS builds of Unity. The stability of these products is incredible compared to the latest versions. I simply have to ignore console errors in unity 6. In 2022 they are much more meaningful.
Think about how much cumulative human suffering must be experienced to bring you stable and effective products like this. Why hit the reset button right when things start getting good every time?
- Even Windows admins often wait a while after the release of an update so they don't get a bad update from Microsoft, which is a real concern unfortunately.
- I don't think queues like this are a panacea but they are a good idea. They buy time. That's the whole point. Time to respond. Time for a paper trail. Time to investigate. Time to cancel.
Have a normal path, eg days, a week or more (a month!). Have a selection of fast paths. Much shorter time. Days or even hours. Exceptions require higher trust. Indicators like money / reputation / history could be useful signals even if its only part of a paper trail. Treat exceptions as acceptable but requiring good reasons and explanation. This means a CVE fix from someone with high reputation could go through faster. While exceptions don't reduce the need for scrutiny they do enable clarity about the alternative chosen. Mainly because someone had to justify it away from the normal path. That's valuable in itself.
There's no perfection here. Credit cards and credentials get stolen. Reputation drifts since people change for all kinds of reasons.
Queues buy time. Time to find out. Time to back out.
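The normal-path/fast-path split above could be modeled as simply as this (hold times, field names, and the justification requirement are all invented for illustration):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

NORMAL_HOLD = timedelta(days=7)   # default queue time (illustrative value)
FAST_HOLD = timedelta(hours=12)   # expedited path, e.g. for CVE fixes

@dataclass
class QueuedRelease:
    """Sketch of a queued upload; a real registry would attach signatures,
    scan results, and reviewer sign-off to the record."""
    name: str
    uploaded: datetime
    fast_path: bool = False
    justification: str = ""  # the paper trail: required to use the fast path

def publishable(r: QueuedRelease, now: datetime) -> bool:
    if r.fast_path and not r.justification:
        raise ValueError("fast path requires a recorded justification")
    hold = FAST_HOLD if r.fast_path else NORMAL_HOLD
    return now - r.uploaded >= hold

# Example: a justified CVE fix clears the queue in hours, not days.
now = datetime(2024, 6, 10, tzinfo=timezone.utc)
fix = QueuedRelease("libfoo", now - timedelta(hours=13),
                    fast_path=True, justification="CVE-2024-0001 patch")
print(publishable(fix, now))  # True
```

Even this toy version captures the key property: taking the exception forces someone to write down why, which is exactly the paper trail the comment above is asking for.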
- Couldn't you say that both ways are "upload queues"? A specifically declared upload queue is also just some kind of dep cooldown.
But as others have noted, people having different cooldown settings means a nice staggered rollout.
- It's open source. Free riding is expected and normal. We all benefit from the work of others.
If you're not doing the work yourself, it makes sense to give the people who review and test their dependencies some time to do their work.
- This is not true. Attackers are usually not publishing packages under their own accounts. They are publishing packages using hacked accounts of major packages that have many dependants.
The real owner will (hopefully) notice when a malicious version is published.
If you use a cooldown then it gives the real owner of the account enough time to report the hack and get the malicious version taken down.
- We tend to find two types of compromised packages: 1. the type you describe, literally published with stolen creds while the owner sleeps and found the next day; 2. packages with malware found months or years after the fact, while everyone happily goes about their day. Cooldowns of only a few days basically solve the first, while neither approach solves the second.
- One thing people miss is that bugs in open source are much much easier to fix when you catch them right away. You find more bugs when you test aggressively, but the effort per bug is usually significantly lower.
I think the key is to differentiate testing from deployment: you don't need to run bleeding edge everywhere to find bugs and contribute. Even running nightly releases on one production instance will surface real problems.
- That can sometimes be true, but the reverse is also problematic: Uniform automatic updates can turn some users who were happy with the status-quo into unwitting guinea pigs for unexpected features and changes, without informed consent.
All else being equal, I'd rather the people who desire the new features be the earlier-adopters, because they're more likely to be the ones pushing for changes and because they're more likely to be watching what happens.
- The issue is single-channel feature and security updates.
- I feel that the title buries the lede and a positive one would be better:
Upload queues are better than cooldowns
I almost didn't read it because I wasn't interested in a rant. This is a genuinely good idea though so I'm glad I did.
Alas, I did click through so perhaps the title is more effective than my sentiments.
- It'd be better for the title to be about upload queues and distribution, rather than free-loading.
I don't know if one of the touted benefits is really real - you need to be able to jump changes to the front of the queue and get them out ASAP sometimes.
Hacked credentials will definitely be using that path. It gives you another risk signal, sure, but the power sticks around.
- This wouldn't stop a lot of supply chain attacks. Attacks aren't identified immediately. Often they are only identified months later. And in that period, plenty of zero days are fixed. So this technique not only doesn't fix the problem, it introduces others. Also, again, this only happens to Python because of design flaws in the package managers themselves. Fix the package managers and this all goes away.
- The topic of cooldowns just shifting the problem around got some discussion on an earlier post about them -- what I said there is at https://lobste.rs/s/rygog1/we_should_all_be_using_dependency... and here's something similar:
- One idea is for projects not to update each dep just X hours after release, but on their own cycles, every N weeks or such. Someone still gets bit first, of course, but not everyone at once, and for those doing it, any upgrade-related testing or other work also ends up conveniently batched.
- Developers legitimately vary in how much they value getting the newest and greatest vs. minimizing risk. Similar logic to some people taking beta versions of software. A brand new or hobby project might take the latest version of something; a big project might upgrade occasionally and apply a strict cooldown. For users' sake, there is value in any projects that get bit not being the widely-used ones!
- Time (independent of usage) does catch some problems. A developer realizes they were phished and reports, for example, or the issue is caught by someone looking at a repo or commit stream.
As I lamented in the other post, it's unfortunate that merely using an upgraded package for a test run often exposes a bunch of a project's keys and so on. There are more angles to attack this from than solely when to upgrade packages.
- Curious what happens in the context of a security flaw becoming known with a queue, especially with the whole dependency tree in play. Do we now wait for the fix to come through the queue? Or it gets an exception? Do packages that embed the flawed library have to wait for the fix to merge (through whatever path) before they can depend on it? Or does the exception cascade out to the entire ecosystem that depends on the flawed package?
- > they place substantial costs onto everyone else
Me choosing to NOT download something places NO burden on anyone else. There is no logic by which you'll convince me otherwise.
- My company implements cooldowns by never updating anything ever. (Send help)
- Not updating things is an underappreciated strategy and a core principle of production environments: "if it ain’t broke, don’t fix it".
There are nuances of course, and things that are broken should be fixed; unfortunately, they often don't let you fix only what needs to be fixed, like security vulnerabilities. That's how stuff breaks constantly: you just wanted a patch for a buffer overflow, and you got an AI chatbot and your keyboard shortcuts disappeared.
Sometimes, they let you get only the things you want (i.e. actual fixes), it is important for companies that do serious business like banks, production lines, etc... They know it and charge good money for the privilege.
- I agree a hundred percent with the author. We have worked hard to get to where we are today, where there is pressure on companies to update their packages. This so-called cooldown backslides us from that.
Here is one example
https://www.nuget.org/packages/System.CommandLine#versions-b...
2.0.6 was released less than a day ago. How long will you wait? I'd argue any wait is unwarranted.
It sounds nice to people because we are used to thinking in terms of Microsoft Windows and Microsoft SQL Server releases, where people wait for months after a new version is released to update. Except companies actually pay for those! So somehow the illogical action - I would argue learned helplessness - that happens with flagship Microsoft product releases is what we are now advocating as the default everywhere, which is a terrible idea.
Dependency cooldowns should NOT be the default. I don't know what a proper solution is but I know this isn't it.
- Yes, the publish-distribute delay pattern looks like a reasonable design.
But you’re not a “free-rider” if you intentionally let others leap before you. You’re just being cautious, which is rational behavior and should be baked into assumptions about how any ecosystem actually works.
- Tangentially: should server processes be defined with a whitelist of outbound hosts? Deno does that in-process. There's not much incentive to compromise a package if the malware can't contact its mothership.
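The idea above can be sketched in a few lines; this is a minimal, hypothetical in-process allowlist check (the hostnames and function names are made up for illustration), similar in spirit to Deno's `--allow-net=<hosts>` permission:

```python
# Sketch of an in-process outbound-host allowlist, similar in spirit to
# Deno's --allow-net. All hostnames here are illustrative.
ALLOWED_HOSTS = {"api.internal.example", "telemetry.example"}

class OutboundBlocked(Exception):
    """Raised when a connection target is not on the allowlist."""

def check_outbound(host: str) -> None:
    # A compromised dependency phoning home to an unlisted host fails
    # here, before any bytes leave the process.
    if host not in ALLOWED_HOSTS:
        raise OutboundBlocked(f"outbound connection to {host!r} denied")

check_outbound("api.internal.example")   # on the list: returns silently
try:
    check_outbound("mothership.evil")    # not on the list: blocked
except OutboundBlocked as exc:
    print(exc)
```

In a real deployment this check would wrap socket creation (or be enforced at the network layer), but the policy itself is just a set membership test.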
- Cooldown is merely a type of flighting. Specifically, picking a flight beyond canary.
- I genuinely don’t know why this warranted a blog post at all, let alone such an accusatory one, and especially not now, when everyone has already talked this to death.
- I am surprised I don't hear about vim/neovim/vscode plugin supply chain attacks. It feels like a similarly lucrative target to language package managers.
- I thought this article was largely theoretical in nature. I have almost never updated a dependency in a commercial product in a timely fashion unless it was explicitly a vulnerability fix, and I believe very few companies will. Upgrades cause friction, so people do as few of them as possible anyway. I was confused by the terminology to begin with, because in a decade of software development I have never had to advocate for slowing down dependency updates; that sounds like absolutely wishful thinking. Maybe we can pay money to audit new releases of software we depend on, sure, but that is an entirely different issue.
- Mature professionals and organizations have always waited to install updated dependencies in production, with exceptions for severe security issues such as zero day attacks.
"Free riding" is not the right term here. It's more a case of being the angels in the saying "fools rush in where angels fear to tread".
If the industry as a whole were mature (in the sense of responsibility, not age), upgrades would be tested in offline environments and rolled out once they pass that process.
Of course, not everyone has the resources for that, so there's always going to be some "free riding" in that sense.
That dilutes the term, though. Different organizations have different tolerance for risk, different requirements for running the latest stuff, different resources. There's always going to be asymmetry there. This isn't free riding.
- I think the appeal to the categorical imperative is very interesting though. Someone needs to try it. If everyone were wise as you term it, then it's essentially a stalemate while you wait for someone else to blink first and update.
Then again, there are other areas where I feel that Kantian ethics also fail on collective action problems. The use of index funds for example can be argued against on the same line as we argue against waiting to update. (That is, if literally everyone uses index funds then price discovery stops working.) I wonder if this argument fails because it ignores that there are a diversity of preferences. Some organizations might be more risk averse, some less so. Maybe that's the only observation that needs to be made to defeat the argument.
- With that diversity of preferences, some organizations might also be willing and able to do rigorous testing of the updates that are most important to them.
It seems like a helpful efficiency to spread out the testing burden (both deliberate testing and just updating and running into unexpected issues). If everyone updated everything immediately, everyone would be impacted by the same problems at the same time, which seems suboptimal.
- > it's essentially a stalemate while you wait for someone else to blink first and update.
I addressed that in my comment, and you essentially repeated that point:
> I wonder if this argument fails because it ignores that there are a diversity of preferences.
The stalemate you described is only an issue if everyone is in the same circumstances and operating under the same criteria, but reality is very far from that situation.
- > Frankly, dependency cooldowns work by free-riding on the pain and suffering of others.
I suspect there are some reasonable points to be made here, but frankly, I pretty much stopped reading after that. Way too simple-minded.
- I mean, speaking as an oss maintainer, there is an infinite list of things msft could do on npm and gh to make our lives better, but we might just have to accept that we’re on our own and have to deal with those platforms mostly as they are and dependency cooldown is just a pragmatic approach. :)
- This is like saying buying a second-hand car makes you a freeloader because you're paying less.
- > Python has multiple package managers at this point (how many now? 8?). All must implement dependency cooldowns.
No, nobody _has to_ implement it, and if only one did, then users who wanted cooldowns can migrate to that package manager.
- Hoo boy.
Anyone in the IT Ops side of things knows the adage that you don't run ".0" software. You wait for a while to let the kinks get worked out by those who can afford the risk of downtime, and of the vendors to find and work out bugs in new software on their own.
Are conservative, uptime-oriented organizations "free-riders" for waiting to install new software on critical systems? Is that a sin, as this implies?
The answer is no. It's certainly a quandary: someone has to run it first. But a little time to let it bake in labs and low-risk environments is worth it.
- I would argue the blind copy-pasting, cargo-cult orgs are less likely to be helpful anyway.
But I get the point: it's a numbers game, so any and all usage can help catch issues.
- I just feel like this problem is something where unfettered capitalism does not work. What we are discussing here is a public utility, and it should be managed as such.
- Free-riding is frequently a good strategy. If you don't want other people free-riding on you, sign contracts saying they can't. That means, for instance, don't use MIT license.
- Dependency cooldowns are theater. They will do nothing. Supply chain hacks get caught when someone gets pwned, and all this does is push the deadline out.
You find attacks via cross-organization auditing, like you do in Linux distros, and this doesn't do that.
- I wrote the original (?) cooldown post that’s linked in this response, and put some thoughts on Cal’s response here [1].
[1]: https://lobste.rs/s/dl4jb6/dependency_cooldowns_turn_you_int...
- This is like guilting me about carbon offsets when there are mountains of burning tires in Kuwait.
- They are also collectively rational, as a response to an ecosystem that's spun out of control and habitually consumes rat's nests of dependencies.
Early participation and beta programs are outsourcing careful engineering via making everybody else guinea pigs. If we want to sling around accusations of free-riding (really?!), you're slacking on testing and free-riding on your early users.
- > Frankly, dependency cooldowns work by free-riding on the pain and suffering of others.
Snyk and socket.dev take money for the pain and suffering...
- I'd rather be a free-rider (not that I buy the author's thesis) than an unpaid guinea pig.
- If lawmakers understood even an iota of technology they'd be trying to legislate using your ID card to upload npm dependencies with more than 10k downloads instead of for watching porn.
But alas.
- Or you could just, like, not update things immediately just because you can. It's wild that we now refer to it as a "cooldown" to not immediately update something. The sane way would be each user upgrades when they feel they need to, and then updates would naturally be staggered. The security risks of vulnerabilities are magnified by everyone rushing to upgrade constantly.
- Sure, in the way that people who only use Debian stable are free-riding, or people on stable Rust are free-riding on nightly users.
- One thing not addressed is the incentive for large software packages to make their own repositories that bypass this queue in order to have instant updates.
- Frankly, this reads as someone going way too far to be contrary. Yeah, sure, act utilitarianism is different from rule utilitarianism. News at 11. But most developers don't get the luxury of fighting for the greater good; most are fighting to keep their paycheck flowing so they can eat. What I'm saying is, insecure software comes from organizational dysfunction, not "bad developers adopting software too quickly." It's a corporate political problem to which you're attempting to apply a technical fix.
- The brilliance of the implementation of cooldowns: for someone to download and run a new release, automated or otherwise, they simply follow the standard installation process.
Users who want to take the extra precaution of waiting an additional period of time must decide to manually configure this with their tooling.
This practice has been a thing in the sysadmin community for years and years - most sysadmins know that you never install Windows updates on the day they release.
- Having a step before publication means that it's essentially opt-in pre-release software, and that comes with baggage: I have zero doubts that many entities who download packages to scan for malware explicitly exclude pre-release software, or don't discover it at all until it's released through normal channels.