- Skimming through, this document feels thorough and transparent. Clearly, a hard lesson learned. The footnotes, in particular, caught my eye https://rfd.shared.oxide.computer/rfd/397#_external_referenc...
> Why does this situation suck? It’s clear that many of us haven’t been aware of cancellation safety and it seems likely there are many cancellation issues all over Omicron. It’s awfully stressful to find out while we’re working so hard to ship a product ASAP that we have some unknown number of arbitrarily bad bugs that we cannot easily even find. It’s also frustrating that this feels just like the memory safety issues in C that we adopted Rust to get away from: there’s some dynamic property that the programmer is responsible for guaranteeing, the compiler is unable to provide any help with it, the failure mode for getting it wrong is often undebuggable (by construction, the program has not done something it should have, so it’s not like there’s a log message or residual state you could see in a debugger or console), and the failure mode for getting it wrong can be arbitrarily damaging (crashes, hangs, data corruption, you name it). Add on that this behavior is apparently mostly undocumented outside of one macro in one (popular) crate in the async/await ecosystem and yeah, this is frustrating. This feels antithetical to what many of us understood to be a core principle of Rust, that we avoid such insidious runtime behavior by forcing the programmer to demonstrate at compile-time that the code is well-formed
- In case anyone else was confused: the link/quote in this comment are from the previous "async cancellation issue" write-up, which describes a situation where you "drop" a future: the code in the async function stops running, and all the destructors on its local variables are executed.
The new write-up from OP is that you can "forget" a future (or just hold onto it longer than you meant to), in which case the code in the async function stops running but the destructors are NOT executed.
Both of these behaviors are allowed by Rust's fairly narrow definition of "safety" (which allows memory leaks, deadlocks, infinite loops, and, obviously, logic bugs), but I can see why you'd be disappointed if you bought into the broader philosophy of Rust making it easier to write correct software. Even the Rust team themselves aren't immune -- see the "leakpocalypse" from before 1.0.
- > The new write-up from OP is that you can "forget" a future (or just hold onto it longer than you meant to), in which case the code in the async function stops running but the destructors are NOT executed.
If you're relying for global correctness on some future being continuously polled, you should just be spawning async tasks instead. Then the runtime takes care of the polling for you, you can't just neglect it - unless the whole thread is blocked, which really shouldn't happen. "Futures" are intentionally a lower-level abstraction than "async runtime tasks".
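To illustrate with Tokio (one such runtime; a minimal sketch): a spawned task keeps getting polled by the runtime whether or not you ever touch its JoinHandle:

```rust
use std::time::Duration;

#[tokio::main]
async fn main() {
    // The runtime owns and polls spawned tasks itself; dropping this
    // JoinHandle would detach the task, not cancel it (only .abort()
    // cancels), so the work cannot be silently "forgotten".
    let handle = tokio::spawn(async {
        tokio::time::sleep(Duration::from_millis(100)).await;
        "done"
    });

    assert_eq!(handle.await.unwrap(), "done");
}
```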
- Yeah, Rust mostly just eliminates memory safety and data race problems, which is an enormous improvement compared to what we had previously. Unfortunately, right now, if you really want to write software that's guaranteed to be correct, there's no alternative to formal verification.
- I would say it can go further than that: Rust enables you to construct many APIs in a way that can’t be misused. It’s not at all unique in this way, but compared with C or Go or the like, you can encode so many more constraints in types.
- Only if the data structures aren't exposed outside of the program. Otherwise, Rust cannot guarantee safety from data race problems caused by OS IPC mechanisms like memory-mapped data, shared memory segments, or DMA buffers accessed by external events.
- Minor nit: formal verification doesn't guarantee correctness.
- async Rust continues to strike me as half-baked and too complex. If you're developing an application (as opposed to some high-performance utility like, e.g., a data plane component), just use threads; they're plenty cheap and not even half as messy.
- Async Rust is as complex as it needs to be given its constraints. But I wholeheartedly agree with you that people need to treat threads (especially scoped ones) as the default concurrency primitive. My intuition is that experience with other languages has led people astray; in most languages threads are a nightmare and/or async is the default or only way to achieve concurrency, but threads in Rust are absolutely divine by comparison. Async should only be used when you have a good reason that threads don't suffice.
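For reference, a minimal sketch of the scoped threads mentioned above (std::thread::scope, stable since Rust 1.63): they can borrow from the parent stack frame and are joined automatically when the scope ends:

```rust
use std::thread;

fn main() {
    let data = vec![1, 2, 3, 4];

    // Scoped threads may borrow `data`: the scope guarantees they are
    // joined before `data` can go out of scope.
    let total: i32 = thread::scope(|s| {
        let (left, right) = data.split_at(data.len() / 2);
        let a = s.spawn(move || left.iter().sum::<i32>());
        let b = s.spawn(move || right.iter().sum::<i32>());
        a.join().unwrap() + b.join().unwrap()
    });

    assert_eq!(total, 10);
}
```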
- In the spirit of "every non-trivial program will expand until ...", I think preemptively choosing async for anything much more complex than a throwaway script might be justified. In this case, the relevant thing isn't performance or expected number of concurrent users/connections, but whether the program is likely to become or include a non-trivial state machine. My primary influence on this topic is this post from @sunshowers: https://sunshowers.io/posts/nextest-and-tokio/
- It's a good idea in concept, but tons of popular libraries use async, which makes it difficult to avoid. If you want to do anything with a web server or send requests, the popular libraries are most likely async.
- Yeah, the non-async NATS client got deprecated, for instance. It really is a shame, because very few projects will ever scale large enough to need async, and apart from things like this, there are costs in portability and supply chain attack surface when you bring in tokio.
- The main issue was shipping it without proper runtime support, and even nowadays async/await is synonymous with Tokio.
Look at .NET, it took almost a decade to sort out async/await across all platform and language layers, and even today there are a few gotchas.
https://github.com/gerardo-lijs/Asynchronous-Programming
Rust still has a similar path to travel, with async traits, better Pin ergonomics, async lambdas, async loops, ... (yes, I know some of these have been dealt with).
- I work on an application that has various components split between sync and async rust. For certain tasks, async actually makes things a lot simpler.
- I guess one big question here is whether there's a higher layer abstraction that is available to wrap around patterns to avoid this.
It does feel like there are still, in general, possibilities of deadlocks in Rust concurrency, right? I understand the feeling here that it feels like ... uhh... some RAII-style _something_ should be preventing this, because it feels like we should be able to statically identify the issue in this simple case.
I still have a hard time understanding how much of this is incidental and how much of this is just downstream of the Rust/Tokio model not having enough to work on here.
- > I guess one big question here is whether there's a higher layer abstraction that is available to wrap around patterns to avoid this.
Something like Actors, on top of Tokio, would be one way: https://ryhl.io/blog/actors-with-tokio/
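Roughly the shape from that post, as a minimal sketch (names are illustrative): the actor exclusively owns its state, and everything else talks to it over a channel, so there is no shared lock to hold across an await:

```rust
use tokio::sync::{mpsc, oneshot};

// Messages are the only way to touch the actor's state.
enum Msg {
    Increment,
    Get { reply: oneshot::Sender<u64> },
}

async fn counter_actor(mut rx: mpsc::Receiver<Msg>) {
    let mut count: u64 = 0; // owned by exactly one task
    while let Some(msg) = rx.recv().await {
        match msg {
            Msg::Increment => count += 1,
            Msg::Get { reply } => {
                let _ = reply.send(count); // requester may have gone away
            }
        }
    }
}

#[tokio::main]
async fn main() {
    let (tx, rx) = mpsc::channel(32);
    tokio::spawn(counter_actor(rx));

    tx.send(Msg::Increment).await.unwrap();
    let (reply, response) = oneshot::channel();
    tx.send(Msg::Get { reply }).await.unwrap();
    println!("count = {}", response.await.unwrap());
}
```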
- I love Actors and have used them professionally for over 6 years (C++). However to solve real world problems I have had to introduce “locks” to the Actor framework to support various scenarios. With my home-grown actor library, this was trivial to add, however for some 3rd party actor libraries, ideology is dominant and the devs refuse to add such a purity-breaking feature to their actor framework, and hence I cannot use their library for real-world code.
- What scenario requires locks that can't be solved by just having a single actor that owns the resource and controls access?
- Any scenario where you have to atomically update 2 actors. To use a simple analogy for illustrative purposes, transferring money between 2 accounts, you need to lock both actors before incrementing/decrementing. Because in the real world, the accounts can change from other pending parallel transactions and edits. Handshakes are very error prone. Lock the actor, do the critical transaction, unlock.
In a rational world, this works. In a prejudiced world, devs fight against locks in actor models.
Hence why I had to roll my own …
- I would imagine that... "soft realtime" might be overstating it, but in performance-sensitive scenarios the actual cost of having some coordination code in that space might start mattering.
Maybe actor abstractions end up compiling away fairly nicely in Rust though!
- That sounds interesting, what kind of actor use cases would require adding locks to actors?
- Then you just replace deadlocks with livelocks, the fundamental problem AFAIK can't be avoided.
- > It does feel like there are still, in general, possibilities of deadlocks in Rust concurrency, right?
I mean, is there any generic computation model where you can't have deadlocks? Even with stuff like actors you can trivially have cycles and now your blocking primitive is just different (not CPU-level), and we call it a livelock, but it's fundamentally the same.
- The Fuchsia guys use the trait system to enforce a global mutex locking order, which can statically prevent deadlocks due to two threads locking mutexes that they are both waiting for.
Doesn't help in this case, but it does suggest that we might be able to do better.
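I don't have the Fuchsia link handy, but the general idea can be sketched like this (a hypothetical API, not their actual code): give each mutex a level, and make lock() consume a token proving only lower levels are held, so out-of-order acquisition fails to compile:

```rust
use std::marker::PhantomData;
use std::sync::{Mutex, MutexGuard};

// Hypothetical levels in a global lock order: Start < DbLevel < LogLevel.
struct Start;
struct DbLevel;
struct LogLevel;

// "Held may still be in hand while acquiring Next."
trait Before<Next> {}
impl Before<DbLevel> for Start {}
impl Before<LogLevel> for Start {}
impl Before<LogLevel> for DbLevel {}

// Token recording the highest level currently held.
struct Token<Level>(PhantomData<Level>);

struct OrderedMutex<Level, T> {
    inner: Mutex<T>,
    _level: PhantomData<Level>,
}

impl<Level, T> OrderedMutex<Level, T> {
    fn new(value: T) -> Self {
        OrderedMutex { inner: Mutex::new(value), _level: PhantomData }
    }

    // Locking consumes a token at an earlier level and yields one at
    // this level, so locks can only ever be taken in the global order.
    fn lock<Held: Before<Level>>(
        &self,
        _held: Token<Held>,
    ) -> (MutexGuard<'_, T>, Token<Level>) {
        (self.inner.lock().unwrap(), Token(PhantomData))
    }
}

fn main() {
    let db = OrderedMutex::<DbLevel, _>::new(0u32);
    let log = OrderedMutex::<LogLevel, _>::new(String::new());

    let (mut n, held) = db.lock(Token::<Start>(PhantomData));
    let (mut s, _held) = log.lock(held);
    *n += 1;
    s.push_str("ok");
    // Acquiring in the reverse order would not compile:
    // there is no `impl Before<DbLevel> for LogLevel`.
}
```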
- Any chance you could dig up a link to that code? I’m curious to learn more
- That's a really subtle version of the deadlock described in withoutboats' FuturesUnordered post [0].
When using “intra-task” concurrency, you really have to ensure that none of the futures are starving.
Spawning tasks should probably be the default. For timeouts, use tokio::select!, but make sure all pending futures are owned by it. I would never recommend FuturesUnordered unless you really test all the edge cases.
- This sounds very similar to priority inversion. E.g. if you have Thread T_high running at high priority and thread T_low running at low priority, and T_low holds a lock that T_high wants to acquire, T_high won't get to run until T_low gets scheduled.
The OS can detect this and make T_low "inherit" the priority of T_high. I wonder if there is a similar idea possible with tokio? E.g. if you are awaiting a Mutex held by a future that "can't run", then poll that future instead. I would guess detecting the "can't run" case would require quite a bit of overhead, but maybe it can be done.
I think an especially difficult factor is that you don't even need to use a direct await.
I.e. the "can't run" detector needs to determine that no other task will run the future, and that the future isn't in the current set of things being polled by this task.

```rust
let future1 = do_async_thing("op1", lock.clone()).boxed();
tokio::select! {
    _ = &mut future1 => {
        println!("do_stuff: arm1 future finished");
    }
    _ = sleep(Duration::from_millis(500)) => {
        // No .await, but both will futurelock on future1.
        tokio::select! {
            _ = do_async_thing("op2", lock.clone()) => {},
            _ = do_async_thing("op3", lock.clone()) => {},
        };
    }
};
```

- > I wonder if there is a similar idea possible with tokio? E.g. if you are awaiting a Mutex held by a future that "can't run", then poll that future instead.
Something like this could make sense for Tokio tasks. (I don't know how complicated their task scheduler is; maybe it already does stuff like this?) But it's not possible for futures within a task, as in this post. This goes all the way back to the "futures are inert" design of async Rust: You don't necessarily need to communicate with the runtime at all to create a future or to poll it or to stop polling it. You only need to talk to the runtime at the task level, either to spawn new tasks, or to wake up your own task. Futures are pretty much just plain old structs, and Tokio doesn't know how many futures my async function creates internally, any more than it knows about my integers or strings or hash maps.
- Yeah, a coworker coming from Go asked a similar question about why Rust doesn't have something like the Go runtime's deadlock detector. Your comment is quite similar to the explanation I gave him.
Go, unlike Rust, does not really have a notion of intra-task concurrency; goroutines are the fundamental unit of concurrency and parallelism. So, the Go runtime can reason about dependencies between goroutines quite easily, since goroutines are the things which it is responsible for scheduling. The fact that channels are a language construct, rather than a library construct implemented in the language, is necessary for this too. In (async) Rust, on the other hand, tasks are the fundamental unit of parallelism, but not of concurrency; concurrency emerges from the composition of `Future`s, and a single task is a state machine which may execute any number of futures concurrently (but not in parallel), by polling them until they cannot proceed without waiting and then moving on to poll another future until it cannot proceed without waiting. But critically, this is not what the task scheduler sees; it interacts with these tasks as a single top-level `Future`, and is not able to look inside at the nested futures they are composed of.
This specific failure mode can actually only happen when multiple futures are polled concurrently but not in parallel within a single Tokio task. So, there is actually no way for the Tokio scheduler to have insight into this problem. You could imagine a deadlock detector in the Tokio runtime that operates on the task level, but it actually could never detect this problem, because when these operations execute in parallel, it actually cannot occur. In fact, one of the suggestions for how to avoid this issue is to select over spawned tasks rather than futures within the same task.
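Concretely, that last suggestion looks something like this (a minimal sketch): the operation is a spawned task, polled by the runtime itself on a worker thread, so it cannot be starved by the select! the way a bare future polled within the same task can:

```rust
use std::time::Duration;
use tokio::time::sleep;

#[tokio::main]
async fn main() {
    // `op` runs as its own task; the runtime keeps polling it even
    // while the select! below is parked on its other branch.
    let op = tokio::spawn(async {
        sleep(Duration::from_millis(50)).await;
        "op finished"
    });

    tokio::select! {
        res = op => println!("{}", res.unwrap()),
        _ = sleep(Duration::from_millis(500)) => println!("timed out"),
    }
}
```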
- Thank you. Every time I've tried to approach the concept of Rust's parallelism this is what rubs me the wrong way.
I haven't yet read a way to prove it's correct, or even to reasonably prove a given program's use is not going to block.
With more traditional threads my mental model is that _everything_ always has to be interruptible, have some form of engineer-chosen timeout for a parallel operation, and address failure of the operation in the design.
I never see any of that in the toy examples that are presented as educational material. Maybe Rust's async also requires such careful design to be safely utilized.
- Guess Rust is built more for memory safety than concurrency? Erlang maybe? Why can't we just have a language that is memory safe and built for concurrency? Like OCaml and Erlang combined?
- Are you looking for Gleam? Simple but powerful typed functional language for BEAM and JavaScript. It’s a bit high level compared to Ocaml in terms of needing a thick runtime and being somewhat far from machine code.
Really beautiful language design imo. Does a great job avoiding the typelevel brainfuck problem I have with Haskell.
- Rust is absolutely built for concurrency, even moreso than for memory safety--it just so happens that memory safety is a prerequisite for thread safety. You're going to have a hard time finding any other industrial-strength language that statically prevents data races. If you can use Erlang, then sure, use Erlang. But if you can't use Erlang, and you need concurrency, you're not going to find a better candidate than Rust.
- I think an important takeaway here, one that many often ignore, is that in language design, not having low-level control over something is sometimes just as important a design tradeoff as having it.
From that it also follows that it may not be too fruitful to try to tackle every domain there is with a single language only.
(With that said, I absolutely love sync Rust, and Go is definitely not a good example of an elegantly designed language, I am talking in a more general way here)
- > This goes all the way back to the "futures are inert" design of async Rust
Yeap. And this footgun is yet another addition to the long list of reasons why I consider the Rust async model with its "inert" futures managed in user space a fundamentally flawed un-Rusty design.
- I feel there's a difference between a preference and a flaw. Rust has targets that make anything except inert futures simply unworkable, and in my opinion it's entirely valid for a programming language to prioritise those targets.
- The requirement is that the futures are not separate heap allocations, not that they are inert.
It's not at all obvious that Rust's is the only possible design that would work here. I strongly suspect it is not.
In fact, early Rust did some experimentation with exactly the sort of stack layout tricks you would need to approach this differently. For example, see Graydon's post here about the original implementation of iterators, as lightweight coroutines: https://old.reddit.com/r/ProgrammingLanguages/comments/141qm...
- If it’s not inert, how do you use async in the kernel or microcontrollers? A non-inert implementation presumes a single runtime implementation within std+compiler and not usable in environments where you need to implement your own meaning of dispatch.
- I think the kernel and microcontroller use-case has been overstated.
A few bare metal projects use stackless coroutines (technically resumable functions) for concurrency, but it has turned out to be a much smaller use-case than anticipated. In practice, C and C++ coroutines are really not worth the pain to use, and Rust async has mostly taken off with heavy-duty executors like Tokio that very much don't target tiny no_std 16-bit microcontrollers.
The Kernel actually doesn't use resumable functions for background work, it uses kernel threads. In the wider embedded world threads are also vastly more common than people might think, and the really low-end uniprocessor systems are usually happy to block. Since these tiny systems are not juggling dozens of requests per second that are blocking on I/O, they don't gain that much from coroutines anyways.
We mostly see bigger Rust projects use async when they have to handle concurrent requests that block on IO (network, FS, etc), and we mostly observe that the ecosystem is converging on tokio.
Threads are not free, but most embedded projects today that process requests in parallel — including the kernel — are already using them. Eager futures are more expensive than lazy futures, and less expensive than threads. They strike an interesting middle ground.
Lazy futures are extremely cheap at runtime. But we're paying a huge complexity cost in exchange, one that benefits a very small user base that hasn't really fully materialized as we hoped it would.
- > it has turned out to be a much smaller use-case than anticipated
Well, no, at the time of the design of Rust's async MVP, everyone was pretty well aware that the vast majority of the users would be writing webservers, and that the embedded use case would be a decided minority, if it ever existed at all. That Embassy exists, and that its ecosystem is as vibrant as it is, is, if anything, an unexpected triumph.
But regardless of how many people were actually expected to use it in practice, the underlying philosophy remained thus: there exist no features of Rust-the-language that are incompatible with no_std environments (e.g. Rust goes well out of its way, and introduces a lot of complexity, to make things like closures work given such constraints), and it would be exceptional and unprecedented for Rust to violate this principle when it comes to async.
- Point taken, I might have formed the wrong impression at the time.
With my C++ background, I'm very much at home with that philosophy, but I think there is room for nuance in how strictly orthodox we are.
C++ does have optional language features that introduce some often unwelcome runtime overhead, like RTTI and unwinding.
Rust does not come configured for freestanding environments out of the box either. Like C++, you are opting out of language features like unwinding as well as the standard library when going freestanding.
I want to affirm that I'm convinced Rust is great for embedded. It's more that I mostly love async when I get to use it for background I/O with a full fledged work stealing thread-per-core marvel of engineering like tokio!
In freestanding Rust the I/O code is platform specific, suddenly I'd have to write the low-level async code myself, and it's not clear this makes the typical embedded project that much higher performance, or all that easy to maintain.
So, I don't want to say anything too radical. But I think the philosophy doesn't have to be as clear-cut as "no language feature may ever be incompatible with no_std". Offering a std-only language feature is not necessarily closing a door to embedded. We sort of already make opt-out concessions to have a friendlier experience for most people.
(Apologies for the wall of text)
- "Not inert" does not at all imply "a single runtime within std+compiler." You've jumped way too far in the opposite direction there.
The problem is that the particular interface Rust chose for controlling dispatch is not granular enough. When you are doing your own dispatch, you only get access to separate tasks, but for individual futures you are at the mercy of combinators like `select!` or `FuturesUnordered` that only have a narrow view of the system.
A better design would continue to avoid heap allocations and allow you to do your own dispatch, but operate in terms of individual suspended leaf futures. Combinators like `join!`/`select!`/etc. would be implemented more like they are in thread-based systems, waiting for sub-tasks to complete, rather than being responsible for driving them.
- On the other hand, early Rust also for instance had a tracing garbage collector; it's far from obvious to me how relevant its discarded design decisions are supposed to be to the language it is today.
- This one is relevant because it avoids heap allocation while running the iterator and for loop body concurrently. Which is exactly the kind of thing that `async` does.
- It avoids heap allocation in some situations. But in principle the exact same optimization could be done for stackful coroutines. Heck, right now in C I could stack-allocate an array and pass it to pthread_create as the stack for a new thread. To avoid an overlarge allocation I would need to know exactly how much stack is needed, but this is exactly the knowledge the Rust compiler already requires for async/await.
What people care about are semantics. async/await leaks implementation details. One of the reasons Rust does it the way it currently does is because the implementation avoids requiring support from, e.g., LLVM, which might require some feature work to support a deeper level of integration of async without losing what benefits the current implementation provides. Rust has a few warts like this where semantics are stilted in order to confine the implementation work to the high-level Rust compiler.
- > in principle the exact same optimization could be done for stackful coroutines.
Yes, I totally agree, and this is sort of what I imagine a better design would look like.
> One of the reasons Rust does it the way it currently does is because the implementation avoids requiring support from, e.g., LLVM
This I would argue is simply a failure of imagination. All you need from the LLVM layer is tail calls, and then you can manage the stack layout yourself in essentially the same way Rust manages Future layout.
You don't even need arbitrary tail calls. The compiler can limit itself to the sorts of things LLVM asks for- specific calling convention, matching function signatures, etc. when transferring control between tasks, because it can store most of the state in the stack that it laid out itself.
- In order to know for sure how much stack is needed (or to replace the stack with a static allocation, which used to be common on older machines and still today in deep embedded code, and even on GPU!), you must ensure that any functions you call within your thread are non-reentrant, or else that they resort to an auxiliary stack-like allocation if reentrancy is required. This is a fundamental constraint (not something limited to current LLVM) which in practice leads you right back into the "what color are your functions?" world.
- I thought Rust async is a colored stackless coroutine model and thus it would be unsafe to continue execution of previously executing async functions.
To explain: generally speaking, stackless coroutine async only needs coloring because the coroutines are actually "independent stack"-less coroutines. What they actually do is share the stack for their local state. This forces async function execution to proceed in LIFO order so you do not blow away the stack of the async function executing immediately after, which demands state machine transforms to be safe. This is why you need coloring, unlike stackful coroutine models, which can execute, yield, and complete in arbitrary order since their local state is preserved in a safe location.
- Rust futures are "just" structs with a poll() method. The poll() method is a function like any other, so it can have local variables on the stack as usual, but anything it wants to save between calls needs to be a field of the struct instead of a stack local. The magic of async/await is that the compiler figures out which of your async function's variables need to be fields on that struct, and it generated the struct and the poll method for you.
I have a blog series that goes into the concrete details if you like: https://jacko.io/async_intro.html
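The shortest concrete version of that (a hand-rolled future, roughly what the compiler derives for a trivial async fn; driven here with the `futures` crate's executor):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// A future is just a struct with a poll method. For an async fn, the
// compiler generates a struct like this, whose fields are the locals
// that live across await points (none here).
struct AddOne {
    x: u32,
}

impl Future for AddOne {
    type Output = u32;

    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        // No await points, so the state machine finishes on first poll.
        Poll::Ready(self.x + 1)
    }
}

fn main() {
    // Any executor can drive it; here the `futures` crate's block_on.
    assert_eq!(futures::executor::block_on(AddOne { x: 41 }), 42);
}
```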
- I see. The Rust implementation effectively splats out the transitive closure of all your callee stack frames upfront which would enable continuing previously executing async functions.
- > thus it would be unsafe to continue execution of previously executing async functions.
There's more nuance than this. You can keep polling futures as often as you want. When an async fn gets converted into the state machine, yielding is just expressed as the poll fn returning as not ready.
So it is actually possible for "a little bit" of work to happen, although that's limited and gets tricky, because the way wakers work ensures that normally futures only get polled by the runtime when there's actually work for them to do.
- Off-topic but that code looks quite... complicated as opposed to what I would write in Erlang, Elixir, Go, or even C. Maybe it is just me.
- Erlang/Elixir and Go "solve" this problem by basically not giving you the rope to hang yourself in this particular way in the first place. This is a perfectly valid and sensible solution... but it is not the only solution. It means you're paying for some relatively expensive full locks that the Rust async task management is trying to elide, for what can be quite significant performance gains if you're doing a lot of small tasks.
It is good that not every language gives you this much control and gives some easier options for when those are adequate, but it is also good that there is some set of decent languages that do give you this degree of control for when it is necessary, and it is good that we are not surrendering that space to just C and/or C++. Unfortunately such control comes with footguns, at least over certain spans of time. Perhaps someone will figure out a way to solve this problem in Rust in the future.
- > It means you're paying for some relatively expensive full locks that the Rust async task management is trying to elide, for what can be quite significant performance gains if you're doing a lot of small tasks.
The point of Erlang/Elixir is that it is as performant as possible, and Erlang's history is a testament to this. BEAM is wonderful, and really fast, along with the languages on it being ergonomic (OTP behaviors, supervisors, etc.).
- This is a myth, from the old days when BEAM was the only thing that could juggle thousands of "processes" without losing performance, and even back then, people routinely missed that while BEAM could juggle those thousands of processes, each of them was individually not that fast. That is, BEAM's extremely high performance was only in one isolated thing, not high performance across the board.
Now BEAM is far from the only runtime juggling that many processes, but it remains a relatively slow VM. I rule-of-thumb it at 10x slower than C, making it a medium-performance VM at best, and you want to watch your abstraction layers in those nicer languages like Gleam because further multiplicative slowdowns can really start to bite.
The first serious Go program I wrote was a replacement for something written in Erlang, there was no significant architectural improvement in the rewrite (it was already reasonably well-architected), and from the first deployment, we went from 4 systems, sometimes struggling with the load spikes, to where just one could handle it all, even with BEAM being over a decade more mature and the Go clustering code being something I wrote over a few weeks rather than battle tested, optimized code.
BEAM is good at managing concurrency, but it is slowish in other ways. It's better than the dynamic scripting languages like Python by a good amount but it is not performance-competitive with a modern compiled language.
- My view of this is that it's closer to the basic two-lock deadlock.
Thread 1 acquires A. Thread 2 acquires B. Thread 1 tries to acquire B. Thread 2 tries to acquire A.
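In code, that shape looks like this (a sketch that deadlocks by construction):

```rust
use std::sync::Mutex;
use std::thread;
use std::time::Duration;

static A: Mutex<()> = Mutex::new(());
static B: Mutex<()> = Mutex::new(());

fn main() {
    let t1 = thread::spawn(|| {
        let _a = A.lock().unwrap();
        thread::sleep(Duration::from_millis(50)); // let t2 take B
        let _b = B.lock().unwrap(); // blocks forever
    });
    let t2 = thread::spawn(|| {
        let _b = B.lock().unwrap();
        thread::sleep(Duration::from_millis(50)); // let t1 take A
        let _a = A.lock().unwrap(); // blocks forever
    });
    // Deadlock: neither join ever returns.
    t1.join().unwrap();
    t2.join().unwrap();
}
```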
In this case, the role "A" is being played by the front of the Mutex's lock queue. Role "B" is being played by Tokio's actively executed task.
Based on this understanding, I agree that the surprising behavior is due to Tokio's Mutex/Lock Queue implementation. If this was an OS Mutex, and a thread waiting for the Mutex can't wake for some reason, the OS can wake a different thread waiting for that Mutex. I think the difficulty in this approach has to do with how Rust's async is implemented. My guess is the algorithm for releasing a lock goes something like this:
1. Pop the head of the wait queue.
2. Poll the top-level tokio::spawn'ed task of the Future that is holding the Mutex.
What you want is something like this:

```
For each Future in the wait queue (front to back):
    Poll the Future.
    If success: break.
    ??? Something if everything fails ???
```
The reason this doesn't work has to do with how futures compose. Futures compile to states within a state machine. What happens when a future polled within the wait queue completes? How is control flow handed back to the caller?
I guess you might be able to have some fallback that polls the futures independently then polls the top level future to try and get things unstuck. But this could cause confusing behavior where futures are being polled even though no code path within your code is await'ing them. Maybe this is better though?
- Which is why "async" is a pox on our house. System threads can and do address these edge issues. User level concurrency generally doesn't (perhaps with the exceptions of golang and erlang).
- As far as I remember from building these things with others within the async rust ecosystem (hey Eliza!), there was a certain tradeoff: if you couldn't select on references, you couldn't run into this issue. However, you also wouldn't be able to use select! in a while loop and try to acquire the same lock (or read from the same channel) without losing your position in the queue.
I fully agree that this and the cancellation issues discussed before can lead to surprising issues even to seasoned Rust experts. But I’m not sure what really can be improved under the main operating model of async rust (every future can be dropped).
But compared to working with callbacks the amount of surprising things is still rather low :)
- Indeed, you are correct (and hi Matthias!). After we got to the bottom of this deadlock, my coworkers and I had one of our characteristic "how could we have prevented this?" conversations, and reached the somewhat sad conclusion that actually, there was basically nothing we could easily blame for this. All the Tokio primitives involved were working precisely as they were supposed to. The only thing that would have prevented this without completely re-designing Rust's async from the ground up would be to ban the use of `&mut future`s in `select!`...but that eliminates a lot of correct code, too. Not being able to do that would make it pretty hard to express a lot of things that many applications might reasonably want to express, as you described. I discussed this a bit in this comment[1] as well.
On the other hand, it also wasn't our coworker who had written the code where we found the bug who was to blame, either. It wasn't a case of sloppy programming; he had done everything correctly and put the pieces together the way you were supposed to. All the pieces worked as they were supposed to, and his code seemed to be using them correctly, but the interaction of these pieces resulted in a deadlock that it would have been very difficult for him to anticipate.
So, our conclusion was, wow, this just kind of sucks. Not an indictment of async Rust as a whole, but an unfortunate emergent behavior arising from an interaction of individually well-designed pieces. Just something you gotta watch out for, I guess. And that's pretty sad to have to admit.
- > All the Tokio primitives involved were working precisely as they were supposed to. The only thing that would have prevented this without completely re-designing Rust's async from the ground up would be to ban the use of `&mut future`s in `select!`...but that eliminates a lot of correct code, too.
But it still suggests that `tokio::select` is too powerful. You don't need to get rid of `tokio::select`, you just need to consider creating a less powerful mechanism that doesn't risk exhibiting this problem. Then you could use that less powerful mechanism in the places where you don't need the full power of `tokio::select`, thereby reducing the possible places where this bug could arise. You don't need to get rid of the fully powerful mechanism, you just need to make it optional.
- I feel like select!() is a good case study because the common future timeout use-case maps pretty closely to a select!(), so there is only so much room to weaken it.
The ways I can think of for making select!() safer all involve runtime checks and allocations (possibly this is just a failure of my imagination!). But if that's the case, I would find it bothersome if our basic async building blocks like select/timeout in practice turn out to require more expensive runtime checks or allocations to be safe.
We have a point in the async design space where we pay a complexity price, but in exchange we get really neat zero-cost futures. But I feel like we only get our money's worth if we can actually statically prove that correct use won't deadlock, without the expensive runtime checks! Otherwise, can we afford to spend all this complexity budget?
The implementation of select!() does feel way too powerful in a way (it's a whole mini scheduler that creates implicit future dependencies hidden from the rest of the executor, and then sometimes this deadlocks!). But the need is pretty foundational, it shows up everywhere as a building block.
- It feels to me like there's plenty of design space to explore. Sure, it's possible to view "selection" as a basic building block, but even that is insufficiently precise IMO. There's a reason that Javascript provides all of Promise.any and Promise.all and Promise.allSettled and Promise.race. Selection isn't just a single building block, it's an entire family of building blocks with distinct semantics.
- You must guarantee forward progress inside your critical sections and that means your critical sections are guaranteed to finish. How hard is that to understand? From my perspective this situation was basically guaranteed to happen.
There is no real difference between a deadlock caused by a single thread acquiring the same non-reentrant lock twice and a single thread with two virtual threads where the first thread calls the code of the second thread inside the critical section. They are the same type of deadlock caused by the same fundamental problem.
> Remember too that the Mutex could be buried beneath several layers of function calls in different modules or packages. It could require looking across many layers of the stack at once to be able to see the problem.
That is a fundamental property of mutexes. Whenever you have a critical section, you must be 100% aware of every single line of code inside that critical section.
> There's no one abstraction, construct, or programming pattern we can point to here and say "never do this". Still, we can provide some guidelines.
The programming pattern you're looking for is guaranteeing forward progress inside critical sections. Only synchronous code is allowed to be executed inside a critical section. The critical section must be as small as possible. It must never be interrupted, ever.
Sounds like a pain in the ass, right? That's right, locks are a pain in the ass.
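A sketch of what following that guideline looks like in async Rust (`record` and `some_io` are illustrative names):

```rust
use std::sync::Mutex;

// Keep the guard inside a synchronous block; drop it before any await.
async fn record(values: &Mutex<Vec<u64>>, value: u64) {
    {
        let mut items = values.lock().unwrap();
        items.push(value); // short, synchronous, guaranteed to finish
    } // guard dropped here
    some_io().await; // never while holding the lock
}

async fn some_io() { /* stand-in for real async work */ }

#[tokio::main]
async fn main() {
    let values = Mutex::new(Vec::new());
    record(&values, 7).await;
    assert_eq!(values.lock().unwrap().len(), 1);
}
```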
- > However, you also wouldn't be able to use select! in a while loop and try to acquire the same lock (or read from the same channel) without losing your position in the queue.
No, just have select!() on a bunch of owned Futures return the futures that weren't selected instead of dropping them. Then you don't lose state. Yes, this is awkward, but it's the only logically coherent way. There is probably some macro voodoo that makes it ergonomic. But even this doesn't fix the root cause because dropping an owned Future isn't guaranteed to cancel it cleanly.
For the real root cause: https://news.ycombinator.com/item?id=45777234
- > No, just have select!() on a bunch of owned Futures return the futures that weren't selected instead of dropping them. Then you don't lose state.
How does that prevent this kind of deadlock? If the owned future has acquired a mutex, and you return that future from the select so that it might be polled again, and the user assigns it to a variable, then the future that has acquired the mutex but has not completed is still not dropped. This is basically the same as polling an `&mut future`, but with more steps.
- > How does that prevent this kind of deadlock?
Like I said, it doesn't:
> even this doesn't fix the root cause because dropping an owned Future isn't guaranteed to cancel it cleanly.
It fixes this:
> However, you also wouldn't be able to use select! in a while loop and try to acquire the same lock (or read from the same channel) without losing your position in the queue.
If you want to fix the root cause, see https://news.ycombinator.com/item?id=45777234
- If any rust designers are lurking about here: what made you decide to go for the async design pattern instead of the actor pattern, which - to me at least - seems so much cleaner and so much harder to get wrong?
Ever since I started using Erlang it felt like I had finally found 'the right way', where before I did a lot of work with sockets and asynchronous worker threads. But even though that usually worked as advertised, it had a large number of really nasty pitfalls which the actor model seemed to - effortlessly - sidestep.
So I'm seriously wondering what the motivation was. I get why JS uses async, there isn't any other way there, by the time they added async it was too late to change the fundamentals of the language to such a degree. But rust was a clean slate.
- Not a Rust designer, but a big motivation for Rust's async design was wanting it to work on embedded, meaning no malloc and no threads. This unfortunately precludes the vast majority of the design space here, from active futures as seen in JS/C#/Go to the actor model.
You can write code using the actor model with Tokio. But it's not natural to do so.
- Kind of a tangent, but I think "systems programming" tends to bounce back and forth between three(?) different concerns that turn out to be closely related:
1. embedded hardware, like you mentioned
2. high-performance stuff
3. "embedding" in the cross-language sense, with foreign function calls
Of course the "don't use a lot of resources" thing that makes Rust/C/C++ good for tiny hardware also tends to be helpful for performance on bigger iron. Similarly, the "don't assume much about your runtime" thing that's necessary for bare metal programming also helps a lot with interfacing with other languages. And "run on a GPU" is kind of all three of those things at once.
So yeah, which of those concerns was async Rust really designed around? All of them I guess? It's kind of like, once you put on the systems programming goggles for long enough, all of those things kind of blend together?
- > So yeah, which of those concerns was async Rust really designed around? All of them I guess?
Yes, all of them. Futures needed to work on embedded platforms (so no allocation), needed to be highly optimizable (so no virtual dispatch), and need to act reasonably in the presence of code that crosses FFI boundaries (so no stack shenanigans). Once you come to terms with these constraints--and then add on Rust's other principles regarding guaranteed memory safety, references, and ownership--there's very little wiggle room for any alternative designs other than what Rust came up with. True linear types could still improve the situation, though.
- > so no virtual dispatch
Speaking of which, I'm kind of surprised we landed on a Waker design that requires/hand-rolls virtual dispatch. Was there an alternate universe where every `poll()` function was generic on its Waker?
- In my view, the major design sin was not _forcing_ failure into the outcome list.
.await(DEADLINE) (where DEADLINE is any non-zero unit, and 0 is 'reference defined' but a real number) should have been the easy interface. Either it yields a value or it doesn't; then the programmer has to expressly handle failure.
Deadline would only be the minimum duration after which the language, when evaluating the future / task, would return the empty set/result.
- > Deadline would only be the minimum duration after which the language, when evaluating the future / task, would return the empty set/result.
This appears to be misunderstanding how futures work in Rust. The language doesn't evaluate futures or tasks. A future is just a struct with a poll method, sort of like how a closure in Rust is just a struct with a call method. The await keyword just inserts yield points into the state machine that the language generates for you. If you want to actually run a future, you need an executor. The executor could implement timeouts, but it's not something that the language could possibly have any way to enforce or require.
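Tokio's `timeout` is an example of this division of labor: the deadline is just another future supplied by the runtime's library, racing the inner future against a timer; the language itself is not involved:

```rust
use std::time::Duration;
use tokio::time::{sleep, timeout};

#[tokio::main]
async fn main() {
    // A stand-in for any future that might take too long.
    let slow = async {
        sleep(Duration::from_secs(10)).await;
        42
    };

    // `timeout` is itself just a future that polls `slow` while racing
    // it against a runtime-owned timer; on expiry, `slow` is dropped.
    match timeout(Duration::from_millis(500), slow).await {
        Ok(v) => println!("completed: {v}"),
        Err(_elapsed) => println!("deadline passed"),
    }
}
```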
- Does that imply a lot of syscalls to get the monotonic clock value? Or is there another way to do that?
- On Linux there is the VDSO, which on all mainstream architectures allows you to do `clock_gettime` without going through a syscall. It should take on the order of (double digit) nanoseconds.
- If the scheduler is doing _any_ sort of accounting at all to figure out any remote sort of fairness balancing, then whatever resolution that uses probably works.
At least for Linux, offhand, popular task scheduler frequencies used to be 100 and 1000 Hz.
Looks like the Kernel's tracking that for tasks:
https://www.kernel.org/doc/html/latest/scheduler/sched-desig...
"In CFS the virtual runtime is expressed and tracked via the per-task p->se.vruntime (nanosec-unit) value."
I imagine the .vruntime struct field is still maintained with the newer "EEVDF Scheduler".
...
A Userspace task scheduler could similarly compare the DEADLINE against that runtime value. It would still reach that deadline after the minimum wait has passed, and thus be 'background GCed' at a time of the language's choice.
- The issue is that no scheduler manages futures. The scheduler sees tasks; futures are just structs. See the discussion of embedded above: there is no kernel-esque parallel thread.
- As a curious bystander, it will be interesting to see how the Zig async implementation pans out. They have the advantage of getting to see the pitfalls of those that have come before.
Getting back to Rust, even if not natural, I agree with the parent that the actor model is simply the better paradigm. Zero runtime allocation should still be possible, you just have to accept some constraints.
I think async looks simple because it looks like writing imperative code; unfortunately it is just obfuscating the complex reality underlying. The actor model makes things easier to reason about, even if it looks more complicated initially.
- I think you can do a static list of actors or tasks in embedded, but it's hard to dynamically spin up new ones. That's where intra-task concurrency is helpful.
- iiuc zig has thought about this specifically and there is a safe async-cancel in the new design that wasn't there in the old one.
- I was wondering when someone would bring up Zig. I think it's fascinating how far it has come in the last couple of years and now the new IO interface/async implementation.
Question is - when will Zig become mature enough to become a legit choice next to say, Go or Rust?
I mean for a regular dev team, not necessarily someone who works deeply along with Andrew Kelley etc like Tigerbeetle.
- > But it's not natural to do so.
I tend to write most of my async Rust following the actor model and I find it natural. Alice Ryhl, a prominent Tokio contributor, has written about the specific patterns:
- The ‘Beware of cycles’ section at the end has some striking similarities with futurelock avoidance recommendations from the original article… not sure what to make of this except to say that this stuff is hard?
- Oh I do too, and that's one of the recommendations in RFD 400 as well as in my talk. cargo-nextest's runner loop [1] is also structured as two main actors + one for each test. But you have to write it all out and it can get pretty verbose.
- Rust async still uses a native stack, which is just a form of memory allocator that allocates in LIFO order. And controlling stack usage in the embedded world is just as important as not relying on the system allocator.
So it's a pity that the Rust async design tried so hard to avoid any explicit allocations, rather than using an explicit allocator that embedded code could use to preallocate and reuse objects.
- > a native stack [is] just a form of memory allocator
There is a lot riding on that “just”. Hardware stacks are very, very unlike heap memory allocators in pretty much every possible way other than “both systems provide access to memory.”
Tons and tons of embedded code assumes the stack is, indeed, a hardware stack. It’s far from trivial to make that code “just use a dummy/static allocator with the same api as a heap”; that code may not be in Rust, and it’s ubiquitous for embedded code to not be written with abstractions in front of its allocator—why would it do otherwise, given that tons of embedded code was written for a specific compiler+hardware combination with a specific (and often automatic or compiler-assisted) stack memory management scheme? That’s a bit like complaining that a specific device driver doesn’t use a device-agnostic abstraction.
- During the design phase of Rust async there was no async embedded code written yet to draw inspiration from. For systems with a tight memory budget it is common to pre-allocate everything, often using custom bump allocation, or to split memory into a few regions for fixed-sized things and allocate from those.
And then the need for the runtime to poll futures means that async in Rust requires a non-trivial runtime, going against the desire to avoid abstractions in the embedded world.
Async without polling, while stack-unfriendly, requires less runtime. And if Rust supported type-safe region-based allocators, where a bunch of things are allocated one by one and then released all at once, it could be a better fit for the embedded world.
- Stack allocation/deallocation does not fragment memory, that’s a yuge difference for embedded systems and the main reason to avoid the heap
- Even with the stack, memory can fragment. Just consider creating 10 futures on the stack where the last one created completes last: the memory for the first 9 will not be released until that last one completes.
This problem does not happen with a custom allocator where the things being allocated are of roughly the same size and the allocator hands out same-sized cells.
- Indeed, arena allocators are quite fast and allow you to really lock down the amount of memory that is in use for a particular kind of data. My own approach in the embedded world has always been to simply pre-allocate all of my data structures. If it boots it will run. Dynamic allocation of any kind is always going to have edge cases that will cause run-time issues. Much better to know that you have a deterministic system.
- Why would the actor model require malloc and/or threads?
- You basically have a concurrency-safe message queue. It would be pretty limited without malloc (fixed max queue size).
- _an answer_ is performance - the necessity of creating copyable/copied messages for inter-actor communication everywhere in the program _can be_ expensive.
that said there are a lot of parts of a lot of programs where a fully inlined and shake optimized async state machine isn't so critical.
it's reasonable to want a mix, to use async which can be heavily compiler optimized for performance sensitive paths, and use higher level abstractions like actors, channels, single threaded tasks, etc for less sensitive areas.
- I’m not sure this is actually true? Do messages have to be copied?
- if you want your actors to be independent computation flows and they're in different coroutines or threads, then you need to arrange that the data source can not modify the data once it arrives at the destination, in order to be safe.
in a single threaded fully cooperative environment you could ensure this by implication of only one coroutine running at a time, removing data races, but retaining logical ones.
if you want to eradicate logical races, or have actual parallel computation, then the source data must be copied into the message, or the content of the message be wrapped in a lock or similar.
in almost all practical scenarios this means the data source copies data into messages.
- Rust solves this at compile-time with move semantics, with no runtime overhead. This feature is arguably why Rust exists, it's really useful.
- Rust moves are a memcpy where the source becomes effectively uninitialized after the move (that is to say, it is invalid to access it after the move). The copies are often optimized out by the compiler, but that isn't guaranteed.
This actually caused some issues with Rust in the kernel, because moving large structs could cause you to run out of the small amount of stack space available on kernel threads (they only allocate 8-16KB of stack, compared to a typical 8MB for a userspace thread). The pinned-init crate is how they ended up solving this [1].
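A tiny illustration of the move semantics described above:

```rust
fn main() {
    let msg = String::from("hello");

    // A move is at most a shallow memcpy of the String's (ptr, len,
    // capacity); the heap contents are not copied.
    let moved = msg;

    // println!("{msg}"); // error[E0382]: borrow of moved value: `msg`
    println!("{moved}");
}
```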
- if you can always move the data that's the sweet spot for async, you just pass it down the stack and nothing matters.
all of the complexity comes in when more than one part of the code is interested in the state at the same time, which is what this thread is about.
- In Rust wouldn’t you just Send the data?
- I'd recommend watching this video: https://www.infoq.com/presentations/rust-2019/; and reading this: https://tokio.rs/blog/2020-04-preemption
I'm not the right person to write a tl;dr, but here goes.
For actors, you're basically talking about green threads. Rust had a hard constraint that calls to C not have overhead and so green threads were out. C is going to expect an actual stack so you have to basically spin up a real stack from your green-thread stack, call the C function, then translate it back. I think Erlang also does some magic where it will move things to a separate thread pool so that the C FFI can block without blocking the rest of your Erlang actors.
Generally, async/await has lower overhead because it gets compiled down to a state machine and event loop. Languages like Go and Erlang are great, but Rust is a systems programming language looking for zero cost abstractions rather than just "it's fast."
To some extent, you can trade overhead for ease. Garbage collectors are easy, but they come with overhead compared to Rust's borrow checker method or malloc/free.
To an extent it's about tradeoffs and what you're trying to make. Erlang and Go were trying to build something different where different tradeoffs made sense.
EDIT: I'd also note that before Go introduced preemption, it too would have "pitfalls". If a goroutine didn't trigger a stack reallocation (like function calls that would make it grow the stack) or do something that would yield (like blocking IO), it could starve other goroutines. Now Go does preemption checks so that the scheduler can interrupt hot loops. I think Erlang works somewhat similarly to Rust in scheduling in that its actors have a certain budget, every function call decrements their budget, and when they run out of budget they have to yield back to the scheduler.
- Indeed, in Erlang the budget is counted in 'reductions'. Technically Erlang uses the BEAM as a CPU with some nifty extra features which allow you to pretend that you are pre-empting a process when in fact it is the interpreter of the bytecode that does the work and there are no interrupts involved. Erlang would not be able to do this if the Erlang input code was translated straight to machine instructions.
But Go does compile down to machine code, so that's why until it did pre-emption it needed that yield or hook.
Come to think of it: it is strange that such quota management isn't built into the CPU itself. It seems like a very logical thing to do. Instead we rely on hardware interrupts for pre-emption and those are pretty fickle. It also means that there is a fixed system wide granularity for scheduling.
- Fickle? Pray tell, when the OS switches your thread for another thread, in what way does that fickleness show?
- Your application needs concurrency. So, the answer is... switch your entire application, code style, and the libraries it uses into a separate domain that is borderline incompatible with the normal one? And that has its own dialects with their own compatibility barriers? Doesn't make sense to me.
- It's easier to write all applications, concurrent or not, in a style that works well for concurrency. Lots of applications can benefit from concurrency.
You can do straight line, single threaded, non concurrent code in an actor model. Mostly, that's what most of the actor code will look like. Get a message, update local state in a straight forward way, send a response, repeat.
- I'm surprised to learn this too. I know the hobby embedded and HTTP-server OSS ecosystems have committed to async, but I didn't expect Oxide would have.
- We actually don't use Rust async in the embedded parts of our system. This is largely because our firmware is based on a multi-tasking microkernel operating system, Hubris[1], and we can express concurrency at the level of the OS scheduler. Although our service processors are single-core systems, we can still rely on the OS to schedule multiple threads of execution.
Rust async is, however, very useful in single-core embedded systems that don't have an operating system with preemptive multitasking, where one thread of execution is all you ever get. It's nice to have a way to express that you might be doing multiple things concurrently in an event-driven way without having to have an OS to manage preemptive multitasking.
- Heh, this is super interesting to hear. Single-threaded async/concurrent code is so fun and interesting to see. I've run some Tokio programs in single-threaded mode just to see it in action.
- > FAQ: doesn’t future1 get cancelled?
I guess cancellation is really two different things, which usually happen at the ~same time, but not in this case: 1) the future stops getting polled, and 2) the future gets dropped. In this example the drop is delayed, and because the future is holding a guard,* the delay has side effects. So the future "has been cancelled" in the sense that it will never again make forward progress, but it "hasn't been cancelled yet" in the sense that it's still holding resources. I wonder if it's practical to say "make sure those two things always happen together"?
* Technically a Tokio-internal `Acquire` future that owns a queue position to get a guard, but it sounds like the exact same bug could manifest after it got the guard too, so let's call it a guard.
- > // Start a background task that takes the lock and holds it for a few seconds.
Holding a lock while waiting for IO can destroy a system's performance. With async Rust, we can prevent this by making the MutexGuard !Send, so it cannot be held across an await. Specifically, because it is !Send, it cannot be stored in the Future [2], so it must be dropped immediately, freeing the lock. This also prevents Futurelock deadlock.
This is how I wrote safina::sync::Mutex [0]. I did try to make it Send, like Tokio's MutexGuard, but stopped when I realized that it would become very complicated or require unsafe.
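To illustrate the mechanism (a sketch with a hypothetical `holds_guard` function; std's MutexGuard is !Send, which is the property being relied on): holding a !Send guard across an await makes the whole future !Send, which surfaces as a compile error wherever the future must be Send.

    use std::sync::Mutex;
    use std::time::Duration;

    static M: Mutex<i32> = Mutex::new(0);

    // Holding the !Send guard across the await makes this whole future !Send.
    async fn holds_guard() {
        let guard = M.lock().unwrap();
        tokio::time::sleep(Duration::from_secs(1)).await; // guard is live here
        drop(guard);
    }

    #[tokio::main]
    async fn main() {
        // tokio::spawn requires a Send future, so this would fail to compile:
        // tokio::spawn(holds_guard()); // error: future cannot be sent between threads
        holds_guard().await; // awaiting in place still works
    }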
> You could imagine an unfair Mutex that always woke up all waiters and let them race to grab the lock again. That would not suffer from risk of futurelock, but it would have the thundering herd problem plus all the liveness issues associated with unfair synchronization primitives.
Thundering herd is when all the waiters wake at once and stampede for a resource that only one of them can get. A Mutex that wakes every waiter has O(n^2) total runtime: every task must acquire and release the mutex, and each release puts all remaining waiting tasks back on the scheduler queue. In practice, scheduling a task is very fast (~600ns). As long as polling the lock-mutex-future is fast and you have <500 waiting tasks, the O(n^2) runtime is fine.
Performance is hard to predict. I wrote Safina using the simplest possible implementations and assumed they would be slow. Then I wrote some micro-benchmarks and found that some parts (like the async Mutex) actually outperform Tokio's complicated versions [1]. I spent days coding optimizations that did not improve performance (work stealing) or even reduced performance (thread affinity). Now I'm hesitant to believe assumptions and predictions about performance, even if they are based on profiling data.
[0] https://docs.rs/safina/latest/safina/sync/struct.MutexGuard....
[1] https://docs.rs/safina/latest/safina/index.html#benchmark
[2] Multi-threaded async executors require futures to be Send.
- Considering this issue did also make me think: maybe the real footgun here is the async mutex. A better "rule" to avoid this issue might be something like: don't use the tokio async mutex by default just because it's there and you're in an async function; instead, default to a sync mutex that errors when held across awaits, and think very hard about what you're really doing before you switch to the async one.
- Actually I think I might be a little misguided here - confusing a mutex with an awaitable lock method versus blocking, and a mutex whose LockGuard is Send and can be held across other await points.
To clarify, I do still think it's probably wise to prefer using a mutex whose LockGuard is not Send. If you're in an async context though, it seems clearly preferable to use a mutex that lets you await on lock instead of possibly blocking. Looks like that's what Safina gives you.
It does bring to mind the point, though: does it really make sense to call all of these things Mutexes? Most Mutexes, including the one in std, are relatively simplistic, with no provision for exactly what happens when multiple threads/tasks are waiting to acquire the lock. They seem designed for the case where it's rare for multiple threads to actually need the thing at once, but you have to guard against it just to be certain. The other case, where the resource is in high demand and you expect a lot of threads to spend a lot of time waiting, so it actually matters which requesters get the lock in what order, seems different enough that it ought to have a different name, plus more flexibility and selection in the algorithm used to control lock order.
- I would guess this is just to make the explanation of the bug easier.
In real world, the futurelock could occur even with very short locks, it just wouldn't be so deterministic. Having a minimal reproducer that you have to run a thousand times and it will maybe futurelock doesn't really make for a good example :)
- >In real world, the futurelock could occur even with very short locks, it just wouldn't be so deterministic.
You have to explain the problem properly then. The problem here has nothing to do with duration whatsoever so don't bring that up. The problem here is that if you acquire a lock, you're inside a critical section. Critical sections have a programming paradigm that is equivalent to writing unsafe Rust. You're not allowed to panic inside unsafe Rust or inside critical sections. It's simply not allowed.
You're also not allowed to interrupt the critical section by something that does not have a hard guarantee that it will finish. This rules out await inside the critical section. You're not allowed to do await. It's simply not allowed. The only thing you're allowed to do is execute an instruction that guarantees that N-1 instructions are left to be executed, where N is a finite number. Alternatively you do the logical equivalent. You have a process that has a known finite bound on how long it will take to execute and you are waiting for that external process.
After that process has finished, you release the lock. Then you return to the scheduler and execute the next future. The next future cannot be blocked because the lock has already been released. It's simply impossible.
You now have to explain how the impossible happened. After all, by using the lock you've declared that you took all possible precautions to avoid interrupting the critical section. If you did not, then you deserve any bugs coming your way. That's just how locks are.
- I think you misunderstand the problem. The only purpose of the sleep in this example is to control interleaving of execution to ensure the problem happens. Here's a version where the background task (the initial lock holder) only runs a bounded number of instructions with the lock held, just as you suggest:
https://play.rust-lang.org/?version=stable&mode=debug&editio...
It still futurelocks.
> After that process has finished, you release the lock. Then you return to the scheduler and execute the next future. The next future cannot be blocked because the lock has already been released. It's simply impossible.
This is true with threads and with tasks that only ever poll futures sequentially. It is not true in the various cases mentioned in this RFD (notably `tokio::select!`, but also others). Intuitively: when you have one task polling on multiple futures concurrently, you're essentially adding another layer to the scheduler (kernel thread scheduler, tokio task scheduler, now some task is acting as its own future scheduler). The problem is it's surprisingly easy to (1) not realize that and (2) accidentally have that "scheduler" not poll the next runnable future and then get stuck, just like if the kernel scheduler didn't wake up a runnable thread.
- Work stealing is more a technique for coping when the architecture is pessimal (think mixing slow and fast tasks in one queue) than something that makes things go faster in general. It also tends to shuffle complexity around, in ways that are sometimes nice.
Same thing with task preemption, though that one has less organizational impact.
In general, getting something to perform well enough on specific tasks is a lot easier than performing well enough on tasks in general. At the same time, most tasks have kinda specific needs when you start looking at them.
- When considering this issue alongside RFD 397, it seems to me that the problem is actually using future drops as an implicit (!) cancellation signal. This makes drop handlers responsible for handling every cancellation-related task, which they are not very good at. If a future is not immediately dropped after selecting on it, you get futurelock, and if it is, you get an async cancellation correctness problem, where the only way to try and interact with the cancellation execution flow is to use drop handlers (maybe in the form of scope guards).
Sadly, the only solution I know of is to use an explicit cancellation signal, and to modify ~everything to work with it. In that world, almost all async functions would need to accept a cancellation parameter of some sort, like a Go Context or like the tokio-utils CancellationToken, and explicitly check it every time they await a function. The new select!-equivalent would need to signal cancellations and then keep polling all unfinished cancellation-aware futures in a loop until they finished, and maybe immediately drop all non-aware futures to prevent futurelock. The entire Tokio API would need to be wrapped to take into account cancellation tokens, as well as any other async library you would want to use.
A lot of work, and you would need to do something if cancel-aware futures get dropped anyway. What a mess.
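For what it's worth, a minimal sketch of that style with tokio-util's CancellationToken (the `do_work` function is hypothetical):

    use std::time::Duration;
    use tokio_util::sync::CancellationToken;

    // A worker that cooperates with an explicit cancellation signal instead of
    // relying on being dropped mid-await.
    async fn do_work(cancel: CancellationToken) -> Option<u32> {
        for i in 0..10 {
            // Check the token explicitly around every await point.
            if cancel.is_cancelled() {
                return None; // an observable, explicit cancellation path
            }
            tokio::time::sleep(Duration::from_millis(100)).await;
            println!("step {i}");
        }
        Some(42)
    }

    #[tokio::main]
    async fn main() {
        let token = CancellationToken::new();
        let worker = tokio::spawn(do_work(token.child_token()));
        tokio::time::sleep(Duration::from_millis(350)).await;
        token.cancel(); // signal, then keep awaiting until the worker finishes
        println!("result: {:?}", worker.await.unwrap());
    }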
- This feels like the sort of thing that has led to the development of deterministic simulation testing (DST) techniques as pioneered by FoundationDB and TigerBeetle.
https://notes.eatonphil.com/2024-08-20-deterministic-simulat...
I hope something like this becomes popular in the Rust/Tokio space. It seems like Turmoil is that?
- In my experience, almost every asynchronous runtime has faced a similar issue at some point (e.g. we helped find and fix such an issue in ZIO).
It's hard to verify these protocols and very easy to write something fragile.
- I have very little Rust experience... but I'm hung up on this:
> The lock is given to future1
> future1 cannot run (and therefore cannot drop the Mutex) until the task starts running it.
This seems like a contradiction to me. How can future1 acquire the Mutex in the first place, if it cannot run? The word "given" is really odd to me.
Why would do_async_thing() not immediately run the prints, return, and drop the lock after acquiring it? Why does future1 need to be "polled" for that to happen? I get that due to the select! behavior, the result of future1 is not consumed, but I don't understand how that prevents it from releasing the mutex.
It's more typical in my experience that the act of granting the lock to a thread is what makes it runnable, and it runs right then. Having to take some explicit second action to make that happen seems fundamentally broken to me...
EDIT: Rephrased for clarity.
- > This seems like a contradiction to me. How can future1 acquire the Mutex in the first place, if it cannot run? The word "given" is really odd to me.
`future1` did run for a bit, and it got far enough to acquire the mutex. (As the article mentioned, technically it took a position in a queue that means it will get the mutex, but that's morally the same thing here.) Then it was "paused". I put "paused" in scare quotes because it kind of makes futures sound like processes or threads, which have a "life of their own" until/unless something "interrupts" them, but an important part of this story is that Rust futures aren't really like that. When you get down to the details, they're more like a struct or a class that just sits there being data unless you call certain methods on it (repeatedly). That's what the `.await` keyword does for you, but when you use more interesting constructs like `select!`, you start to get more of the details in your face.
It's hard to be more concrete than that without getting into an overwhelming amount of detail. I wrote a set of blog posts that try to cover it without hand-waving the details away, but they're not short, and they do require some Rust background: https://jacko.io/async_intro.html
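To make the "sits there being data" point concrete, a minimal sketch:

    async fn says_hello() {
        println!("hello");
    }

    #[tokio::main]
    async fn main() {
        let fut = says_hello(); // nothing has run yet; `fut` is just a value
        // dropping `fut` here would mean "hello" never prints
        fut.await; // only now does the body execute
    }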
- So my understanding was correct, it requires the programmer to deal with scheduling explicitly in userspace.
If I'm writing bare metal code for e.g. a little cortex M0, I can very much see the utility of this abstraction.
But it seems like an absolutely absurd exercise for code running in userspace on a "real" OS like Linux. There should be some simpler intermediate abstraction... this seems like a case of forcing a too-complex interface on users who don't really require it.
- There is one: tasks. But having the lower level (futures) available makes it very tempting to use it, both for performance and because the code is simpler (at least, it looks simpler). Some things that are easy with select! would be clunky with tasks.
On the other hand, some direct uses of futures are reminiscent of the tendency to obsess over ownership and borrowing to maximize sharing, when you could just use .clone() and it wouldn’t make any practical difference. Because Rust is so explicit, you can see the overhead so you want to minimize it.
- To be clear, if you restrict yourself to `async`/`.await` syntax, you never see any of this. To await something means to poll it to completion, which is usually what you want. "Joining" two futures lets you poll both of them concurrently until they're both done, which is kind of the point of async as a concept, and this also doesn't really require you to think about scheduling. One place where things get hairy (like in this article) is "selecting" on futures, which polls them all until one of them is done, and then stops polling the rest. (Normally I'd loosely say it "drops the rest on the floor", but the deadlock in this article actually hinges on exactly what gets "dropped" when, in the Rust sense of the `Drop` trait.) This is where scheduling as you put it, or "cancellation" as Rust folks often put it, starts to become important. And that's why the article concludes "In the end, you should always be extremely careful with tokio::select!" However, `select!` is not the only construct that raises these issues. Speaking of which...
> But it seems like an absolutely absurd exercise for code running in userspace on a "real" OS like Linux
Clearly you have a point here, which is why these blog posts are making an impact. That said, one counterpoint is, have you ever wished you could kill a thread? The reason there are so many old Raymond Chen "How many times does it have to be said: Never call TerminateThread" blog posts, is that lots of real world applications really desperately want to call TerminateThread, and it's hard to persuade them to stop! The ability to e.g. put a timeout on any async function call is basically this same superpower, without corrupting your whole process (yay), but still with the unavoidable(?) difficulty of thinking about what happens when random functions give up halfway through.
- Your confusion is very natural:
> It's more typical in my experience that the act of granting the lock to a thread is what makes it runnable, and it runs right then.
This gets at why this felt like a big deal when we ran into this. This is how it would work with threads. Tasks and futures hook into our intuitive understanding of how things work with threads. (And for tasks, that's probably still a fair mental model, as far as I know.) But futures within a task are different because of the inversion of control: tasks must poll them for them to keep running. The problem here is that the task that's responsible for polling this future has essentially forgotten about it. The analogous thing with threads would seem to be something like if the kernel forgot to enqueue some runnable thread on a run queue.
- > tasks must poll them for them to keep running.
So async Rust introduces an entire novel class of subtle concurrent programming errors? Ugh, that's awful.
> The analogous thing with threads would seem to be something like if the kernel forgot to enqueue some runnable thread on a run queue.
Yes. But I've never written code in a preemptible protected mode environment like Linux userspace where it is possible to make that mistake. That's nuts to me.
From my POV this seems like a fundamental design flaw in async rust. Like, on a bare metal thing I expect to deal with stuff like this... but code running on a real OS shouldn't have to.
- I definitely hear that!
To keep it in perspective, though: we've been operating a pretty good size system that's heavy on async Rust for a few years now and this is the first we've seen this problem. Hitting it requires a bunch of things (programming patterns and runtime behavior) to come together. It's really unfortunate that there aren't guard rails here, but it's not like people are hitting this all over the place.
The thing is that the alternatives all have tradeoffs, too. With threaded systems, there's no distinction in code between stuff that's quick vs. stuff that can block, and that makes it easy to accidentally do time-consuming (blocking) work in contexts that don't expect it (e.g., a lock held). With channels / message passing / actors, having the receiver/actor go off and do something expensive is just as bad as doing something expensive with a lock held. There are environments that take this to the extreme where you can't even really block or do expensive things as an actor, but there the hidden problem is often queueing and backpressure (or lack thereof). There's just no free lunch.
I'd certainly think carefully in choosing between sync vs. async Rust. But we've had a lot fewer issues with both of these than I've had in my past experience working on threaded systems in C and Java and event-oriented systems in C and Node.js.
- Rust can't assume you're running on a real OS though.
- Great read, and the example code makes sense. This stuff can be a nightmare to find, but once you do it's like a giant 1000 piece puzzle just clicks together instantly.
- Indeed. One of the interesting side effects of being a remote company that records everything[0] is that we have the instant where the "1000 piece puzzle just clicks together" recorded, and it's honestly pretty wild. In this case, it was very much a shared brainstorming between four engineers (Eliza, Sean, John and Dave) -- and there is almost a passing of the baton where they start to imagine the kind of scenario that could induce this and then realize that those are exactly the conditions that exist in the software.
We are (on brand?) going to do a podcast episode on this on Monday[1]; ahead of that conversation I'm going to get a clip of that video out, just because it's interesting to see the team work together to debug it.
- As a member of (Eliza, Sean, John, and Dave), I can second that debugging this was certainly an adventure. I'm not going to go as far as to say that we had fun, since...you can't have a heroic narrative without real struggle. But it was certainly rewarding to be in the room for that "a-ha!" moment, in which all the pieces really did begin to fit together very quickly. It was like the climax of a detective story --- and it was particularly well-scripted the way each of us contributed a little piece of the puzzle.
- Since you are one of the people working directly on this codebase, may I ask why select! is being used/allowed in the first place?
Its footgun-y nature has been known for years (IIRC even the first version of the tokio documentation warned against that) and as such I don't really understand why people are still using it. (For context I was the lead of a Rust team working on a pretty complex async networking program and we had banned select! very early in the project and never regretted this decision once).
- What to use instead?
- > &mut future1 is dropped, but this is just a reference and so has no effect. Importantly, the future itself (future1) is not dropped.
There's a lot of talk about Rust's await implementation, but I don't really think that's the issue here. After all, Rust doesn't guarantee convergence. Tokio, on the other hand (being a library that handles multi-threading), should (at least when using its own constructs, e.g. the `select!` macro).
So, since the crux of the problem is the `tokio::select!` macro, it seems like a pretty clear tokio bug. Side note, I never looked at it before, but the macro[1] is absolutely hideous.
[1] https://docs.rs/tokio/1.34.0/src/tokio/macros/select.rs.html
- There's nothing `select!` could do here to force `future1` to drop, because it doesn't receive ownership of `future1`. If we wanted to force this, we'd have to forbid `select!` from polling futures by reference, but that's a pretty fundamental capability that we often rely on to `select!` in a loop for example. The blanket `impl<F> Future for &mut F where F: Future ...` isn't a Tokio thing either; that's in the standard library.
- Surely not every use of `select!` needs this ability. If you can design a more restrictive interface that makes correctness easier to determine, then you should use that interface where you can, and reserve `select!` for only those cases where you can't.
- What could `tokio::select!` do differently here to prevent bugs like this?
In the case of `select!`, it is a direct consequence of the ability to poll a `&mut` reference to a future in a `select!` arm, where the future is not dropped should another future win the "race" of the select. This is not really a choice Tokio made when designing `select!`, but is instead due to the existence of implementations of `Future` for `&mut T: Future + Unpin`[1] and `Pin<T: Future>`[2] in the standard library.
Tokio's `select!` macro cannot easily stop the user from doing this, and, furthermore, the fact that you can do this is useful --- there are many legitimate reasons you might want to continue polling a future if another branch of the select completes first. It's desirable to be able to express the idea that we want to continually drive one asynchronous operation to completion while periodically checking if some other thing has happened and taking action based on that, and then continue driving forward the ongoing operation. That was precisely what the code in which we found the bug was doing, and it is a pretty reasonable thing to want to do; a version of the `select!` macro which disallows that would limit its usefulness. The issue arises specifically from the fact that the `&mut future` has been polled to a state in which it has acquired, but not released, a shared lock or lock-like resource, and then another arm of the `select!` completes first and the body of that branch runs async code that also awaits that shared resource.
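To make that concrete, here's a hedged sketch of the pattern (with a hypothetical `long_operation`): drive one operation to completion while reacting to periodic ticks, resuming rather than restarting it on each loop iteration.

    use std::time::Duration;
    use tokio::time::{interval, sleep};

    // Stand-in for a long-running operation we want to drive to completion.
    async fn long_operation() -> &'static str {
        sleep(Duration::from_secs(3)).await;
        "done"
    }

    #[tokio::main]
    async fn main() {
        let op = long_operation();
        tokio::pin!(op); // lets us poll `&mut op` on every loop iteration
        let mut ticks = interval(Duration::from_secs(1));
        loop {
            tokio::select! {
                result = &mut op => {
                    println!("operation finished: {result}");
                    break;
                }
                _ = ticks.tick() => {
                    // Periodic work; `op` is suspended, not dropped, and
                    // resumes on the next iteration.
                    println!("still working...");
                }
            }
        }
    }

The futurelock hazard appears exactly when the body of the tick arm awaits something that the suspended `op` is still holding.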
If you can think of an API change which Tokio could make that would solve this problem, I'd love to hear it. But, having spent some time trying to think of one myself, I'm not sure how it would be done without limiting the ability to express code that one might reasonably want to be able to write, and without making fundamental changes to the design of Rust async as a whole.
[1] https://doc.rust-lang.org/stable/std/future/trait.Future.htm... [2]: https://doc.rust-lang.org/stable/std/future/trait.Future.htm...
- A meta-idea I have: look at all usages of `select!` with `&mut future`s in the code, and see if there are maybe 4 or 5 patterns that emerge. With that it might be possible to say "instead of `select!` use `poll_continuing!` or `poll_first_up!` or `poll_some_other_common_pattern!`".
It feels like a lot of the way Rust untangles these tricky problems is by identifying slightly more contextful abstractions, though at the cost of needing more scratch space in the mind for various methods
- I can imagine an alternate universe in which you cannot do:
1. Create future A.
2. Poll future A at least once but not provably poll it to completion and also not drop it. This includes selecting it.
3. Pause yourself by awaiting anything that does not involve continuing to poll A.
I’m struggling a bit to imagine the scenario in which it makes sense to pause a coroutine that you depend on in the middle like this. But I also don’t immediately see a way to change a language like Rust to reliably prevent doing this without massively breaking changes. See my other comment :)
- I'm not familiar with tokio, but I am familiar with folly coro in C++, which is similar-ish. You cannot co_await a folly::coro::Task by reference; you must move it. It seems like that prevents this bug. So maybe select! is the low-level API, and a higher-level (i.e. safer) abstraction can be built on top?
- (author here)
Although the design of the `tokio::select!` macro creates ways to run into this behavior, I don't believe the problem is specific to `tokio`. Why wouldn't the example from the post using Streams happen with any other executor?
- First of all, great write-up! Had a blast reading it :) I think there's a difference between a language giving you a footgun and a library giving you a footgun. Libraries, by definition, are supposed to be as user-friendly as possible.
For example, I can just do `loop { }` which the language is perfectly okay with letting me do anywhere in my code (and essentially hanging execution). But if I'm using a library and I'm calling `innocuous()` and there's a `loop { }` buried somewhere in there, that is (in my opinion) the library's responsibility.
N.B. I don't know enough about tokio's internals to suggest any changes and don't want to pretend like I'm an expert, but I do think this caveat should be clearly documented and a "safe" version of `select!` (which wouldn't work with references) should be provided.
- I forget if this part unwinds to the exact same place, but some of this kind of design constraint in tokio stems from much earlier language capabilities and is prohibitive to adjust without breaking the user ecosystem.
One of the key advertised selling points in some of the other runtimes was specifically the behavior of tasks on drop of their join handles, for example, for reasons closely related to this post.
- I am wondering if there is a larger RFC for Rust to force users to not hold a variable across await points.
In my mind futurelock is similar to keeping a sync lock across an await point. We have nothing right now to force a drop and I think the solution to that problem would help here.
- There's an existing lint that lets you prohibit instances of specific types from being held across await points: https://rust-lang.github.io/rust-clippy/stable/index.html#aw...
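For std's own guards there's also `await_holding_lock`; a sketch of what these lints catch:

    #![deny(clippy::await_holding_lock)]

    use std::sync::Mutex;

    // With the lint denied, clippy rejects this: the std MutexGuard is still
    // live across the .await point.
    async fn flagged(m: &Mutex<i32>) {
        let guard = m.lock().unwrap();
        tokio::task::yield_now().await; // lint fires here
        drop(guard);
    }

    #[tokio::main]
    async fn main() {
        let m = Mutex::new(0);
        flagged(&m).await;
    }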
- Note that forcing a drop of a lock guard has its own issues, particularly around leaving the guarded data in an invalid state. I cover this a bit in my talk that Bryan linked to in the OP [1].
[1] timestamped: https://youtu.be/zrv5Cy1R7r4?t=1067
- I’m not convinced that this can help in a meaningful way.
Fundamentally, if you have two coroutines (or cooperatively scheduled threads or whatever), and one of them holds a lock, and the other one is awaiting the lock, and you don’t schedule the first one, you’re stuck.
I wonder if there’s a form of structured concurrency that would help. If I create two futures and start both of them (in Rust this means polling each one once) but do not continue to poll both, then I’m sort of making a mistake.
So imagine a world where, to poll a future at all, I need to have a nursery, and the nursery is passed in from my task and down the call stack. When I create a future, I can pass in my nursery, but that future then gets an exclusive reference to my nursery until it's complete or cancelled. If I want to create multiple futures that are live concurrently, I need to create a FutureGroup (that gets an exclusive reference to my nursery) and that allows me to create multiple sub-nurseries that can be used to make futures but cannot be used to poll them: instead I poll the FutureGroup.
- (I have yet to try using an async/await system or a reactor or anything of the sort that is not very easy to screw up. My current pet peeve is this pattern:

    data = await thingy.read()

What if thingy.read() succeeds but I am cancelled? This gets nasty in most programming languages. Python: the docs on when I can get cancelled are almost nonexistent, and it's not obviously possible to catch the CancelledError such that I still have the data and can therefore save it somewhere so it's not lost. Rust: what if thingy thinks it has returned the data but I'm never polled again? Maybe this can't happen if I'm careful, but that requires more thought than I'm really happy with.)
- The idea that has been batted around is called "async drop" [1]
And it looks like it's still just an unaddressed, well-known problem [2].
Honestly, once the Mozilla sackening of rust devs happened it seems like the language has been practically rudderless. The RFC system seems almost dead as a lot of the main contributors are no longer working on rust.
This initiative hasn't had motion since 2021. [3]
[1] https://rust-lang.github.io/async-fundamentals-initiative/ro...
[2] https://rust-lang.github.io/async-fundamentals-initiative/
[3] https://github.com/rust-lang/async-fundamentals-initiative
- Those pages are out of date, and AsyncDrop is in progress: https://github.com/rust-lang/rust/issues/126482
I think "practically rudderless" here is fairly misinformed and a little harmful/rude to all the folks doing tons of great work still.
It's a shame there are some stale pages around and so on, but they're not good measures of the state of the project or ecosystem.
The problem of holding objects across await points is also partially addressed by this unstable lint marker, which is used by some projects: https://dev-doc.rust-lang.org/unstable-book/language-feature...
You also get a similar effect in multi-threaded runtimes by not arbitrarily making everything in your object model Send and instead designing your architecture so that most things between wake-ups don't become arbitrarily movable references.
These aren't perfect mitigations, but some tools.
- In fairness, if you're a layman to the Rust development process (as I am, so I'm speaking from personal experience here), it's damn near impossible to figure out the status of things. There are tracking issues, RFCs, etc., which is very confusing as an outsider and gives no obvious place to look to find the current status of a proposal. I'm sure there is a logic to it, and if I spent the time to learn it, it would make sense. But it is really hard to approach for someone like me.
- If you want to find out the status of something, the best bet is to go to the Rust Zulip and ask around: https://rust-lang.zulipchat.com/ . Most Rust initiatives are pushed forward by volunteers who are happy to talk about what they're working on, but who only periodically write status reports on tracking issues (usually in response to someone asking them what the status is). Rust isn't a company where documentation is anyone's job, it's just a bunch of people working on stuff, for better or worse.
- > I think "practically rudderless" here is fairly misinformed and a little harmful/rude to all the folks doing tons of great work still.
That great work is mostly opaque from the outside.
What's been noticeable as an observer is that a lot of the well known names associated with rust no longer work on it and there's been a large amount of turnover around it.
That manifests in things like this case where work was in progress up until ~2021 and then was ultimately backburnered while the entire org was reshuffled. (I'd note the dates on the MCP as Feb 2024).
I can't tell exactly how much work or what direction it went in from 2021 to 2024 but it does look apparent that the work ultimately got shifted between multiple individuals.
I hope rust is in a better spot. But I also don't think I was being unfair in pointing out how much momentum got wrecked when Mozilla pulled support.
- The language team tends to look at these kinds of challenges and drive them to a root cause, which spins off a tree of work to adjust the core language to support what's required by the higher level pieces, once that work is done then the higher level projects are unblocked (example: RPIT for async drop).
That's not always super visible if you're not following the working groups or in contact with folks working on the stuff. It's entirely fair that they're prioritizing getting work done than explaining low level language challenges to everyone everywhere.
I think you're seeing a lack of data and trying to use that as a justification to fit a story that you like, more than seeing data that is derivative of the story that you like. Of course some people were horribly disrupted by the changes, but language usage also expanded substantially during and since that time, and there are many team members employed by many other organizations, and many independents too.
And there are more docs, anyway:
https://rust-lang.github.io/rust-project-goals/2024h2/async.... https://rust-lang.github.io/rust-project-goals/2025h1/async.... https://rust-lang.github.io/rust-project-goals/2025h2/field-... https://rust-lang.github.io/rust-project-goals/2025h2/evolvi... https://rust-lang.github.io/rust-project-goals/2025h2/goals....
- While the Mozilla layoffs were a stressful time with a lot of uncertainty involved, in the end it hasn't appeared to have had a deleterious effect on Rust development. Today the activity in the Rust repo is as high as it's ever been (https://github.com/rust-lang/rust/graphs/contributors) and the governance of the project is more organized and healthy than it's ever been (https://blog.rust-lang.org/2025/10/15/announcing-the-new-rus...). The language certainly isn't rudderless, it's just branched out beyond the RFC system (https://blog.rust-lang.org/2025/10/28/project-goals-2025h2/). RFCs are still used for major things as a form of documentation, validation, and community alignment, but doing design up-front in RFCs has turned out to be an extremely difficult process. Instead, it's evolving toward a system where major things get implemented first as experiments, whose design later guides the eventual RFC.
- Wow, that makes sense afterwards but I would not have guessed at it immediately looking at the code. Very insidious. Great blogpost.
- To simplify: tokio::select! stops polling the other futures as soon as one future completes.
The discarded futures will never be run again.
Normally a discarded future is dropped, and when a future holding a lock is dropped, the lock is released. But here a borrow of the future was passed to select!, so the discarded future is not dropped and keeps holding the lock.
That leaves a future holding a lock, and that future will never run again.
- Wow, it is simply outrageous that Rust doesn't just allow all active tasks to make progress. It creates a whole class of incomprehensible bugs, like this one, for no reason. Can any Rust experts explain why it's done this way? It seems like an unforced error.
In Python, I often use the Trio library, which offers "structured concurrency": tasks are (only) spawned into lexical scopes, and they are all completed (waited for) before that scope is left. That includes waiting for any cancelled tasks (which are allowed to do useful async work, including waiting for any of their own task scopes to complete).
Could Rust do something like that? It's far easier to reason about than traditional async programs, which seems up Rust's street. As a bonus it seems to solve this problem, since a Rust equivalent would presumably have all tasks implicitly polled by their owning scope.
- It's hard to answer the question because of unclear terminology (tasks vs. futures). There are ways to do structured concurrency in Rust, but they are for tasks, not futures. There's not really a concept of "active futures" (other than calling an "active future" one that returned Pending the last time you polled it).
A task is the thing that drives progress by polling some futures. But one of those futures may want to handle polling for other futures that it made, which is where this arises.
As the article says, one option is to spawn everything as a task, but that doesn't solve all problems, and precludes some useful ways of using futures.
- So there's a distinction between a task and a future. A future doesn't do anything until it's polled, and since there's nothing special about async runtimes (it's just user level code), it's always possible to create futures and never poll them, or stop polling them.
A task is a different construct and usually tied to the runtime. If you look at the suggestions in the RFD they call out using a task explicitly instead of polling a future in place.
There's some debate to be had over what constitutes "cancellation." The article and most colloquial definitions I've heard define it as a future being dropped before being polled to completion. Which is very clean - if you want to cancel a future, just drop it. Since Rust strongly encourages RAII, cleanup can go in drop implementations.
A much tougher definition of cancellation is "the future is never polled again" which is what the article hits on. The future isn't dropped but its poll is also unreachable, hence the deadlock.
- I wish "cancellation" wasn't used for both of those. It seems to obfuscate understanding quite a bit. We should call them "dropped" and "abandoned" or something.
- I don't think anyone really calls the latter "cancellation" in practice. I'm just pointing out that "is never polled again" is the tricky state with futures.
- Interesting, thanks. So is it fair to say that if tokio::select!() only accepted tasks (or implicitly turned any futures it receives into tasks, like Python's asyncio.gather() does) then it wouldn't have this problem? Or, even if the async runtime is careful, is it still possible to create and fail to poll a raw Future by accident?
- > if tokio::select!() only accepted tasks (or implicitly turned any futures it receives into tasks, like Python's asyncio.gather() does) then it wouldn't have this problem?
Yes, this is correct. However, many of the use cases for select rely on the fact that it doesn't run all the tasks to completion. I've written many a select! statement to implement timeouts or other forms of intentionally preempting a task. Sometimes I want to cancel the task and sometimes I want to resume it after dealing with the condition that caused the preemption -- so the behavior in the article is very much an intentional feature.
> even if the async runtime is careful, is it still possible to create and fail to poll a raw Future by accident?
This is also the case. There's nothing magic about a future; it's just an ordinary object with a poll function. Any code can create a future and do whatever it likes with it; including polling it a few times and then stopping.
Despite being included as part of Tokio, select! does not interact with the runtime or need any kind of runtime support at all. It's ordinary macro-generated code that creates a future which waits for the first of several "child" futures to complete; similar constructs are also provided in other prominent ecosystem crates besides Tokio and can be implemented in user code as well.
- > However, many of the use cases for select rely on the fact that it doesn't run all the tasks to completion.
That seems like a different requirement than "all arguments are tasks". If I understand it right (and quite possibly I don't), making them all tasks means that they are all polled and therefore continue progressing until they are dropped. It doesn't mean that select! would have to run them all the way to completion.
- I was sloppy with my wording, I should have said "it doesn't run all the futures to completion".
> making them all tasks means that they are all polled and therefore continue progressing until they are dropped. It doesn't mean that select! would have to run them all the way to completion.
This is exactly correct, but oftentimes the reason you're using select is because you don't want to run the futures all the way to completion. In my experience, the most common use cases for select are:
- An event handler loop that receives input from multiple channels. You could replace this with multiple tasks, one reading from each channel; but this could potentially mess with your design for queueing / backpressure -- often it's important for the loop to pause reading from the channels while processing the event.
- An operation that's run with a timeout, or a shutdown event from a controlling task. In this case I want the future to be dropped when the task is cancelled.
The example in the original post was the second case: an operation with a timeout. They wanted the operation to be cancelled when the timeout expired, but because the select statement borrowed the future, it only suspended the future instead of cancelling it. This is a very common code pattern when calling select! in a loop, when you want a future to be resumed instead of restarted on the next loop iteration -- it's very intentional that select! lets you do it either way, because you often want either behavior.
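For contrast, in the plain timeout case the operation future is owned by select!, so when the timeout arm wins, the operation really is dropped, and thereby cancelled (a sketch with a hypothetical `operation`):

    use std::time::Duration;
    use tokio::time::sleep;

    // Stand-in for an operation that takes longer than we're willing to wait.
    async fn operation() -> &'static str {
        sleep(Duration::from_secs(10)).await;
        "done"
    }

    #[tokio::main]
    async fn main() {
        tokio::select! {
            res = operation() => println!("completed: {res}"),
            // When this arm wins, the owned `operation()` future is dropped,
            // i.e. cancelled, and its destructors run.
            _ = sleep(Duration::from_secs(1)) => println!("timed out"),
        }
    }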
- Doing select on tasks doesn’t really make sense semantically in the first place. Tasks are already getting polled by the executor. The purpose of select is to run some set of futures the executor doesn’t know about, until the first one of them completes. If you wanted to wait for one of a set of tasks to do something, you don’t need any additional polling, you’d just use something like a signal or a channel to communicate with them.
- It's always possible to create a future that is never polled, and this is a feature of Rust's zero-cost abstraction for async/await. If tokio::select! required tasks, it would be a lot less useful.
This problem would have been avoided by taking the future by value instead of by reference.
- It’s kind of wild how even the most careful Rust code can run into issues like this; it really shows how deep async programming goes.
- This is why I use a threadpool instead. Can't deal with the complexity of async code.
- I've read this once over, and the part that doesn't make sense to me is why the runtime, when there are two execution contexts blocked at the lock() (in both future1 and future3), chose to wake up the main thread instead. I get why a fair lock would pick future1, but I don't get how that causes a different thread than the one holding the lock to execute.
- It seems more and more clear every day that async was rushed out the door way too quickly in Rust.
- There's a lot of improvements I could think of for async Rust, but there's basically nothing I would change about the fundamentals that underlie it (other than some tweaks to Pin, maybe, and I could quibble over some syntax). There's nothing rushed about it; it's a great foundation that demonstrably just needs someone to finish building the house on top of it (and, to continue the analogy, needs someone to finish building the sub-basement (cough, generalized coroutines)).
- A foundation full of warts belongs in experimental. I don't know how, by your own admission that the house and the sub-basement aren't finished, that doesn't instantly mean it should have stayed experimental.
- Your assertion is that it was "rushed". And yet here we are today, talking about how much we wish were implemented. That's not rushed--that's the polar opposite of rushed. Almost nothing about what we currently have on stable would have been better if it was still percolating on nightly, and would have the downside of having almost no feedback from real-world use. I remember the pre-async days, nesting callbacks by hand. What we have now is a great improvement, and just needs more niceties stacked on top of it, not any sort of fundamental overhaul.
- I can’t say whether it was rushed out, but it’s clearly not everything it was advertised to be. Early on, the big talking point was that the async implementation was so modular you could swap runtimes like Lego bricks. In reality, that’s nowhere near true. Changing runtimes means changing every I/O dependency (mutexes, networking, fs), because everything is tightly coupled to the runtime. I raised this in a Reddit thread some time ago, and the feedback there reinforced that I'm not the only one with a sour Rust async taste in my mouth. https://www.reddit.com/r/rust/comments/1f4z84r/is_it_fair_to...
- I’m just gonna make a new language that has future borrowing semantics and future lifetimes to solve this.
- A masterclass in debugging
- I feel like I’m pretty good at writing multithreaded code. I’ve done it a lot. As long as you use primitives like Rust Mutex that enforce correctness for data access (ie no accessing data without the lock) it’s pretty simple. Define a clean boundary API and you’re off to the races.
async code is so so so much more complex. It’s so hard to read and rationalize. I could not follow this post. I tried. But it’s just a full extra order of complexity.
Which is a shame because async code is supposed to make code simpler! But I’m increasingly unconfident that’s true.
- Async code isn't supposed to be simpler than sync code, it's supposed to be simpler than doing thing like continuation passing.
- Async code is simpler because you're implicitly holding a lock on the CPU. That's also why you should stay away from it: it increases latency. Especially since Rust is about speed and responsiveness. In general, async programming in Rust makes little sense.
- I love Rust. But I’m 100% convinced Rust chose the wrong tradeoffs with their async model. Just give me green threads and use malloc to grow the stack. It’s fine. That would have been better imho.
- You can't have a low-level language and green threads at the same time.
- Why not?
- Why not?
- Hell, even force threads to be allocated from a bucket of N threads defined at compile time. Surely that'd work for embedded / GPU space?
- I rewrote this in Go and it also deadlocks. It doesn't seem to be something that's Rust specific.
I'm going to write down the order of events.
1. Background task takes the lock and holds it for 5 seconds.
2. Async Thing 1 tries to take the lock, but must wait for background task to release it. It is next in line to get the lock.
3. We fire off a goroutine that's just sleeping for a second.
4. Select wants to find a channel that is finished. The sleepChan finishes first (since it's sleeping for 1 second) while Async Thing 1 is still waiting 4 more seconds for the lock. So select will execute the sleepChan case.
5. That case fires off Async Thing 2. Async Thing 2 is waiting for the lock, but it is second in line to get the lock after Async Thing 1.
6. Async Thing 1 gets the lock and is ready to write to its channel - but the main is paused trying to read from c2, not c1. Main is "awaiting" on c2 via "<-c2". Async Thing 1 can't give up its lock until it writes to c1. It can't write to c1 until c1 is "awaited" via "<-c1". But the program has already gone into the other case and until the sleepChan case finishes, it won't try to await c1. But it will never finish its case because its case depends on c1 finishing first.
You can use buffered channels in Go so that Async Thing 1 can write to c1 without main reading from it, but as the article notes you could use join_all in Rust.
But the issue is that you're saying with "select" in either Go or Rust "get me the first one that finishes" and then in the branch that finishes first, you are awaiting a lock that will get resolved when you read the other branch. It just doesn't feel like something that is Rust specific.
    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    func main() {
        lock := sync.Mutex{}
        c1 := make(chan string)
        c2 := make(chan string)
        sleepChan := make(chan bool)
        go start_background_task(&lock)
        time.Sleep(1 * time.Millisecond) // make sure it schedules start_background_task first
        go do_async_thing(c1, "op1", &lock)
        go func() {
            time.Sleep(1 * time.Second)
            sleepChan <- true
        }()
        for range 2 {
            select {
            case msg1 := <-c1:
                fmt.Println("In the c1 case")
                fmt.Printf("received %s\n", msg1)
            case <-sleepChan:
                fmt.Println("In the sleepChan case")
                go do_async_thing(c2, "op2", &lock)
                fmt.Printf("received %s\n", <-c2) // "awaiting" on c2 here, but c1's lock won't be given up until we read it
            }
        }
        fmt.Println("all done")
    }

    func start_background_task(lock *sync.Mutex) {
        fmt.Println("starting background task")
        lock.Lock()
        fmt.Println("acquired background task lock")
        defer lock.Unlock()
        time.Sleep(5 * time.Second)
        fmt.Println("dropping background task lock")
    }

    func do_async_thing(c chan string, label string, lock *sync.Mutex) {
        fmt.Printf("%s: started\n", label)
        lock.Lock()
        fmt.Printf("%s: acquired lock\n", label)
        defer lock.Unlock()
        fmt.Printf("%s: done\n", label)
        c <- label
    }

- I think the thing that rubs me the wrong way is that Rust was supposed to be "fearless" concurrency. Go doesn't claim that title so I'm not offended when it doesn't live up to it.
- Despite "fearless concurrency", Rust has been careful to never claim to prevent deadlocks/race conditions in general, in either async code or non-async code. It's certainly easier to get deadlocks in async Rust than in non-async Rust, but this isn't some sort of novel failure mode.
- I wrote a version of the article's code in Java and couldn't figure out why it was working until reading your example. I see now that the channel operations in Go must rendezvous which I assume matches Rust's Future behavior. Whereas, the Java CompletableFuture operations I was using to mimic the select aren't required to meet. Thanks for writing this.
- The difference in Go is that you've _expressly_ constructed a dependency ring. Should Go or any runtime go out of its way to detect a dependency ring?
This is the programming equivalent of using welding (locks) to make a chain loop, except you've done it with the two-link case that's impossible in 3D space.
As with the sin of .await(no deadline), the sin here is not adding a deadline.
- Yeah, but Go makes it obvious why it is deadlocking, because the async primitives are more explicit. Even a dumb LLM could have told us where the problem is (I tested).
Meanwhile in Rust it looks like it took thousands of dollars in engineering time to find the issue.
- Sadly I'm away from my bookshelf but I think Concurrent ML solved this issue.
- Trying to get my head around this. It seems like the "rootest" cause here is a paradigm clash between lock fairness and async futures.
A fair lock[1] is designed to wake up the longest-waiting task, since it got to the queue first and might otherwise be starved if the algorithm doesn't guarantee it gets to the head of the queue.
BUT CRITICALLY: a future isn't a task. It's not a thread, and it's not guaranteed to "run". It's just a flag that gets set somewhere, yet it can consume that wakeup nonetheless. So it's possible to "wake up"[2] a future that isn't actually being polled, and won't be, because whatever would poll it is itself waiting on the resource that just issued the wakeup.
I don't see that these concepts are ever going to work together. You can't have locks generating wakeup events that aren't consumed. If you're going to use them with async, you need to do something like a broadcast to guarantee that every waiter sees an event.
Stated differently: the lock is signalling an edge-triggered interrupt, but Rust async demands level sensitivity.
[1] In one sense of fair. There are others, like "switch now" vs. "defer context switch", but that's not relevant here.
[2] Which doesn't actually wake anything up, thus the bug.
- I know this is going to sound trite, but “don’t do that”. It’s no different than deciding to poll the win32 event queue inside a method you executed in response to polling the event queue. Nested shit is always going to cause a bug. I guess each new generation just has to learn.
- Don't do ... what, exactly? The RFD answers this more precisely and provides suggestions for alternatives. But it's not very simple because the things that can cause this are all common patterns individually and it's only the confluence (which can be spread across layers of the program) that introduces this problem. In our case, it wasn't a Mutex, but an mpsc channel (that was working correctly! it just got very briefly saturated) and it was 3-4 modules lower in the stack than the code with the `tokio::select!` that induced this.
- It’s not nested, that’s the thing.
- Hmm, curious to see if this could happen on JS. I'll reproduce the code.
- JS shouldn't have a direct equivalent because JS async functions are eager. Once you call an async function, it will keep running even if the caller doesn't await it, or stops awaiting it. So in the scenario described, the function next in line for the lock would always have a chance to acquire and release it. The problem in Rust is that async functions are lazy and only run while they're being polled/awaited (unless wrapped in tasks). A function that's next in line for the lock might never acquire it if it's not being polled, blocking progress for other functions that are being polled.
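The contrast is easy to see from the Rust side (a sketch, assuming tokio): a spawned task behaves like a JS promise and runs eagerly, while a bare future is inert until polled.

    use std::time::Duration;
    use tokio::time::sleep;

    #[tokio::main]
    async fn main() {
        // Like a JS promise: a spawned task runs whether or not it's awaited.
        let task = tokio::spawn(async { println!("task runs without being awaited") });

        // Unlike JS: a bare future does nothing until polled.
        let fut = async { println!("future runs only when awaited") };

        sleep(Duration::from_millis(50)).await; // the task's line prints during this await
        fut.await; // only now does the future's line print
        task.await.unwrap();
    }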
- Yes, you can produce similar issues with promise-guarded state and so on as well. It's a fairly common issue in async programming, but it can be surprising when it's hidden by layers of abstraction, far up or down a call chain.
- Based on the description:
>This RFD describes futurelock: a type of deadlock where a resource owned by Future A is required for another Future B to proceed, while the Task responsible for both Futures is no longer polling A. Futurelock is a particularly subtle risk in writing asynchronous Rust.
I was honestly wondering how you could possibly cause this in any sane code base. How can an async task hold a lock and keep it held? It sounds illogical, because critical sections are meant to be short and never interrupted by anything. You're also never allowed to panic, which means you have to write no-panic Rust code inside a critical section. Critical sections are very similar to unsafe blocks, but with the caveat that they cannot cause a complete takeover of your application.
So how exactly did they bring about the impossible? They put an await call inside the critical section. The part of the code base that is not allowed to be subject to arbitrary delays. Massive facepalm.
When you invoke await inside a critical section, you're essentially saying "I hereby accept that this critical section will last an indeterminate amount of time, I am fully aware of what the code I'm calling is doing and I am willing to accept the possibility that the release of the lock may never come, even if my own code is one hundred percent correct, since the await call may contain an explicit or implicit deadlock"
- > So how exactly did they bring about the impossible? They put an await call inside the critical section. The part of the code base that is not allowed to be subject to arbitrary delays. Massive facepalm.
I'm not sure where you got the impression that the example code was where we found the problem. That's a minimal reproducer trying to explain the problem from first principles because most people look at that code and think "that shouldn't deadlock". It uses a Mutex because people are familiar with Mutexes and `sleep` just to control the interleaving of execution. The RFD shows the problem in other examples without Mutexes. Here's a reproducer that futurelocks even though nobody uses `await` with the lock held: https://play.rust-lang.org/?version=stable&mode=debug&editio...
> I was honestly wondering how you could possibly cause this in any sane code base.
The actual issue is linked at the very top of the RFD. In our cases, we had a bounded mpsc channel used to send messages to an actor running in a separate task. That actor was working fine. But the channel did become briefly saturated (i.e., at capacity) at a point where someone tried to send on it via a `tokio::select!` similar to the one in the example.
- For anybody who wants to cut to the chase, it's this:
> The behavior of tokio::select! is to poll all branches' futures only until one of them returns `Ready`. At that point, it drops the other branches' futures and only runs the body of the branch that’s ready.
This is, unfortunately, doing what it's supposed to do: acting as a footgun.
The design of tokio::select!() implicitly assumes it can cancel tasks cleanly by simply dropping them. We learned the hard way back in the Java days that you cannot kill threads cleanly all the time. Unsurprisingly, the same thing is true for async tasks. But I guess every generation of programmers has to re-learn this lesson. Because, you know, actually learning from history would be too easy.
Unfortunately there are a bunch of footguns in tokio (and async-std too). The state-machine transformation inside rustc is a thing of beauty, but the libraries and APIs layered on top of that should have been iterated many more times before being rolled out into widespread use.
- No, dropping a Rust future is an inherently safe operation. Futures don't live on their own, they only ever do work inside of .poll(), so you can't "catch them with their pants down" and corrupt state by dropping them. Yield points are specifically designed to be cancel-safe.
Crucially, however, because Futures have no independent existence, they can be indefinitely paused if you don't actively and repeatedly .poll() them, which is the moral equivalent of cancelling a Java Thread. And this is represented in language state as a leaked object, which is explicitly allowed in safe Rust, although the language still takes pains to avoid accidental leakage. The only correct way to use a future is to poll it to completion or drop it.
The problem is that in this situation, tokio::select! only borrows the future and thus can't drop it. It also doesn't know that dropping the Future does nothing, because borrows of futures are still futures so all the traits still match up. It's a combination of slightly unintuitive core language design and a major infrastructure library not thinking things out.
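A tiny demonstration of that blanket impl at work:

    use std::time::Duration;
    use tokio::time::sleep;

    #[tokio::main]
    async fn main() {
        let fut = async {
            sleep(Duration::from_millis(10)).await;
            "done"
        };
        tokio::pin!(fut);
        // A &mut borrow of an (Unpin) future is itself a future, via the std
        // blanket impl -- which is exactly how select! can poll without owning.
        let out = (&mut fut).await;
        println!("{out}");
    }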
- I genuinely don't understand why people use select! at all given how much of a footgun it is.
- Well, the less-footgun-ish alternative would look something like a Stream API, but the last time I checked, tokio-stream wasn't stable yet.
Then you could merge a `Stream<A>` and `Stream<B>` into a `Stream<Either<A,B>>` and pull from that. Since you're dealing with owned streams, dropping the stream forces some degree of cleanup. There are still ways to make a mess, but they take more effort.
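Roughly like this, using the `futures` crate's combinators (the two toy streams stand in for real sources):

```rust
use futures::future::Either;
use futures::stream::{self, StreamExt};

#[tokio::main]
async fn main() {
    // Two toy input streams standing in for Stream<A> and Stream<B>.
    let a = stream::iter([1u32, 2, 3]).map(Either::Left);
    let b = stream::iter(["x", "y"]).map(Either::Right);

    // Merge into a single Stream<Either<A, B>> and pull from that.
    let mut merged = stream::select(a, b);
    while let Some(item) = merged.next().await {
        match item {
            Either::Left(n) => println!("A: {n}"),
            Either::Right(s) => println!("B: {s}"),
        }
    }
    // Dropping `merged` drops both owned streams, running their destructors.
}
```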
Rate-limited, so I have to reply to mycoliza with an edit here: That example calls `do_thing()`, whose body does not appear anywhere on the page. Use better identifiers.
If you meant `do_stuff()`, you haven't replaced `select!()` with streams, since `do_stuff()` calls `select!()`.
The problem is `select!()`; if you keep using `select!()` and just slather a bunch of streams on top of it, that isn't going to fix anything. You have to get rid of `select!()` by replacing it with streams.
- In reply to your edit, that section in the RFD includes a link to the full example in the Rust playground. You’ll note that it does not make any use of `select!`: https://play.rust-lang.org/?version=stable&mode=debug&editio...
Perhaps the full example should have been reproduced in the RFD for clarity…
- An analogous problem is equally possible with streams: https://rfd.shared.oxide.computer/rfd/0609#_how_you_can_hit_...
- > It’s really important to understand what’s happening here
Then maybe you should take a moment to pick more descriptive identifiers than future1, future2, future3, do_stuff, and do_async_thing. This coding style is atrocious.
- If you prefer "real" names you can always look at the actual code that had a bug - here it is before the bug was fixed: https://github.com/oxidecomputer/omicron/blob/a253f541a4a32a...
- Is it possible that those names are intentionally chosen and actually do carry meaning?
- LOL. All the Rust evangelists talk about safety when stuff like this exists? JFC. Can we stop calling Rust safe now? Finally? I mean, we all know deep in our hearts that trivial memory safety doesn't mean programs are correct or "safe" by any means, but it's nice to have proof that Rust is fundamentally unsafe for asynchronous tasks, at least. Or at least "unsound".
Structured concurrency will always win IMO.
- > All the Rust evangelists talk about safety when stuff like this exists? JFC.
Deadlocks can happen anywhere? You can replicate this pattern in golang.
- While I don't like the tone of the grandparent, comparing to Go is kinda irrelevant when the grandparent held up structured concurrency as the example of how to solve this. It is of course also not a panacea.
- Golang doesn't have legions of evangelicals claiming it's a safe language and everything should be written in it.
- - wrong -
- I think the next sentence clarifies it pretty well.
> In this case, what’s dropped is &mut future1. But future1 is not dropped, so the actual future is not cancelled.
- The author clearly understands these details. I think it's just a question of wording: did we "drop a reference (which has no effect)" or did we "not drop anything (because references don't implement Drop)"?
- In October alone I've seen 5+ articles and comments about multi-threading, and I don't know why.
I've always said that if your code locks or uses atomics, it's wrong. Everyone says I'm wrong, but then you get things like what's described in the article. I'd like to recommend a solution, but there's pretty much no reasonable way to implement multi-threading when you're not an expert. I hear Erlang and Elixir are good, but I haven't tried them, so I can't really comment.
- > I've always said that if your code locks or uses atomics, it's wrong. Everyone says I'm wrong, but then you get things like what's described in the article.
Ok, so say you are simulating high-energy photons (x-rays) flowing through a 3D patient volume. You need to simulate 2 billion particles propagating through the patient in order to get an accurate estimate of how the radiation is distributed. How do you accomplish this without locks or atomics, and without the simulation taking 100 hours to run? Obviously it would take forever to simulate 1 particle at a time, but without locks or atomics the particles will step on each others' toes when updating the radiation distribution in the patient. I suppose you could have 2 billion copies of the patient's volume in memory, so each particle gets its own private copy, and then you merge them all at the end...
- From my understanding, this talk describes how the speaker implemented a solution to a similar problem: https://www.youtube.com/watch?v=Kvsvd67XUKw
I'm saying that if you're not writing multi-threaded code every day, use a library. The library can use atomics/locks internally, but you shouldn't use them directly. If the library is designed well, it'd be impossible to deadlock.
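And for the dose-deposition example specifically, the usual lock-free shape is exactly the merge-at-the-end idea from the parent, but with one private grid per worker thread rather than per particle. A rough sketch (all names are made up, and the transport step is a trivial stand-in):

```rust
// One private dose grid per thread; no locks or atomics in the hot loop.
fn simulate(num_threads: usize, particles_per_thread: usize, grid_len: usize) -> Vec<f64> {
    let partials: Vec<Vec<f64>> = std::thread::scope(|s| {
        let handles: Vec<_> = (0..num_threads)
            .map(|t| {
                s.spawn(move || {
                    let mut local = vec![0.0f64; grid_len];
                    for p in 0..particles_per_thread {
                        // Stand-in for transporting one particle and
                        // depositing dose along its track.
                        let voxel = (t * particles_per_thread + p) % grid_len;
                        local[voxel] += 1.0;
                    }
                    local
                })
            })
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    });

    // Single-threaded reduction at the end: sum the per-thread grids.
    let mut total = vec![0.0f64; grid_len];
    for part in partials {
        for (acc, v) in total.iter_mut().zip(part) {
            *acc += v;
        }
    }
    total
}

fn main() {
    let dose = simulate(8, 1_000, 64);
    println!("total deposited: {}", dose.iter().sum::<f64>());
}
```

That's num_threads extra copies of the volume rather than 2 billion, and the merge is a single pass at the end.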
- To clarify, by "your code" I mean your code excluding libraries. A good library would make it impossible to deadlock. When I wrote mine, I never called outside code while holding a lock, so it was impossible for it to deadlock. My atomic code had auditing and tests. I don't recommend people write their own threading library unless they want to put a lot of work into it.
- > I've always said that if your code locks or uses atomics, it's wrong.
Why atomics?
People mess up the ordering all the time. When you mess up locks, you get a deadlock; when you mess up an atomic, you get items in the queue dropped or processed twice, or some other weird behavior you didn't expect (like waking up the wrong thread). You just get hard-to-understand race conditions, which are always a pain to debug.
Just say no to atomics (unless they're hidden in a well-written library).
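To illustrate the kind of ordering that's easy to get wrong, here's a sketch of the classic flag-guards-data pattern. The Release/Acquire pair is what makes it correct; downgrading either side to Relaxed is the easy mistake that produces exactly those weird stale-read bugs:

```rust
use std::sync::atomic::{AtomicBool, AtomicU64, Ordering};
use std::thread;

static DATA: AtomicU64 = AtomicU64::new(0);
static READY: AtomicBool = AtomicBool::new(false);

fn writer() {
    DATA.store(42, Ordering::Relaxed);
    // Release: everything before this store is visible to any thread
    // that later observes READY == true with Acquire.
    READY.store(true, Ordering::Release);
}

fn reader() {
    // Acquire pairs with the Release store above. If this were Relaxed
    // (the easy mistake), the reader could legally see READY == true
    // while still observing the stale DATA == 0.
    while !READY.load(Ordering::Acquire) {
        std::hint::spin_loop();
    }
    assert_eq!(DATA.load(Ordering::Relaxed), 42);
}

fn main() {
    let w = thread::spawn(writer);
    let r = thread::spawn(reader);
    w.join().unwrap();
    r.join().unwrap();
}
```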
- People are messing up any number of things all the time. The more important question is: do you need to risk messing up in a particular situation at all? I.e., do you need multithreading? In many cases, for example HPC or GUI programming, the answer is yes: you need multithreading to avoid blocking and to get much higher performance.
With a little bit of experience and a bit of care, multithreading isn't _that_ hard. You just need to design for it. You can reduce the number of critical pieces.