gwbas1c 17 hours ago

Before arguing that garbage collection "defeats the point" of Rust, it's important to consider that Rust has many other strengths:

- Rust has no overhead from a "framework"

- Rust programs start up quickly

- The Rust ecosystem makes it very easy to compile a command-line tool without lots of fluff

- The strict nature of the language helps guide the programmer to write bug-free code.

In short: there are a lot of good reasons to choose Rust that have little to do with the presence or absence of a garbage collector.

I think having a working garbage collector at the application layer is very useful, even if, at a minimum, it only makes Rust easier to learn. I do worry about 3rd-party libraries using garbage collectors, because garbage collectors tend to impose a lot of requirements, which is why a garbage collector is usually tightly integrated into the language.

  • jvanderbot 17 hours ago

    You've just listed "Compiled language" features. Only the 4th point is at all specific to Rust, and even then it is vague in a way that could be misinterpreted.

    Rust's predominant feature, the one that brings most of its safety and runtime guarantees, is borrow checking. There are things I love about Rust besides that, but the safety from borrow checking (and everything the borrow checker makes me do) is why I like programming in Rust. Now, when I program elsewhere, I'm constantly checking ownership "in my head", which I think is a good thing.

    • gwbas1c 16 hours ago

      Oh no, I'm directly criticizing C/C++/Java/C#:

      The heavyweight framework (and startup cost) that comes with Java and C# makes them challenging for widely-adopted lightweight command-line tools. (Although I love C# as a language, I find the Rust toolchain much simpler and easier to work with than modern dotnet.)

      Building C (and C++) is often a nightmare.

      • hypeatei 16 hours ago

        > The heavyweight framework

        Do you mean the VM/runtime? If so, you might be able to eliminate that with an AOT build.

        > I find the Rust toolchain much simpler and easier to work with than modern dotnet

        What part of the toolchain? I find them pretty similar with the only difference being the way you install them (with dotnet coming from a distro package and Rust from rustup)

        • jillesvangurp 12 hours ago

          Exactly. Natively compiled garbage-collected languages (like Java with Graal, or as executed on Android) don't have a lot of startup overhead. In Java, the startup overhead mostly comes from two things that usually conspire to make things worse:

          1) dynamic loading of jar files

          2) reflection

          Number 1 allows you to load arbitrary jar files with code and execute them. Number 2 allows you to programmatically introspect existing code and then execute logic like "Find me all Foo subclasses, create an instance of each, and return the list of those objects". You can do that at any time, but a lot of that kind of thing happens at startup. That involves parsing, loading, and introspecting thousands of class files in jar files that need to be opened and decompressed.

          Most of "Java is slow" is basically programs loading a lot of stuff at startup, and then using reflection to look for code to execute. You don't have to do those things. But a lot of popular web frameworks like Spring do. A lot of that stuff is actually remarkably quick considering what it is doing. You'd struggle to do this in many other languages. Or at all because many languages don't have reflection. If you profile it, there are millions of calls happening in the first couple of seconds. It's taking time yes. But that code has also been heavily optimized over the years. Dismissing what that does as "java is slow" and X is fast is usually a bit of an apples and oranges discussion.

          With Spring Boot, there are dozens of libraries that initialize themselves if you simply add the dependency or the right configuration to your project. We can argue about whether that's nice or not; I'm leaning toward no. But it's a neat feature. I'm more into lighter-weight frameworks these days. Ktor server is pretty nice, for example. It starts quickly because it doesn't do a whole lot on startup.

          Loading a tiny garbage collector library on startup isn't a big deal. It will maybe add a few microseconds to your startup time, probably not milliseconds. Kotlin has a nice native compiler: if you compile hello world with it, you get a self-contained binary of a few hundred kilobytes containing the program, the runtime, and the garbage collector. It's not a great garbage collector; for memory-intensive stuff you are better off using the JVM. But if that's not a concern, it will do the job.

          • mk89 4 hours ago

            You forgot to mention Quarkus :)

      • ComputerGuru 14 hours ago

        New AOT C# is nice, but not fully doable with the most common dependencies. It addresses a lot of the old issues (size, bloat, startup latency, etc.).

        • jiggawatts 5 hours ago

          Hilariously, the Microsoft SQL Client is the primary blocker for AOT for most potential use cases.

          Want fast startup for an Azure Function talking to Azure SQL Database? Hah… no.

          In all seriousness, that one dependency is the chain around the ankle of modern .NET because it’s not even fully async capable! It’s had critical performance regression bugs open for years.

          Microsoft's best engineers are busy partying in the AI pool and forgot about drudgery like "make the basic components work".

      • pjmlp 10 hours ago

        Only for those that don't know how to use AOT compilation tools for Java and C#.

        • jraph 10 hours ago

          GraalVM indeed does wonders wrt startup times and in providing a single binary you can call.

          • pjmlp 2 hours ago

            OpenJ9 as well.

            Then there are all the others that used to be commercial like ExcelsiorJET, or surviving ones like PTC and Aicas.

        • paulddraper an hour ago

          Compiling Java AOT doesn’t obviate the need for the JVM.

          At least not for Graal.

          https://stackoverflow.com/questions/75316542/why-do-i-need-j...

          • gudzpoz 7 minutes ago

            ... That post you linked was from two years ago, discussing JEP 295, which was delivered eight years ago. Graal-based AOT has evolved a lot since then. And the answer even explicitly recommended using native images:

            > I think what you actually want to do, is to compile a native image of your program. This would include all the implications like garbage collection from the JVM into the executable.

            And it is this "native image" that all the comments above in this thread have been discussing, not JEP 295. (And Graal-based AOT in native images does remove the need to bundle a whole JRE.)

      • procaryote 15 hours ago

        Hello world in Java is pretty fast. Not Rust fast, but a lot faster than you'd expect.

        Java starting slowly is mostly from all the cruft in the typical Java app, with Spring Boot, dependency injection frameworks, registries, etc. You don't have to have those; it's just that most Java devs use them and can't conceive of a world of low dependencies.

        Still not great for command-line apps, but Java itself is much better than Java devs.

        • gizmo686 14 hours ago

          Testing on my machine, Hello World in Java (OpenJDK 21) takes about 30ms.

          In contrast, "time" reports that Rust takes 1ms, which is the limit of its precision.

          Python does Hello World in just 8ms, despite not having a separate AOT compilation step.

          The general guidance I've seen for interaction is that things start to feel laggy at 100ms; so 30ms isn't a dealbreaker, but throwing a third of your time budget at the baseline runtime cost is a pretty steep ask.

          If you want to use the application as a short-lived component in a larger system, then 30ms on every invocation can be a massive cost.

          • zigzag312 14 hours ago

            An app that actually does something will probably have an even larger startup overhead in Java, as there will be more to compile just-in-time.

            • pjmlp 10 hours ago

              Only when not using either AOT or JIT cache.

          • 0cf8612b2e1e 12 hours ago

            I recall that Mercurial was really fighting their Python test harness. It essentially would start up a new Python process for each test. At 10ms per test, it added up to something significant, given their volume of work to cover something as complicated as an SCM.

            • typpilol an hour ago

              10ms?

              Did they have like 100k tests?

          • guelo 12 hours ago

            I'm trying and failing to imagine a situation where 30ms startup time would be a problem. Maybe some kind of network service that needs to execute a separate process on every request?

            • davemp 6 hours ago

              30ms is pretty close to noticeable for anything that responds to user input. 30ms startup + 20-70ms processing would probably bump you into the noticeable latency range.

            • pixelpoet 3 hours ago

              It's not about how long someone is willing to wait with a timer and judge it on human timescales, it's about what is an appropriate length of time for the task.

              30ms for a program to start, print hello world, and terminate on a modern computer is batshit insane, and it's crazy how many programmers have completely lost sight of even the principle of this.

            • tacticus 9 hours ago

              30ms is the absolute best case. Throw some Spring in there and you're very quickly at 10s. Rub some Spring SOAP on it and it's near enough to 60s.

              • ori_b 7 hours ago

                And imagine if you start adding sleep calls! Those could take minutes to hours, or even days!

        • kbolino 15 hours ago

          Java's biggest weakness in this area is its lack of value types. It's well known, Project Valhalla has been trying to fix it for years, but the JVM just wasn't built around such types and it's hard to bolt them on after the fact. Java's next biggest weakness (which will become more evident with value types) is its type-erased generics. Both of these problems lead to time wasted on unnecessary GC, and though they can be worked around with arrays and codegen, it's unwieldy to say the least.

          • pron 14 hours ago

            Project Valhalla will also specialise generics for value types. When you say "it's hard to bolt on", the challenge isn't technical, but how to do this in a way that adds minimal language complexity (i.e. less than in other languages with explicit "boxed" and "inlined" values). Ideally, this should be done in a way that lets the compiler know which types can be inlined (e.g. they don't require identity) and then lets the compiler decide when it wants to actually inline an instance as a transparent optimisation. The challenge would not have been any smaller had Java done this from the beginning.

            • kbolino 13 hours ago

              Maybe I picked the wrong wording--I don't mean to diminish the ambitions or scope of Valhalla--but I definitely think the decision to eschew value types at the start has immense bearing on the difficulty of adding them now.

              Java's major competitors, C# and Go, both have had value types since day one and reified generics since they gained generics; this hasn't posed any major problems to either language (with the former being IMO already more complex than Java, but the latter being similarly or even less complex than Java).

              If the technical side isn't that hard, I'd have expected the JVM to have implemented value types already, making it available to other less conservative languages like Kotlin, while work on smoothly integrating it in Java took as long as needed. Project Valhalla is over a decade old, and it still hasn't delivered, or even seems close to delivering, its primary goals yet.

              Just to be clear, I don't think every language needs to meet every need. The lack of value types is not a critical flaw of Java in general, as it only really matters when trying to use Java for certain purposes. After all, C# is very well suited to this niche; Java doesn't have to fit in it too.

              • pron 12 hours ago

                > Java's major competitors, C# and Go, both have had value types since day one

                Yes (well, structs; not really value types), but at a significant cost to FFI and/or GC and/or user-mode threads (due to pointers into the stack and/or middle of objects). Java would not have implemented value types in this way, and doing it the way we want to would have been equally tricky had it been done in Java 1.0. Reified generics also come at a high price, that of baking the language's variance strategy into the ABI (or VM, if you want). However, value types will be invariant (or possibly extensible in some different way), so it would be possible to specialise generics for them without necessarily baking the Java language's variance model into the JVM (as the C# variance model is baked into the CLR).

                Also, C# and Go didn't have as much of a choice, as their optimising compilers and GCs aren't as sophisticated as Java's (e.g. Java doesn't actually allocate every `new` object on the heap). Java has long tried to keep the language as simple as possible, and have very advanced compilers and GCs.

                > If the technical side isn't that hard, I'd have expected the JVM to have implemented value types already, making it available to other less conservative languages like Kotlin, while work on smoothly integrating it in Java took as long as needed

                First, that's not how we do things. Users of all alternative Java Platform languages (aka alternative JVM languages) combined make up less than 10% of all Java platform users. We work on the language, VM, and standard library all together (this isn't the approach taken by .NET, BTW). We did deliver invokedynamic before it was used by the Java language, but 1. that was after we knew how the language would use it, and 2. that was at a time when the JDK's release model was much less flexible.

                Second, even if we wanted to work in this way, it wouldn't have mattered here. Other Java Platform languages don't just use the JVM. They make extensive use of the standard library and observability tooling. Until those are modified to account for value types, just a JVM change would be of little use to those languages. The JVM comprises maybe 25% of the JDK, while Kotlin, for example, makes use of over 95% of the JDK.

                Anyway, Project Valhalla has taken a very long time, but it's making good progress, and we hope to deliver some of its pieces soon enough.

          • pjmlp 10 hours ago

            Currently it takes lots of boilerplate code; however, with the Project Panama API you can model C types in memory, thus kind of already using value types even if Valhalla isn't here yet.

            To avoid manually writing all the Panama boilerplate, you can instead write a C header file with the desired types and then run jextract on it.

      • quotemstr 13 hours ago

        Heavyweight startup? What are you talking about? A Graal-compiled Java binary starts in a few milliseconds. Great example of how people don't update prejudices for decades.

    • jrop 16 hours ago

      Just going to jump in here and say that there's another reason I might want Rust with a Garbage Collector: The language/type-system/LSP is really nice to work with. There have indeed been times that I really miss having enums + traits, but DON'T miss the borrow checker.

      • tuveson 15 hours ago

        Maybe try a different ML-influenced language like OCaml or Scala. The main innovation of Rust is bringing a nice ML-style type system to a more low level language.

        • Yoric 15 hours ago

          Jane Street apparently has a version of OCaml extended with affine types. I'd like to test that, because that would (almost) be the best of all worlds.

          • nobleach 10 hours ago

            I think you're referring to OxCaml. I'd love to see this make a huge splash. Right now one of the biggest shortcomings of OCaml is that one is still stuck implementing so much stuff from scratch. Languages like Rust, Go and Java have HUGE ecosystems. OCaml is just as old as these languages (even older than Rust: OCaml inspired Rust, and Rust's original compiler was written in OCaml), but since it's not been as popular, it's hard to find well-supported libraries.

        • umanwizard 13 hours ago

          There are other nice things about Rust over OCaml that are mainly just due to its popularity. There are libraries for everything, the ecosystem is polished, you can find answers to any question easily, etc. I don't think the same can be said for OCaml, or at least not to the same extent. It's still a fairly niche language compared to Rust.

          • nobleach 10 hours ago

            I remember that about 5 years ago, Stack Overflow for OCaml was a nightmare. It was a mishmash of Core (from Jane Street), Batteries, and raw OCaml. New developers were confronted with the prospect of opening multiple libraries with the same functionality (not the correct way of solving any problem).

    • zamalek 16 hours ago

      - Rust is a nice language to use

    • tayo42 15 hours ago

      What other language has modern features like Rust and is compiled?

      • procaryote 15 hours ago

        It depends completely on what you count as "modern features".

        • tayo42 14 hours ago

          Pattern matching, usable abstractions, non-null types, tagged unions (or whatever enums are), build tools, etc.

          • procaryote 12 hours ago

            This sounds more like "this is what I like in rust" than "features any modern language should have" though

            If you like Rust, use Rust. It's very likely the best Rust.

            • lmm 5 hours ago

              > This sounds more like "this is what I like in rust" than "features any modern language should have" though

              Good build tooling has been around since 2004, and all of the rest of those features have been around since the late 1970s. There's really no excuse for a language not having all of them.

            • antonvs 5 hours ago

              That’s definitely a list of features that any modern language should have. It’s in no way specific to Rust.

          • pjmlp 10 hours ago

            Standard ML from 1983, alongside all those influenced by it like Haskell, OCaml, Agda, Rocq,....

            • lmm 5 hours ago

              Most of those have nothing remotely approaching Rust's level of build tooling.

              • pjmlp 2 hours ago

                Yet parent was mostly talking about type systems.

                If you prefer, Rust tooling is still quite far behind from languages like Kotlin and Scala, which I didn't mention, but also have such type system.

                • lmm an hour ago

                  > If you prefer, Rust tooling is still quite far behind from languages like Kotlin and Scala

                  I'm not sure that's true, at least when it comes to specifically build tooling. I'd say Cargo is far ahead of Gradle, Ant, or worst of all SBT, and probably even slightly ahead of Maven (which never really reached critical mass in the Kotlin or Scala ecosystems sadly).

                  • pjmlp an hour ago

                    You are missing the IDE capabilities, maturity of GUI frameworks, a full OS that 80% of the world uses,... the whole tooling package.

          • munificent 13 hours ago

            I'm not sure what you mean by "usable abstractions" and tagged unions are a little verbose because they are defined in terms of closed sets of subtypes, but otherwise Dart has all of those.

            • tayo42 13 hours ago

              Nothing like "oh, you can do that, but with this weird workaround", and nothing that's clunky to use.

  • gizmo686 16 hours ago

    Also, the proposed garbage collector is still opt-in. Only pointers that are specifically marked for GC are garbage collected. This means that most references are still cleaned up automatically when the owner goes out of scope. This greatly reduces the cost of GC compared to making all heap allocations garbage collected.

    This isn't even a new concept in Rust. Rust already has a well-accepted Rc<T> type for reference-counted pointers. From a usage perspective, Gc<T> seems to fit the same pattern.
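    To make the pattern concrete, here's a minimal Rust sketch of how opt-in shared ownership reads with Rc<T> today; the Gc<T> part is only sketched in comments, since it assumes a hypothetical API rather than quoting Alloy's actual signatures.

      use std::rc::Rc;

      fn main() {
          // Opt-in shared ownership: only values wrapped in Rc pay the
          // refcount cost; everything else is dropped deterministically.
          let a = Rc::new(vec![1, 2, 3]);
          let b = Rc::clone(&a); // bumps the count; no deep copy
          println!("strong count = {}", Rc::strong_count(&a));
          drop(b); // count goes back down
          // `a` drops at the end of scope and the Vec is freed immediately.

          // A Gc<T> would slot into the same place in a program:
          //     let n = Gc::new(node);
          // except the memory is reclaimed by a later collection rather than at
          // the last handle's drop. (Gc/node are illustrative names, not
          // necessarily Alloy's exact API.)
      }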

    • zigzag312 14 hours ago

      A language where most of the libraries don't use GC, but which has opt-in GC, would be interesting. For example, only your business-logic code would use GC (so you can write it more quickly), and the parts where you don't want GC are still written in the same language, avoiding the complexity of FFI.

      Add an opt-in JIT for development builds for quick iteration and you don't need any other language. (Except for user scripts where needed.)

  • yoyohello13 15 hours ago

    I love the rust ecosystem, syntax, and type system. Being able to write Rust without worrying about ownership/lifetimes sounds great honestly.

  • victorbjorklund 13 hours ago

    Also, assuming one can mix garbage collection with the borrow checker (is that what it's called in Rust?), one should be able to use GC for things that aren't called that much or aren't that important, and use the normal way for things that benefit from no GC interrupts, etc.

  • drnick1 5 hours ago

    Aren't Rust programs still considerably larger than their C equivalents because everything is statically linked? It's kind of hard to see that as an advantage.

    • paulddraper an hour ago

      No.

      They may be larger because they are doing more work; it depends on the program.

      But no, they don't statically link everything.

  • rixed 15 hours ago

    In all honesty, there are three topics I try to refrain from engaging with on HN, often unsuccessfully: politics, religion, and Rust.

    I don't know what you had to go through before reaching Rust's secure haven, but what you just said is true for the vast majority of compiled languages, which are legion.

    • bregma 13 hours ago

      > politics, religion, and rust

      Is there a real distinction between any of those?

    • quotemstr 13 hours ago

      It's the fledging of a new generation of developers. Every time I see one of these threads I tell myself, "you, too, were once this ignorant and obnoxious". I don't know any cure except letting them get it out of their system and holding my nose while they do.

      • Ar-Curunir 3 hours ago

        Well you might find it good to learn that Rust is based on plenty of ideas dating back decades, so _your_ obnoxious and patronizing attitude is unwarranted.

        • quotemstr 2 hours ago

          Rust gets some things right and some things wrong. Its designers are generally clueful, but like all humans, fallible. But what does this discussion have to do with Rust exactly? Exactly the same considerations would apply to a C++ GC.

          The only thing more cringe than insisting on a GC strategy without understanding the landscape is to interpret everything as an attack on one's favored language.

  • jadenPete 10 hours ago

    Rust's choice of constructs also makes writing safe and performant code easy. Many other compiled languages lack proper sum and product types, and Rust's traits (type classes) offer polymorphism without many of the pitfalls of inheritance, to name a couple of examples.

  • fithisux 11 hours ago

    I really like your work

  • James_K 14 hours ago

    Go is probably a better pick in this case.

    • throwaway127482 3 hours ago

      With data intensive Go applications you eventually hit a point where your code has performance bottlenecks that you cannot fix without either requiring insane levels of knowledge on how Go works under the hood, or using CGo and incurring a high cost for each CGo call (last I heard it was something like 90ns), at which point you find yourself regretting you didn't write the program in Rust. If GC in Rust could be made ergonomic enough, I think it could be a better default choice than Go for writing a compiled app with high velocity. You could start off with an ergonomic GC style of Rust, then later drop into manual mode wherever you need performance.

  • imtringued 12 hours ago

    The problem with conventional garbage collection has very little to do with the principle or algorithms behind garbage collection and more to do with the fact that seemingly every implementation has decided to only support a single heap. The moment you can have isolated heaps almost every single problem associated with garbage collection fades away. The only thing that remains is that cleaning up memory as late as possible is going to consume more memory than doing it as early as possible.

    • tuveson 11 hours ago

      What problem does that solve with GC, specifically? It also seems like that creates an obvious new problem: If you have multiple heaps, how do you deal with an object in heap A pointing to an object in heap B? What about cyclic dependencies between the two?

      If you ban doing that, then you’re basically back to manual memory management.

      • paulddraper an hour ago

        There’s a ton of work that goes into multi-generational management, incremental vs stop the world, frequency heuristics, etc.

        A lot of the challenge is that there is not just one universal answer for these; the optimal strategies vary case by case.

        You are correct that each memory arena is the boundary of the GC. Any GC between them must be handled manually.

    • grogers 9 hours ago

      BEAM (i.e. erlang) is exactly that model, every lightweight process has its own heap. I don't see how you'd make that work in a more general environment that supports sharing pointers across threads.

taylorallred 15 hours ago

For those who are interested, I think that arena allocation is an underrated approach to managing lifetimes of interconnected objects that works well with borrow checking.
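To make that concrete, here is a minimal sketch using the typed-arena crate (an assumed dependency; other arena libraries work similarly). The borrow checker ties every allocated reference to the arena's lifetime, so interconnected nodes, even cycles, are fine, and everything is freed in one go when the arena is dropped:

  // Cargo.toml: typed-arena = "2"  (assumed dependency)
  use std::cell::Cell;
  use typed_arena::Arena;

  struct Node<'a> {
      value: u32,
      // Links between nodes are plain shared references into the same arena.
      next: Cell<Option<&'a Node<'a>>>,
  }

  fn main() {
      let arena = Arena::new();
      let a = arena.alloc(Node { value: 1, next: Cell::new(None) });
      let b = arena.alloc(Node { value: 2, next: Cell::new(None) });

      // Interconnected (even cyclic) links are fine: nothing outlives the arena.
      a.next.set(Some(b));
      b.next.set(Some(a));

      let start = &*a; // reborrow as shared to walk the links
      println!("{} -> {}", start.value, start.next.get().unwrap().value);
  } // the arena, and every Node in it, is freed here in one go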

  • haberman 10 hours ago

    I agree, but in my experience arena allocation in Rust leaves something to be desired. I wrote something about this here: https://blog.reverberate.org/2021/12/19/arenas-and-rust.html

    I was previously excited about this project which proposed to support arena allocation in the language in a more fundamental way: https://www.sophiajt.com/search-for-easier-safe-systems-prog...

    That effort was focused primarily on learnability and teachability, but it seems like more fundamental arena support could help even for experienced devs if it made patterns like linked lists fundamentally easier to work with.

  • worik 11 hours ago

    > works well with borrow checking.

    Yes, because it defeats borrow checking.

    Unsafe Rust, used directly, works too

    • celeritascelery 10 hours ago

      It does not defeat borrow checking. The borrow checker will ensure that objects do not outlive the arena. It works with borrow checking.

      • Archit3ch 10 hours ago

        This. Arenas don't work when you don't know when it's okay to free. The borrow checker can help with that (or you can track it manually in C/Zig).

      • worik 3 hours ago

        The borrow checker knows nothing about your arena allocations.

        That is if we are talking about the same thing!

        All the borrow checker knows is that there is a chunk of memory (the arena) in scope.

        It works, but there is no memory safety in the sense that you must manage your own garbage and you can reference uninitialized parts of the arena.

        I have found myself using arenas in Rust for managing circular references (networks with cycles) and if I were to do it again I think I would write that bit in C or unsafe Rust.

FridgeSeal 9 hours ago

I don’t understand the desire to staple a GC into Rust.

If you want this, you might just…want a different language? Which is fine and good! Putting a GC on Rust feels like putting 4WD tyres on a Ferrari sports car and towing a caravan with it. You could (maybe) but it feels like using the wrong tool for the job.

  • lmm 5 hours ago

    Adding a GC to Rust might honestly be easier than getting the OCaml ecosystem to adopt something that works as well as cargo. It's tragic, but that's the world we live in.

  • dajonker 9 hours ago

    If I understand the article correctly, it's for those cases where you want memory safety (i.e. not using "unsafe") but where the borrow checker is really hard to work with, such as a doubly linked list, where nodes point to each other.

    For the rest you'd still use non-GC Rust.

    • zozbot234 9 hours ago

      A doubly linked list is not the optimal case for GC. It can be implemented with some unsafe code, and there are approaches that implement it safely with GhostCell (or similar facilities, e.g. QCell) plus some zero-overhead (mostly) "compile time reference counting" to cope with the invariants involved in having multiple references simultaneously "own" the data. See e.g. https://github.com/matthieu-m/ghost-collections for details.

      Where GC becomes necessary is the case where even static analysis cannot really mitigate the issue of having multiple, possibly cyclical references to the same data. This is actually quite common in some problem domains, but it's not quite as simple as linked lists.

    • FridgeSeal 9 hours ago

      I just foresee it becoming irrevocably viral, as it becomes the "meh, easier" option, and then suddenly half your crates depend on it, and then you're losing one of the major advantages of the language.

gwbas1c 14 hours ago

> Having acknowledged that pointers can be 'disguised' as integers, it is then inevitable that Alloy must be a conservative GC

C# / dotnet don't have this issue. The few times I've needed a raw pointer to an object, first I had to pin it, and then I had to make sure that I kept a live reference to the object while native code had its pointer. This is "easier done than said" because most of the time it's passing strings to native APIs, where the memory isn't retained outside of the function call, and there is always a live reference to the string on the stack.

That being said, because GC (in this implementation) is opt-in, I probably wouldn't mix GC and pointers. It's probably easier to drop the requirement to get a pointer to a GC<T> instead of trying to work around such a narrow use case.

  • MereInterest 9 hours ago

    Even their so-called conservative assumption is also insufficient.

    > if a machine word's integer value, when considered as a pointer, falls within a GCed block of memory, then that block itself is considered reachable (and is transitively scanned). Since a conservative GC cannot know if a word is really a pointer, or is a random sequence of bits that happens to be the same as a valid pointer, this over-approximates the live set

    Suppose I allocate two blocks of memory, convert their pointers to integers, then store the values `x` and `x^y`. At this point, no machine word points to the second allocation, and so the GC would consider the second allocation to be unreachable. However, the value `y` could be computed as `x ^ (x^y)`, converted back to a pointer, and accessed. Therefore, their reachability analysis would under-approximate the live set.

    If pointers and integers can be freely converted to each other, then the GC would need to consider not just the integers that currently exist, but also every integer that could be produced from the integers that currently exist.
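    To spell the trick out, here is a contrived Rust sketch of that scenario: only `x` and `x ^ y` are ever stored as machine words, so a conservative scan sees no pointer to the second allocation, yet the program can still recover and use it.

      fn main() {
          // Two heap allocations; keep only `x` and `x ^ y` as live words.
          let first = Box::new(1u64);
          let second = Box::new(2u64);

          let x = Box::into_raw(first) as usize; // address of the first block
          let hidden = x ^ (Box::into_raw(second) as usize); // x ^ y; y itself is stored nowhere

          // A conservative scan of (x, hidden) finds a pointer to the first
          // block only; the second block looks unreachable.

          // Yet y is trivially recoverable, so collecting it would be unsound:
          let y = (x ^ hidden) as *mut u64;
          unsafe {
              println!("recovered: {}", *y);
              // free both blocks so this demo itself doesn't leak
              drop(Box::from_raw(x as *mut u64));
              drop(Box::from_raw(y));
          }
      }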

    • kmeisthax 6 hours ago

      What you're describing is not just a problem with GC, but pointers in general. Optimizers would choke on exactly the same scheme.

      What compiler writers realized is that pointers are actually not integers, even though we optimize them down to be integers. There's extra information in them we're forgetting to materialize in code, so-called "pointer provenance", that optimizers are implicitly using when they make certain obvious pointer optimizations. This would include the original block of memory or local variable you got the pointer from as well as the size of that data.

      For normal pointer operations, including casting them to integers, this has no bearing on the meaning of the program. Pointers can lower to integers. But that doesn't mean constructing a new pointer from an integer alone is a sound operation. That is to say, in your example, recovering the integer portion of y and casting it to a pointer shouldn't be allowed.

      There are two ways in which the casting of integers to pointers can be made a sound operation. The first would be to have the programmer provide a suitably valid pointer with the same or greater provenance as the one that provided the address. The other, which C/C++ went with for legacy reasons, is to say that pointers that are cast to integers become 'exposed' in such a way that casting the same integer back to a pointer successfully recovers the provenance.

      If you're wondering, Rust supports both methods of sound int-to-pointer casts. The former is uninteresting for your example[0], but the latter would work. The way that 'exposed provenance' would lower to a GC system would be to have the GC keep a list of permanently rooted objects that have had their pointers cast to integers, and thus can never be collected by the system. Obviously, making pointer-to-integer casts leak every allocation they touch is a Very Bad Idea, but so is XORing pointers.

      Ironically, if Alloy had done what other Rust GCs do - i.e. have a dedicated Collect trait - you could store x and x^y in a single newtype that transparently recovers y and tells the GC to traverse it. This is the sort of contrived scenario where insisting on API changes to provide a precise collector actually gets what a conservative collector would miss.

      [0] If you're wondering what situations in which "cast from pointer and int to another pointer" would be necessary, consider how NaN-boxing or tagged pointers in JavaScript interpreters might be made sound.

  • GolDDranks 12 hours ago

    Also, Rust is not going to allow, in the long run, pointers to be disguised as integers. There is this thing called pointer provenance, and some day all pointers will be required to have provenance (i.e. a proof of where they came from) OR to admit that POOF, this is a pointer out of thin air, and you can't assume anything about the pointee. As long as there are no POOF magicians, the GC can assume that it knows every reference!
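    A minimal sketch of the two routes, assuming a recent toolchain where the strict/exposed provenance APIs (addr/with_addr and expose_provenance/with_exposed_provenance) are stable; the second route is the "POOF" escape hatch that a GC (or optimizer) has to treat pessimistically:

      fn main() {
          let b = Box::new(42u32);
          let p: *const u32 = &*b;

          // Provenance-preserving route: the integer is just an address, and the
          // new pointer explicitly inherits provenance from an existing pointer.
          let addr: usize = p.addr();
          let q: *const u32 = p.with_addr(addr); // same provenance as `p`
          unsafe { assert_eq!(*q, 42) };

          // "Out of thin air" route: only sound because the original pointer was
          // explicitly exposed first; anything exposed has to be assumed reachable.
          let exposed: usize = p.expose_provenance();
          let r: *const u32 = std::ptr::with_exposed_provenance(exposed);
          unsafe { assert_eq!(*r, 42) };
      }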

    • celeritascelery 10 hours ago

      > As long as there are no POOF magicians, the GC can assume that it knows every reference!

      Creating pointers without provenance is safe, so the GC can't assume that a program won't have them and still be sound. This will always be an issue.

  • quotemstr 13 hours ago

    Worse, conservatism in a GC further implies it can't be a moving GC, which means you can't compact, use bump pointer allocation, and so on. It keeps you permanently behind the frontier.

    I remain bitterly disappointed that so much of the industry is so ignorant of the advances of the past 20 years. It's like it's 1950 and people are still debating whether their cloth and wood airplanes should be biplanes or triplanes.

    • gwbas1c 11 hours ago

      The thing I don't understand is why anyone would pass a pointer to a GC'ed object into a 3rd-party library (that's in a different language) and expect the GC to track the pointer there.

      Passing memory into code that uses a different memory manager is always a case where automatic memory management shouldn't be used. I.e., when I'm using a 3rd-party library in a different language, I don't expect it to know enough about my language's memory model to be able to effectively clean up pointers that I pass to it.

      • quotemstr 2 hours ago

        > The thing I don't understand is why anyone would pass a pointer to a GC'ed object into a 3rd party library

        The promise of GC is to free the programmer from the burden of memory management. If I can't give (perhaps fractional) ownership of a data structure to a library and expect its memory to be reclaimed at the appropriate time, have I freed myself from the burden of memory management?

vsgherzi 12 hours ago

No one seems to have called it out yet, but Swift uses a form of garbage collection and remains relatively fast. I was against this at first, but the more I think about it, the more I think it has real potential to make lots of hard ownership problems easier to solve. I think the next big step, or perhaps an alternative, would be to change the restrictions in unsafe Rust.

I think the pursuit of safety is a good goal, and I could see myself opting into garbage collection for certain tasks.

  • worik 12 hours ago

    Swift uses reference counting

    Slows down every access to objects as reference counts must be maintained

    Something weird that I never bothered with to enable circular references

    • Someone an hour ago

      > Slows down every access to objects as reference counts must be maintained

      Definitely not every access. Between an "increase refcount" and a "decrease refcount" you can access an object as many times as you want.

      Also:

      - static analysis can remove increase/decrease pairs.

      - Swift structs are value types, and not reference counted. That means Swift code can have fewer reference-counted objects than similar Java code has garbage-collected objects.

      It does perform slower than GC-ed languages or languages such as C and Rust, but it is easier to write [1] than Rust and C and needs less memory than GC-ed languages.

      [1] The latest Swift is a lot more complex than the original Swift, but high-level code still can be reasonably easy.

    • marcianx 9 hours ago

      Reference-counted pointers can dereference an object (via a strong pointer) without checking the reference count. The reference count is accessed only on operations like clone, destruction, and such. That being said, access via a weak pointer does require a reference-count check.
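      For comparison, Rust's Rc shows the same split (an analogy, not Swift's ARC itself): plain dereferences touch no counters; only clone/drop and Weak::upgrade do.

        use std::rc::{Rc, Weak};

        fn main() {
            let strong = Rc::new(String::from("hello"));

            // Plain access through a strong handle: an ordinary pointer
            // dereference, with no reference-count traffic at all.
            println!("{} ({} bytes)", *strong, strong.len());

            // The counter is only touched on clone and drop...
            let strong2 = Rc::clone(&strong); // count: 1 -> 2
            drop(strong2);                    // count: 2 -> 1

            // ...and on upgrading a weak handle, which must check the count
            // because the target may already be gone.
            let weak: Weak<String> = Rc::downgrade(&strong);
            if let Some(s) = weak.upgrade() {
                println!("still alive: {}", s);
            }
        }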

      • fulafel 4 hours ago

        This sounds different from common refcounting semantics in other languages, is it really so in Swift?

        Usually access increases the reference count (to avoid the object getting GC'd while you use it) and weak pointers are the exception where you are prepared for the object reference to suddenly become invalid.

        • Someone 43 minutes ago

          > Usually access increases the reference count

          Taking shared ownership increases the reference count, not access.

rurban 6 hours ago

Memory safety would be a good idea indeed, just to get rid of the unsafeties in the stdlib and elsewhere. With it would also come type safety, because there would be no more unsafe hacks. Concurrency safety would be another hill to die on, as they chose not to approach that goal, with their blocking IO and locks all over.

king_terry 17 hours ago

The whole point of Rust is to not have a garbage collector while not worrying about memory leaks, though.

  • IainIreland 15 hours ago

    One clear use case for GC in Rust is for implementing other languages (eg writing a JS engine). When people ask why SpiderMonkey hasn't been rewritten in Rust, one of the main technical blockers I generally bring up is that safe, ergonomic, performant GC in Rust still appears to be a major research project. ("It would be a whole lot of work" is another, less technical problem.)

    For a variety of reasons I don't think this particular approach is a good fit for a JS engine, but it's still very good to see people chipping away at the design space.

    • nitwit005 9 hours ago

      Once you are generating and running your own machine code, isn't the safety of Rust generally out the window?

    • quotemstr 13 hours ago

      Would you plug Boehm GC into a first class JS engine? No? Then you're not using this to implement JS in anything approaching a reasonable manner either.

      • zorgmonkey 13 hours ago

        It looks like the API of Alloy was at least designed in such a way that the GC implementation can be swapped out somewhat easily down the line, and I really hope they do, because Boehm GC, and conservative GC in general, is much too slow compared to state-of-the-art precise GCs.

        • quotemstr 12 hours ago

          It's not an implementation thing. It's fundamental. A GC can't move anything it finds in a conservative root. You can build partly precise hybrid GCs (I've built a few) but the mere possibility of conservative roots complicates implementation and limits compaction potential.

          If, OTOH, Alloy is handle based, then maybe there's hope. Still a weird choice to use Rust this way.

          • ltratt 12 hours ago

            We don't exactly want Alloy to have to be conservative, but Rust's semantics allow pointers to be converted to usizes (in safe mode) and back again (in unsafe mode), and this is something code really does. So if we wanted to provide an Rc-like API -- and we found reasonable code really does need it -- there wasn't much choice.

            I don't think Rust's design in this regard is ideal, but then again what language is perfect? I designed languages for a long while and made far more, and much more egregious, mistakes! FWIW, I have written up my general thoughts on static integer types, because it's a surprisingly twisty subject for new languages https://tratt.net/laurie/blog/2021/static_integer_types.html

            • quotemstr 10 hours ago

              > We don't exactly want Alloy to have to be conservative, but Rust's semantics allow pointers to be converted to usizes (in safe mode) and back again (in unsafe mode), and this is something code really does. So if we wanted to provide an Rc-like API -- and we found reasonable code really does need it -- there wasn't much choice.

              You can define a set of objects for which this transformation is illegal --- use something like pin projection to enforce it.

              • ltratt 10 hours ago

                The only way to forbid it would be to forbid creating pointers from `Gc<T>`. That would, for example, preclude a slew of tricks that high performance language VMs need. That's an acceptable trade-off for some, of course, but not all.

                • quotemstr 9 hours ago

                  Not necessarily. It would just require that deriving these pointers be done using an explicit lease that would temporarily defer GC or lock an object in place during one. You'd still be able to escape from the tyranny of conservative scanning everything.

  • GolDDranks 17 hours ago

    The mechanisms that Rust provides for memory management are varied. Having a GC as a library for use cases with shared ownership / handles-to-resources is not out of the question. The problem is that they have been hard to integrate with the language.

    • jvanderbot 17 hours ago

      While you're of course correct, there's just something that feels off. I'd love it if we kept niche, focused-purpose languages once in a while, instead of having every language do everything. If you prioritize everything, you prioritize nothing.

      • GolDDranks 17 hours ago

        I agree specifically with regards to GC; I think that focusing on being an excellent language for low-level programming (linkable language-agnostic libraries, embedded systems, performance-sensitive systems, high-assurance systems etc.) should continue being the focus.

        However, this is 3rd party research. Let all flowers bloom!

      • sebastianconcpt 17 hours ago

        > If you prioritize everything you prioritize nothing

        Well...

        If you prioritize everything you prioritize generalism.

        (the "nothing" part comes from our natural limitation in paying enough multidisciplinary attention to details, but that psychological impression is being nuked by AI as we speak, and the efforts to get to AGI are an attempt to make synthetic "intelligence" able to gain dominion over this spirit before us)

        • virgilp 16 hours ago

          No, they are quite identical. Both cases logically lead to "now everything has the same priority". There's nothing about generalism in there.

      • bryanlarsen 15 hours ago

        Just like when hiring developers, there's an advantage in choosing "jack of all trades, master of some".

  • Manishearth 15 hours ago

    Worth highlighting: library-level GC would not be convenient enough to use pervasively in Rust anyway. Library-level GC does not replace Rust's "point".

    It's useful to have when you have complex graph structures. Or when implementing language runtimes. I've written a bit about these types of use cases in https://manishearth.github.io/blog/2021/04/05/a-tour-of-safe...

    And there's a huge benefit in being able to narrowly use a GC. GCs can be useful in gamedev, but it's a terrible tradeoff to need to use a GC'd language to get them, because then everything is GCd. library-level GC lets you GC the handful of things that need to be GCd, while the bulk of your program uses normal, efficient memory management.

    • zorgmonkey 13 hours ago

      This is a very important point: careful use of a GC for a special subset of allocations, ones that, say, have tricky lifetimes for some reason and aren't performance-critical, could have a much smaller impact on overall application performance than people might otherwise expect.

      • Manishearth an hour ago

        Yeah, and it's even better if you have a GC where you can control when the collection phase happens.

        E.g. in a game you can force collection to run between frames, potentially even picking which frames it runs on based on how much time you have. I don't know if that's a good strategy, but it's an example of the type of thing you can do.

  • hedora 15 hours ago

    Yeah; I wished they'd gone the other way, and made memory leaks unsafe (yes, this means no Rc or Arc). That way, you could pass references across async boundaries without causing the borrow checker to spuriously error out.

    (It's safe to leak a promise, so there's no way for the borrow checker to prove an async function actually returned before control flow is handed back to the caller.)

    • dzaima 15 hours ago

      Same as with GC, neither need be a fixed choice; having a GC library/feature in Rust wouldn't mean that everything will be and must be GC'd; and it's still possible to add unleakable types were it desired: https://rust-lang.github.io/keyword-generics-initiative/eval... while keeping neat things like Rc<T> available for things that don't care. (things get more messy when considering defaults and composability with existing libraries, but I'd say that such issues shouldn't prevent the existence of the options themselves)

  • James_K 14 hours ago

    Actually, memory leaks are the major class of memory error for which Rust offers no protection. See the following safe function in Box:

    https://doc.rust-lang.org/std/boxed/struct.Box.html#method.l...

    • gizmo686 14 hours ago

      Rust's memory safety guarantees do not ensure the absence of leaks. However, Rust's design does offer significant protection against leaks (relative to languages like C, where all heap allocations must be explicitly freed).

      The fact that anyone felt it necessary to add a "leak" function to the standard library should tell you something about how hard it is to accidentally leak memory.

  • umanwizard 13 hours ago

    Not really, modern C++ already makes it about as hard to leak memory as it is in Rust.

    Rust has loads of other advantages over C++, though.

torginus 16 hours ago

While I'm not ideologically opposed to GC in Rust I have to note:

- the syntax is hella ugly

- GC needs some compiler machinery, like precise GC root tracking with stack maps, space for tracking visited objects, type info, read/write barriers, etc. I don't know how you would retrofit this into Rust without doing heavy-duty brain surgery on the compiler. You can do conservative GC without that, but that's kinda lame.

sebastianconcpt 11 hours ago

I'm curious about the applicability.

If memory management is already resolved with the borrow checker rules, then what case can make you want a GC in a Rust program?

  • dajonker 9 hours ago

    Implementing a doubly linked list without either unsafe or some very confusing code that could arguably win an obfuscation contest.
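    For reference, the "safe but confusing" shape looks roughly like this with Rc, Weak and RefCell (a minimal two-node sketch; a real list repeats this dance for every insert and removal):

      use std::cell::RefCell;
      use std::rc::{Rc, Weak};

      // A doubly linked node in safe Rust: strong pointer forward, weak pointer
      // back (to avoid a leaking cycle), everything behind RefCell for mutation.
      struct Node {
          value: i32,
          next: Option<Rc<RefCell<Node>>>,
          prev: Option<Weak<RefCell<Node>>>,
      }

      fn main() {
          let first = Rc::new(RefCell::new(Node { value: 1, next: None, prev: None }));
          let second = Rc::new(RefCell::new(Node { value: 2, next: None, prev: None }));

          // Link the two nodes both ways.
          first.borrow_mut().next = Some(Rc::clone(&second));
          second.borrow_mut().prev = Some(Rc::downgrade(&first));

          // Walking backwards needs a borrow() plus an upgrade() at every hop.
          let back = second.borrow().prev.as_ref().unwrap().upgrade().unwrap();
          println!("{} <-> {}", back.borrow().value, second.borrow().value);
      }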

  • antonvs 5 hours ago

    > If memory management is already resolved with the borrow checker rules

    Even in standard Rust, this only applies to a subset of memory management. That’s why Rust supports reference counting, for example, which is an alternative to borrow checking. But one could make the case that automatic garbage collection was developed specifically to overcome the problems with reference counting. Given that context, GC in Rust makes perfect sense.

Dwedit 15 hours ago

There was one time where I actually had to use object resurrection in a finalizer. It was because the finalizer needed to acquire a lock before running destruction code. If it couldn't acquire the lock, it resurrected the object to give it a second chance to be destroyed (by calling GC.ReRegisterForFinalize).

nu11ptr 15 hours ago

While it might be useful for exploration/academic pursuits/etc., am I the only one who finds "conservative GC" a non-starter? Even if this were fully production-ready and I had a use case for it, I still would never ship an app with a conservative GC. It is difficult enough to remove my own bugs and non-determinism, and I just can't imagine trying to debug a memory leak caused by a conservative GC mistaking stray integers for live pointers and retaining memory it should have freed.

  • ltratt 14 hours ago

    If you've used Chrome or Safari to read this post, you've used a program that uses (at least in parts) conservative GC. [I don't know if Firefox uses conservative GC; it wouldn't surprise me if it does.] This partly reflects shortcomings in our current compilers and in current programming language design: even Rust has some decisions (e.g. pointers can be put in `usize`s) that make it hard to do what would seem at first glance to be the right thing.

    • astrange 11 hours ago

      Also most mobile games written in C# use a conservative GC (Boehm).

      • Rohansi 9 hours ago

        Not just mobile games - all games made with Unity.

worik 12 hours ago

I have thought for years Rust needs to bifurcate.

Async/await really desperately needs a garbage collector. (See this talk from RustConf 2025: https://youtu.be/zrv5Cy1R7r4?si=lfTGLdJOGw81bvpu and this blog: https://rfd.shared.oxide.computer/rfd/400)

Rust that uses standard techniques for asynchronous code, or is synchronous, does not. Async/await sucks all the oxygen from asynchronous Rust.

Async/await Rust is a different language, probably more popular, and worth pursuing (for somebody, not me). It already has a runtime and dreadful hacks like Pin (https://doc.rust-lang.org/std/pin/index.html) that are due to the lack of a garbage collector.

What a good idea

  • sunshowers 11 hours ago

    Hi -- I'm the one who presented the talk -- honored!

    I'm curious how you got to "async Rust needs a [tracing] garbage collector" in particular. While it's true that a lot of the issues here are downstream of futures being passive (which in turn is downstream of wanting async to work on embedded), I'm not sure active futures need a tracing GC. Seems to me like Arc or even a borrow-based approach would work, as long as you can guarantee that the future is dropped before the scope exits (which admittedly isn't possible in safe Rust today [0]).

    [0]: https://without.boats/blog/the-scoped-task-trilemma/

    • worik 3 hours ago

      My comment is quite general.

      The difficulties with async/await seem to me to be with the fact that code execution starts and stops using "mysterious magic", and it is very hard for the compiler to know what is in, and what is out, of scope.

      I am by no means an expert on async/await, but I have programmed asynchronously for decades. I tried using async/await in Rust, Typescript and Dart. In Typescript and Dart I just forget about memory and I pretend I am programming synchronously. Managed memory, runtimes, money in the bank, who is complaining? Not me.

        \digression{start} This is where the first problem I had with async/await cropped up. I do not like things that are one thing and pretend to be another - personally or professionally - and async/await is all about (it seems to me) making asynchronous programming look synchronous. Not only do I not get the point - why? is asynchronous programming hard? - but I find it offensive. That is a personal quibble and not one I expect many others to find convincing. I guess I am complaining.... \digression{end}

      In Rust I swiftly found myself jumping through hoops, and having to add lots and lots of "magic incantations" none of which I needed in the other languages. It has been a while, and I have blotted out the details.

      Having to keep a piece of memory in scope when the scope itself is not in my control made me dizzy. I have not gone back and used async/await but I have done a lot of asynchronous rust programming since, and I will be doing more.

      My push for Rust to bifurcate and become two languages is because async/await has sucked up all the oxygen. Definitely from asynchronous Rust programming, but it has wrecked the culture generally. The first thing I do when I evaluate a new crate is to look for "tokio" in the dependencies - and two out of three times I find it. People are using async/await by default.

      That is OK, for another language. But Rust, as it stands, is the wrong choice for most of those things. I am using it for real time audio processing and it is the right choice for that. But (e.g) for the IoT lighting controller [tapo](https://github.com/mihai-dinculescu/tapo) it really is not.

      I am resigned to my Cassandra role here. People like your good self (much respect for your fascinating talk, much envy for your exciting job) are going to keep trying to make it work. I think it will fail. It is too hard to manage memory like Rust does with a borrow checker with a runtime that inserts and runs code outside the programmer's control. There is a conflict there, and a lot of water is going under the bridge and money down the drain before people agree with me and do what I say...

      Either that or I will be proved wrong

        Lastly, I have to head off one of the most common, and disturbing, counter (non) arguments: I absolutely do not accept that "so many smart people are using it, it must be OK". Many smart people do all sorts of crazy things. I am old enough to have done some really crazy things that I do not like to recall. And anyway, explain Windows: smart people doing stupid things if ever there was a case.

fithisux 11 hours ago

Very important paper.

ape4 17 hours ago

If only there was a C++-like language with garbage collection (Java, C#, etc)

  • jerf 15 hours ago

    I can't be an expert in every GC implementation because there are so many of them, but many of the problems they mention are problems in those languages too. Finalizers are highly desirable to both the authors of the runtimes and the users of the languages, but are generally fundamentally flawed in GC'd languages, to the point that the advice in those languages is to stay away from them unless you really know what you are doing, and then, to stay away from them even so if you have any choice whatsoever... and that's the "solution" to these problems most languages end up going with.

    Which does at least generally work. It's pretty rare to be bitten by these problems, like, less-than-once-per-career levels of rare (if you honor the advice above)... but certainly not unheard of, definitely not a zero base rate.

  • reactordev 16 hours ago

    The latest version of C# is a fantastic choice for this. Java too, but I would lean more toward C# due to the new delegate function pointers for source-generated P/Invoke. Thing of beauty.