Slide with text: “Rust teams at Google are as productive as ones using Go, and more than twice as productive as teams using C++.”
In small print it says the data is collected over 2022 and 2023.
There are of course macros, but they’re kind of a pain to use. Zig’s `comptime fn` is really nice and a similar concept. Rust does have `const fn`, but of course those come with limits on them.
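A minimal sketch of what `const fn` buys you, assuming nothing beyond stable Rust (the limits being that only a subset of the language is usable in const contexts, e.g. no heap allocation):

```rust
// Evaluated entirely at compile time when used in a const context.
const fn fib(n: u32) -> u32 {
    if n < 2 {
        n
    } else {
        fib(n - 1) + fib(n - 2)
    }
}

// Computed by the compiler, not at runtime.
const FIB_10: u32 = fib(10);

fn main() {
    assert_eq!(FIB_10, 55);
}
```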
You kind of get that with Rust for free. You get implicit GC for anything stack allocated, and technically heap allocated values are deterministically freed, which you can work out by tracking their ownership. As soon as the owning scope exits it will be freed. If you want more explicit control you can always invoke `std::mem::drop` to force a value to be freed immediately, but generally you don’t gain much by doing so.
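A tiny sketch of that scope-based cleanup and explicit `drop`:

```rust
fn main() {
    let data = vec![1, 2, 3]; // heap allocation owned by `data`

    // Explicitly free it now instead of waiting for the end of scope.
    // std::mem::drop simply takes ownership and lets the value go out of scope.
    std::mem::drop(data);

    // `data` is no longer usable here; its buffer has already been freed.
} // anything still owned by this scope is freed deterministically here
```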
Some really great work is being done on compile times pretty much all the time but… yeah, I can’t reasonably argue that the Rust compiler is fast. Taking full advantage of incremental compilation helps a lot, but if you’re doing a clean build, better grab a coffee.
What would be nice is if cargo explored a similar solution to what Arch Linux used, where there’s a repository of pre-compiled libraries for various platforms and configurations that can be used to speed up build times. That of course does come with a whole heap of problems though, probably the biggest of which is that it’s a HUGE security nightmare. Of lesser concern is the fact that they could not realistically do so for every possible combination of features or platforms, so it would likely only apply to crates built with the default features for a small subset of the most popular platforms. I’m also not sure what the tree shaking would end up looking like in a situation like that.
Yup, and Rust’s macros are pretty cool, but in D you can just do:
```d
static if (condition) { ... }
```
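The nearest I can manage in Rust without reaching for macros is `cfg`-based conditional compilation, which is a rough sketch only and much narrower than `static if`, since `cfg` predicates test build configuration rather than arbitrary constant expressions:

```rust
// cfg attributes pick one of two implementations at build time.
#[cfg(target_pointer_width = "64")]
fn word_size() -> usize { 8 }

#[cfg(not(target_pointer_width = "64"))]
fn word_size() -> usize { 4 }

fn main() {
    // cfg!() exposes the same predicates as an ordinary bool expression.
    if cfg!(debug_assertions) {
        println!("debug build, word size {}", word_size());
    } else {
        println!("release build, word size {}", word_size());
    }
}
```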
There’s a whole compile-time reflection library as well, so you can take a class and make a super-optimized serialization/deserialization library if you want. It’s super cool, and I built a compile-time JSON library just because I could…
Yup, Rust is awesome.
But in D you can do explicit scope guards:

- `scope(exit)` - basically Go’s `defer()`
- `scope(success)` - only runs when no exceptions are thrown
- `scope(failure)` - only runs when there’s an exception

I didn’t use them much, but they are really cool: you can do explicit cleanup as you go through the logic flow, but defer it until it’s needed.
It’s a neat alternative to RAII, which D also supports.
I still need to try out Cranelift, which was posted here recently. Cranelift release mode could mostly solve this for me.
That said, I haven’t touched D in years since moving to Rust, so I obviously find more value in Rust. But I do miss some of the candy.
Hmm… that is interesting. `scope(exit)` is basically just an inline `std::ops::Drop` impl. I actually think it’s a bad thing that you can mix that randomly into your code as you go instead of collecting all of the cleanup actions into a single function. Reasoning about what happens when something gets dropped seems much more straightforward in the Rust case. For instance, it wasn’t immediately clear that those statements get evaluated in reverse order from how they’re encountered, which is something I assumed but had to check the documentation to verify.
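For reference, the hand-rolled Drop-based version I have in mind looks something like this (a minimal sketch; crates like scopeguard package the same idea up properly):

```rust
// A guard whose Drop impl runs a closure when the scope exits.
struct ScopeGuard<F: FnOnce()>(Option<F>);

impl<F: FnOnce()> Drop for ScopeGuard<F> {
    fn drop(&mut self) {
        if let Some(f) = self.0.take() {
            f();
        }
    }
}

fn main() {
    let _a = ScopeGuard(Some(|| println!("cleanup A")));
    let _b = ScopeGuard(Some(|| println!("cleanup B")));
    println!("doing work");
    // Locals are dropped in reverse declaration order when the scope ends,
    // so this prints "cleanup B" before "cleanup A".
}
```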
`scope(success)` and `scope(failure)` are far more interesting, as I’m not aware of a direct equivalent in Rust. There’s the nightly-only `std::ops::Try` feature that’s somewhat close to that, but not exactly the same. Once again though, I’m not convinced letting you sprinkle these statements throughout the code is actually a good idea.

Ultimately, while it is interesting, I’m actually happy Rust doesn’t have that feature in it. It seems like somewhat of a nightmare to debug and something ripe to end up as a footgun.
It’s a stack, just like Go’s `defer()`.

Probably because Rust doesn’t have exceptions, and I’m pretty sure there are no guarantees with `panic!()`.

Same, but that’s because Rust’s semantics are different. It’s nice to have the option if RAII isn’t what you want for some reason (it usually is), but I absolutely won’t champion it since it just adds bloat to the language for something that can be solved another way.
Well, it has something semantically equivalent while being more explicit, which is `Result` (just like `Option` is the semantic equivalent of `null`).

I actually do quite a bit of bare metal Rust work, so I’m pretty familiar with this. There are sort of guarantees with panic. You can customize the panic behavior with a `panic_handler` function, and you can also somewhat control stack unwinding during a panic using `std::panic::catch_unwind`. The latter requires that the closure you pass to it implement the `UnwindSafe` trait, which is sort of like a combination of `Send + Sync`. That said, Rust very much does not want you to regularly rely on stack unwinding. Anything that’s possible to recover from should use `Result` rather than `panic!()` to signal a failure state.
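As a minimal sketch of the unwinding side (this assumes the default `panic = "unwind"` setting; with `panic = "abort"` there’s nothing to catch):

```rust
use std::panic;

fn main() {
    // The closure passed to catch_unwind must be UnwindSafe;
    // a panic inside it is converted into an Err value.
    let caught = panic::catch_unwind(|| {
        panic!("something went wrong");
    });
    assert!(caught.is_err());

    // A non-panicking closure just yields Ok with its return value.
    let fine = panic::catch_unwind(|| 1 + 1);
    assert_eq!(fine.ok(), Some(2));
}
```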
Yup. My point is just that `scope(failure)` could be problematic because of the way Rust works with error handling.

What could maybe be cool is D’s in/out contracts (example pulled from here):
```d
int fun(ref int a, int b)
in
{
    assert(a > 0);
    assert(b >= 0, "b cannot be negative!");
}
out (r)
{
    assert(r > 0, "return must be positive");
    assert(a != 0);
}
do
{
    // function body
}
```
The `scope(failure)` could partially be solved with the `out` contract. I also don’t use this (I find it verbose and distracting), but maybe that line of thinking could be an interesting way to generically handle errors.

Hmm… I think the Rust-y answer to that problem is the same as the Haskell-y answer: “Use the Types!” I.e. in the example above, instead of returning an `i32` you’d return a `NonZero<u32>`, and your args would be `a: &NonZero<u32>, b: u32`. Basically, make invalid state unrepresentable and then you don’t need to worry about the API being used wrong.
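A minimal sketch of what that might look like for the contract example above (using `std::num::NonZeroU32`, the concrete form of `NonZero<u32>`; the arithmetic in the body is made up just to have something to return):

```rust
use std::num::NonZeroU32;

// The signature now carries the contract: `a` can't be zero and the
// return value can't be zero, so the runtime asserts become unnecessary.
fn fun(a: NonZeroU32, b: u32) -> NonZeroU32 {
    let sum = a.get().saturating_add(b); // a >= 1, so sum >= 1
    NonZeroU32::new(sum).expect("non-zero plus non-negative stays non-zero")
}

fn main() {
    let a = NonZeroU32::new(3).expect("3 is non-zero");
    println!("{}", fun(a, 4)); // prints 7
}
```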
I’m more referring to a more general application, such as:
```rust
fn do_stuff() -> Result<...> {
    if condition {
        return Error(...)
    }
    return Ok(...)
} out (r) {
    if r.is_err() {
        // special cleanup (maybe has access to fn scope vars)
    }
}
```
That gives you some of the `scope(failure)` behavior, without as many footguns. Basically, it would desugar to:

```rust
fn do_stuff() -> Result<...> {
    let ret = if condition { Error(...) } else { Ok(...) };
    if ret.is_err() { ... }
    ret
}
```
I’m not proposing this syntax, just suggesting that something along these lines may be interesting.
I think the issue with that is that it’s a little bit of a solution in search of a problem. Your example of:
```rust
fn do_stuff() -> Result<...> {
    if condition {
        return Error(...)
    }
    return Ok(...)
} out (r) {
    if r.is_err() {
        // special cleanup (maybe has access to fn scope vars)
    }
}
```
isn’t really superior in any meaningful way (and is arguably worse in some ways) to:
```rust
fn do_stuff() -> Result<...> {
    if condition {
        // special cleanup (maybe has access to fn scope vars)
        return Error(...)
    }
    return Ok(...)
}
```
For more complicated error handling, the various functions on `Result` probably have all the bases covered.
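For instance, something along these lines, using only combinators that actually exist on `Result` (the validation logic itself is made up for the sketch):

```rust
// Hypothetical example: parse and validate an input, with the
// "on failure" behavior handled explicitly rather than via a scope guard.
fn do_stuff(input: &str) -> Result<u32, String> {
    let result = input
        .parse::<u32>()
        .map_err(|e| format!("not a number: {e}")) // transform the error
        .and_then(|n| if n > 0 { Ok(n) } else { Err("must be positive".into()) });

    if let Err(e) = &result {
        eprintln!("do_stuff failed: {e}"); // the scope(failure)-style cleanup, made explicit
    }
    result
}

fn main() {
    assert_eq!(do_stuff("7"), Ok(7));
    assert!(do_stuff("abc").is_err());
}
```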
For what it’s worth, a lot of my day-to-day professional work is actually in Java, and our code base has adopted various practices inspired by Rust and Haskell. We completely eliminated null from our code, use Optional everywhere, and use a compile-time static analysis tool to validate that. As for exception handling, we’re using the Reactor framework, which provides a type very similar to Result, and we essentially never directly throw or catch exceptions any more; it’s all handled with the functions Reactor provides for error handling.
I just don’t think the potential footguns introduced by `null` and `exception`s are worth it; the safer type-level abstractions of `Option` and `Result` are essentially superior to them in every way.