Rust tries very hard to make clear it only prevents “data races”, which is just an important subset of race conditions.
I see plenty of potential in Rust, but it’ll need to stabilise as a language and as an ecosystem of well known, well tested libraries before I’d consider writing anything important in it.
Also, to be blunt, a skilled C++ programmer using modern idioms already writes code that leaks no memory, has no race conditions and is extremely tough to exploit with ROP chain attacks, far more so than C code. Not to toot my own horn here, but Boost.AFIO has undergone an enormous number of changes in recent months, and not a single memory corruption, race condition nor memory leak has occurred in any commit tested by Jenkins in over six months. And that’s even in experimental branch commits.
Now, where I get interested in Rust is that writing high quality C++ using modern idioms requires a lot of concentration and is slow going, so your productivity is low. I’ve been doing a lot of 12am - 5am shifts on AFIO this past month, and writing bug-free C++ when you are exhausted is very hard and slow, much harder and slower than writing in Python etc. If Rust’s safe mode lets me take less care when coding without risking correctness, that’s a big win for me. However, I observe that Rust has an even more complicated type, memory and concurrency model than C++'s, so maybe in fact Nim is the future of statically typed languages. It starts from something like “what if Python were a fully compiled language which compiles into C?” and goes from there. That’s how C++ started, as a front-end compiler into C, and it did well out of that.
Niall
That’s wise, I’ve tried to clear up some of these things for you…
Java handles memory with a garbage collector: at runtime the live objects are crawled through to see if there are still references to them, which, as you seem to be aware, does not guarantee freedom from memory leaks. Rust handles memory with a compile-time ownership system: if you own an object and it goes out of scope, the memory can be freed. There is no way to leak memory unless you touch unsafe code. [Edit: I should probably note that you can still leak memory through logical errors, such as maintaining a reference to something you no longer need.]
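A minimal sketch of what the ownership model looks like in practice (the values here are purely illustrative):
fn main() {
    let s = String::from("hello"); // `s` owns the heap allocation
    let t = s;                     // ownership moves to `t`; `s` can no longer be used
    // println!("{}", s);          // ERROR: use of moved value `s`
    println!("{}", t);
}                                  // `t` goes out of scope here and the memory is freed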
This is a common misconception about Rust - it does not prevent race conditions, only data races (you can’t have multiple threads writing to a variable at once, or one writing while another reads, for example). This is possible, again, thanks to the ownership system: you can only modify a variable if you either own it or have mutably borrowed it. The compiler enforces that there can only be one mutable reference at a time, and that you can’t have both mutable and immutable references to something at once, which is what gives you data-race freedom. Other kinds of race conditions (and even deadlocks) are very much possible.
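A small sketch of the aliasing rule just described (illustrative only):
fn main() {
    let mut v = vec![1, 2, 3];
    let r = &v[0];     // immutable borrow of `v`
    // v.push(4);      // ERROR: cannot borrow `v` as mutable because it is
    //                 // also borrowed as immutable
    println!("{}", r); // the immutable borrow is still in use here
}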
The benefit here, exactly as you point out, is that there’s no risk of a security vulnerability; from a user’s perspective there’s no difference. It’s often possible to avoid the issues entirely though - APIs will often provide ways to avoid hitting out-of-bounds conditions in the first place (iterators, the Option ADT, etc).
Rust is a completely different language from what it was 10 months ago; it’s probably worth taking a peek again, with it stabilising/hitting 1.0 in the next couple of months. I won’t comment on the C++ point - I don’t want to accidentally start a flame war.
@ned14 I didn’t quote you directly, but I agree with everything you said. The time-consuming part is a huge negative. I’m also not sure any language matches the power of the C++ template language (D?), but that power comes at an even bigger cost in time.
There will be plenty of code in the wild that does an abort (or equivalent) in the None case when visiting an Option type. The effect will be similar to an unexpected null pointer exception in Java. This is why I don’t think Rust has completely removed these issues from the language. I do think it will dramatically reduce them, since null/uninitialized values are needed far less often than pointers are.
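A tiny illustration of that abort-on-None case (hypothetical values):
fn main() {
    let maybe: Option<i32> = None;
    let value = maybe.unwrap(); // panics at runtime - the moral equivalent
                                // of an unexpected NullPointerException
    println!("{}", value);
}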
I don’t think it’s anything new or different from C++ (std::vector::at), although Rust does try harder to “force” you into correct code (either an unchecked foreach loop or a checked direct index access).
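For reference, a small sketch of the access styles being contrasted (illustrative values):
fn main() {
    let v = vec![10, 20, 30];
    for x in &v {              // iteration: no per-element bounds check needed
        println!("{}", x);
    }
    println!("{}", v[1]);      // direct indexing: bounds-checked, panics if out of range
    match v.get(10) {          // .get(): returns an Option instead of panicking
        Some(x) => println!("{}", x),
        None => println!("index out of range"),
    }
}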
The smart pointer types in Rust are nearly a duplicate of the C++ ones (someone let me know if !true). The main gain in the language design is that the compiler prevents accesses to an object that was the source of a move operation. This is why there are “no null pointers” - the source of the pointer move is marked as dead. This isn’t stopped by C++, and it’s a real pain in certain situations. However, I think Rust still has memory cycles with its shared pointer type std::rc::Rc, which would act exactly the same as a memory leak. Two live objects pointing at each other, referenced by nothing else, is still a blatant memory leak if no object in the execution path can access either cycled object. I think you have to implement a proper GC to handle this situation.
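A minimal sketch of the kind of Rc cycle being described (using RefCell for interior mutability; the names are illustrative):
use std::cell::RefCell;
use std::rc::Rc;

struct Node {
    other: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { other: RefCell::new(None) });
    let b = Rc::new(Node { other: RefCell::new(Some(a.clone())) });
    *a.other.borrow_mut() = Some(b.clone()); // a -> b and b -> a

    // When `a` and `b` go out of scope, the two nodes keep each other's
    // reference counts above zero, so neither is freed - a leak, exactly as
    // with cyclic shared_ptr in C++. Weak references are the usual fix.
}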
Rust still has references too, which don’t affect the original smart pointers, exactly like C++. I’ve been looking to see whether the compiler can do analysis to determine whether the use of a dead reference is possible, but I’m guessing there are some gotchas where it’s tough for the compiler to “see” such an issue.
I already typed this above - what about memory cycles with shared semantics? It appears Rust has merely duplicated the smart pointers from C++. I’m repeating myself again, but the only improvement is that the source of a move operation is marked as dead. That is huge, and its absence causes pain in C++, but the rest is functionally equivalent.
And what about references (currently listed on the main page)? Are those only usable in “unsafe” code? If you can put a reference inside an object allocated on the heap, you’ll still have the same issues as C++ unless the reference magically turns into a shared count. Are references part of the “unsafe” code? Or does the language prevent the usage of references in some scenarios (I don’t see a mention of that anywhere)?
I feel like the evangelists aren’t clearly stating as much though, or at least it’s confusing people because it’s an easy talking point. I think @mrmonday confirmed as much by calling it a common misconception. If Rust does become popular, it could be one of those things that drives me up the wall with lazy programmers who feel like it solves all of their problems. I think competent programmers will leverage that static analysis quite well (hopefully).
I think if you had smart pointers as const types and enforced them that way (Rust has no notion of a const pointer to a non-const type or vice versa; it’s all const or not) then this holds even more. I have not done any code so far with Rc or Arc (atomically reference counted), but as far as I can see they still enforce ownership semantics just as references do (references are called borrowed types or similar; you can borrow a reference safely, but you must at some stage give it back or you will cause a leak).
I think the ownership model is a huge win for me; it is a real pain when I see lots of sharing in code, as it feels alien. My view is: share immutable stuff and nothing else. This is maybe an attraction for me - not typing const, but typing mut instead, so you opt out of immutability rather than opt in. Even semantically it feels better, regardless of the extra protection offered.
In the end this is a systems language that can be easily adopted by non-systems programmers, who will still be able to cause deadlocks etc. if they try. As a systems programmer this is just so much easier to handle, as it packages things in a nice responsibility-managed fashion and defaults to a position of safety, with bolt holes through unsafe and FFI (where everything is unsafe).
I also think if you look at large code bases, there are data races and memory leaks via bounds issues all over the place (and they are widely used to exploit systems). Where sharing is now widely used (via non-const shared_ptr) the complexity is going to get much worse, especially as the destructor gets sidelined/ignored, as Sean Parent points out.
So writing safe code is entirely possible in C++, especially 11/14, but it is very hard: you need to opt in to const, prohibit unsafe sharing (which is hard) and then use RAII or some rollback mechanism for exception safety. What I am observing is that in Rust handling this stuff is just a done deal, the default. I think our code is very well done and we use sanitisers to get as safe as we can, but we have build complexity, coupling (which makes modularity very hard) and the cross-platform compiler incompatibilities that drive us all daft. Removing much of that means huge productivity increases to me.
Less time implementing and more time getting the core logic correct is very appealing.
[edit] I should add that this is not a God language - it will have deficiencies and we will find them. My claim is that this is a vastly superior way to implement systems-level code, and potentially application code too, as many projects are already showing. So I’m not evangelising, but if a nail gun came out and we all used hammers, then even the early nail guns improved productivity, with all their faults. It is just evolution, I believe.
I don’t think I can answer your question sufficiently, since I don’t know C++. It may be worth taking a look at GitHub - rust-lang/rust: Empowering everyone to build reliable and efficient software., which is designed specifically for C++ programmers. It’s not possible to create memory cycles without using smart pointers, at which point you can get memory leaks (e.g. Gc and Rc from the standard library). I was likely over-enthusiastic in my last post - sorry.
You mention unsafe a few times - unsafe code is rare in Rust; you generally do not need to write it or use it unless you’re interfacing with C, or perhaps implementing an unusual data structure which doesn’t exist in the standard library (or can’t be built on top of what’s there). References are used all the time in safe Rust.
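A minimal sketch of the interfacing-with-C case (binding the C library’s abs(), purely as an illustration):
extern "C" {
    fn abs(input: i32) -> i32; // declared in the C standard library
}

fn main() {
    // Calling across the FFI boundary is one of the few places `unsafe` shows up.
    let x = unsafe { abs(-3) };
    println!("{}", x);
}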
Hi, Rust user here (I found this thread through a link in the Rust subreddit).
A core part of Rust’s semantics is that the lifetime and borrowing system ensures at compile time that a reference will never point at invalid memory, or ever be mutably aliased with another one.
The gotcha is not that this analysis sometimes falls down, but rather that it is sometimes too conservative in order to catch all cases. (There are official safe solutions for working around many of these issues though.)
Example:
let rc: Rc<T> = /* ... */;
let r: &T = &*rc;
This creates a shared reference &T into the inside of the Rc<T>, which borrows the rc value for as long as the reference in r is in scope.
Which means the compiler prevents mutating/replacing/moving the rc value as long as r is in scope (which is purely a compile-time analysis).
Rc’s API ensures that it is not possible to break the semantics of references:
- The one Rc instance is pinned down for the duration of the borrow, so the ref count will be at least 1, ensuring the backing memory remains valid.
- Rc allows taking shared references (&T) into its inside, so having many Rc handles to the same data is fine.
- Rc does not allow taking mutable references (&mut T) into its inside, so you can not create mutably aliased references by creating mutable references from different Rc handles.
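A small sketch of the last two points (illustrative only):
use std::rc::Rc;

fn main() {
    let rc = Rc::new(5);
    let rc2 = rc.clone();   // many handles to the same data are fine
    let r: &i32 = &*rc;     // shared reference into the Rc's contents

    // let m: &mut i32 = &mut *rc2; // ERROR: the contents of an Rc cannot be
    //                              // borrowed mutably; Rc only hands out &T

    println!("{} {}", r, *rc2);
}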
EDIT:
References are guaranteed to be safe, you can put them into heap allocations, and they will not magically start using runtime checking.
As you correctly noted, however, this implies some restrictions: references are kept safe by the lifetime/borrowing compile-time analysis, which fundamentally works on the control flow graph of a function.
Put more simply, a reference is valid for the lifetime of the variable it points at, which can be seen as the scope of that variable:
let z: Box<&i32>; // a heap allocation containing a reference to an integer
{
    let x = 5;       // lifetime of variable `x` starts here and
                     // ends at the end of the block
    let y = &x;      // the reference is valid for the lifetime of `x`,
                     // that is, until the end of the block
    z = Box::new(y); // ERROR: the reference would be invalid outside the block,
                     // so it can not be assigned to a variable with a
                     // greater lifetime/scope than `x`
    let z2: Box<&i32> = Box::new(y); // this is fine: z2 drops at the end of
                                     // the block, so the reference never
                                     // becomes invalid
}
println!("{}", **z);
This covers many, many use cases where you’d use pointers, and enables things like safely passing around and returning stack pointers.
Of course, your data does not need to actually live on the stack for this system to work: it’s enough that all actual accesses to it necessarily happen as part of the execution of a function.
The biggest thing this does not work for, however, is references that are supposed to stay valid independently of the stack, like for example a pointer to or into a sibling field. Such use cases still require the user to expose a safe interface that uses raw pointers internally.
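A rough sketch of why such sibling/self references are awkward in safe code (the struct here is hypothetical):
struct SelfRef<'a> {
    value: i32,
    ptr: &'a i32, // intended to point at `value` in the same struct
}

fn main() {
    let mut s = SelfRef { value: 5, ptr: &0 };
    s.ptr = &s.value;   // borrows `s.value` for as long as `s` itself is used
    // s.value = 6;     // ERROR: cannot assign to `s.value` because it is borrowed
    println!("{}", *s.ptr);
}
Once the struct borrows into itself it is effectively frozen (it can no longer be mutated or moved while the borrow is live), which is why real implementations of such types hide raw pointers behind a safe API instead.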
Rust will not eliminate these two classes of errors, as they are not memory unsafe.
Instead of catchable exceptions, they cause uncatchable panics, but that’s it:
- Indexing or slicing out of bounds will panic.
- Calling unwrap() on an Option<T> that contains a None will panic. (The general case of the NPE.)
The important thing Rust ensures is that those cases are safely checked at all, as in Java, rather than causing undefined behaviour as in C/C++.
However, it’s considered unidiomatic Rust to directly index or unwrap if it is avoidable, as there are higher-level APIs and language features for avoiding the error cases directly. (For example, iterators for safe access to all elements of an array without per-element bounds checks, and the chaining methods of Option<T> in combination with pattern matching to handle the None case explicitly.)

Rust will not eliminate these two classes of errors, as they are not memory unsafe. Instead of catchable exceptions, they cause uncatchable panics, but that’s it:
Indexing or slicing out of bounds will panic.
Calling unwrap() on an Option<T> that contains a None will panic. (The general case of the NPE.)
Ah, my bad, of course you can index or slice like this. I have used this type of code as well as the more functional iterator/map style, which is nicer and more like range-based for and some of the STL algorithms like std::transform.
This, to me, is where you need to look at tests to check that the API is implemented in your module and that it handles inputs of arbitrary size if those can occur at runtime.
This is discouraging news. On the heels of the “I had a dream about rewriting code” update from a few weeks ago, we now have this “I want to rewrite the whole codebase in another language” post. David, you’re the CEO of a major project and need to think about the confidence, or lack thereof, in your ability to lead it to the finish and publish the promised product.
The project is already going into its seventh month of delayed launch. Still no implementation of SAFECoin, and now the lead is threatening yet another complete rewrite, in another untested programming language, of an eight-year-old codebase?
What are we supposed to make of this - a signal of more delays to come, and lengthy ones? But Nick is promising testnet 3 and SAFEcoin in March. Which one of you is right? Are we close to launch within a month or two or are we a year or two away with two major rewrites?
I hope this whole “transposition to Rust” talk is put to rest; it is very frightening to investors in SAFEcoin to think the project is so rudderless at this late juncture that a week of fiddling with a pre-beta new programming language is enough to convince the project lead to make a major course change.
A year or two from now, once SAFEnet is operational with no known exploits or bugs, then look at a development fork into Rust, maybe. But not now.
You’re Jim Lowry, the founder of Storj? Because that would put this post in a different perspective.
I can understand the worries from an investor’s perspective, but if you’ve followed this forum, most of the changes, delays etc. are explained by Maidsafe, and we do want the best product (at least I do).
I’ve never heard Nick say that safecoin will be available in March, but I do know that Testnet3 should be here this month, so let’s hope they can make it happen.

This is discouraging news
The day we stop looking forward and innovating, and I announce that, then you will have discouraging news. Until that day we will look forward the whole time. Removing 98% of the vault code and ending up more efficient is not a dream. Re-implementing the web in a decentralised fashion is not easy; we never said it was. Progress is what this is all about: moving faster, leaner and more efficiently. I have found this to work in all the projects I have been involved with or managed. An engineer who does not investigate is no engineer and will not be in our team.
If you read the post then you will also realise this is not a delay; in fact it’s me investigating. Just as I take a lot of time with crypto algorithms and attacks, new technology, academic papers, as well as issues like deterministic builds and much more, I will investigate a new systems language with so much promise. Or would you prefer we stick to the horse and cart as people whizz by on jet bikes? I would not do that to our community or to the people I have spent time and effort on protecting.
If we move to Rust it will be to be faster and more efficient, not slower. This is the basis of winning and pushing boundaries.
If you look at progress on GitHub etc. you will see huge improvements. I expect we will have a lot of the roadmap ticked off very soon, as we took a huge gulp of work and, with our heads down and bums up, just did it.
So thanks for the input, but side-lining us and expecting us not to look forward and work hard is not going to work, Jim; we are much better than that. Since the end of last year I have spent time in the code and will make improvements, because this is no wee script that we need to execute and hope works; this is a system developed in systems code for serious work and with a future. It will be right.
A fair critique would be time: we are slower than I figured, which spurred me to take action and look deeper at algorithms to help out. This also includes toolsets, management tools (like Jira), community engagement and reporting (like here) and more. The pods are creating apps, the academics papers and more. So yes, we may not be as fast, and perhaps Rust improves that - I hope so if we go that way - but not looking is bad management.
This won’t change, and if others can code faster then we welcome them with open arms and will pay them, but they will need to work as hard as we do.
If you do decide to port the code to Rust, then maybe this is something the community can help with. After all, more people == faster coding! As long as 10+ people aren’t working on porting the same part of the code, MaidSafe could be ported over in days instead of weeks or months.

If you read the post
I have to say he either clearly didn’t, or did and decided to spin it in a very negative way and misrepresent the information contained within it.
I can only say, whichever it is, it is not constructive criticism and does not really merit a response. It reminds me of one of those trolls…I wouldn’t feed him myself…I certainly would not have been so gentlemanly and courteous.
Classy…“Manners maketh the man”. - “Distortion maketh the troll”
Edit:
When you think about it (if he is a Storj dev), what are the possible motivations for doing this? Why try to knobble investor/community confidence in Maidsafe? I think this at least demonstrates that Maidsafe is perceived as some kind of threat to Storj - which displays a certain confidence in Maidsafe, and that is good news for investors.
So, a big well done for boosting community/investor confidence in making the post.
Nice idea - it would be interesting if we could, and if some pods etc. helped out, as it’s a great way to spread the knowledge base (one of my goals). I hope this is also another aspect of Rust: a larger community of folks who know it.

I’m also not sure any language matches the power of the C++ template language (D?)
Can’t help but note that, say, Common Lisp or Scheme macros blow the pants off C++ templates.

The project is already going into its seventh month of delayed launch. Still no implementation of SAFECoin, and now the lead is threatening yet another complete rewrite, in another untested programming language, of an eight-year-old codebase?
I would call that the founder’s prerogative. They get the right to rewrite core pieces of the platform in a new technology to see how things shake out. Dan Dodge at QNX is a particular devil for doing that, and I think it’s part of what has kept QNX fresh and in the game after thirty years.
I should add that I don’t believe it is technically feasible to mix Boost.ASIO and Rust. I’m sure it could be done, but it would be painful. As all the core libraries were written around the ASIO concurrency model, my personal belief is that no Rust implementation of any core library could work well with any of the existing core libraries.
That said, I have no experience in Rust and David wants to find out for sure, which as founder he gets to call. This is only David’s personal free time here on this investigation, the rest of us are pressing on as normal. Investors, or anyone else interested, may wish to bear this in mind when drawing conclusions. David’s personal free time projects are a lot more interesting than what I am currently doing in my free time to the annoyance of my wife, but both are about answering personal “what if?” questions.
Niall
Do you know of any library that could easily implement reliable UDP? (he hinted)

The gotcha is not that this analysis sometimes falls down, but rather that it is sometimes too conservative in order to catch all cases. (There are official safe solutions for working around many of these issues though.)
OK, this is what I was wondering. A cursory glance at the docs suggested that either valid uses of references would be rejected, or it would have the same problems as C++. The most obvious case would be a reference into a heap-allocated object that the programmer “knows” will remain live for the lifetime of the reference - that sort of analysis should be difficult in some situations. Unfortunately it’s difficult to get accurate information on minutiae like this. I definitely like the conservative approach here.

The important thing Rust ensures is that those cases are safely checked at all, as in Java, rather than causing undefined behaviour as in C/C++.
However, it’s considered unidiomatic Rust to directly index or unwrap if it is avoidable, as there are higher-level APIs and language features for avoiding the error cases directly. (For example, iterators for safe access to all elements of an array without per-element bounds checks, and the chaining methods of Option<T> in combination with pattern matching to handle the None case explicitly.)
So the benefit is largely trying to “force” the programmer into good practices. Even in C++03 you can write a smart pointer type that checks on dereference and throws on a null pointer (behaving exactly like Java). If the programmer simply always used that smart pointer type instead of raw pointers, then no undefined behaviour would exist either. I suppose C++ programmers don’t want to write the utilities that would do these things, or the cost of checking a pointer is just too much? Or is it just that legacy tutorials always use raw pointers, leaving the typical programmer unaware? I do think Rust will be a slight improvement here, but I don’t think it’s going to be as helpful as some of the promoters claim, because there will be plenty of aborts/panics in Rust code due to faulty unwrap() statements, etc.
The type-checked threading system is very nice though; I think this will have a bigger impact. It’s still going to result in racy code, but it’s going to force better habits/thinking about mutable data ownership. This should be huge for larger projects.
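A small sketch of the kind of thing the type system enforces here (illustrative only) - shared mutable state has to be wrapped explicitly, e.g. in Arc<Mutex<...>>, before it may cross thread boundaries:
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0));

    let handles: Vec<_> = (0..4).map(|_| {
        let counter = counter.clone();     // each thread gets its own handle
        thread::spawn(move || {
            *counter.lock().unwrap() += 1; // mutation only through the lock
        })
    }).collect();

    for h in handles {
        h.join().unwrap();
    }

    // Handing a plain `&mut` or an unsynchronised Rc to another thread would be
    // rejected at compile time (the Send/Sync bounds on thread::spawn).
    println!("{}", *counter.lock().unwrap());
}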
There are also selling points that I would promote higher:
- 1.2 There is no GC
- 1.4 There is no life before or after main (no static ctors/dtors)
- 1.5 The language does not require a runtime
- 1.6 match must be exhaustive
- 5.1 Casting: Rust provides no implicit type conversion (coercion) between primitive types.
Although this is probably due to me being a C++ developer. The last bullet point is an absolute train wreck in C++, which is a carry-over from C. Implicitly converting between all built-in numeric types - terrible.
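A minimal sketch of that last point (illustrative values):
fn main() {
    let x: i32 = 5;
    // let y: i64 = x;      // ERROR: mismatched types - no implicit widening
    let y: i64 = x as i64;  // conversions between primitive types must be explicit
    println!("{}", y);
}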

Can’t help but note that, say, Common Lisp or Scheme macros blow the pants off C++ templates.
Do you have much experience with C++ templates, or is your background primarily in Lisp? I had to look up Lisp macros to even respond to this question, so it’s possible that I missed something, especially since Lisp’s design is much different from that of most languages.
The first thing I noticed is that types in Lisp are inferred. This is convenient (the C++ type system is overly verbose); however, the C++ template language allows for computation based purely on types (see boost::mpl). C++11 takes this further, allowing a programmer to inspect the type of a statement (decltype). I don’t recall another language allowing either, and it appears neither does Lisp. This is rarely useful for sane programmers, so I won’t argue with someone who says it’s not worth the verbosity of C++ templates. But damn is it cool.
Conversely, I’m not seeing anything that can be implemented as a Lisp macro that can’t be implemented using C++11 variadic templates plus lambdas/functors. There are several cases that are more verbose in C++, owing to the aforementioned type system and the extra work needed to simulate Lisp lists with variadics. So depending on the use case, Lisp macros could be a much better fit (and are probably better for a surprising number of cases).
→ Actually, the one major difference I see is scoping. Since Lisp macros are expanded, you can “grab” variables from the caller. IMO this probably causes more problems than it solves, as C macros have the same issue.
→ Further digging found defgeneric in Lisp, which looks like it compensates for the lack of type inspection in macros. It still requires an instance of a type, so it’s an approximation of C++ function overloading or dynamic dispatch, depending on the situation. But Lisp cannot do pure type computation (unless there’s even more Lisp!).

But Nick is promising testnet 3 and SAFEcoin in March.
When did this happen Jim?