- Fine grained control over allocation whilst maintaining memory safety (stack, heap, RC, GC, or roll your own)
- No null pointers (with an Option type that compiles down to a nullable pointer)
- Data race free concurrency
- Zero cost abstractions
- RAII and destructors
- No exceptions
- A modern, highly expressive type system
- Generics that throw type errors at the call site, not deep in a template expansion
- True immutability (not `const`)
- An excellent C FFI
- Compiles to native code
- You don't pay for what you don't use
- Safe by default, but you can circumvent the safety checks if you know what you are doing, and those places are clearly marked and easy to audit. Inline assembly is supported.
- Most safety is enforced statically, so you don't pay for it at run time.
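As a hedged illustration of the Option point above (written in present-day Rust syntax, which may postdate this thread): because a `Box` can never be null, the compiler can use the null bit pattern to represent `None`, so `Option<Box<T>>` is the same size as a bare pointer.

```rust
use std::mem::size_of;

fn main() {
    // No extra tag word: None is encoded as the (otherwise impossible)
    // null pointer value.
    assert_eq!(size_of::<Option<Box<u8>>>(), size_of::<Box<u8>>());

    let present: Option<Box<u8>> = Some(Box::new(42));
    let absent: Option<Box<u8>> = None;

    // The type system forces you to handle both cases explicitly.
    assert_eq!(present.map(|b| *b), Some(42));
    assert!(absent.is_none());
}
```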
I was being a bit flippant when I posted, but I'm really impressed with the list. I need to look into seeing where I can help on the compiler or runtime.
No exceptions is my only objection, but I know they're dubious for a systems language.
No worries, it was a good question! I would highly recommend hopping on the IRC if you'd like some more information or want to have a chat. The community is very active and friendly. You can see the list of channels here: http://static.rust-lang.org/doc/master/index.html#external-r...
Regarding exceptions: whilst they can be very useful, a significant number of large, performance-sensitive C++ projects unfortunately outlaw them due to overhead and safety concerns (the semantics can become quite hairy when mixed with destructors). The Rust developers felt that it was easier to forgo them entirely.
My understanding is that the old exception implementation, called "SJLJ" (short for setjmp/longjmp, which is what it used), was slow: each try/catch required runtime hooks on entry, and yes, that cost you even when nothing was thrown.
The newer compilers generate table-driven ("DWARF"-based, sometimes called "zero-cost") unwinding; resources on it are unfortunately scarce, but my understanding is that you don't pay anything in speed for an exception until you actually throw one. (You do, however, pay a bit of disk/memory for the tables describing where the try/catch handlers are, I think.)
> safety concerns (the semantics can become quite hairy when mixed with destructors)
I'm assuming that you shouldn't throw in a destructor.¹
This argument, to me, always needs more information attached to it, because by itself it's meaningless. Assuming the alternative is returning either the result or an error code, you run into exactly the same semantic issues; you're just handling them manually now. Is that better, and how?
In the manual case, if I have some code that returns an error code I can't handle, I need to propagate that error up to a stack frame that can. Thus I begin to manually unwind the stack, during which I destruct things. If we're assuming destructors can throw², then I can potentially run into the problem of having two errors: now what do I do?
C++ isn't the only language here: C#, Python, and Java share the problem of "What do you do in the face of multiple exceptions requiring propagation up the stack?", though I think C++ is the only one that solves it by terminating the program. I believe C# and Python just drop the original exception, and I have no idea what Java does. Honestly, if things are that effed up, terminate doesn't sound that bad to me. In practice in C++, most destructors can't/don't throw. (Files are about the hairiest thing, since flushing a file to disk on close can fail: C++'s file classes will ignore failures there, which doesn't exactly sit well with me. You can always flush it manually before closing, but of course, if you do this during exception propagation and throw on failure, you risk termination due to two exceptions.)
Even C has this, in that if you're propagating an integer error code up the stack, and something goes wrong in a cleanup, you've got this problem. In C, you're forced to choose, of course, including the choice of "ignore the problem entirely".
That said, I'll add the answer for Rust here. (I've never used Rust, so correct me if I'm wrong. I'm going to abstract away the Rust-specific types, however.) Rust, for a function returning T that might fail, returns either an Optional<T> or a Result<T>-ish object, which is basically (T or ErrorObject). Rust has strong typing, so if there's an error, you can't ignore it directly, because you can't get at the result. And if you try, it terminates the "task". Strong typing is the winner here. (This reminds me why I need to look into Rust.)
¹It's not illegal to do so, but since destructors get called while an exception unwinds the stack, you can potentially run into an exception causing an exception. Two exceptions in C++ result in a termination of the program.
²If we're not, then exceptions are perfectly safe.
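A minimal sketch of that Rust answer in present-day syntax (the function and error messages here are made up for illustration): a fallible function returns a `Result`, and the caller cannot touch the success value without first acknowledging the error case.

```rust
fn parse_port(s: &str) -> Result<u16, String> {
    // The error path is part of the return type, not an out-of-band
    // exception or an ignorable integer code.
    s.parse::<u16>().map_err(|e| format!("bad port {:?}: {}", s, e))
}

fn main() {
    // You cannot get at the u16 without handling the Err case.
    match parse_port("8080") {
        Ok(port) => assert_eq!(port, 8080),
        Err(msg) => panic!("{}", msg),
    }
    assert!(parse_port("not a port").is_err());
}
```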
Yeah, I will admit I am not an expert in exceptions, so I might have been incorrect in my response. I'm sure there would be better people to talk to in #rust. Alternatively you could ask on the mailing list or /r/rust.
Honestly, I've made over 15 games in my career, and with decent developers, safety in C++ really just isn't an issue. The line between programmer and designer in games is narrowing; most designers are competent programmers. Furthermore, there are some great tools out there to help prevent things like memory leaks. Combine that with good company practice, like code reviews, and it becomes a non-issue.
I guess, then, that you weren't part of the Battlefield 4 team [1]. I've discussed the issue of the "no decent programmer" fallacy in the past; yes, in theory if programmers were careful and alert, they could create flawless software, yet this never happens in practice because humans are prone to errors (i.e. not understanding a subtlety of the language or library, thinking that a validation is done at a different level of abstraction, failing to imagine what could be an error scenario and how it could occur, etc.). Languages like Rust offer the same capabilities as C or C++, while eliminating entire classes of bug sources.
If BF4 was written in C# or Java (or Rust or Go?), I'm sure it would still have just as many bugs. One of people's biggest complaints is the kill shots that you don't see, but that's a design choice (client-side hit detection).
Of course, it's impossible to speculate without seeing the codebase, but considering that Rust makes several classes of C++ bugs impossible at compile time, I'd be hard-pressed to imagine that a Rust version wouldn't be less buggy.
If the safer type system gives the devs an unwarranted sense of security, they might write fewer tests, be less careful in their design, or wait longer between audits and other sanity checks.
If on the other hand the devs understand which classes of bugs aren't ruled out in Rust, then sure, you will end up with fewer bugs.
Rust's type system eliminates the need to test for whole classes of bugs, because they are checked statically at compile time. This means that tests can focus on logic errors rather than standard bookkeeping. If you look at the example set by the Rust repository itself (https://github.com/mozilla/rust/), it is heavily tested, and every single PR (https://github.com/mozilla/rust/pulls) is reviewed before merging. This discipline definitely filters down into third-party libraries.
> Furthermore, there's some great tools out there to help prevent things like memory leaks. Combine that with good company practice, like code reviews, and it becomes a non-issue.
The security track record of applications written in C++ disagrees with you.
We are talking about new engines written in these languages, though, not multi-decade old codebases still using inline assembler, goto, and pointer arithmetic.
Modern C++ is really safe if you use the subset that involves automatic storage duration, well bounded arrays, etc and use all the warning flags of your compiler, run static analysis, have a robust test framework, etc.
No, modern C++ is not even close to memory safe. This is my favorite meme to destroy over and over on HN. :)
Consider iterator invalidation, null pointer dereference (which is undefined behavior, not a segfault -- and you can't get away from pointers because of "this" and move semantics), dangling references, destruction of the unique owner of the "this" pointer, use after move, etc. etc.
Extraordinary claim; please elaborate. I'm working on hundreds of thousands of lines of C++ code with a medium-sized team; memory issues are almost non-existent because of the disciplines described above.
I've described this many times in the past, but here are a few things that modern C++ does nothing to protect against:
* Iterator invalidation: if you destroy the contents of a container that you're iterating over, undefined behavior. This has resulted in actual security bugs in Firefox.
std::vector<MyObject> v;
v.push_back(MyObject());
for (auto& x : v) {
    v.clear();     // invalidates the iterator driving the loop
    x.whatever();  // UB: x is now a dangling reference into the cleared vector
}
* "this" pointer invalidation: if you call a method on an object that a unique_ptr or shared_ptr holds the only reference to, there are ways for the object to cause the smart pointer holding onto it to let go of it, causing the "this" pointer to go dangling. The simplest way is to have the object be stored in a global variable and to have the method overwrite the contents of that global. std::enable_shared_from_this can fix it, but only if you use it everywhere and use shared_ptr for all your objects that you plan to call methods on. (Nobody does this in practice because the overhead, both syntactic and at runtime, is far too high, and it doesn't help for the STL classes, which don't do this.)
class Foo;
std::unique_ptr<Foo> inst;
class Foo {
public:
    virtual void f();
    void kaboom() {
        inst = nullptr; // destroys *this if inst was its only owner
        f();            // UB if this was the object inst owned: dangling this
    }
};
* Dangling references: similar to the above, but with arbitrary references. (To see this, refactor the code above into a static method with an explicit reference parameter: observe that the problem remains.) No references in C++ are actually safe.
* Use after move: obvious. Undefined behavior.
* Null pointer dereference: contrary to popular belief, null pointer dereference is undefined behavior, not a segfault. This means that the compiler is free to, for example, make you fall off the end of the function if you dereference a null pointer. In practice compilers don't do this, because people dereference null pointers all the time, but they do assume that pointers that have been successfully dereferenced once cannot be null and remove those null checks. The latter optimization has caused at least one vulnerability in the Linux kernel.
In particular, note this: "If the newly allocated data chances to hold a class, in C++ for example, various function pointers may be scattered within the heap data. If one of these function pointers is overwritten with an address to valid shellcode, execution of arbitrary code can be achieved." This happens a lot—not all use-after-free is exploitable, of course, but it happened often enough that all browsers had to start hacking in special allocators to try to reduce the possibility of exploitation of use-after-frees (search for "frame poisoning").
Obligatory disclaimer: these are small code samples. Of course nobody would write exactly these code examples in practice. But we do see these issues in practice a lot when the programs get big and the call chains get deep and suddenly you discover that it's possible to call function foo() in one module from function bar() in another module and foo() stomps all over the container that bar() was iterating over. At this point claiming that C++ is memory safe is the extraordinary claim; C++ is neither memory safe in theory (as these examples show) nor in practice (as the litany of memory safety problems in C++ apps shows).
A lot of this just looks to be lacking const correctness. If you declared most of the mutable types const (and the use cases for a non-const unique_ptr are few) you can avoid most of these issues.
I think it is a valid criticism of the language that non-primitive types aren't implicitly const, though. But you could never implement that without colossal backwards-compatibility breakage. Which I guess is fine, since you could just keep a codebase one `-std=` version behind until you fixed it.
> Use after move: obvious. Undefined behavior.
This I don't have an answer to though. I've always disliked how this isn't a compiler error.
You can return out references and still get dangling pointers with const values. For example, you can return an iterator outside the scope it lives in and dereference that iterator for undefined behavior (use-after-free, possibly exploitable as above).
Besides, isn't "C++ is memory safe if you don't use mutation" (even if it were true—which it isn't) an extremely uninteresting statement? That's a very crippled subset of the language.
> If you declared most of the mutable types const (and the use cases for a non-const unique_ptr are few) you can avoid most of these issues
Mutability in Rust is perfectly safe because of the static checks built into the type system – the compiler will catch you if you screw things up.
> you could never implement that without colossal backwards compatibility breakage
I cannot express how important immutability by default is. It prevents the issues that C++ has with folks forgetting to mark things as const. There is also a lint that warns when locals are unnecessarily marked as mutable, which can catch some logic errors (I say that from experience).
Also note that I said 'immutability' not 'const'. Immutability is a far stronger invariant than const, and therefore is much safer. It could also lead to better compile-time optimisations in the future. I'm sure you know this, but just in case:
- const: you can't mutate it, but others possibly can
- immutable: nobody can mutate it
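A small sketch of the default in present-day Rust syntax: bindings are immutable unless explicitly marked `mut`, so accidental mutation is a compile error rather than a forgotten `const`.

```rust
fn main() {
    let frozen = vec![1, 2, 3];
    // frozen.push(4); // compile error: `frozen` is not declared `mut`

    let mut counter = 0; // mutation must be opted into with `mut`
    for x in &frozen {
        counter += *x;
    }
    assert_eq!(counter, 6);
    assert_eq!(frozen, vec![1, 2, 3]); // frozen was never mutated
}
```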
Right; the STL sucks. So it's work, but you can make ref-safe containers, even thread-safe ones. We do that; we do audio rendering with audio-chain editing on the fly, with no memory issues. It takes care, more care than in other languages. But it's far from unsolvable.
And the philosophy of Rust is, what if we encoded that "care" into the language itself? That, to me, is a clear win. It is, to me, good systems language design: codifying decades of hard earned "best practices" into the language semantics itself.
Of course it's possible to write correct C++ code, just like it's possible to write correct assembly code. The point is the extra care required: every piece of code needs to be very carefully authored to ensure it's correct, to avoid the myriad pitfalls.
Or you can just trust the language. And if it's not right, or not the way you plan to use it, what then? You're stuck unless the language also permits you to roll your own.
Rust does allow you to implement low-level things in itself, by giving an escape hatch into C/C++-like unsafe code (i.e. risk-of-incorrectness is purely opt-in, rather than always-there).
Examples of things efficiently implemented entirely in pure Rust in the standard library (well, with some calls into the operating system/libc): Vec, the std::vector equivalent; Rc, reference-counted pointers (statically restricted to a single thread); Arc, thread-safe reference-counted pointers; Mutex; concurrent queues; HashMap.
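A hedged sketch of what that escape hatch looks like in present-day syntax: raw-pointer operations are only permitted inside an `unsafe` block, so the risky spots are explicitly marked and greppable.

```rust
fn main() {
    let values = [10i32, 20, 30];
    let p: *const i32 = values.as_ptr();

    // Dereferencing a raw pointer requires an `unsafe` block, which
    // marks exactly where manual auditing is needed; the rest of the
    // program stays under the compiler's safety checks.
    let second = unsafe { *p.add(1) };
    assert_eq!(second, 20);
}
```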
Maybe most C++ applications are low-value as attack targets, so no-one has bothered to find all the corner cases that make them blow up.
The fact that applications like browsers and operating systems (which are known to be high value targets) have a lot of effort & resources put into security but still have attack vectors makes the "C++ is secure" position fairly indefensible.
pcwalton mainly works on web browser development (Servo), which, whilst sharing some goals with game development, also differs in some respects. Although online security is more and more important in games these days, the real appeal of Rust with respect to game development is in providing an alternative to the 'death by a thousand cuts' that can plague large C++ projects.
I've posted a list of the things I consider the most relevant to game development: https://news.ycombinator.com/item?id=7587413 Any one or two of them alone wouldn't really be a compelling enough reason to switch, but put together they form a very compelling value proposition.
It seems hard to say whether Rust really would have eliminated those bugs: the reports are vague, and the ones that aren't (e.g. fixed-framerate issues) would be an issue either way.
My argument isn't solely get good developers and be done with it. It's a combination of things, and one of the most important things is getting good practices in place. I don't know if EA did this but having a good auto-test system in place probably would have caught those crash bugs and prevented the server issue, for example.
> one of the most important things is getting good practices in place
That is really important, but still, wouldn't it be better if you could encode at least some of those good practices into the language itself, rather than relying on humans to be constantly on their game? I'm certainly not perfect, so I would rather my sloppiness be caught earlier rather than having it come back to bite me in the future. See: http://thecodelesscode.com/case/116