This is a pretty weak security mitigation since any ROP attack will typically achieve return-to-libc anyway, but yelling at people who make syscalls directly is good.
> but yelling at people who make syscalls directly is good.
Why though? Doesn't this mean that all non-C-based languages are going to be treated somewhat as second-class citizens, having to link the standard C library (e.g. as Go is doing on some platforms) in order to call into the kernel?
While languages such as Go and Rust aim to replace/displace C, which was designed in an age when security was considered less of an issue, it seems counter-intuitive to me that we should insist they link in the apparent attack surface of the standard C library. The syscall boundary seems an ideal place to draw the line between the kernel and userland via an established API, and I would have expected that languages that want to displace C would be able to use that interface directly and bypass the standard C library. That would seem to allow userlands to be built that include no C code whatsoever. But I'm very obviously no expert.
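To make the question concrete, here is a minimal sketch (my own illustration, not taken from the article) of what bypassing libc looks like in C. The syscall number (1 for write) and the register ABI are x86-64 Linux assumptions; other kernels and architectures differ, and this is roughly what a runtime like Go's does internally instead of calling libc's write():

    #include <stddef.h>

    /* Invoke write(2) directly with the syscall instruction, no libc involved.
     * Constraints: rax = syscall number, rdi/rsi/rdx = arguments,
     * rcx and r11 are clobbered by the instruction itself. */
    static long raw_write(int fd, const void *buf, size_t len)
    {
        long ret;
        __asm__ volatile ("syscall"
                          : "=a"(ret)                         /* result in rax */
                          : "a"(1L),                          /* 1 == write on x86-64 Linux */
                            "D"((long)fd), "S"(buf), "d"(len)
                          : "rcx", "r11", "memory");
        return ret;
    }

    int main(void)
    {
        /* The trap instruction sits in this program's own text segment rather
         * than in libc's; that is exactly what a syscall-origin check notices. */
        raw_write(1, "hi\n", 3);
        return 0;
    }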
Ideally, IMO, there should be two libraries, a “Kernel interface library” and a “C library”, but for now, just think of it as two logical libraries in a single file.
Nothing about this prevents applications from being built that do not depend on libc. This is a flag set on ELF binaries that the program loader honors; tooling that builds things which really want to generate syscalls themselves can just flag all sections as syscall-allowed.
An attacker would have to set such a flag before the binary was loaded, meaning they would already have achieved code execution (and the ability to make syscalls).
This feature is yet another layer of defense against remote exploitation of a currently running program (for example, a web browser). It complements things such as ASLR, stack canaries, the NX bit, ...
Disregarding the condescending nature of your comment - I can't find a reference to the actual whitelisting methodology in this article. Several other comments in this thread claim that this is a flag set on ELF section headers, which can be done entirely before the binary is delivered to the system and executed. So in my opinion you need to try harder than this.
If someone can show this is done by configuring the local dynamic linker, so that the end user has full control of the mechanism, then I'm all ears.
Yes. Microsoft has made a number of silly mistakes with respect to OS design, which have cost it the server space (outside of authentication) for your lifetime and mine. Bloat is not the answer to either security or performance problems.
That's a pipe dream, not an implementation. Let's wait until it actually happens before getting too excited; then we'll actually be able to evaluate the drawbacks.
> Current Status - llvm-libc development is still in the planning phase.
Can you help me understand where this second-class-citizen sentiment comes from? If all programs need to go through libc, doesn't that mean they are all equal? Whether system calls go through libc or not is just a matter of where you put the ABI boundary. Putting that boundary "above" the raw system call instruction (as most OSes do) doesn't hurt anyone. Linux does it differently mostly because it just shipped the org chart.
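In concrete terms (a sketch of my own, assuming a Linux/glibc environment, so write(), syscall() and SYS_write here are illustrative rather than anything from the article): the libc wrapper is just a thin shim over the trap instruction, and the real question is which side of that shim the OS promises to keep stable.

    #include <sys/syscall.h>   /* SYS_write */
    #include <unistd.h>        /* write(), syscall() */

    int main(void)
    {
        const char msg[] = "hello\n";

        /* Boundary "above" the trap: the stable interface is the libc wrapper. */
        write(1, msg, sizeof msg - 1);

        /* One step closer to the trap: libc's generic escape hatch. The actual
         * syscall instruction is still emitted from libc's own text. */
        syscall(SYS_write, 1, msg, sizeof msg - 1);

        return 0;
    }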
It's not a sentiment, it's a question. I don't have an agenda here, I'm just trying to understand.
> If all programs need to go through libc, doesn't that mean they are all equal?
It means that all programs need to link libc into their binary, whether statically or dynamically. Part of the raison d'être for Rust seems to be to act as a replacement for C and C++, so it would seem peculiar to me for the C library to become a forced dependency of compiled languages like those. But as the other poster pointed out, you can disable it anyway, so no matter.
Linking libc dynamically is essentially free. Every program on the system runs it, so almost all its code and clean data pages are already in memory.
As for static libc: please don't do this. A static libc, from a compatibility POV, is just as bad as embedding random SYSENTER instructions in program text. It makes the system much more brittle than it would otherwise be. I understand the desire to package a whole program into a single blob that works on every system, but we should support this use case with strong compatibility guarantees for libc, not with making SYSENTER the permanent support boundary!
When I am god emperor of mankind, on my first day, I will outlaw both static linking of libc and non-PIE main executables.
Preaching to the converted here; I'm a big fan of dynamic linking. It seems that while Go binaries are generally statically linked (last time I checked, which was a while ago), libc is generally dynamically linked for the reasons you have stated, and also because some features like NSS require dynamic linking to work correctly.
Disclaimer: I mostly program in C and C++, not Rust or Go (yet).
The OS's stable API and the ISO C standard library aren't the same thing, though; some UNIXes just end up mixing the two, while others don't even allow documented access to raw syscalls, e.g. Apple's.
You still need an information leak of the exact address of a libc function to achieve that, which will be different on every program invocation due to ASLR, and brute-forcing is useless in a 64-bit address space. Even an information leak of the location of other, less useful libc functions plus an offset isn't enough, because OpenBSD randomly relinks libc every boot, so just having the libc from a release to harvest offset information isn't useful.
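To put a rough number on "brute forcing is useless" (my own back-of-the-envelope, not a figure from the article): even if the randomized libc base only had, say, 30 bits of entropy, a blind guess is right about one time in 2^30, so an attacker should expect on the order of half a billion attempts, and a wrong guess typically just crashes the target instead of quietly letting them try again.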
Fedora 31 compiles everything with -fcf-protection. Of course it requires hardware support before it actually does anything, and there is a lot of missing support in the non-C toolchains and in certain packages. You can use "annocheck" to check whether a particular binary was compiled with full control-flow protection or not.
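A hedged sketch of what that check looks like (the exact options and report format depend on the installed annobin/annocheck version, and the path is only a placeholder):

    annocheck /path/to/binary   # reports which hardening options the binary was built with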