Hacker News

It depends on what your threat model is and where the container lives. For example, k8s can go a long way toward sandboxing, even though it's not based on VMs.

The threat with AI agents exists at a fairly high level of abstraction, and developing with them assumes a baseline of good intentions. You're protecting against mistakes, confusion, and prompt injection, so your mitigation strategy should focus on high-level containment.
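To make "high-level containment" concrete, here's a hedged sketch of what that might look like with plain Docker: no network (so prompt-injected exfiltration has no path out), a read-only root filesystem, and all capabilities dropped. The image name and workspace path are hypothetical; adjust to your setup.

```shell
# Sketch only: run an untrusted agent with a minimal blast radius.
docker run --rm \
  --network=none \                     # no egress; blocks exfiltration
  --read-only \                        # immutable root filesystem
  --cap-drop=ALL \                     # drop all Linux capabilities
  --security-opt no-new-privileges \   # prevent privilege escalation
  --tmpfs /tmp \                       # scratch space the agent may need
  -v "$PWD/workspace:/workspace" \     # the only writable project mount
  my-agent-image                       # hypothetical image name
```

This doesn't defend against a kernel-level escape, but for mistakes and prompt injection, the containment boundary it draws is usually the right level of abstraction.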

I've been working on something in a similar vein to yolobox, but the isolation goal is oriented more toward preventing secret exfiltration and limiting blast radius. I'd love some feedback if you have a chance!

https://github.com/borenstein/yolo-cage
