
If only. The biggest problems right now are limited context size and basic security, including having to share such documents with God-knows-how-many third parties.

Tangent, but we use Azure instead of OpenAI due to data-retention concerns. To ensure nobody's inputting anything classified or proprietary, Legal demanded implementation of an "AI safety" tool...so we demoed one that ships all prompts to a third party's regex-redaction API.

So you never know who ends up with your LLM prompt, where it's getting logged, who's reviewing those logs, and so on. Even some local models require executing arbitrary code, and Gradio ships telemetry by default. Uploading Snowden's docs into a black box is a good way to catch a ride in a black van.
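For what it's worth, the regex-redaction step doesn't have to leave the machine at all — a minimal local sketch (the patterns here are illustrative, nowhere near a complete PII taxonomy):

```python
import re

# Illustrative patterns only -- a real deployment needs a vetted PII taxonomy.
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),   # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),            # US SSN format
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),          # card-number-ish digit runs
]

def redact(prompt: str) -> str:
    """Strip obviously sensitive tokens before the prompt leaves the machine."""
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

print(redact("Contact alice@example.com re: SSN 123-45-6789"))
# -> Contact [EMAIL] re: SSN [SSN]
```

Running this in-process at least means the raw prompt is never shipped anywhere just to be scrubbed.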



Nowadays even consumer-level hardware can run some decent local LLMs, completely offline.

You might want to browse /r/LocalLLaMA/ if "security" is an issue for you.
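If you go the fully-offline route, a few environment variables keep the usual Python tooling from phoning home — assuming a Hugging Face / Gradio stack here; other frameworks have their own switches:

```shell
# Assumes huggingface_hub / transformers / Gradio; adjust for your stack.
export HF_HUB_OFFLINE=1                 # huggingface_hub: no network calls
export TRANSFORMERS_OFFLINE=1           # transformers: use the local cache only
export GRADIO_ANALYTICS_ENABLED=False   # Gradio: disable usage telemetry
```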



