Because a poorly implemented chatbot wrapping someone else's LLM API is hardly comparable to what you can accomplish with 10^n rounds of inference applied cleverly. Computers are useless without error correction, and LLMs may be as well. That's not to say LLMs will form their own goals, but that the people in control of them will be wielding dangerously capable agents.
Hmm, in theory, what if I used a dedicated GPU just for the VM, one that was disabled on the host? I don't know the space of these exploits well enough to know if there's an obvious attack that would still leave the host exposed.
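For what it's worth, the usual way to do this on Linux is VFIO passthrough: bind the GPU to the vfio-pci stub driver at boot so no host driver ever loads for it, then hand the device to the VM via QEMU/KVM. A rough sketch (the PCI vendor:device IDs and slot address below are placeholders; substitute your own from `lspci`):

```shell
# Find the GPU's vendor:device IDs and PCI slot (placeholder output assumed):
lspci -nn | grep -i vga

# Bind the GPU and its audio function to vfio-pci at boot so the host
# driver (e.g. nvidia) never claims it. IDs here are placeholders.
echo "options vfio-pci ids=10de:1b80,10de:10f0" | sudo tee /etc/modprobe.d/vfio.conf
echo "softdep nvidia pre: vfio-pci" | sudo tee -a /etc/modprobe.d/vfio.conf
sudo update-initramfs -u   # Debian/Ubuntu; use dracut -f on Fedora

# After a reboot, confirm the device is held by the stub driver:
lspci -nnk -s 01:00.0      # placeholder slot; expect "Kernel driver in use: vfio-pci"
```

This keeps the host from touching the card, but it doesn't by itself answer the security question: the guest still gets DMA-capable hardware, so you're relying on the IOMMU grouping being clean.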
Charging a small amount is the better option, since it mitigates API spam without forcing you to set a low rate limit. It also ties your users to a financial identity, which is (probably) harder to obtain in bulk for nefarious purposes than the phone number a signup flow would otherwise require.
It seems your first attempt to post this was flagged because the domain name sounds like it would act as a redirect, and the page itself had only an image, with no elaboration beyond the link to GitHub.