dead end. human verification just leads to a digital prison of ids
the real issue isn't bots, it's humans using ai. i'm doing it right now. English isn't my first language so i used an llm to translate my thoughts for this post. if the tech is this useful for bridging gaps, you can't really filter for a "soul" anymore. the line is already gone.
scraping is a lost cause too. if a human can read it, a model can ingest it
i guess the only fix is to stop scaling. go back to small, private, invite-only groups. intentional friction and making things "inconvenient" is the only filter left that actually works
exactly. the difference is intent. i’m just using it for the translation, but the "what" and "why" are coming from me. slop happens when you let the ai do the thinking too
that was exactly the point. you expect ai to be "perfect" and follow rules, so i told it to ignore capitalization to hide the "ai smell." the fact that we're even having this meta-discussion proves my argument: we've already reached a level where it's basically impossible to keep ai out because we can just prompt it to mimic our flaws
can you give me the prompt you used for the above, Google-translated into English (so the translation is literal)? I'd like to compare how you originally wrote it to how I'm reading it. (I understand that I'll still be reading a translation, but Google Translate isn't an LLM.)
sure, here is the original input i used for that reply.
original japanese intent:
それがまさに工夫した点で、あなたは "I" すらも大文字で書かない翻訳をするLLMなんてありえないと思ったんじゃない?だからこそ、全部小文字で書くように指示することで、AI臭を抑えることができると思ったんだ。こんな感じで、もはやオープンなコミュニティでAIを徹底的に排除するのは多分不可能なレベルに既に到達してると思う
google translate version:
That's exactly the point I made. You thought there would be no LLM translating without even capitalizing "I," right? That's why I thought that by instructing everyone to write everything in lowercase, I could reduce the AI smell. In this way, I think we've already reached a level where it's probably impossible to completely eliminate AI in an open community.
an interesting note:
you can see that the llm version i posted earlier is much more context-aware than the google translate one. the llm added phrases like "meta-discussion" and "mimicking flaws" because it understood the vibe and history of our entire chat, not just the raw text
To be fair, Unity is already building a suite for exactly this. "Unity AI" is aimed at letting you prompt your way through development instead of menu-diving
Interesting idea
I'm checking it from Japan with my system language set to Japanese, but the product descriptions are still showing up in English. The platform UI itself seems to be localized, so this might be a bug.
As for the token costs, have you looked at Gemini 3.1 Flash-Lite (preview)? The quality is solid and it's dirt cheap: $0.25 per 1M input and $1.50 per 1M output. Unless you have massive traffic, it would probably only cost a few dollars a month. Seems like a very reasonable overhead for the value it adds
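To sanity-check that "few dollars a month" claim, here's a quick back-of-the-envelope calculation at the quoted rates. The traffic figures are made-up assumptions for illustration, not numbers from this thread:

```python
# Rough monthly cost estimate at the quoted Gemini 3.1 Flash-Lite (preview) rates.
# Request volume and token counts below are hypothetical assumptions.
INPUT_PRICE_PER_M = 0.25   # USD per 1M input tokens (quoted above)
OUTPUT_PRICE_PER_M = 1.50  # USD per 1M output tokens (quoted above)

requests_per_day = 100            # hypothetical traffic
input_tokens_per_request = 2_000  # hypothetical prompt size
output_tokens_per_request = 500   # hypothetical response size

monthly_input_tokens = requests_per_day * 30 * input_tokens_per_request
monthly_output_tokens = requests_per_day * 30 * output_tokens_per_request

cost = (monthly_input_tokens / 1e6) * INPUT_PRICE_PER_M \
     + (monthly_output_tokens / 1e6) * OUTPUT_PRICE_PER_M
print(f"~${cost:.2f}/month")  # prints "~$3.75/month"
```

So even with a couple thousand tokens per request, you'd need roughly 10x that traffic before the bill climbs past pocket change.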