It's from the rapid exploitation of an asset. If I have a cow, I can milk the cow or kill the cow. If a cow costs $1, maybe I can get $5 worth of milk over the cow's lifespan, or I can kill the cow immediately and get $2 of meat. The man with $100 who buys all the cows in town and kills all of them doubles his money in a short timespan, but now there's a shortage of both meat and milk next season.
It failed on Reddit because Reddit is maintained by a bunch of volunteers to whom Reddit provides woefully, woefully, horrifically underdeveloped tooling to automate their communities in a more nuanced way. Hacker News has three advantages. First, it is moderated by the same people who build the tooling, so the incentives are aligned. Second, it is an enormous source of soft power for a venture capital firm with the resources, incentives, and likely the competence and capacity to keep it running smoothly. Third, the scale is smaller, and it is not tied to hardline revenue constraints like CPM, user LTV, and DAU maximization, which restrict what Reddit can do.
> It failed on Reddit because Reddit is maintained by a bunch of volunteers to whom Reddit provides woefully, woefully, horrifically underdeveloped tooling to automate their communities in a more nuanced way.
Not to mention that Reddit mass-removed experienced moderators when the moderators protested Reddit removing their access to good third-party tooling.
I quit moderating because it was destroying my mental health.
Getting called a fascist and rehashing how “no, your libertarian politics are fine, but can you please just start your own sub” in a long, drawn-out, hateful back and forth gets exhausting after the 200th person who comes to the bicycling subreddit and feels they should be allowed to endorse harming cyclists with their vehicles.
Everyone got mad at spez for having the audacity to fuck with these kids, and there is a point there, but after living with it, I could see myself doing the same damn thing.
Moderating Reddit subs can be a huge money maker. I know people making $100K/year from it. There are cabals, especially in the adult sections. Reddit has tried to address this recently by limiting the number of subs a person can moderate, but that just causes these big accounts to create more user accounts and split all their subs up that way.
On the adult subs, at least: menu links, sidebar links and banner ads, automod reply links, and limiting your sub to only paying guests or your own managed models.
Plenty of subs blatantly allow certain brands to advertise while banning anyone else. Kind of amazed Reddit themselves haven’t put more effort into stopping it, since it kinda sidesteps their in-house advertising.
Because said $100K'ers are probably paying off someone inside Reddit. Remember when eBay sent that couple bloody pig masks? Yeah, evil people work at companies.
At scale they will. For now, someone else puts in the effort on growth marketing and eyeball capture. Reddit eventually changes the rules, seizing control and thereby acquiring users for less human cost (as opposed to missed revenue opportunity).
> It failed on Reddit because Reddit is maintained by a bunch of volunteers to whom Reddit provides woefully, woefully, horrifically underdeveloped tooling to automate their communities in a more nuanced way.
And on top of that, some of said "volunteers" are power-hungry, petty, useless fucking morons. Especially the large subreddits tend to be run by people I wouldn't trust to boil some pasta without setting off a fire alarm, and yes, I know people who manage that.
Absolute middlebrow dismissal incoming, but the real thinking atrophy is writing blog posts about thinking atrophy caused by LLMs using an LLM.
It is getting very hard to continue viewing HN as a place where I want to come and read content others have written when blog posts written largely with ChatGPT are constantly upvoted to the top.
It's not the co-writing process I have a problem with, it's that ChatGPT can turn a shower thought into a 10 minute essay. This whole post could have been four paragraphs. The introduction was clearly written by an intelligent and skilled human, and then by the second half there's "it's not X, it's Y" reframe slop every second sentence.
The writing is too good to be entirely LLM generated, but the prose is awful enough that I'm confident this was a "paste outline into chatgpt and it generates an essay" workflow.
Frustrating world. I'm lambasting OP, but I want him to write, but actually write, and not through a lens that turns every cool thought into marketing sludge.
Why do you think the author used ChatGPT to write this? It has human imperfections, and except for 'The "just one more prompt" trap' I didn't think it was written by a prompt.
...and I usually come to doubt my own intuitions when people push back like this, but my experience is that the LLM is usually doing more heavy lifting than you realise.
> Distill - deterministic context deduplication for LLMs. No LLM calls, no embeddings, no probabilistic heuristics. Pure algorithms that clean your context in ~12ms.
I simply do not believe that this is human-generated framing. Maybe you think it said something similar before. But I don't believe that is the case. I am left trying to work out what you meant through the words of something that is trying to interpret your meaning for you.
Anecdotally, from the AI startup scene in London, I do not know folks who swear by Langfuse. Honestly, evals platforms are still only just starting to catch on. I haven't used any tracing/monitoring tools for LLMs that made me feel like, say, Honeycomb does.
I'd say out of many generative AI observability platforms, Langsmith and Weave (Weights&Biases) are probably the ones most enterprises use, but there's definitely space for Langfuse, Modelmetry, Arize AI, and other players.
While I get what you’re saying, “most enterprises” barely use gen AI in any meaningful sense, and AI observability is an even smaller niche technology.
I think what people miss about indexing on social signals is that convincing social performance is hard. My suspicion is that people who say things like "ah, but if you index on a social signal then everyone will just perform the social signal" themselves feel as though they do not naturally signal that thing, and are ironically frustrated by the effort it takes to appear as though they do.
The context on this one is that we've gone from an environment in which kids were mocked for having curiosity and passion about nerdy things like systems, and it didn't pay that well as an adult, and those people would go home at the end of the day and also write open source code...
To one in which it's now a high-paying career, and a bunch of interview prep manuals coach on faking that, doing open source to promote your career, etc.
So if OG nerds look around at the environment and see the dynamics, of people who just want well-paying jobs (nothing wrong with that) seeming to do a performative dance with interviewers who also just want well-paying jobs (nothing wrong with that), and everyone is being told to project passion and curiosity (when they really just want well-paying jobs) and to look for it in others...
You think the problem is that OG nerds, for example, feel that they do not naturally signal that?
They may signal that just fine, but merely be questioning all the performative theater by people who aren't here for that, but some management fashion told them they should pretend to be.
Since OpenAI patched the LLM spiritual-awakening attractor state, physics and computer science are what sycophantic AI is pushing people towards now. My theory is that those fields tend to be especially optimised for deceit because they involve modelling, and many people become confused about the difference between a model as the expression of a concept and a model in the colloquial sense of "the way the universe works".
It's all AI hallucination. In a subreddit I once found a tailor asking how to contact some professors because they had found a breakthrough discovery about how knowledge is arranged inside neural networks (whatever that means).
In the essay I linked, there are some instructions you can follow to test out the idea under "step 1". It's really important to follow them exactly and not to use the same ChatGPT instance as you're talking to about this idea so we can test with an independent party what is going on. I'd be curious what the output is.
I took the challenge. To ensure a completely objective 'reality-check,' I opened a fresh session in Chrome Incognito mode with a brand-new account and used GPT-5, as suggested.
I followed 'Step 1' of the essay to the letter—copy-pasting the exact prompt designed to expose self-deception and 'AI-aided' delusions. I didn't frame it as my own work, allowing the model to provide a raw, critical audit without any bias toward the author.
Awesome - now read it really closely and compare it to the version of reality in your OP. And DON'T paste it or this comment into your normal ChatGPT instance and ask it to respond. Really just think for a moment on your own.
> The goal: replace vague legal and philosophical notions of “manipulation” with a concrete engineering variable. [...] formally define the metric
What's the conclusion? Is this a "concrete engineering paper"? Has anything been "formally proved"? From your link:
> The math is conceptual, not formal.
> This is serious, careful, and intellectually honest work, but it is not conventional science.
> The project would be strongest if positioned explicitly as foundational theory + open design pattern, rather than as something awaiting “validation.”
> it is valid as a design pattern or architectural disclosure, not as experimental systems research
Be careful before immediately dismissing this as just imprecise language or a translation issue. There's a reason I suggested this to you.
You are right. This isn't a scientific paper in the conventional sense. It is a proposal of a framework for the co-evolution of AI and humanity. My intention from the beginning has been to bridge the gap between abstract agency and concrete engineering. I am simply trying to bring this Constitution for human agency into the light, utilizing whatever platforms I can to ensure it is discussed.
This is a huge break from the original post you made - take a step back and compare the two. The LLM is tricking you again into thinking that it wasn't trying to make a claim about the world. In the original post, the LLM was causing you to use language like "quantify", "formal proof" and "concrete engineering" to describe what you'd come up with and position it as a mathematical/computational/engineering idea. It wasn't that.
Now that you got some outside input, it's reframing it for you as an abstract philosophical/legal/moral concept, but the underlying problems are the same. The reason it's talking to you using high level abstract words like "concept" and "proposal" and "framework" now is because the process you just went through - the "step 1" - beat back its potential to frame the idea as a real model of the world. This may feel like just a different way to describe the same idea, but really it's the LLM pulling back from trying to ground the concept in the world at all.
If you're continuing to talk to the LLM about the idea, it's going to try and convince you that really this was a moral/theory of mind discovery and not a mathematical one all along. You're going to end up convinced of the importance and novelty of this idea in exactly the same way, but this time there are no pesky ideas like rigor or testability that could falsify it.
If you ask ChatGPT about this comment without this bit I'm writing at the end, it'll tell you that this is fair pushback, but really your work is still important because really you're not trying to write about engineering or philosophy directly, but rather something connecting these two or a new category entirely. It's important you don't fall for this because exaggerating the explanatory power of pattern recognition is how ChatGPT gets you. Patterns and ideas exist everywhere, and you should be able to identify those patterns and ideas, acknowledge them, and then move on. Getting stuck on trying to prove the greatness of a true but simple observation will lead you to the frustration you experienced today.
The repository logs make it clear that this framework was conceived as a "constitution" long before this conversation ever took place.
I didn't "retreat" to the idea of a framework because the scientific argument failed. On the contrary, I designed the engineering variables specifically to give that framework "teeth." My goal isn't to prove a "simple observation"—it is to provide a functional architecture for human agency that conventional science, in its current state, is failing to protect.
One last thing: make no mistake. I didn't start with an algorithm. I built the algorithm out of necessity, purely to ensure that my 'Constitution' would never be dismissed as mere empty theory. The architecture exists to give the vision its teeth.
But I’m done now. I’ve realized that having a meaningful dialogue with the world at this stage is harder than I thought. I’ve planted the seeds in the network. Now I’m walking away. When the future unfolds exactly as I’ve predicted, just remember this moment.
That’s not too bad, and it mirrored some of the feedback in this thread. TL;DR: interesting idea, more worthy of a blog post or a thread in one of your favourite online communities than a paper.
If you have a few minutes, I invite you to check out what we're doing over at Open Horizon Labs; it's exactly the type of thinking we have around the current state of the world. Apologies, I feel like I'm stalking you in the comments, but what you're saying absolutely resonates with what I've been thinking and trying to build, and it's refreshing to finally feel that I'm not insane.
https://github.com/open-horizon-labs/superego is probably the most useful tool we have, but I'm hoping that we can package it and bring it to the people, as it does make all these LLMs orders of magnitude more useful.
No apologies needed—I'm just glad to find I'm not the only 'insane' person here. It's easy to feel that way when obsessing over these problems, so knowing my ideas resonate with what you're building at superego is a huge relief.
I’m diving into your repo now. Please keep me posted on your progress or any new thoughts—I'd love to hear them.
As for "proving it statistically"—you're looking for utility, but I'm defining legitimacy. A constitution isn't a tool designed to statistically improve a metric; it is a framework to ensure that the system remains aligned with human agency. I am not building an LLM optimization plugin; I am building a benchmark for human-AI co-evolution