Hacker News | rfv6723's comments

The logistical nightmare of hydrogen makes its production price almost irrelevant. Using surplus wind energy for carbon capture to create synthetic fuels is much smarter because these liquids are compatible with our current global infrastructure. You bypass the need for expensive new pipelines and specialized tanks entirely. By binding green hydrogen into a stable synthetic hydrocarbon, you get a fuel that is easy to move, has high energy density, and won't leak through solid steel.

The price of H2 is a contributing factor to the price of synthetic fuels, though. Just saying. Otherwise I agree with your points on synthetic fuel.

Solid state batteries are overhyped because their production complexity makes them a pricing nightmare for the average consumer. Sodium ion batteries are the practical choice for short distance transport because they are affordable and charge incredibly fast.

When it comes to long distance shipping or aviation, the energy density of liquid fuel is simply too hard to beat. Fossil fuels will stay dominant for decades, likely evolving into carbon captured or bio derived alternatives rather than being replaced by batteries.


This outlook is as short-sighted as the 2000 fiber optic bust. Critics then thought overcapacity meant the end, yet that infrastructure eventually created the modern internet. Capital does not walk away from a fundamental shift just because of one market correction. While specific companies may fail, the long-term value of the technology ensures that investment will continue far beyond a five-year window.

But fiber optic cable laid in 2000 is still very usable in 2026. AI hardware purchased in 2026 is going to be out of date very quickly by comparison.

The massive investment in power grids and data centers provides a permanent physical backbone that outlives any specific silicon generation. This infrastructure serves as a durable shell for the model design knowledge and chip architectural IP gained through each iteration. Capital is effectively funding a structural moat built on energy access and engineering mastery.

Seems like there’s a lot of resources being dumped into those data centers that will not be very useful. Saying it will all be worthwhile because we’ll have the buildings and the modest power grid updates (which are largely paid for by taxpayers, anyway) feels like saying a PS5 is a good long-term investment because the cords and box will still be good long after the PS5 has outlived its usefulness.

The "PS5" analogy fails to account for how "useless" hardware often triggers the next paradigm shift. For decades, traditionalists dismissed high-end GPUs as expensive toys for gamers, yet that specific architecture became the accidental engine of the AI revolution.

And you imagine these incredibly expensive-to-operate, environmentally damaging, highly specialized, years-outdated GPUs will trigger some sort of technological revolution that won’t be infinitely better served by the shiny new GPUs of the day that will not only be dramatically more powerful, but offer a ton more compute for the amount of electricity used?

The AI use of GPUs didn’t stem from a glut of outdated, discarded units with nearly no market value. All of those old discarded GPUs were, and still are, worthless digital refuse.

The closest analog i can think of to what you’re referring to is cluster computing with old commodity PCs that got companies like Google and Hotmail off the ground… for a few years until they could afford big boy servers and now all of those, and most current PCs on the verge of obsolescence, are also worthless digital refuse.

The big difference is that Google et al chose those PC clusters because they were cheap, commodity pieces right off-the-bat, not because they were narrowly scoped specialty hardware pieces that collectively cost hundreds of billions of dollars.

Your supposition fails to account for our history with hardware in any reasonable way.


Focusing exclusively on the physical decay and replacement cycle of hardware is a classic case of tunnel vision. It ignores the fact that the semiconductor industry’s true value lies in the evolution of manufacturing processes and architectural design rather than the lifespan of a specific unit. While individual chips eventually become obsolete, the compounding breakthroughs in logic and efficiency are what actually drive the technological revolution you are discounting.

Tunnel vision is ignoring the astonishing amount of money and environmental resources our society is dumping into these very physical, very temporarily useful chips and their housing because… of what we learn by doing that. We should have dumped 1/100th of that money into research and we’d have been further along.

This isn’t a normal tech expenditure: the scale of this threatens the economy in a serious way if they get it wrong. That’s 401ks, IRAs, pension plans, houses foreclosed on, jobs lost, surgeries skipped… if we took a tiny fraction of this race to hypeland and put it towards childhood food insecurity, we could be living in a fundamentally different looking society. The big takeaway from this whole ordeal has nothing to do with semiconductors; it is that rich guys playing with other people’s money, singularly focused on becoming king of the hill, are still terrible stewards of our financial system.


Dismissing massive capital expenditure as "hypeland" ignores the historical reality that speculative bubbles often build the physical foundation for the next century. The Panic of 1873 saw a catastrophic evaporation of debt-driven capital, yet the "worthless" railroads built during that frenzy remained in the ground. That redundant, overbuilt infrastructure became the literal backbone of American industrialization, providing the logistics required for a global economic shift that far outlasted the initial financial ruin.

Divorcing research from "learning by doing" is a recipe for a bureaucratic ivory tower. If you only funnel money into pure research without the messy, expensive, and often "wasteful" reality of large-scale deployment, you end up with an economy of academic metrics rather than industrial power.

The most damning evidence against the "research-only" model is the birth of the Transformer architecture. It did not emerge from an ivory tower funded by bureaucratic grants or academic peer-review cycles; it was forged in the fires of industrial practice.

History shows that a fixation on immediate social utility or "rational" cost analysis can be a strategic trap. During the same era, Qing Dynasty bureaucrats employed your exact logic, arguing that the astronomical costs of industrialization and rail were a waste of resources better spent elsewhere. By prioritizing short-term stability over "expensive" technological leaps, they missed the industrial window entirely. Two decades later, they faced an industrialized Japan in 1894 and suffered a total collapse. The "waste" of one generation is frequently the essential infrastructure of the next.


How much capital was wiped out for it to be cheap after the bust? Someone is going to eat the exuberance loss in the near term, even if there is long term value.

Humanity has never known a world without surveillance. Responsibility cannot exist without being watched. Primitive tribes lived under the constant eye of the group, and agricultural eras relied on the strict oversight of the clan. Modern states simply adopted new tools for an ancient necessity. A society without monitoring is a society without accountability, which only leads to the Hobbesian trap of endless conflict.

Mass surveillance is a relatively recent development. Dense urban civilizations are not. And yet their denizens have not historically devolved into a “nasty, brutish, and short” existence. In fact, cities have been centers of culture and learning throughout history. How does this square with your theory?

The 19th century was the true cradle of mass surveillance. Civil registration, property tracking, and institutionalized police forces provided the systemic oversight required to manage dense urban life. These administrative tools served as the analogue version of digital monitoring to ensure every citizen remained known and categorized. Cities thrived as centers of culture only because these new forms of visibility prevented the Hobbesian collapse that anonymity would have otherwise triggered.

And what about all of the previous ~40-50 centuries where cities were centers of learning and art and not Hobbesian hell holes? Ur is slightly older than the 19th century, I believe.

And note that there is evidence for cities of tens of thousands of inhabitants from 3000 BCE, while Rome reached 1,000,000 residents by 1 CE. Again, without becoming some Hobbesian nightmare.


Augustus established the Vigiles Urbani and the Urban Cohorts, creating a state-funded police and firefighting force to replace the chaotic and often violent system of private client-patron justice. These were the bold, persistent experiments in social order that allowed a million people to coexist without descending into a Hobbesian hell.

None of those things are remotely comparable to the surveillance we're talking about. There's a world of difference between, "My city knows who owns what properties and also we have a police force", and "Western intelligence agencies scoop up every bit of data they can grab about anyone on the planet and store it forever"

In my country it wasn't until the late 19th century that someone had the balls to stop going to church on Sunday. It was a huge scandal at the time but it all worked out in the end.

Humans have always done mass surveillance on each other. You don't need technology for that.


At no point in time before this era was it possible for a random bureaucrat to have a reasonably comprehensive list of everyone in a country who attended church yesterday.

Scale matters.


This is a reduction to absurdity. Those old societies you cite didn't actively surveil with the goal of micromanaging people's daily lives the way that modern ones do.

Rural surveillance was far more suffocating because every single action was subject to the community gaze. This is exactly why classic literature frames the journey to the city as a liberation from the crushing weight of the village eye. The idea of the peaceful countryside is a modern utopian fantasy that ignores how ancient clans dictated every aspect of life including marriage and death. Modern Homeowners Associations prove that localized oversight is often the most intrusive form of management. Ancient society did not just monitor people; it owned their entire existence through inescapable social visibility.

"It was always shit everywhere" is revisionist history born out of the fantasy of statists looking to justify the modern (administrative) enforcement state.

While the lack of anonymity in small towns certainly puts a damper on one's ability to deviate too far from social norms, the list of things that could get you subjected to government violence without creating a victimized party was infinitely shorter. Things that get state or state-deputized enforcers on your case today were, 150+ years ago, matters of "yeah, that's distasteful, he'll have to settle that with God," or would only come back to bite you later when something happened, because society did not have the surplus to justify paying nearly as many people to go around looking for deviance that could be leveraged to extract money. These people had way more practical day-to-day freedom to run and better their lives than we do now, even if constrained by the fact that they had substantially less wealth to leverage to that effect.

> Modern Homeowners Associations prove that localized oversight is often the most intrusive form of management

And they almost exclusively deal in things that historical societies didn't even bother to regulate.

You're beyond delusional if you think running afoul of an HOA is worse than running afoul of the local, state or federal government. Yeah, they can screech and send you scary letters with scary numbers, but they don't get the buddy treatment from courts that "real" governments do (to the great injustice of their victims), and their procedural avenues for screwing their victims on multiple axes are way more limited.

Seriously, go get in a pissing match with a municipality over just where the line for "requires permit" is and get back to me. Unless you want to do something that is more than petty cosmetic stuff and unambiguously in violation of the rules, an HOA is a paper tiger for the most part (not to say that they don't suck).


Modern bureaucracy provides the institutional architecture and political recourse needed to check such arbitrary local tyranny. Without a central legal authority, an HOA or a town council becomes a lawless fiefdom. In those "freer" times, falling out with the local elite meant you didn't fight a permit; you simply had to pack your life and leave.

That's an incredibly bullshit argument to defend the indefensible.

Your reaction actually proves the point. Aggression thrives in anonymous spaces because the lack of oversight removes the weight of accountability. When people feel unobserved, they quickly abandon the social friction that once held tribes and clans together. You are essentially providing a live demonstration of why a society without any form of monitoring inevitably slides into the Hobbesian trap.

I don't think a random internet comment proves anything about society at large.

People don't hesitate to be aggressive even when they're not anonymous and there's a threat of accountability - see, all crime, or people just acting shitty toward others.

Mass surveillance does not cause everyone to magically get along.


History shows that whenever surveillance gaps appear, chaos follows. The explosion of crime during early urbanization was the specific catalyst for the creation of modern police forces because traditional social bonds had failed to provide oversight in growing cities. Japan maintains its safety through a deep-rooted culture of mutual neighborhood monitoring that leaves little room for anonymity. Even China successfully quelled the violent crime waves of its early economic boom by implementing a sophisticated surveillance network.

Neither police forces nor "neighborhood monitoring" are equivalent to mass surveillance, though.

Anyway I'm curious why - despite having less anonymity than at any point in history, at least from the perspective of law enforcement - we still see high crime rates, from fraud to murders?


This scenario echoes the fatal flaw of 19th-century Marxist theory by assuming that surging productivity leads to a permanent reserve army of the unemployed and systemic collapse. Marx failed to foresee how the 20th-century economy would elastically adapt through the birth of a massive service sector that absorbed labor displaced by industrial automation.

While this Global Intelligence Crisis assumes a rigid endgame where machines spend nothing and humans lose everything, it ignores the historical reality that human desires are infinite. As AI commoditizes current white-collar tasks, the economy will pivot toward new and currently unimaginable domains of human value. A 19th-century economist could never have predicted the rise of cybersecurity or the creator economy, and we are likely in a similar pre-prediction stage today. Betting against human adaptability has been a losing trade for two hundred years because our social and economic structures have always evolved to find new utility for human agency.


>it ignores the historical reality that human desires are infinite

This is factually false. Human desires are only infinite for things that have positive utility and cost nothing and by nothing I mean nothing. The moment you have to spend even a single second thinking whether you want to buy or not, demand collapses from infinite to finite by definition.

This means people will accumulate infinite quantities of money, stocks, etc, but never infinite quantities of anything concrete that exists in the real world.


Reality might stop a transaction, but it cannot kill a drive. Sublimation reroutes our infinite hunger into the scientist’s obsession or the artist’s lifelong pursuit of beauty. These are not finite market choices. They are the redirection of a psychic energy that no physical object can ever satisfy.

This redirection is precisely what fuels the expansion of the global economy into realms far beyond basic survival. When a primal drive is blocked by the cost of a physical object, it sublimates into the high-end art market, the pursuit of scientific breakthroughs, or the infinite scroll of digital entertainment. Entire industries exist solely to harvest this redirected energy.


Historical records, notably by Herodotus, confirm that the Persian Empire used gold to bribe Greek Oracles, turning "divine prophecy" into a psychological warfare tool.

This mirrors a core flaw in Polymarket: profit maximization is not truth-seeking. Just as Persian bribes manipulated ancient morale, modern "whales" can distort market odds to manufacture narratives or hedge external interests. In both cases, the prediction is a commodity sold to the highest bidder rather than an objective forecast of reality.


And newspapers are owned by fatcats. But we are still interested in what they have to say.

This comparison is flawed because accountability creates a structural divide. A newspaper has a visible masthead and named editors, creating a reputational stake where consistent bias leads to institutional ruin.

In contrast, Polymarket relies on pseudonymous liquidity. A "whale" can use a "Persian bribe" to distort odds and then vanish without consequence. While a newspaper offers a testable argument, Polymarket provides a "math-washed" price signal that allows financial manipulation to masquerade as objective probability.


> objective probability.

i don't believe such a concept exists. if you do, then you have greater epistemic problems that should be resolved first, before reading either the newspaper, or the prediction market.


Dismissing "objective probability" is a convenient philosophical retreat that strips Polymarket of its only legitimate function. If the market isn’t an attempt to aggregate information toward a binary, external "ground truth," then it isn't a forecasting tool—it’s a "Keynesian Beauty Contest" where people bet on what they think others believe rather than what will actually happen.

Without an objective anchor to measure against, concepts like "mispricing" or "alpha" become logically impossible; you cannot have a "wrong" price if you don't believe a "right" probability exists. If we accept that the market signal is just a reflection of whale liquidity and "Persian bribes" rather than a calculated proximity to reality, then the platform is merely a math-washed gambling hall. Ultimately, a prediction market that abandons the pursuit of objective truth loses its epistemic utility and its entire reason to exist.
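To make that concrete with a toy calculation (all numbers invented): "alpha" is only definable as the gap between a market price and some assumed ground-truth probability.

```python
def expected_profit(true_p: float, market_price: float, stake: float = 1.0) -> float:
    """EV of buying YES shares at market_price when the event's
    objective probability is true_p (each share pays 1.0 on YES).
    Without a true_p to plug in, 'mispricing' has no meaning."""
    shares = stake / market_price
    return true_p * shares - stake

# A market priced at 0.50 for an event with true probability 0.60
# is "mispriced": buying YES has positive expected value.
assert round(expected_profit(0.6, 0.5), 6) == 0.2
# At a fair price, expected profit is zero and there is no alpha.
assert round(expected_profit(0.5, 0.5), 6) == 0.0
```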


prediction markets are a useful tool for aggregating information about uncertain events.

The skepticism surrounding AGI often feels like an attempt to judge a car by its inability to eat grass. We treat "cognitive primitives" like object constancy and causality as if they are mystical, hardwired biological modules, but they are essentially just high-dimensional labels for invariant relationships within a physical manifold. Object constancy is not a pre-installed software patch; it is the emergent realization of spatial-temporal symmetry. Likewise, causality is nothing more than the naming of a persistent, high-weight correlation between events. When a system can synthesize enough data at a high enough dimension, these so-called "foundational" laws dissolve into simple statistical invariants. There is no "causality" module in the brain, only a massive correlation engine that has been fine-tuned by evolution to prioritize specific patterns for survival.

The critique that Transformers are limited by their "one-shot" feed-forward nature also misses the point of their architectural efficiency. Human brains rely on recurrence and internal feedback loops largely as a workaround for our embarrassingly small working memory—we can barely juggle ten concepts at once without a pen and paper. AI doesn't need to mimic our slow, vibrating neural signals when its global attention can process a massive, parallelized workspace in a single pass. This "all-at-once" calculation of relationships is fundamentally more powerful than the biological need to loop signals until they stabilize into a "thought."

Furthermore, the obsession with "fragility"—where a model solves quantum mechanics but fails a child’s riddle—is a red herring. Humans aren't nearly as "general" as we tell ourselves; we are also pattern-matchers prone to optical illusions and simple logic traps, regardless of our IQ. Demanding that AI replicate the specific evolutionary path of a human child is a form of biological narcissism. If a machine can out-calculate us across a hundred variables where we can only handle five, its "non-human" way of knowing is a feature, not a bug. Functional replacement has never required biological mimicry; the jet engine didn't need to flap its wings to redefine flight.


Hey, thanks for responding. You're a very evocative writer!

I do want to push back on some things:

> We treat "cognitive primitives" like object constancy and causality as if they are mystical, hardwired biological modules, but they are essentially just

I don't feel like I treated them as mystical - I cite several studies that define what they are and correlate them to certain structures in the brain that have developed millennia ago. I agree that ultimately they are "just" fitting to patterns in data, but the patterns they fit are really useful, and were fundamental to human intelligence.

My point is that these cognitive primitives are very much useful for reasoning, and especially the sort of reasoning that would allow us to call an intelligence general in any meaningful way.

> This "all-at-once" calculation of relationships is fundamentally more powerful than the biological need to loop signals until they stabilize into a "thought."

The argument I cite is from complexity theory. It's proof that feed-forward networks are mathematically incapable of representing certain kinds of algorithms.

> Furthermore, the obsession with "fragility"—where a model solves quantum mechanics but fails a child’s riddle—is a red herring.

AGI can solve quantum mechanics problems, but verifying that those solutions are correct still (currently) falls to humans. For the time being, we are the only ones who possess the robustness of reasoning we can rely on, and it is exactly because of this that fragility matters!


> The argument I cite is from complexity theory. It's proof that feed-forward networks are mathematically incapable of representing certain kinds of algorithms.

Claiming FFNs are mathematically incapable of certain algorithms misses the fact that an LLM in production isn't a static circuit, but a dynamic system. Once you factor in autoregression and a scratchpad (CoT), the context window effectively functions as a Turing tape, which sidesteps the TC0 complexity limits of a single forward pass.
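A toy sketch of that argument (nothing here is a real LLM): each "forward pass" does only a constant amount of work, but looping it autoregressively with a scratchpad in the context computes parity over arbitrary-length input, which a single bounded-depth pass cannot do in general.

```python
def forward_pass(context):
    """One 'step': read the scratchpad state and consume one input bit.
    This stands in for a single constant-depth forward pass."""
    state, pos, bits = context
    if pos == len(bits):
        return ("HALT", state)                 # emit the final answer
    return ("STEP", (state ^ bits[pos], pos + 1, bits))

def run_with_scratchpad(bits):
    """Autoregression: each output is appended back into the context,
    so the context window acts like a (bounded) Turing tape."""
    context = (0, 0, bits)
    while True:
        kind, payload = forward_pass(context)
        if kind == "HALT":
            return payload
        context = payload

assert run_with_scratchpad([1, 0, 1, 1]) == 1  # odd number of 1s
assert run_with_scratchpad([1, 1]) == 0        # even number of 1s
```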

> AGI can solve quantum mechanics problems, but verifying that those solutions are correct still (currently) falls to humans. For the time being, we are the only ones who possess the robustness of reasoning we can rely on, and it is exactly because of this that fragility matters!

We haven't "sensed" or directly verified things like quantum mechanics or deep space for over a century; we rely entirely on a chain of cognitive tools and instruments to bridge that gap. LLMs are just the next layer of epistemic mediation. If a solution is logically consistent and converges with experimental data, the "robustness" comes from the system's internal logic.


If human biological intelligence is our reference for general intelligence, then being skeptical about AGI is reasonable given its current capabilities. This isn't biological narcissism, this is setting a datum (this wasn't written by chatgpt I promise).

Humans have a great capacity for problem solving and creativity which, at its heights, completely dwarfs other creatures on this planet. What else would we reference for general intelligence if not ourselves?

My skepticism towards AGI is primarily supported by my interactions with current systems that are contenders for having this property.

Here's a recent conversation with chatgpt.

https://chatgpt.com/share/69930acc-3680-8008-a6f3-ba36624cb2...

This system doesn't seem general to me; it seems like a specialized tool that has really good logic-mimicry abilities. I asked it if the silence response was hard coded; it said no, then went on to explain how the silence was hard coded via a separate layer from the LLM portion, which would otherwise just respond indefinitely.

Its output is extremely impressive, but general intelligence it is not.

On your final point about functional replacement not requiring biological mimicry: we don't know whether biological mimicry is required or not. We can only test things until we find out, or gain some greater understanding of reality that allows us to prove how intelligence emerges.


The "world model" is a convenient fiction. Whether we’re talking about a carbon-based brain or a silicon-based transformer, there is no miniature, objective map of reality tucked away inside. What we mistake for a "model" is actually just the layered residue of experience.

From the perspective of enactivism and radical empiricism, intelligence doesn't "represent" the world; it simply navigates it. A biological organism doesn't need a 3D CAD file of a tree to survive; it only needs a history of sensory-motor contingencies—the "if I move this way, I see that" patterns. It’s a synthesis of interactions, not a library of blueprints.

AI operates on the same logic, albeit through a different medium. It isn't simulating the physical laws of the universe or "understanding" gravity. Instead, it navigates the high-dimensional geometry of human data. It’s a sophisticated engine of association, performing a high-speed synthesis of the patterns we've left behind.

In this view, "knowing" isn't about matching an internal image to an external truth. It is the seamless flow of past inputs into future predictions. There is no world model—only the habit of being.


Does your team have Chinese members?

GFW has been able to filter SNI to block https traffic for a few years now.


We do, and from what we know a bigger problem in China is detecting traffic patterns. SNI filtering is not that big of a deal; in order to block your domain, the GFW first needs to learn which one you’re using. As for the traffic patterns, people in China prefer to selectively route traffic to the tunnel. For instance, the client apps allow you to route *.cn domains (or any other domains) directly. It makes it harder to detect that you’re using a VPN.


In Fujian province, all foreign domains which aren't on a whitelist are blocked.

This means the proxy server needs to use a fake SNI from the whitelist or ditch HTTPS.


This is actually supported by both the client and the server.

To use it in mobile clients you need to specify two domain names like this: fake-sni.com|domain.com, where “fake-sni.com” is the domain that will be in the SNI and “domain.com” is the domain in your TLS certificate (used to check the server’s authenticity).
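A minimal sketch of how a client might parse that two-name spec (the `split_endpoint` helper is my invention for illustration, not TrustTunnel's actual code):

```python
def split_endpoint(spec: str) -> tuple[str, str]:
    """Parse 'fake-sni.com|domain.com' into (sni_name, cert_name).

    The name before '|' is what gets sent in the TLS ClientHello SNI
    field; the name after it is what the server's certificate is
    validated against. A plain 'domain.com' uses the same name for both.
    (Hypothetical helper illustrating the format described above.)
    """
    if "|" in spec:
        sni, real = spec.split("|", 1)
        return sni, real
    return spec, spec

# The SNI name is the one a whitelist-based filter sees on the wire;
# the certificate name is what actually authenticates the server.
assert split_endpoint("fake-sni.com|domain.com") == ("fake-sni.com", "domain.com")
assert split_endpoint("domain.com") == ("domain.com", "domain.com")
```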


I tried the method you suggested on the Android client, but it doesn't seem to work. After setting the domain name to two domains connected by `|`, the client fails to connect to the server and remains stuck in a “connecting” state.

Is this feature not yet supported on Android?


How do you do this on iOS?


You mean in TrustTunnel apps? You can create a routing profile there and select which domains/ips are bypassed, and then select that routing profile in the vpn connection settings.


>GFW has been able to filter SNI to block https traffic for a few years now.

SNI isn't really the threat here, because any commercial VPN is going to be blocked by IP, no need for SNI. The bigger threat is tell-tale patterns of VPN use because of TLS-in-TLS, TLS-in-SSH, or even TLS-in-any-high-entropy-stream (eg. shadowsocks).


> because any commercial VPN is going to be blocked by IP, no need for SNI.

A proxy server can hide behind a CDN like Cloudflare via a websocket tunnel.

This is why the GFW developed SNI filtering; Cloudflare is too big to block.


>Proxy server can hide behind CDN like Cloudflare via websocket tunnel.

Cloudflare doesn't support domain fronting, so any SNI spoofing won't work.


CDN traffic is quite expensive, don’t believe it would be feasible to provide a VPN product for that. But for individuals, sure.


Emoji and bullet points are easy to read, so they got rewarded in the RLHF process.

You may hate this style at first glance. But if you read lots of text every day, emoji and bullet points lower the cognitive load.


Emojis, when used the way these models use them, make text way harder for me to read. It's distracting and adds nothing to the text.


I find it makes lists easier to read and think it actually looks nice. But it destroys my ability to sort. So I use this style sparingly, because most list information I look at often enough to want it to look nice is also information I want to sort.

