Hacker News | awei's comments

It got me super excited!


Strangely reminiscent of the beginnings of spider intelligence in "Children of Time"


I read that book blind recently. Did not expect the spiders, but ended up liking those chapters the most.


I came here to mention the same book.


I see how useful a universal UI language that works across platforms would be, but when I look at some examples from this protocol, I have the feeling it will eventually converge on what we already have: HTML. Instead of making all platforms support this new universal markup language, why not make them support HTML, which some already do, and which LLMs are already trained on?

Some examples from the documentation:

    {
      "id": "settings-tabs",
      "component": {
        "Tabs": {
          "tabItems": [
            {"title": {"literalString": "General"}, "child": "general-settings"},
            {"title": {"literalString": "Privacy"}, "child": "privacy-settings"},
            {"title": {"literalString": "Advanced"}, "child": "advanced-settings"}
          ]
        }
      }
    }

{ "id": "email-input", "component": { "TextField": { "label": {"literalString": "Email Address"}, "text": {"path": "/user/email"}, "textFieldType": "shortText" } } }


A key challenge with HTML is client-side trust. How do I enable an agent platform (say Gemini, Claude, OpenAI) to render UI from an untrusted third-party agent that's integrated with the platform? This is a common scenario in the enterprise version of these apps, e.g. I want to use the agent from (insert SaaS vendor) alongside my company's homegrown agents and data.

Most HTML is actually HTML+CSS+JS; IMO, accepting it is a code-injection attack waiting to happen. By abstracting to JSON, a client can safely render UI without this concern.


If the JSON protocol in question supports arbitrary behaviors and styles, then you still have an injection problem, even over JSON. If it doesn't support them, you don't need to support them in an HTML protocol either, and you can solve the injection problem the way we already do: by sanitizing the HTML to remove all or some (depending on your specific requirements) script tags, event listeners, etc.
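
As a minimal sketch of that approach, assuming DOMPurify as the sanitizer, an allow-list leaves only structural markup standing:

    import DOMPurify from "dompurify";

    // Untrusted markup from a third-party agent.
    const dirty = `<div onclick="steal()">Hi<script>evil()</script></div>`;

    // Allow-list tags and attributes; script tags and on* handlers are dropped.
    const clean = DOMPurify.sanitize(dirty, {
      ALLOWED_TAGS: ["div", "span", "p", "label", "input", "button"],
      ALLOWED_ATTR: ["id", "for", "type", "value"],
    });
    // clean === "<div>Hi</div>"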


> A key challenge with HTML is client side trust. How do I enable an agent platform (say Gemini, Claude, OpenAI) to render UI from an untrusted 3p agent that’s integrated with the platform?

Just like you do with your web browser. A web browser is a Remote Code Execution engine.


Perhaps the protocol is then HTML/CSS/JS in a strict sandbox. The component has no access to anything outside of the component bounds (no network, no DOM/object access, no draw access, etc.).


I think you can do that with an iframe, but it always makes me nervous
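
Something like this minimal sketch, where the empty sandbox attribute applies every restriction (no scripts, no forms, no same-origin access, no top navigation):

    // Hypothetical markup received from an untrusted agent.
    const untrustedComponentHtml = `<p>Hello from a third-party agent</p>`;

    const frame = document.createElement("iframe");
    frame.setAttribute("sandbox", "");   // empty sandbox = maximal restrictions
    frame.srcdoc = untrustedComponentHtml;
    document.body.appendChild(frame);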


Right, this makes sense. I wonder if it would then be a good idea to abstract HTML to JSON, making it impossible to include CSS and JS in it


Curious to learn more about what you're thinking.

One challenge is that you likely do want JS to process/capture the data; for example, taking the data from a form and turning it into JSON to send back to the agent, as in the sketch below.
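
A minimal sketch of that capture step on the host page (the #email-form selector and /agent/reply endpoint are made up):

    const form = document.querySelector<HTMLFormElement>("#email-form")!;

    form.addEventListener("submit", async (event) => {
      event.preventDefault();
      // Turn the form fields into a plain JSON object.
      const payload = Object.fromEntries(new FormData(form).entries());
      await fetch("/agent/reply", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(payload),
      });
    });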


If you play with A2UI's generator, that's effectively what it does, just a layer of abstraction or two above what you're describing.


That's what I thought too, skimming through the documentation. My thinking is that since it does that, which makes sense to avoid script injection, why not do it with "JSONized" HTML, as sketched below?
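
To illustrate, a sketch of what such JSONized HTML could look like (my own shape, not A2UI's): a recursive node type with no slot for scripts or styles, leaving the renderer to allow-list tags and attributes.

    // Structure and text only; nowhere to put CSS or JS.
    type JsonNode =
      | { tag: string; attrs?: Record<string, string>; children?: JsonNode[] }
      | { text: string };

    const emailField: JsonNode = {
      tag: "label",
      children: [
        { text: "Email Address" },
        { tag: "input", attrs: { id: "email", type: "email" } },
      ],
    };
    // A renderer would still reject attrs like "onclick" or "style".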


I was thinking that raw HTML might be too verbose, but canned components do have signatures and types.


The EU represents about 24% of the global SaaS market, for example ($95 billion in 2024, against a global SaaS market of ~$408 billion in 2025). For comparison, North America leads globally at around 43-50% market share.


The EU represents around 500 million wealthy consumers, compared to around 300 million in the US.


Yeah, which is why the EU has a much larger GDP. Global companies make significantly more revenue in the EU than in the US. And of course the EU economy is growing much faster than the US and doesn’t face any demographic headwinds.

Oh, wait.


That is completely orthogonal to my point.

You can only sell so many iPhones in the US before the market is completely saturated. A market with nearly twice as many people who can afford them sure is tempting.


maybe it’d be cool if this EU market with all these people can find a few smart ones to gang up together and form a company that serves EU people and complies with whatever EU dreams up on any given day - boom - problem solved :)


It's quite amazing how arrogant Americans are, despite being a thing for like 10 generations, and how brainwashed they are into believing regulations are bad, to the point of being borderline enraged that the EU exists. You might want to open a few history books, or books in general, and look up the history of your all-American inventions.


cool mate, you should then enjoy your inventions and ingenuity from Greek Gods and stuff and maybe read a book or two on AI by Plato :-) Too funny (and I am European)


Clearly nothing happened in Europe between Plato and 2025...


The one thing that space has going for it is space. You could have way bigger datacenters than on Earth and just leave them there, assuming Starship makes it cheap enough to get them up. I think it would maybe make sense if two things hold:

- We are sure we will need a lot of GPUs for the next 30-40 years.
- We can give the solar panels + cooling + GPUs a great life expectancy, so that we can just leave them up there and accumulate them.

Latency-wise it seems okay for LLM training to put them higher than Starlink, to make them last longer and avoid decelerating because of the atmosphere. And for inference, well, if the infra can be amortized over decades then it might make the inference price cheap enough to endure the additional latency.

Concerning communication, I think SpaceX already has inter-Starlink laser comms, at least as a prototype.


You can't just "leave them there", though. They orbit at high speed, which effectively means they take up vastly more space, with other objects orbiting at high speed intersecting those orbits. The most useful orbits are relatively narrow bands shared with a lot of other satellites and a fair amount of debris, and orbits tend to decay over time (a problem in low Earth orbit because they'll decay all the way into the atmosphere, and a problem in geostationary orbit because you'll lose the stationary part that makes the comms links work). This is a solvable problem with propulsion, but that entails bringing the propellant with you and accepting end-of-life (or an expensive refuelling operation) when it runs out. The cost of maintaining real estate in space is vastly more than outright owning land.

Similarly, making stuff have a great life expectancy is much more expensive than optimizing it for cost and operational requirements and keeping it somewhere you can replace individual components as and when they fail; it's also much easier to maximise life expectancy somewhere bombarded by considerably less radiation.


There is lots and lots and lots of space on Earth where hardly anyone is living. Cheap rural areas can support extremely large datacenters, limited only by availability of utilities and workers.


We also have to build a lot more solar and nuclear in addition to the datacenters themselves, which we need to do anyway, but it compounds the land we use for energy production.


Yet a colossal number of servers on satellites would require the same energy-production facilities to be shipped into orbit (and to receive regular maintenance in orbit whenever they fail), which requires loads of land for launch facilities as well as processing for fuel and other consumable resources. Solar might be somewhat more efficient, but not nearly so much so as to make up for the added difficulty of cooling. One could maybe postulate asteroid mining and space manufacturing to reduce the total delta-V requirement per satellite-year, but missions to asteroids have fuel requirements of their own.

If anything, I'd expect large-scale Mars datacenters before large-scale space datacenters, if we can find viable resources there.


It makes sense. I would be curious to see the price computations done by the various space-GPU startups and Big Tech; I wonder how they arrive at a cheaper cost, or maybe it is just marketing.


Launching a datacenter like that carries an absurd cost even with Starship type launchers. Unless TSMC moves its production to LEO it's a joke of a proposal.

Underwater [0] is the obvious choice for both space and cooling. Seal the thing and chuck it next to an internet backbone cable.

> More than half the world’s population lives within 120 miles of the coast. By putting datacenters underwater near coastal cities, data would have a short distance to travel

> Among the components crated up and sent to Redmond are a handful of failed servers and related cables. The researchers think this hardware will help them understand why the servers in the underwater datacenter are eight times more reliable than those on land.

[0] https://news.microsoft.com/source/features/sustainability/pr...


I like the underwater idea; I did not think of that.


The problem is that a lot of people like the underwater idea, and I'm worried we're heading towards something like literally boiling the ocean, as they say.


No worries, the oceans are cooked already.

https://www.ipcc.ch/srocc/chapter/technical-summary


Space is not much of an issue for datacenters. For one thing, compute density is growing; it's not uncommon for a datacenter to be capacity-limited by power and/or cooling before space becomes an issue, especially for older datacenters.

There are plenty of data centers in urban centers; most major internet exchanges have their core in a skyscraper in a significant downtown, and there will almost always be several floors of colo space surrounding that, and typically in neighboring buildings as well. But when that is too expensive, it's almost always the case that there are satellite DCs in the surrounding suburbs. Running fiber out to the warehouse district isn't too expensive, especially compared to putting things in orbit, and terrestrial power delivery has got to be a lot less expensive and more reliable too.

According to a quick search, Starlink has one 100G space laser per equipped satellite; that's peanuts compared to terrestrial equipment.


We have tons of space on earth. Cooling in space would be so expensive.


Falcon Heavy is only $1,500/kg to LEO. This rate is considerably undercut here on Earth by me, a weaselly little nerd, who will move a kilogram in exchange for a pat on the head (if your praise is desirable) or up to tens of dollars (if it isn't).


In exchange for what benefit? There is literally no benefit to having a datacenter in space.


The benefit is capturing a larger percentage of the output of the sun than what hits the earth.


Can that really work? The datacentre will surely be measurably smaller than the earth.


Does your transportation system also have a risk of exploding catastrophically mid-flight? 'cause otherwise no deal. /s


Starship is on a fast track to failure. It is not a cheaper way to get to orbit and will never get there at the current pace. And even if it were, it would not make getting to orbit so cheap that it would somehow make it economically viable to put a datacenter there.

You still have to build the GPUs, etc., for the datacenter whether it's on Earth or in orbit. But to put it in space you also need a massive new cooling solution, radiation shielding, orbital boosting, and data-transmission bandwidth, and you have to launch all of that.

And then, there are zero benefits to putting a datacenter in space over building it on Earth. So why would you want to add all that extra expense?


It will make getting to orbit cheaper, significantly so, but I can't see it being rapidly reusable. Rapidly refurbishable perhaps if Starship were modular and the heat shield could be quickly swapped out on site where necessary. But being able to top off the methalox and fly again? That's a pipe dream. Orbital spaceflight isn't like air travel in any sense.


What use is having lots of space, when to actually build out that space you need mass, which is absurdly expensive to launch?


Why does what it powers matter? As long as it can power something.

The obsolete stuff can be deorbited or recycled in space.


Something is weird here: why is it so hard to have a deterministic program capable of checking a proof, or anything math-related? Isn't math super deterministic, where natural language is not? From first principles, it should be possible to do this without an LLM verifier.


I think that mathematical proofs, as they are actually written, rely on natural language and on a large amount of implicit shared knowledge. They are not formalized in the Principia Mathematica sense, and they are even further from the syntax required by modern theorem provers. Even the most rigorous proofs such as those in Bourbaki are not directly translatable into a fully formal system.


If you don't mind stretching your brain a bit, Wittgenstein was obsessed with this notion. https://www.bu.edu/wcp/Papers/Educ/EducMaru.htm#:~:text=Witt...


Verifying math requires something like Lean, which is a huge bottleneck, as the paper explains.

Plus, there isn't a lot of training data in Lean.

Most gains come from training on stuff already out there, not really from the RLVR part, which just amps it up a bit.


> why is it so hard to have a deterministic program capable of checking a proof or anything math related, aren't maths super deterministic when natural language is not.

Turing machines are also deterministic, but there is no algorithm that can decide whether any given Turing machine halts. What you're asking for is a solution to the Halting Problem.

That's the first problem, the second problem is that any such system that didn't support natural language would require a formal language of some sort, and then you would have to convince every mathematician to write their proofs in your language so it can be checked. All attempts at this have failed to gain much traction, although Lean has gotten pretty far.
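
For flavor, here is what one of those fully formal statements looks like in Lean 4 (a toy example of my own, nothing from the paper); the kernel checks it deterministically, but writing real mathematics in this style is the part that hasn't gained traction:

    -- A machine-checkable proof that addition on naturals is commutative,
    -- discharged by a lemma from the core library.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b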


Math can be super deterministic but often difficult to compute, because of concepts like inferring by induction. I personally had to unlearn and rebase my computation-based understanding of math to 'get' pure maths. Another example is set building: in pure math you often don't need to compute the existence of the members of a set, you just need to agree that there are some members that meet the criteria. How many there are, or how many things aren't in the set, often isn't meaningful; you accept it and move on with the proof. From the computing perspective this can be difficult to put together.


Checking the validity of a given proof is deterministic, but filling in the proof in the first place is hard.

It's like chess: checking who wins for a given board state is easy, but coming up with the next move is hard.

Of course, one can try all possible moves and see what happens. Similar to chess AIs based on search methods (e.g. minimax), there are proof search methods. See the related work section of the paper.
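
A toy sketch of that search view (the proof states and tactics here are made up; real provers search over far richer structures). Checking a finished candidate is cheap; the search itself is the exponential part:

    type State = string;                       // a hypothetical proof state
    type Tactic = (s: State) => State | null;  // null = tactic does not apply

    // Depth-limited search over tactic applications, like game-tree search.
    function prove(state: State, tactics: Tactic[], depth: number): boolean {
      if (state === "done") return true;       // goal closed: cheap to verify
      if (depth === 0) return false;
      return tactics.some((t) => {
        const next = t(state);
        return next !== null && prove(next, tactics, depth - 1);
      });
    }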


who likely wins, FTFY


I haven’t read the paper yet, but I’d imagine the issue is converting the natural language generated by the reasoner into a form where a formal verifier can be applied.


Such a high-performance program could indeed potentially be superior, if it existed (this area is very undeveloped; there is no well-established, distributed solution that can handle a large domain) and if math were formalized in that program's DSL, which also hasn't happened yet.


Thanks to everyone who replied, I understand it better now!


So AWS S3 Glacier might actually be cold


One issue I see is when steps in a plan depend on one another: you cannot know all the next steps exactly before seeing the results of the previous ones, and you may sometimes have to backtrack.


This is actually a good insight, worded in a simple way that clicked in my brain. Thanks!


For a moment I thought they were making hobbyist PCBs you can put in your body.


But is it maybe the difference between complicated, which indecipherable definitely is, and complex, in the sense of being composed of many things?


You don't think a pure functional program can be composed of many things?


Right, a pure functional program can be composed of many things, so it can be complex even without side effects. Only a program with side effects is necessarily complex, as it includes at least two components, if not more. Another interesting thought: you can always encapsulate a purely functional program in a black box with one input and one output. Doing so with a program whose side effects might themselves be programs, with side effects and interfaces of their own, is probably much more difficult.
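
A small sketch of that black-box point, with made-up functions: the pure pipeline, however many parts it has, is externally just one input-to-output boundary, while a side effect adds a second interface.

    // Three parts, one boundary: string in, number out.
    const normalize = (s: string) => s.trim().toLowerCase();
    const tokenize = (s: string) => s.split(/\s+/);
    const wordCount = (s: string) => tokenize(normalize(s)).length;

    // A side-effecting variant leaks an extra interface (the console).
    const wordCountLoud = (s: string) => {
      console.log(s);
      return wordCount(s);
    };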

