Simply put, no: 50MW is not the typical hyperscaler cloud size. It's not even the typical single datacenter size.
A single AI rack consumes 60kW, and there is apparently a single DC that alone consumes 650MW.
When Microsoft puts in a DC, the machines are deployed in units of a "stamp", i.e. a couple of racks together. These aren't scaled by dollars or square feet, but by megawatts.
And on top of that... that's a bunch of satellites not even trying to crunch data at top speed. Nowhere near the right order of magnitude.
Could this be about bypassing government regulation and taxation? Silkroad only needed a tiny server, not 150kW.
The Outer Space Treaty (1967) has a loophole. If you launch from international waters (planned by SpaceX) and the equipment is not owned by a US-company or other legal entity there is significant legal ambiguity. This is Dogecoin with AI. Exploiting this accountability gap and creating a Grok AI plus free-speech platform in space sounds like a typical Elon endeavour.
For the sake of an argument, let’s assume "The Outer Space Treaty (1967) has a loophole. If you launch from international waters (planned by SpaceX) and the equipment is not owned by a US-company or other legal entity there is significant legal ambiguity” is 100% true.
To use that loophole, the rockets launched by SpaceX would have to be “not owned by a US-company”. Do you think the US government would allow that to happen?
Untrue. In every case, responsibility for a spacefaring vessel lies with the state in which the entity operating the vessel is registered. If it's not SpaceX directly but a shell company in Ecuador carrying out the launch, Ecuador will be completely responsible for anything happening with and around the vessel, period. There are no loopholes in this system.
This could simply be done by hosting in the Tor hidden service cloud. Accessing illegal material hosted on a satellite is still exactly as risky for the user (if the user is on earth) as accessing that same illegal material through the Tor network, but hosting it through the Tor network can be done for 1/1000th the cost compared to an orbital solution.
So there's no regulatory or tax benefit to hosting in space.
You cannot escape national regulations like that, at least until a maritime-like situation develops, where rockets will be registered in Liberia for a few dollars and Liberia will not even pretend to care what they are doing.
It may happen one day, but we are very, very far from that. As of now, big countries watch their space corporations very closely and won't let them do this.
Nevertheless, as an American, you can escape state and regional authorities this way. IIRC The Californian Coastal Commission voted against expansion of SpaceX activities from Vandenberg [1], and even in Texas, which is more SpaceX-friendly, there are still regulations to comply with.
If you launch from international waters, these lower authority tiers do not apply.
In addition to all the sibling comments explaining why this wouldn't work, the money's not there.
A grift the size of Dogecoin, or the size of "free speech" enthusiast computing, or even the size of the criminal enterprises that run on the dark web, is tiny in comparison to the up-front cost and upkeep of a datacenter in space. It'd also need to be funded by investments (since criminal funds and crypto assets are quite famously not available in up-front volumes for a huge enterprise), which implies a market presence in some country's economy, which implies regulators and risk management, and so on.
But the focus on building giant monolithic datacenters comes from the practicalities of ground based construction. There are huge overheads involved with obtaining permits, grid connections, leveling land, pouring concrete foundations, building roads and increasingly often now, building a power plant on site. So it makes sense to amortize these overheads by building massive facilities, which is why they get so big.
That doesn't mean you need a gigawatt of power before achieving anything useful. For training, maybe, but not for inference which scales horizontally.
With satellites you need an orbital slot and launch time, and I honestly don't know how hard those are to get, but space is pretty big and the only reason for denying them would be safety. Once those are obtained you can make satellite inferencing cubes in a factory and just keep launching them on a cadence.
I also strongly suspect, given some background reading, that radiator tech is very far from optimized. Most stuff we put into space so far just doesn't have big cooling needs, so there wasn't a market for advanced space radiator tech. If now there is, there's probably a lot of low hanging fruit (droplet radiators maybe).
* Everything is being irradiated all the time. Things need to be radiation hardened or shielded.
* Putting even 1kg into space takes vast amounts of energy. A Falcon 9 burns 260 MJ of fuel per kg into LEO. I imagine the embodied energy in the disposable rocket and liquid oxygen makes the total number 2-3x that at least.
* Cooling is a nightmare. The side of the satellite in the sun is very hot, while the side facing space is incredibly cold. No fans or heat sinks - all the heat has to be conducted from the electronics and radiated into space.
* Orbit keeping requires continuous effort. You need some sort of hypergolic rocket, which has the nasty effect of coating all your stuff in horrible corrosive chemicals.
* You can't fix anything. Even a tiny failure means writing off the entire system.
* Everything has to be able to operate in a vacuum. No electrolytic capacitors for you!
So I guess the question is - why bother? The only benefit I can think of is very short "days" and "nights" - so you don't need as much solar or as big a battery to power the thing. But that benefit is surely outweighed by the fact you have to blast it all into space? Why not just overbuild the solar and batteries on earth?
The main reason is that generating energy in space is very cheap and easy due to how ridiculously effective solar panels are.
Someone mentioned in the comments on a similar article that sun synchronous orbits are a thing. This was a new one to me. Apparently there's a trick that takes advantage of the Earth not being a perfect sphere to cause an orbit to precess at the right rate that it matches the Earth's orbit around the sun. So, you can put a satellite into a low-Earth orbit that has continuous sunlight.
Is this worth all the cost and complexity of lobbing a bunch of data centers into orbit? I have no idea. If electricity costs are what's dominating the datacenter costs that AI companies are currently paying, then I'm willing to at least concede that it might be plausible.
If I were being asked to invest in this scheme, I would want to hear a convincing argument why just deploying more solar panels and batteries on Earth to get cheap power isn't a better solution. But since it's not my money, then if Elon is convinced that this is a great idea then he's welcome to prove that he (or more importantly, the people who work for him) have actually got this figured out.
Let's assume your space solar panel is always in sun - so 8760 kWh per year from 1kWp.
In Spain, 1kWp of solar can expect to generate about 1800 kWh per year. There's a complication because seasonal difference is quite large - if we assume worst case generation (ie what happens in December), we get more like 65% of that, or 1170 kWh per year.
That means we need to overbuild our solar generation by about 7.5x to get the same amount of generation per year. Or 7.5kWp.
We then need some storage, because that generation shuts off at night. In December in Madrid the shortest day is about 9 hours, so we need 15 hours of storage. Assuming a 1kW load, that means 15kWh.
European wholesale solar panels are about €0.1/W - €100/kW. So our 7.5kWp is €750. A conservative estimate for batteries is €100/kWh. So our 15kWh is €1500. There's obviously other costs - inverters etc. But perhaps the total hardware cost is €3k for 1kW of off-grid solar.
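A quick sanity check of these numbers, using only the figures assumed above (continuous sunlight in orbit, Spain's December-derated yield, and the quoted prices per watt):

```python
# Back-of-envelope check of the overbuild factor and hardware cost above.
# All inputs are the assumptions stated in the comment, not measured data.
space_kwh_per_kwp = 8760           # 1 kWp in continuous sunlight, kWh/year
spain_kwh_per_kwp = 1800 * 0.65    # Spain, derated to worst-case December output

overbuild = space_kwh_per_kwp / spain_kwh_per_kwp   # ~7.5x
panel_cost = overbuild * 100       # EUR, at ~EUR 100/kWp wholesale
battery_cost = 15 * 100            # EUR, 15 kWh of storage at ~EUR 100/kWh

print(f"overbuild: {overbuild:.1f}x, panels: EUR {panel_cost:.0f}, batteries: EUR {battery_cost}")
```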
A communications satellite like the Eurostar Neo has a payload power of 22 kW and a launch mass of 4,500 kg. Assuming that's representative, it works out to about 204 kg per kW. Current SpaceX launch costs are circa $1500 per kg, but they're targeting $100/kg or lower. That would give a launch cost of between $300k and $20k per kW of satellite power. That doesn't include the actual cost of the satellite itself - just the launch.
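The mass-to-power arithmetic, as a sketch (the Eurostar Neo figures and launch prices are the ones quoted above):

```python
# Launch cost per kW of payload power, from the Eurostar Neo numbers above.
payload_power_kw = 22.0
launch_mass_kg = 4500.0
kg_per_kw = launch_mass_kg / payload_power_kw      # ~205 kg of satellite per kW

for usd_per_kg in (1500, 100):                     # current price vs SpaceX target
    print(f"${usd_per_kg}/kg -> ${kg_per_kw * usd_per_kg:,.0f} per kW of power")
```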
I just don't see how it will make sense for a long time. Even if SpaceX manage to drastically lower launch costs. Battery and solar costs have also been plummeting.
Is it reasonable to use Neo as a baseline? Modern Starlink satellites can weigh 800kg, or less than 20% of Neo. I see discussions suggesting they generate ~73 kW for that mass. I guess because they aren't trying to blanket an entire continent in signal? Or, why are they so much more efficient than Neo?
Interestingly the idea of doing compute in space isn't a new one, it came up a few years ago pre-ChatGPT amongst people discussing the v2 satellite:
Still, you make good points. Even if you assume much lighter satellites, the GPUs alone are very heavy. 700kg or so for a rack. Just the payload would be as heavy as the entire Starlink satellite.
You can't increase the size of the radiator and reduce the mass of the satellite. How is that supposed to work?
You're also forgetting that Starlink satellites aren't in a sun-synchronous orbit, which means they have to overbuild the energy generation capacity (low capacity factor) but can simultaneously take advantage of Earth's shadow to cool down.
That could be one reason they want to do it. Maybe by using data from Palantir or harvested from Elon's work with DOGE, along with twitter user data and whatever else they can get, they want their AI to be the all-seeing eye of Sauron. (Which isn't too far from what the whole ad-tech industry is about in the first place.) Or they want to make sexually explicit deepfakes of everyone Elon doesn't like. Or they want to flood the internet with AI generated right-wing propaganda.
If one kilogram of stuff consumes just 100 W, then in one month it consumes about 260 MJ. So as long as things work for a year or more, the energy cost of putting them into orbit becomes irrelevant.
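As a sketch, comparing the 260 MJ/kg Falcon 9 launch-energy figure quoted upthread with the satellite's own consumption:

```python
# Break-even between launch energy and on-orbit consumption, per kg.
launch_mj_per_kg = 260     # Falcon 9 fuel energy per kg to LEO (quoted upthread)
power_w_per_kg = 100       # assumed power density of the hardware
month_s = 30 * 86400       # seconds in a 30-day month

monthly_mj = power_w_per_kg * month_s / 1e6
print(f"consumed in one month: {monthly_mj:.0f} MJ vs {launch_mj_per_kg} MJ to launch")
```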
To keep things in orbit, ion thrusters work nicely and require just inert gases to keep functioning. Plus, in low Earth orbit there are suggestions that a ramjet that captures the few atoms of atmosphere and accelerates them could work.
Radiative cooling scales with the fourth power of temperature. So if one can design electronics to run at, say, 100 C, then cooling would be much less problematic.
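A minimal illustration of that fourth-power scaling, with the two temperatures chosen purely as examples:

```python
# Stefan-Boltzmann: radiated power goes as T^4, so the required radiator
# area shrinks quickly as the allowed electronics temperature rises.
T_cool, T_hot = 300.0, 373.0    # 27 C vs 100 C, in kelvin
power_ratio = (T_hot / T_cool) ** 4
print(f"a radiator at 100 C sheds {power_ratio:.2f}x the heat per m^2 of one at 27 C")
```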
But radiation is the real problem. Dealing with that would require entirely different architecture/design.
It would make more sense to develop power beaming technology. Use the knowledge from the Starlink constellation to beam solar power via microwaves onto the rooftops of data centers.
I guess in terms of the relative level of stupidity on display, it would be slightly less stupid to build huge reflectors in space than it is to try to build space datacenters, where the electricity can only power specific pieces of equipment that are virtually impossible to maintain (and are typically obsolete within a few years).
Almost none of the parent’s bullet points are solved by building on the Moon instead of in Earth orbit.
The energy demands of getting to the 240k mile Moon are IMMENSE compared to 100 mile orbit.
Ultimately, when comparing the 3 general locations, Earth is still BY FAR the most hospitable and affordable location until some manufacturing innovations drop costs by orders of magnitude. But those manufacturing improvements have to be made in the same jurisdiction that SpaceXAI is trying to avoid building data centers in.
This whole thing screams a solution in search of a problem. We have to solve the traditional data center issues (power supply, temperature, hazard resilience, etc.) wherever the data centers are, whether on the ground or in space. None of these are solved for the theoretical space data centers, but they are all already solved for terrestrial data centers.
But none of those are usable, right? It will take decades of work at least to get a commercial grade mining operation going and even then the iron, titanium, aluminum would need to be fashioned...
Ah, I see the idea now. It is to get people to talk about robotics and how robots will be able to do all this on the moon or wherever.
That's a hard problem to solve. Invest enough in solving that problem and you might get the ability to manufacture a radiator out of it, but you're still going to have to transport the majority of your datacenter to the moon. That probably works out more expensive than launching the whole thing to LEO
Sounds more difficult. Not only is the moon further, you also need to use more fuel to land on it and you also have fine, abrasive dust to deal with. There’s no wind of course, but surely material will be stirred up and resettle based on all the landing activity.
And it’s still a vacuum with many of the same cooling issues. I suppose one upside is you could use the moon itself as a heat sink (maybe).
And 2.5s is best case. Signal strength issues, antenna alignment issues, and all sorts of unknown unknowns conspire to make high-integrity/high-throughput digital signal transmissions from a moon-based compute system have a latency much worse than that on average.
Still a vacuum so the same heat dissipation issues, adding to it that the lunar dust makes solar panels less usable, and the lunar surface on the solar side gets really hot.
Yeah, carrying stuff 380k km and still deploying in vacuum (and super dusty ground) doesn't solve anything but adds cost and overhead. One day maybe, but not these next decades nor probably this century.
Because the permitting process is much easier and there are way, way fewer authorities that can potentially shut you down.
I think this is the entire difference. Space is very, very lightly regulated, especially when it comes to labor, construction and environmental law. You need to be able to launch from somewhere and you need to automate a lot of things. But once you can do this, you escaped all but a few authorities that would hold power over you down on Earth.
No one will be able to complain that your data center is taking their water or making their electricity more expensive, for example.
The satellite is built on Earth, so I’m not sure how it dodges any of those regulations practically. Why not just build a fully autonomous, solar powered datacenter on Earth? I guess in space Elon might think that no one can ban Grok for distributing CSAM?
There’s some truly magical thinking behind the idea that government regulations have somehow made it cheaper to launch a rocket than build a building. Rockets are fantastically expensive even with the major leaps SpaceX made and will be even with Starship. Everything about a space launch is expensive, dangerous, and highly regulated. Your datacenter on Earth can’t go boom.
Truly magical thinking, you say? OK, let's rewind the clock to 2008. In that year two things happened:
- SpaceX launched its first rocket successfully.
- California voted to build high speed rail.
Eighteen years later:
- SpaceX has taken over the space industry with reusable rockets and a global satcom network, which by itself contains more than half of all satellites in orbit.
- Californian HSR has spent over thirteen billion dollars and laid zero miles of track. That's more than 2x the cost of the Starship programme so far.
Building stuff on Earth can be difficult. People live there, they have opinions and power. Their governments can be dysfunctional. Trains are 19th century technology, it should be easier to build a railway than a global satellite network. It may seem truly magical but putting things into orbit can, apparently, be easier.
That’s a strange comparison to make. Those are entirely different sectors and sorts of engineering projects. In this example, also, SpaceX built all of that on Earth.
Why not do the obvious comparison with terrestrial data centers?
Now how about procuring half a gigawatt when nearby residents are annoyed about their heating bills doubling, and are highly motivated to block you? This is already happening in some areas.
From an individual POV, yes, but Falcons are already not that expensive, in the sense that it is feasible for a relatively unimportant entity to buy their launch services.
"The satellite is built on Earth, so I’m not sure how it dodges any of those regulations practically."
It is easier to shop for jurisdiction when it comes to manufacturing, especially if your design is simple enough - which it has to be in order to run unattended for years. If you outsource the manufacturing to N chosen factories in different locations, you can always respond to local pressure by moving out of that particular country. In effect, you just rent time and services of a factory that can produce tons of other products.
A data center is much more expensive to build and move around. Once you build it in some location, you are committed quite seriously to staying there.
> I also strongly suspect, given some background reading, that radiator tech is very far from optimized. Most stuff we put into space so far just doesn't have big cooling needs, so there wasn't a market for advanced space radiator tech. If now there is, there's probably a lot of low hanging fruit (droplet radiators maybe).
You'd be wrong. There's a huge incentive to optimize radiator tech because of things like the International Space Station and Mir. Radiators are a huge part of those deployments, since life has pretty narrow thermal bands. The cost of deploying that tech also incentivizes hyper-optimization.
Making bigger structures doesn't make that problem easier.
Fun fact, heat pipes were invented by NASA in the 60s to help address this very problem.
ISS and Mir combined are not a "large market". How many radiators do they require? A single space DC will probably demand whole orders of magnitude more cooling.
ISS cost $150B and a large factor driving that cost was the payload weight.
Minimizing payload at any point was easily worth a billion dollars. And given how heavy and necessary the radiators are (look them up), you can bet a decent bit of research was invested in making them lightweight.
Heck, one bit of research that lasted the entire lifetime of the shuttle was improving the radiative heat system [1]. Multiple contractors and agencies invested a huge amount of money to make that system better.
Removing heat is one of the most researched problems of all space programs. They all have to do it, and every gram of reduction means big savings. Simply saying "well a DC will need more of it, therefore there must be low hanging fruit" is naive.
The ISS is a government project that's heading towards EOL, it has no incentive to heavily optimize anything because the people who built it don't get rich by doing so. SpaceX is what optimization looks like, not the ISS.
> has no incentive to heavily optimize anything because the people who built it don't get rich by doing so.
Optimization is literally how contractors working for the government got rich. Every hour they spent on research was directly billed to the government. Weight reduction being one of the most important and consistent points of research.
Heck, R&D is how some of the biggest government contractors make all their dough.
SpaceX is built on the billions in research NASA has invested over the decades. It looks more innovative simply because the USG decided to shift spending away from public programs and toward private contractors like SpaceX. That's been happening since the 90s.
It's a private company, is profit motivated, and thus has reason to optimize. That was the parent poster's point.
Starship isn't largely a government project. It was planned a decade before the government was ever involved, they came along later and said "Hey, this even more incredible launch platform you're building? Maybe we can hire SpaceX to launch some things with it?"
Realistically, SpaceX launches far more payload than any government.
Lockheed, Boeing, Northrop, Raytheon, and all the others are private companies, too. NASA and others generally go through contractors to build things. SpaceX is on the dole just like them.
A puzzling statement, could you explain? Most of their revenue now comes from Starlink, which is mostly private clients. Also, it's trivial to look at their launch history and see they have plenty of private clients. For sure the USG is their most important client, but "entirely" is flat out wrong.
There is a lot of hand-waving away of the orders of magnitude more manufacturing, more launches, and more satellites that would have to navigate around each other.
We still don’t have any plan I’ve heard of for avoiding a cascade of space debris when satellites collide and turn into lots of fast moving shrapnel. Yes, space is big, but low Earth orbit is a very tiny subset of all space.
The amount of propulsion satellites have before they become unable to maneuver is relatively small and the more satellite traffic there is, the faster each satellite will exhaust their propulsion gasses.
> We still don’t have any plan I’ve heard of for avoiding a cascade of space debris when satellites collide and turn into lots of fast moving shrapnel.
What do you mean we don’t have any plans to avoid that? It is a super well studied topic of satellite management. Full books have been written on the topic.
I am very aware that the US Air Force / Space Force monitors trajectories and calls satellite owners when there is an anticipated collision, but that method doesn’t scale, especially with orders of magnitude more satellites in the same LEO shells.
And it still doesn’t solve the problem of a cascade causing shrapnel density to increase in an orbit shell which then causes satellites to use some of their scarce maneuver budget to avoid collision. But as soon as a satellite exhausts that budget, it becomes fodder for the shrapnel cascade.
>There is a lot of hand waiving away of the orders of magnitude more manufacturing, more launches, and more satellites that have to navigate around each other.
This is exactly like the Boring Company plans to "speed up" boring. Lots of hand waving away decades of commercial boring, sure that their "great minds" can do 10x or 100x better than modern commercial applications. Elon probably said "they could just run the machines faster! I'm brilliant".
All of those “huge overheads” you cite are nothing compared to the huge overhead of building and fueling rockets to launch the vibration- and radiation-hardened versions of the solar panels and GPUs and cooling equipment that you could use much cheaper versions of on Earth. How many permitted, regulated launches would it take to get around the one-time permitting and predictable regulation of a ground-based datacenter?
Are Earth-based datacenters actually bound by some bottleneck that space-based datacenters would not be? Grid connections or on-site power plants take time to build, yes. How long does it take to build the rocket fleet required to launch a space “datacenter” in a reasonable time window?
This is not a problem that needs to be solved. Certainly not worth investing billions in, and definitely not when run by the biggest scam artist of the 21st century.
Good point - the comms satellites are not even "keeping" some of the energy, while a DC would. I _am_ now curious about the connection between bandwidth and wattage, but I'm willing to bet that less than 1% of the total energy dissipation on one of these DC satellites would be in the form of satellite-to-earth broadcast (keeping in mind that s2s broadcast would presumably be something of a wash).
I am willing to bet that more than 10% of the electrical energy consumed by the satellite is converted into transmitted microwaves.
There must be many power consumers in the satellite, e.g. radio receivers, lasers, computers and motors, where the consumed energy eventually is converted into heat, but the radio transmitter of a communication satellite must take a big fraction of the average consumed power.
The radio transmitter itself has a great efficiency, much greater than 50%, possibly greater than 90%, so only a small fraction of the electrical power consumed by the transmitter is converted into heat and most is radiated in the microwave signal that goes to Earth's surface.
Unfortunately this is not the case. The amplifiers on the transmit-side phased arrays are about 10% efficient (perhaps 12% on a good day), but the amps represent only ~half the power consumption of the transmit phased arrays. The beamformers and processors are 0% efficient. The receive-side phased arrays are of course 0% efficient as well.
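Taking those ballpark figures at face value (they are estimates, not published Starlink specs), the end-to-end fraction of array power that leaves as RF works out to roughly:

```python
# Hypothetical transmit phased-array efficiency chain, using the rough
# figures above: amps are ~10% efficient and draw ~half the array's power.
amp_share = 0.5          # fraction of array power going into the amplifiers
amp_efficiency = 0.10    # DC-to-RF efficiency of those amplifiers
rf_fraction = amp_share * amp_efficiency
print(f"~{rf_fraction:.0%} of the transmit array's power is actually radiated")
```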
I'm curious. I think the whole thing (space-based compute) is infeasible and stupid for a bunch of reasons, but even a class-A amplifier has a theoretical limit of 50% efficiency, and I thought we used class-C amplifiers (with practical efficiencies above 50%) in FM/FSK/etc. applications in which amplitude distortion can be filtered away. What makes these systems be down at 10%?
Nowadays such microwave power amplifiers should be made with gallium nitride transistors, which should allow better efficiencies than the ancient amplifiers using LDMOS or travelling-wave tubes, and even those had efficiencies over 50%.
For beamformers, there have been research papers in recent years claiming a great reduction in losses, but presumably the Starlink satellites are still using some mature technology, with greater losses.
Is the SpaceX thin-foil cooling based on graphene real? Can experts check this out?
"SmartIR’s graphene-based radiator launches on SpaceX Falcon 9" [1]. This could be the magic behind this bet on heat radiation through exotic materials. Lots of blog posts say impossible, expensive, stock pump, etc. Could this be the underlying technology breakthrough? Along with avoiding complex self-assembly in space through decentralization (a 1-million-satellite AI constellation with laser-grid comms).
This coating looks like it can selectively make parts of the satellite radiators or insulators, so as to regulate temperature. But I don't think it can change the fundamental physics of radiating unwanted heat, and you can't do better than black-body radiation.
Indeed, graphene seems capable of reaching 0.99 of the black-body radiation limit.
Quote: "emissivity higher than 0.99 over a wide range of wavelengths". Article title "Perfect blackbody radiation from a graphene nanostructure" [1]. So several rolls of 10 x 50 meters graphene-coated aluminium foil could have significant cooling capability. No science-fiction needed anymore (see the 4km x 4km NVIDIA fantasy)
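For scale, here is an idealized upper bound for one such sheet at 300 K, assuming emissivity ~1 and perfectly even heat spreading (the conduction problem raised in the replies is ignored here):

```python
# Ideal one-sided black-body radiation from a 10 m x 50 m sheet at 300 K.
sigma = 5.67e-8               # Stefan-Boltzmann constant, W/(m^2 K^4)
area_m2 = 10 * 50             # one side of one sheet
T = 300                       # kelvin, ~27 C
power_kw = sigma * area_m2 * T**4 / 1e3
print(f"ideal radiated power: ~{power_kw:.0f} kW per sheet side")
```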
What radiators look like is foil or sheet covering fluid loops to spread the heat, control the color, and add surface area.
They are usually white, because things in a spacecraft are not hot enough to glow in visible light and you'd rather they not get super hot if the sun shines on them.
The practical emittance of both black paint and white paint is very close to the same at any reasonable temperature, and both are quite good: >90% of this magical material that you cite ;)
Better materials -- with less visible absorption and more infrared emittance -- can make a difference, but you still need to convect or conduct the heat to them, and heat doesn't move very well in thin materials as my sibling comment says.
The graphene radiator you cite is more about active thermal control than being super black. Cheap ways to change how much heat you are dumping are very useful for space missions that use variable amounts of power, have very long eclipse periods, or move from geospace to deep space, etc. Usually you solve it on bigger satellites with louvers that change what color they're exposing to the outside, but those are mechanical parts and annoying.
Aluminum foil with a large surface area will not work very well, because the limited conductivity of a thin foil will create a large temperature gradient across it.
Thus the extremities of the foil, which are far from the satellite body, will be much cooler than the body, so they will have negligible contribution to the radiated power.
The ideal heatsink has fins that are thick close to the body and they become thinner towards extremities, but a heatsink made for radiation instead of convection needs a different shape, to avoid a part of it shadowing other parts.
I do not believe that you can make an efficient radiation heatsink with metallic foil. You can increase the radiating surface by not having a flat surface, but one covered with long fins or cones or pyramids, but the more the surface is increased, the greater the thermal resistance between base and tip becomes, and also the tips limit the solid angle through which the bases radiate, so there must be some optimum shape that has only a limited surface increasing factor over the radiation of a flat body.
> I do not believe that you can make an efficient radiation heatsink with metallic foil.
What radiators look like is foil or sheet covering fluid loops to spread the heat, control the color, and add surface area.
In general, radiators are white because there's no reason for them to absorb visible light, and they're not hot enough to radiate visible light. You want them to be reflective in the visible spectrum (and strongly absorptive/emissive in the infrared).
A white surface pointing at the sun can be quite cool in LEO, < -40C.
It's not as exciting as you think it is. "emissivity higher than 0.99 over a wide range of wavelengths" is basically code for "it's, like, super black"
The limiting factor isn't the emissivity, it's that you're having to rely on radiation as your only cooling mechanism. It's super slow and inefficient and it limits how much heat you can dissipate.
Like the other person said, you can't do any better than blackbody radiation (emissivity=1).
Let's assume an electrical consumption of 1 MW, which is turned into heat, and a concomitant 3 MW of heat which is a byproduct of acquiring that 1 MW of electrical energy.
So the total heat load is 4 MW (of which 1 MW was temporarily electrical energy before it was used by the datacenter or whatever).
Let's assume a single planar radiator, with emissivity ~1 over the thermal infrared range.
Let's assume the target temperature of the radiator is 300 K (~27 deg C).
What size radiator did you need?
4 MW / (5.67e-8 W/(m^2 K^4) * (300 K)^4) = 8710 m^2 = (93 m)^2
So basically 100 m x 100 m. That's not insanely large.
The solar panels would have to be about 3000 m^2 = 55 m x 55 m.
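The sizing above can be reproduced directly from the Stefan-Boltzmann law (one-sided planar radiator, emissivity ~1; the ~25% panel efficiency is my own assumption for the solar figure):

```python
import math

# Radiator area to reject 4 MW at 300 K, one-sided, emissivity ~1.
sigma = 5.67e-8                       # W/(m^2 K^4)
radiator_m2 = 4e6 / (sigma * 300**4)
print(f"radiator: {radiator_m2:.0f} m^2 (~{math.sqrt(radiator_m2):.0f} m square)")

# Solar array for 1 MW electrical: ~1361 W/m^2 insolation, ~25% efficiency.
solar_m2 = 1e6 / (1361 * 0.25)
print(f"solar: {solar_m2:.0f} m^2 (~{math.sqrt(solar_m2):.0f} m square)")
```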
The radiator could be aluminum foil, and something amounting to a remote-controlled toy car could drive around with a small roll of aluminum wire and locally weld shut small holes from micrometeorites. The wheels are rubberized but have a magnetic rim; on the outside there are complementary steel spheres, so the radiator foil is sandwiched between wheel and steel sphere and the wheels have traction. The radiator could easily weigh less than the solar panels, and expand to much larger areas. Better to divide the entire radiator into a few inflatable surfaces, so that you can activate a spare while a severe leak is being repaired.
It may be more elegant to have rovers on both inside and outside of the radiator: the inner one can drop a heat resistant silicone rubber disc / sheet over the hole, while the outside rover could do the welding of the hole without obstruction of the hole by a stopgap measure.
As I've pointed out to you elsewhere: how do you couple the 4 MW of heat to the aluminum foil? You need to spread the power somewhat evenly over this massive surface area.
Low pressure gas doesn't convect heat well and heat doesn't conduct down the foil well.
It's just like how on Earth we can't cool datacenters by hoping that free convection will transfer heat to the outer walls.
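To put a number on how poorly heat conducts along thin foil, here's a rough Fourier's-law estimate (a sketch; the 20 µm thickness, 10 m run, and 100 K drop are my illustrative assumptions, not figures from the thread):

```python
# Fourier's law, Q = k * A * dT / L, for heat flowing *along* a thin
# aluminum sheet. Assumed numbers: 20 um foil, a 1 m wide strip,
# heat travelling 10 m from source to far edge, 100 K temperature drop.
k_al = 237.0        # thermal conductivity of aluminum, W/(m K)
thickness = 20e-6   # m
width = 1.0         # m
length = 10.0       # m
dT = 100.0          # K

cross_section = thickness * width        # m^2, tiny for foil
q = k_al * cross_section * dT / length   # watts conducted
print(f"{q * 1000:.0f} mW per metre of strip width")
```

Tens of milliwatts per metre of width, against megawatts to move: conduction through the foil itself is hopeless, which is why pumped fluid loops come up next.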
Let's assume you truly believe the difficulty is the heat transport; then you correct me, but I never see you correct people who believe the thermal radiation step is the issue. It's a very selective form of correcting.
And if the difficulty really is the heat transport to the radiator, how is it solved on Earth?
> Let's assume you truly believe the difficulty is the heat transport; then you correct me, but I never see you correct people who believe the thermal radiation step is the issue
It's both. You have to spread a lot of heat very evenly over a very large surface area. This makes a big, high-mass structure.
> how is it solved on earth?
We pump fluids (including air) around to move large amounts of heat both on Earth and in space. The problem is, in space, you need to pump them much further and cover larger areas, because the only way heat leaves the system is radiation. As a result, you end up proposing a system that is larger than the cooling tower for many nuclear power plants on Earth to move 1/5th of the energy.
The problem is, pumping fluids around in space has 3 ways it sucks compared to Earth:
1. Managing fluids in space is a pain.
2. We have to pump fluids much longer distances to cover the large area of radiators. So the systems tend to get orders of magnitude physically larger. In practice, this means we need to pump a lot more fluid, too, to keep a larger thing close to isothermal.
3. The mass of fluids and all their hardware matters more in space. Even if launch gets cheaper, this will still be true compared to Earth.
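As a rough sense of scale for point 2, the required coolant flow follows from the heat-balance relation Q = ṁ·cp·ΔT (a sketch; the 20 K temperature drop across the loop and the liquid-ammonia heat capacity are my assumed numbers):

```python
# Mass flow of coolant needed to carry Q watts with a given
# temperature drop: Q = mdot * cp * dT  =>  mdot = Q / (cp * dT).
cp_ammonia = 4700.0   # J/(kg K), liquid ammonia, approximate
dT = 20.0             # K drop across the radiator (assumed)
Q = 4e6               # W, the total heat load from the example upthread

mdot = Q / (cp_ammonia * dT)
print(f"{mdot:.1f} kg/s of ammonia, continuously")
# -> ~42.6 kg/s circulating through the loops at all times
```

Every kilogram of that circulating inventory, plus pumps, lines, and accumulators, is launch mass, which is point 3.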
I explained this all to you 15 hours ago:
> If this wasn't a concern, you could fly a big inflated-and-then-rigidized structure and getting lots of area wouldn't be scary. But since you need to think about circulating fluids and actively conducting heat this is much less pleasant.
You may notice that the areas, etc., we come up with here to reject 70 kW are similar to those of the ISS's EATCS, which rejects 70 kW using white-colored radiators and ammonia loops. Despite the use of a lot of exotic and expensive techniques to reduce mass, the radiators mass about 10 tonnes -- and this doesn't count all the hardware to drive heat to them on the other end.
So, to reject 105 W on Earth, I spend about 500 g of mass; if I'm as efficient as EATCS, it would be about 15000 g of mass.
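The 15000 g figure comes from scaling the EATCS numbers linearly, like so (a sketch of that arithmetic; the 105 W load stands in for something like a desktop PC cooled by a ~500 g heatsink):

```python
# EATCS: ~10 tonnes of radiator hardware rejecting ~70 kW.
eatcs_mass_g = 10e6      # grams
eatcs_power_w = 70e3     # watts
specific_mass = eatcs_mass_g / eatcs_power_w   # ~143 g per watt rejected

load_w = 105             # watts, e.g. a desktop PC's heat output
mass_g = load_w * specific_mass
print(f"{mass_g:.0f} g")   # ~15000 g, vs ~500 g for a heatsink on Earth
```

A roughly 30x mass penalty per watt, before counting the cold-plate and pumping hardware on the source side.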
Well acttshually, it's 100% efficient. If you put 1W in, you will get exactly one watt out, steady state. The resulting steady state temperature would be close to watts * steady state thermal resistance of the system. ;)
I don't think you could use "efficiency" here? The math would be based on thermal resistance. How do you get a percentage from that? If you have a maximum operating temperature, you end up with a maximum operating wattage. Using actual operating wattage/desired operating wattage doesn't seem right for "efficiency".
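The thermal-resistance framing works like an Ohm's-law analogy: temperature rise equals power times thermal resistance, so a maximum temperature implies a maximum wattage rather than a percentage efficiency. A sketch with made-up illustrative numbers:

```python
# Steady state: T_component = T_ambient + P * R_th ("thermal Ohm's law").
# All numbers below are illustrative, not from any real device.
t_ambient = 25.0    # deg C, surroundings
r_th = 0.5          # K/W, total thermal resistance to ambient
t_max = 85.0        # deg C, maximum allowed operating temperature

t_at_100w = t_ambient + 100 * r_th      # temperature at a 100 W load
p_max = (t_max - t_ambient) / r_th      # maximum sustainable wattage
print(t_at_100w, p_max)                 # 75.0 deg C, 120.0 W
```

So the natural figure of merit is K/W (or the W ceiling it implies), not a percentage.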
Yes, graphene appears to offer a negligible improvement over other kinds of paints based on black carbon, e.g. Vantablack.
The research article linked above does not claim a better emissivity than Vantablack, but rather resistance to higher temperatures, which is useful for high-temperature sensors (used with pyrometers) but irrelevant for a satellite that will never be hotter than 100 degrees Celsius, in order not to damage the electronic equipment.
I think you missed the point. If you have a 100 MW communication satellite and a 100 MW compute satellite, those are very different beasts. The first might send 50% of the energy away as radio communication, making it effectively a 50 MW satellite for cooling purposes.
No, they didn't. You can't "send away" thermal energy via radio waves. At the temperatures we're talking about, thermal energy is in the infrared. That's blackbody radiation.
Nobody describes a satellite by specifying the amount of heat that it produces, but by the amount of electrical energy that it consumes.
In a communication satellite, a large fraction of the consumed electrical energy goes into the radio transmitter. Radio transmitters are very efficient and most of the consumed power is emitted as radio waves and only a very small part is converted into heat, which must be handled by the cooling system.
So in any communication satellite, a significant fraction of the consumed energy does not become heat.
Your answer makes it seem like you too missed the point. If a Starlink sends a 1000W signal to Earth, that is 1000W of power that does not heat the satellite.
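The energy balance being argued here: electrical power in = RF power radiated away + waste heat the cooling system must reject. A sketch (the 65% transmitter efficiency is my assumed figure for illustration):

```python
# For a transmitter drawing p_in_w electrical watts at efficiency eta,
# eta * p_in_w leaves the satellite as radio waves; only the remainder
# becomes waste heat for the thermal system.
def rf_and_waste_heat(p_in_w, tx_efficiency):
    rf_out = p_in_w * tx_efficiency
    heat = p_in_w - rf_out
    return rf_out, heat

rf, heat = rf_and_waste_heat(1000, 0.65)  # 1 kW chain, 65% efficient
print(rf, heat)                           # 650 W radiated, 350 W of heat
```

A compute satellite has no such outlet: essentially all of its electrical draw ends up as heat.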
The owner is a Google employee, but for the sake of safety it should be owned by a real Google org. I've just asked them to migrate it to their OSS org.
Unfortunately the app creation flow on GitHub makes it impossible (for now) for a normal org user to create an app for the org, so apps end up getting created on personal accounts and become load bearing. We've got a work item to make it possible to give app creation rights to your org members, I've got it high on the priority list for the next six months.
Re:payment
As I understand it each org that uses the gemini cli agent puts their api key in their actions secrets, which gets picked up and used to call Google inference APIs. So the org these comments are in is paying for the inference.
Dear god. This reminds me of all of the things in Google that are "load bearing" and have to be owned by random gmail accounts instead of formal service accounts or org accounts.
How long has this one been on the roadmap for? (Since you actually work for GitHub.)
Tbc apps can be owned by orgs today, but the process is annoying - devs create the app and then transfer it to the org, and then are made managers of the app. Really high overhead.
It's part of the push we've been making over the last year or two to improve custom roles and finer-grained authorization for resources.
"Amazon announced it" in some back alley press report and certainly not in a proactive outreach way to tell these folks they were listing their products. At the very least there's a trademark issue here because these sellers in no way gave Amazon permission to reuse the images and descriptions of their products.
If I announce in my local paper (you get to guess which one) that you'll be throwing a party outside your house, I don't think you'll be on the side of "just tell them to go away and my neighbors will totally understand it wasn't really me"
> “Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard’.
Using images of something you are selling is nominative use of the trademark. Whether their actual listings contains something that falls outside the nominative use test I don't know.
But at the end of the day you can't stop someone from reselling your stuff no matter how much you hate it, that has been clearly established by the First Sale Doctrine.
> If I announce in my local paper (you get to guess which one) that you'll be throwing a party outside your house, I don't think you'll be on the side of "just tell them to go away and my neighbors will totally understand it wasn't really me"
The comparison here would be that you are selling tickets to a party outside your house, which I don't think anyone would bat an eye at if the local newspaper announces?
I've always wondered about the contrast in company cultures between Google and Microsoft - Gcal supports ending meetings five minutes early while Outlook supports starting five minutes late.
At Microsoft it was obvious how five minutes late was optimal - meetings usually dragged on past their end time anyhow, but never started early so it gave folks time to eg get to their next meeting (in person), coffee, bio break, etc.
Does Google have a culture of meetings ending on the dot with finality? I just don't see that working with human nature of "one last thing" and the urge to spend an extra few minutes to hammer something out.
It's just laughable to me to bother with a "ends five minutes early" option. It just doesn't work - you know you're not cutting into anyone's next meeting by consuming those last five minutes. But you can't know that if you push into the next half hour block - maybe they have a customer call up next that starts on time, so you have to wrap up.
> while Outlook supports starting five minutes late
This contrast rests on an incorrect assumption. Outlook does allow starting meetings late as well as ending meetings early, with somewhat arbitrary durations. [1] I have definitely seen these options in Outlook settings (on web, since I hate Outlook).
However, I haven't used it, because the teams one works with need to be alerted and reminded of it before it sticks in their minds (if nobody else is using such settings).
> Does Google have a culture of meetings ending on the dot with finality?
Whenever I'm having remote meetings with people using a Google meeting room, right at the hour they'll say "I'm getting kicked out", because the next person is waiting to use the booked meeting room.
The solution I like best is to "pin" issues that would cause the meeting to run long, with select personnel needing to stay late to address the pinned issue but everybody else leaving on time.
You've got an extra actor in the mix that makes for a different argument and actually supports the idea that it's racist, I think.
Namely - I think most agree that it's racist to mindlessly assume race and poverty are correlated. The argument here is that the AI companies made that assumption - in other words, they're being called racists.
I don't think it's racist to speculate that a corporation, that made choices that specifically impact black neighborhoods, is racist.
Expanding the timeline a bit, CRISPR was known as a possible gene targeting/editing tool by 2008 at least - I distinctly recall learning about it then in a guest lecture.
Are you saying those are additional possible meanings of Pennsyltucky, or that you've heard people use it to mean all of those?
I have only ever heard it used to mean the rural areas between the two cities, in keeping with the saying "Pittsburgh on one side, Philly on the other, and Kentucky in between", which has of course confused people not familiar with the stereotypes or geography.
The other famous use of Pennsyltucky is the character in Orange is the New Black, which I've always taken to mean "she acts like she's from Pennsyltucky".
I guess we need to wait for the term to be used enough to get into a dictionary before it's well defined.
I’m not sure what you’re referring to here. Google are the file distributor for content from their store.
These rules aren’t for linking out from the store to a third party site, but rather for installing an app from the store and then linking out to a third party payment.
The apparent information gathering and brutal review process is unbelievable here. If I'm understanding this correctly, the requirement is that e.g. the Epic Games Store must register and upload every single APK for every app it offers, and cannot offer an app in its store until Google approves it, which may take a week or more - including every time the app updates.
Meanwhile Google gets full competitive insight into which apps are being added to Epic's store, apparently including their download rates, and they even get the APKs to boot - potentially making it easier to onboard those app devs onto Play if they like, with the option of pressuring them to do so by dragging their feet on that review process.
> Provide direct, publicly accessible customer support to end users through readily accessible communication channels.
This is an interesting requirement. I want to see someone provide the same level of support that Google does to see if it draws a ban.
Their Play Store review practices are such a joke. App review is a completely obscure process: there's no clear way to see that the app is in a review state; if they reject, the amount of information about why it was rejected is minimal and you have to second-guess; appealing is not trivial; and most of the reviews are done by AI, which gets triggered in totally random places from time to time (e.g., in my case, some pictures which had looked fine for kids for years and had gone through many previous reviews suddenly seemed too violent).
I have healthcare apps. The review process for me consists of some reviewer deciding what set of healthcare features I should have picked from their list and rejecting on that basis. But subsequent reviewers have different opinions. In one app version release I got rejected 5 times for picking the wrong set of healthcare features, as either the reviewer changed their mind or I got different reviewers. The app has been on Google Play for 13 years.
I'm not subscribed to that many YouTubers. But it's insane that I still need two digits to count how many of those creators spent over a week trying to address some urgent issue brought on by one of Google's automation tools, then simply resorted to Twitter to get their fanbase to rile up YouTube for them.
absolutely, i get this. i assume it's going to be a relatively small subset that go open in order to jump to an open platform. i'm not super familiar with the f-droid publishing ecosystem (or mobile publishing at all, admittedly).
i do wonder if there's regardless going to be some kind of (perhaps overwhelming) inundation.