bradfa's comments | Hacker News

Some solar inverter systems already have a data connection to get live pricing information from the grid operator. It’s not that big of a problem to implement, although it definitely isn’t pervasive yet.

Minute-by-minute pricing is not crazy to expect, and neither is integration with HVAC, battery systems, and inverters.


I think polling for live pricing by inverters and appliances is not realistic on a grand scale. Using time-of-day pricing is much simpler, imo.
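
For illustration, time-of-day needs no data connection at all; a minimal sketch in Python, assuming a made-up three-tier tariff:

    # Illustrative only: a time-of-day tariff is a static lookup, which is
    # why it's so much simpler than live price polling. Rates and hours
    # below are made up.
    from datetime import datetime

    TOU_RATES = {                 # hour ranges -> $/kWh (hypothetical tariff)
        range(0, 7): 0.08,        # overnight, off-peak
        range(7, 16): 0.14,       # daytime, mid-peak
        range(16, 21): 0.30,      # evening, on-peak
        range(21, 24): 0.14,      # late evening, mid-peak
    }

    def current_rate(now: datetime) -> float:
        """Return the tariff rate for the current hour."""
        for hours, rate in TOU_RATES.items():
            if now.hour in hours:
                return rate
        raise ValueError("hour not covered by tariff")

    def should_discharge_battery(now: datetime, threshold: float = 0.20) -> bool:
        """Discharge the home battery only when grid power is expensive."""
        return current_rate(now) >= threshold

    print(should_discharge_battery(datetime.now()))

The whole "integration" reduces to a clock and a lookup table, versus maintaining a live connection to the grid operator.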

With a higher voltage you can reduce your copper needs by a substantial amount. It seems that if copper cost were a real concern, this is what these data centers would do.
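
Back-of-the-envelope: for fixed power, current scales as 1/V, and the copper cross-section you need scales with current. A rough sketch (the 10 kW load and the allowable current density are assumptions):

    # Copper cross-section needed to deliver a fixed power at different bus
    # voltages. The 10 kW load and 3 A/mm^2 current density are assumptions.
    POWER_W = 10_000          # per-rack power, assumed
    CURRENT_DENSITY = 3.0     # A/mm^2, a conservative rule of thumb

    for volts in (54, 400):
        amps = POWER_W / volts
        area_mm2 = amps / CURRENT_DENSITY
        print(f"{volts:>4} V: {amps:6.1f} A -> ~{area_mm2:5.1f} mm^2 of copper")

    # 54 V needs roughly 7.4x the copper cross-section of 400 V for the
    # same delivered power.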

Agreed. I was kind of surprised to see 54 VDC mentioned. I'm assuming this is low enough to meet some threshold for some kind of safety regulation; in other words, it doesn't shock you the way 220 VAC would. I'm not entirely convinced of that, however, as it turns out bus bars are really dangerous in general. A 54 VDC bus bar won't shock you, but if you drop even a paperclip between the bus bar and a grounded metal part, the paperclip basically disappears instantly in a small blast of plasma. The injury from that can be far worse than any shock you'd receive.

My experience has been that SELV (safety extra low voltage, less than 60V peak and less than 240VA) is considered safe and anything exceeding that needs certain levels of protection.

But bus bars should generally be protected regardless of voltage, because they carry current from high-capacity sources, so even a lower voltage can be a safety concern.

Many server power supplies can take AC or DC input, with the DC input in the 300-500V range as this is comparable to the boost voltage for the AC power factor correction circuit. I just assumed most data centers using DC would be distributing around 400V within each rack.


I would like to. I haven't yet found a solution that works well.

The problems with datasheets are tables that span multiple pages, embedded images for diagrams and plots, the fact that they're generally PDFs, and that they're only sometimes in a 2-column layout.

Converting from PDF to markdown while retaining tables correctly seems to work well for me with Mistral's latest OCR model, but this isn't an open model. Using docling with different models has produced much worse results.
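
For reference, the basic docling invocation I was testing looks roughly like this (the filename is a placeholder); the API is simple enough, the table reconstruction just wasn't good on my datasheets:

    # Minimal docling run; datasheet.pdf is a placeholder filename.
    from docling.document_converter import DocumentConverter

    converter = DocumentConverter()
    result = converter.convert("datasheet.pdf")  # multi-page tables often garble here
    print(result.document.export_to_markdown())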


I've been working on a tool specifically to handle these messy PDF-to-Markdown conversions because I ran into the same issues with tables and multi-column layouts.

I’ve optimized https://markdownconverter.pro/pdf-to-markdown to handle complex PDFs, including those tricky tables that span multiple pages and 2-column formats that usually trip up tools like Docling. It also extracts embedded diagrams/images and links them properly in the output.

Full disclosure: I'm the developer behind it. I’d love to see if it handles your specific datasheets better than the models you've tried. Feel free to give it a spin!


Cool! But given that often electronics documentation is covered by NDAs, my preferred solution is local-first if at all possible.

Your README references a file named LICENSE which doesn't seem to exist on the main branch.

Fixed. Thank you!

It would also depend on the purchase cost and cooling infrastructure cost. If this costs what a 3x H100 box costs, then it's a fair comparison, even if not a direct comparison to what customers currently buy.

Having only 48GB of RAM per card seems low. The full server system with 8 cards barely has enough RAM to run modern large open models. And batching together user requests eats quite a lot of memory, too. Curious to see how these machines and cards are received by the market.
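
Rough math on why that feels tight (the parameter count and model dimensions below are assumptions for illustration, not any particular model):

    # Back-of-the-envelope memory check for an 8-card, 48 GB/card box.
    # The model below is hypothetical, roughly the shape of a large
    # open-weight model.
    TOTAL_VRAM_GB = 8 * 48                  # 384 GB across the server

    params_b = 400                          # 400B parameters, assumed
    weights_fp16_gb = params_b * 2          # ~2 bytes/param at 16-bit
    weights_fp8_gb = params_b               # ~1 byte/param at 8-bit

    # KV cache per token: 2 (K and V) * layers * kv_heads * head_dim * bytes
    layers, kv_heads, head_dim, kv_bytes = 126, 8, 128, 2  # assumed, fp16 cache
    kv_per_token_mb = 2 * layers * kv_heads * head_dim * kv_bytes / 1e6

    print(f"total VRAM:     {TOTAL_VRAM_GB} GB")
    print(f"weights @ fp16: {weights_fp16_gb} GB (doesn't fit)")
    print(f"weights @ fp8:  {weights_fp8_gb} GB (fits with almost no headroom)")
    print(f"KV cache: {kv_per_token_mb:.2f} MB/token, so a 128k-token batch "
          f"needs ~{kv_per_token_mb * 128_000 / 1000:.0f} GB on top of weights")

At 8-bit the weights alone eat nearly all 384 GB, before any KV cache for batched requests.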

What's missing from these parts which makes people reach for ESP32 by default instead? (I don't have any experience with ESP32.)

The TI parts seem a bit expensive in small quantities, but the Microchip and SiLabs parts are like $6-7 in single units from Digi-Key. Is it just that the dev kits are in the >$50 price range, which puts people off compared to ESP32?


> The TI parts seem a bit expensive in small quantities, but the Microchip and SiLabs parts are like $6-7 in single units from Digi-Key. Is it just that the dev kits are in the >$50 price range, which puts people off compared to ESP32?

It helps to separate hobbyist use from professional product development.

The hobby market is driven by quick, cheap, and easy: low up-front cost, abundant tutorials, and inexpensive dev boards. In that context, ESP32 shines, and expensive dev kits can be a real psychological barrier.

For commercial, industrial, or professional products, however, small-quantity pricing is often irrelevant. Sample or single-unit prices rarely reflect real production costs. Without getting into specifics, it’s common for the ratio between sample pricing and volume pricing to be 10× or more.

A part that costs $20 in onesies can easily be a $2 part at scale. This doesn’t apply universally, but it does mean that judging a device’s suitability for mass production based on Digi-Key single-unit pricing is usually a mistake.

There are also system-level considerations beyond the MCU’s line item price. For example, the RP2040 could be very inexpensive (around $0.50 in modest volumes when we used it), but that ignores the required external flash, which adds cost, board space, and supply-chain complexity. More importantly for many products, it offers no meaningful code security (the external flash can simply be read out—which can be a non-starter in commercial designs).

Guaranteed long-term availability can be crucially important as well, with design support requirements in commercial/industrial settings often extending past ten-year timelines.

Tooling and ecosystem maturity also matter. At the time, the RP2040 toolchain was notably hostile to Windows, and Raspberry Pi support reflected that attitude. In reality, most product development (EE, MCAD, manufacturing, test, PLM/ERP) is Windows-centric. Asking an organization to bolt a Linux-only toolchain onto an otherwise Windows-based workflow just to save a dollar on an MCU is rarely a winning argument.

So while cost absolutely matters, it’s often not the dominant factor in professional design. Security, tooling, vendor support, long-term availability, and integration into existing workflows frequently outweigh a few dollars of MCU price, particularly once production pricing enters the picture.


> What's missing from these parts which makes people reach for ESP32 by default instead?

I didn’t directly answer that question before.

Strictly speaking, nothing essential is missing from many of these other parts. In fact, in professional contexts they often have better documentation, support, longevity guarantees, or security features than ESP32.

One of the biggest differentiators is simply pricing strategy. Espressif has used aggressively low pricing (what many would reasonably call predatory pricing) to capture mindshare and market share. That playbook is hardly new; it’s been used successfully across industries for decades. Ultra-cheap silicon, combined with inexpensive dev boards, dramatically lowers the barrier to entry and makes ESP32 the default choice, especially for hobbyists and startups.

Price pressure also creates a feedback loop: more users means more tutorials, libraries, examples, and community support, which in turn makes the platform feel easier and safer to choose, even when alternatives might be technically superior.

For teams operating in cost-driven markets, this can become unavoidable. If your product lives or dies on BOM cost, reaching for the cheapest viable part may not be optional. I spent several years in that environment myself, and while it’s a valid constraint, it tends to push decisions toward short-term cost optimization rather than long-term engineering value.

So the answer isn’t that these parts lack features, it’s that ESP32 combines good-enough capabilities with exceptionally aggressive pricing and a massive ecosystem, which together make it the default choice in many contexts.


Cook very large or numerous portions. Use what you need for one meal and freeze the rest for future meals. Based on how much your family eats in that first meal, divide the remainder into portions of that size when freezing. Warm up the frozen food in the oven (it may still take an hour, but you can do other things during that time).

Frozen vegetables are pretty cheap and easy to warm up quickly in the microwave or an air fryer. They may not be as good for you as fresh produce, but that can be a reasonable tradeoff based on the season and free time.

Chest freezers are reasonably cheap to buy (new or used) and cheap to operate, assuming you have the physical space and an open electrical outlet. They don't consume much electricity: mine uses about 75W for the compressor (when it's running, which is less than 50% of the time) and about 250W for the defrost heaters (which seem to turn on for about 15 minutes roughly once per day).
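
Putting rough numbers on that (the electricity price is an assumption):

    # Daily energy estimate from the figures above; $/kWh is assumed.
    compressor_w, duty = 75, 0.50       # runs less than 50% of the time
    defrost_w, defrost_h = 250, 0.25    # ~15 minutes, once per day

    kwh_day = (compressor_w * 24 * duty + defrost_w * defrost_h) / 1000
    print(f"~{kwh_day:.2f} kWh/day, ~${kwh_day * 0.15:.2f}/day at $0.15/kWh")
    # ~0.96 kWh/day, roughly 14 cents/day

So it's on the order of 1 kWh per day, a few dollars a month.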


Yes, batch cooking!

One extra thing to consider is preparing something that can transform easily into many dishes.

We cook a "big meal" every weekend (right now, in winter, it's a chickpea-and-meat stew: "cocido madrileño"). It takes around 1 hour to make, but the time is not proportional to the quantity, so we make enough for 3-4 meals for my family of 3 in a big pot.

The nice thing about this stew in particular is that you can reserve the liquid, meat and chickpeas in separate containers in the fridge. The liquid is a very good base broth for soups (heat up, add some noodles, done in minutes).

The meat can be consumed cold, or can be the meaty base of other things (croquettes). We can also rebuild the dish by adding broth, chickpeas and meat into a plate and microwaving it (again, minutes). Or we can add some rice and have a "paella de cocido" (that takes a bit longer, around 25 minutes).

You have to adapt this idea to whatever is available to you in your area and your personal tastes. Perhaps you can prepare a big batch of Mexican food to eat in tacos/wraps/with salad. Or some curry base that can double up as a soup.


It may affect school lunch menus as much of the funding for school lunch programs is guided by the USDA. So, yes, lots of kids' diets may be affected by this during the school year.

Have a look at comma.ai


George Hotz has done some interesting work, but Comma is far too indie/hacker. It's not at a scale where it can be 100% autonomous.

I think a fully autonomous car has to be designed around LiDAR and autonomy from the ground up. That's a hugely capital intensive task that integrates a lot of domains and data. And so much money and talent.

This is more in the ballpark of Google Waymo, Amazon Zoox, Tesla/xAI, Rivian, Apple, etc.

And as the other folks have mentioned, this becomes a really good prospect if one company can manage the autonomy, insurance, maintenance, updates, etc. A fully vertically integrated subscription offering on top of specially purposed hardware you either lease or purchase.


Hey @echelon,

Interesting thoughts on Comma. Ever thought of working in AV? We're quite similar to Comma but for construction vehicles. We have a couple customers, just raised our seed, and are now expanding our team with some very critical high-ownership founding engineers.

Some info about us:
- https://crewline.ai/
- https://crewline.ai/blog/crewline-manifesto

LMK if this is of interest. -Freddie [email protected]


I would hope geohot is exploring options to partner with one of the automakers, because it sure looks like the future is not bright for their device. Cars are steadily switching to encrypted CAN bus and don't work with Comma. It's a dead end unless they work out a deal with someone to be allowed on the bus.

