
Indeed, I do sometimes wonder what it'd be like if they made the entire processor out of E cores.


The concept of “E-cores” is a rebranding of Atom. The Atom name is extremely tarnished from the days when it meant super-slow in-order netbook cores, but Intel started seriously revamping the design in 2012-2014 with Silvermont and Airmont. They have been good for several years now - Goldmont Plus (e.g. the J5005) sits between Core 2 Quad and Nehalem i5 (non-SMT) level performance despite running lower clocks (i.e. IPC is higher), and there have been two subsequent generations since - but they haven’t been able to escape the stigma of the Atom branding until now.

Denverton is actually a very interesting platform for the wattage - that’s 16 Goldmont cores in a 32W package (despite being 14nm!) with full RDIMM support and QuickAssist acceleration. And as mentioned, there is a new Atom-based server platform with newer cores (I think not Gracemont but actually the next iteration?) coming in the next year or two.

https://en.wikichip.org/wiki/intel/cores/denverton

If you want something with a lower core count now, look for something with a J3405, J5005, or Tremont-based processor. Intel has some NUCs with those chips, and there are also ITX form-factor motherboards from other companies like ASRock and Gigabyte.

Actually, a lot of the 2-bay Synology NASes come with those chips - I believe it’s the models ending in “j”. They have fairly nice video encode/decode support for their price bracket, which makes them nice for Plex etc.


In my analysis, only Gracemont (Atom 5th gen) is good, and that’s for two reasons:

1. It has roughly the IPC of Skylake.

2. It’s the first Atom with AVX2. Even in 2021 Intel released brand-new Atom cores without AVX2, which was introduced back with Haswell (“Haswell New Instructions”). In most tests, merely enabling AVX2 will yield ~10% higher performance. That’s why Clear Linux has higher performance - everything is tuned for Skylake. It’s also the reason Red Hat couldn’t target x86-64-v3 (which requires AVX2) for their new releases.
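As an illustration (a Linux-only sketch, not from the original comment): you can see whether a given chip exposes AVX2 and the other x86-64-v3 features by parsing the flags line of /proc/cpuinfo:

```python
# Parse the CPU feature flags from /proc/cpuinfo (Linux, x86 only).
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            break

# avx2/fma/bmi2 are part of the x86-64-v3 baseline; pre-Gracemont
# Atoms (Tremont and earlier) report "no" for all three.
for feat in ("avx2", "fma", "bmi2"):
    print(feat, "yes" if feat in flags else "no")
```

On a Tremont box all three come back “no”, which is exactly why a v3-baselined distro won’t boot userspace there.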

All previous Atom cores are slower than you imply:

Tremont (Atom 4th gen) has the IPC of Sandy Bridge. Goldmont (Atom 3rd gen) has the IPC of Core - it’s not exceeding Core like you imply. And first-gen Atom is dog slow: it’s like running a Pentium II/III CPU or a first-generation Raspberry Pi. I find those an unsuccessful attempt by Intel to create a mobile chip. In essence they just brought back a Pentium-era design.

This new Atom is fantastic though.


No, it’s really doing OK for what it is. A Xeon Bronze 3104 is a 6C/6T 1.7 GHz Skylake with an 88W TDP, and a J5005 is a 2.8 GHz 4C/4T Goldmont Plus running in 15W. That’s 10.2 core-GHz for the Skylake and 11.2 core-GHz for the Atom. The Skylake is not really all that far ahead - maybe 50% ahead on average in this suite - so the J5005 is running at about 2/3 of Skylake IPC. I assume c-ray is probably using AVX there? And the J5005 still does fine.
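The arithmetic in that comparison, written out (a sketch using the figures quoted above; “core-GHz” is just cores × clock, a crude throughput proxy that ignores IPC, turbo, and memory):

```python
# Aggregate core-GHz = cores x base clock, a crude throughput proxy.
xeon_3104 = 6 * 1.7   # Xeon Bronze 3104: 6 Skylake cores at 1.7 GHz
j5005     = 4 * 2.8   # Pentium Silver J5005: 4 Atom cores at 2.8 GHz
print(round(xeon_3104, 1), round(j5005, 1))   # 10.2 11.2

# If the Xeon is ~50% faster overall despite slightly fewer core-GHz,
# the implied per-clock gap is:
ipc_ratio = 1.5 * (j5005 / xeon_3104)         # Skylake IPC / Atom IPC
print(round(1 / ipc_ratio, 2))                # Atom at ~0.61 of Skylake IPC
```

Roughly 2/3 of Skylake IPC, as claimed.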

There is obviously a dearth of actual practical benchmarks of real tasks apart from STH and a few others, but if you look at Passmark or other generic benchmarks, like I said, it’s certainly faster than a Core 2 Quad enthusiast desktop, which is completely reasonable and even impressive given its power budget. Actually, in Passmark it’s even faster than a full Nehalem Core i7 with SMT; it works out just a bit under a Sandy Bridge i5 of similar clock, so I was a bit conservative there. And Passmark corroborates my “3104 is about 50% faster” guesstimate.

(UserBenchmark’s actual measurements - not the speed rating - are likely more accurate than Passmark, but I know I’ll get my head bitten off for it!)

https://www.servethehome.com/intel-pentium-silver-j5005-benc...

https://www.cpubenchmark.net/compare/Intel-Pentium-Silver-J5...

(Do remember that Intel’s progress through that era wasn’t really as bad as people say… I’ve seen people claim “5% a generation,” and the only generation close to that low was Broadwell. Skylake is a ton faster than Sandy Bridge clock for clock, and the J5005 still falls a bit below Sandy Bridge. There’s your two generations of progress since Goldmont - Atom went from below Sandy Bridge to matching Skylake across the Tremont and Gracemont generations.)


Userbenchmark.com is worse than useless. It’s straight up lies.


And like I said, I knew I was gonna get my head bitten off for that. This is where you distinguish the bandwagon hangers-on from the people actually interested in technical discussion.

The commentary UserBenchmark provides is terrible, and the “effective speed” composite rating is terrible. The actual int/fp benchmarks are perfectly fine and tend to be more reflective of real benchmarks like SPEC than Passmark has been, in my experience. Passmark does sometimes have oddities; UserBenchmark’s raw numbers haven’t.

But bandwagoners can’t accept that, they just see “UserBenchmark” and their vision goes red and they shake as they struggle to type out “nobody should EVER use UserBenchmark”. I do apologize for being colorful but that’s how it is.

Like I said, there’s a reason I didn’t link it: no matter what you say, there’s a significant number of people who just can’t act mature enough to separate the data from the editorial positions.

It would be nice if we had good SPEC CPU 2017 numbers for everything; I would prefer that. But if you want broad comparisons of completely obscure hardware - “what does this ultra-budget Xeon Bronze look like against this Atom vs. a desktop processor from 2007” - the options are limited. You have Passmark, you have UserBenchmark, and you have Geekbench. And people still lose their shit over Geekbench too.

Even Passmark has only seen that Xeon a total of 14 times.

The situation is even worse for GPUs, where UserBenchmark is the only reasonably broad benchmark database available for truly ancient stuff that 3DMark won’t run on. What if you want to compare a 9800 GTX to a Vega 7 and an Intel HD 605? It’s UserBenchmark or nothing. It sucks, but it is what it is, and people need to just grow up - including the owner of UserBenchmark. But the effective speed rating is just a composite, and he’s never actually tampered with the underlying int/fp benchmarks.

Phoronix is also not particularly good and has some serious methodological problems. But nobody else does Linux benchmarks apart from STH and Phoronix, and Phoronix pumps them out like crazy and survives on volume even if the testing is kinda shit. Look at the “Linux games” test: it mixes multiple resolutions into a single result set, includes the same game multiple times, and so on, then pulls averages out of that; and everything is framerates rather than frametimes (meaning averages, not minimums). From what I’ve heard the application testing isn’t any better, but I don’t remember specifics.
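On the framerates-vs-frametimes point: averaging framerates over-weights the fast runs. A tiny worked example (hypothetical numbers) makes this obvious:

```python
# Two captures of the same number of frames: one at 100 fps, one at 20 fps.
fps = [100.0, 20.0]
mean_fps = sum(fps) / len(fps)                 # 60.0 - flatters the fast run

# Converting to per-frame times first weights each frame's cost correctly.
frametimes_ms = [1000.0 / f for f in fps]      # [10.0, 50.0]
mean_frametime = sum(frametimes_ms) / len(frametimes_ms)   # 30.0 ms
effective_fps = 1000.0 / mean_frametime        # ~33.3 fps

print(mean_fps, effective_fps)
```

The slow segment dominates actual playing time, so the honest aggregate is ~33 fps, not 60 - which is why mixing result sets and then averaging framerates is misleading.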

Just goes to show: you can be scum and put out good data, and you can be a pillar of the community and put out bad data. Reddit tone police don’t change that.

There is an awful lot of incredibly useful data in the world that came from people who are extremely disagreeable on a personal level - or much, much worse.


Yeah, life’s too short to sort out which parts of the lies are true.


As an aside, I don't think even the early Atom processors were as terrible as their reputation branded them. I had an early Atom-based netbook, and it was totally acceptable... if running Linux (or XP).

Kind of like the "Vista debacle", I think Atoms were just unsuited to running Windows 7. Maybe I'm too forgiving or forgetful, but I used that netbook for many years, and it wasn't that slow with Linux. (7 was pretty sluggish, though)

...I do remember intensely disliking the introduction of early gnome 3, while running Fedora on that netbook. Not kind to the integrated graphics :-)


I had an Eee PC 901 and used it for years with Linux. First it ran GNOME 2, I think, then Xfce. Perfectly useful little machine. In later years I switched to LXDE to eke a little more out of a little less.

In that form factor, and costing what it did, I was massively impressed.


I had a college professor who loaned me one for a project, and it was absolutely unusable in Windows. It took like 10 minutes to boot to the desktop (although maybe he had viruses - wouldn’t have surprised me - and he certainly didn’t have an SSD in it).

LXDE is great though, and sure, I’d believe that. I ran LXDE on a single-core low-power Phenom thing, and with an SSD it was marginally usable even on that complete shit hardware.


Yeah, I think the solid-state storage card really made it workable. I got an add-on 64GB card at no small expense, doubled the RAM (to 2GB!), and Linux was generally very happy on there.


100%, SSDs make a massive difference in perceived performance. That laptop (it was an AMD V140 - the Compaq CQ56 shit-top they sent out to replace laptops after bumpgate) was completely unusable with an HDD in Windows, but between an SSD and LXDE it became marginally tolerable. It certainly wasn’t fast, but it could browse the web and play hardware-accelerated YouTube if you used H.264, etc.

I actually would not be surprised if, at the extreme low end like the V140 or the Atom netbooks, an SSD produced measurable increases in CPU performance. Not only does iowait generally not count as “idle” time according to Linux (and probably Windows) CPU-utilization metrics, but even if you account for iowait, switching away to another task while you wait on IO likely has a cost of its own: caches are no longer hot when you come back, so performance is diminished there too. And with no SMT and only a single hardware thread, context switching becomes comparatively much more expensive in that fashion.
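On the iowait point, a minimal Linux-only sketch (field layout per proc(5)) that reads the aggregate CPU line from /proc/stat and shows how much time has gone to iowait versus idle:

```python
# Read the aggregate "cpu" line from /proc/stat (Linux only).
# Fields after the label: user nice system idle iowait irq softirq ...
with open("/proc/stat") as f:
    fields = f.readline().split()

names = ["user", "nice", "system", "idle", "iowait"]
ticks = [int(x) for x in fields[1:6]]
total = sum(int(x) for x in fields[1:])

# iowait is accounted separately from idle, which is why a disk-bound
# machine can look "busy" even though the CPU is mostly just waiting.
for name, t in zip(names, ticks):
    print(f"{name}: {100.0 * t / total:.1f}%")
```

On a netbook thrashing a slow HDD, the iowait share dwarfs user time - swapping that drive for an SSD moves most of that budget back to useful work.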


I agree, the Atom processors have tended to be fine. Lots of questionable accompanying technology choices or marketing practices though.

The 1st-gen Atoms paired with a chipset (the 945?) that was bigger and hotter than the processor itself were kind of amusing.

That Intel can't help itself from making confusing product names is pretty inexcusable. Whether a "Pentium" is an Atom or not makes a big difference in what the specifications mean, and it's not clear to regular people that a dual-core mainstream Intel is about equal in performance to a quad-core Atom (assuming chips of similar age).

Laptop makers love to combine Atom CPUs with amazingly slow spinning hard drives, and they're usually pretty small too. Windows 10 performance on a spinning drive is trash, and it's even worse when it's a low-RPM laptop drive. It doesn't help that they often ship with as little RAM as possible, soldered down (although even with 8GB of RAM, Windows 10 will flog a spinning drive just sitting at the desktop; who knows what it's doing).

It would be interesting to see a 4P+24E or 6P+16E setup for parallelized loads. Not sure there's currently a need for that on the desktop, though.


> Denverton is actually a very interesting platform for the wattage - that’s 16 Goldmont cores in a 32W package

That's pretty neat, but I'm thinking about, say, a 60W-95W package that would make sense in a normal desktop.

How many E-cores could you fit in such a package? For highly-parallel workloads, would it stomp all over anything else on the market? (I'd guess yes)


I believe Sierra Forest will be the first in the new line of E-core-only CPUs to do just that [1]

[1] https://www.servethehome.com/intel-sierra-forest-the-e-core-...


That's the Phi.



