Founder of REX here, and surprised to see this posted here. Happy to answer any questions, and you can check my comment history for some of my prior posts on REX.
We've had some really great progress that we hope to share in the near future, so stay tuned.
EDIT: Since this article is over a year old, we have made a lot of progress, and have recently taped out our first chip. We haven't officially posted a job opening, but we are very shortly going to be looking for software engineers that would love to work on our architecture. Feel free to shoot me an email if you're interested!
I'm genuinely curious and really like that you're giving this a shot, but I am very sceptical, as hardly any idea in computer architecture is new: if you dig enough, you'll find it has been tried and failed. You'll have to understand why it failed: if it was timing, maybe you can succeed today, but if it wasn't, you'll have to understand the failure and avoid repeating the same mistakes. It would be great to see more comparisons.
Firstly, your claims about virtual memory in general purpose CPUs are misleading: its purpose is memory virtualization, and I wouldn't want a system without it in the presence of multiple processes (how can you trust every process not to shoot down another by accidentally accessing a wrong memory location?).
Ultimately, our hardware will become more specialized/heterogeneous, and we'll have many accelerators for various tasks, but there will likely always be a general purpose CPU at the heart of the system (that will have virtual memory, caches, etc.); for an overview, I enjoyed [1]. I see what you're building as another accelerator for inherently parallel latency-insensitive workloads (like you find in HPC). In a way, GPUs (+ Xeon Phi) cater to these markets today (benchmarks against these would be useful).
Second, I remember the previous post [2], where you claimed the system you are building relies on a RISC ISA, but now you claim it has changed to VLIW. You said yourself before
"[...] stick to RISC, instead of some crazy VLIW or very long pipeline scheme. In doing this, we limit compiler complexity while still having very simple/efficient core design, and thus hopefully keeping every core's pipeline full and without hazards [...]"
What is the rationale behind this? Do you think you'll be able to manage compiler complexity now?
I would be just as skeptical of our claims as you and everyone else should be. While I have talked informally about our architecture to many people on and offline, we have not posted much about the actual architecture we have taken to silicon (which is very close to, but not exactly, what we will eventually bring to market). I don't honestly expect any random person to take us seriously based on a couple of online postings, but I will say that most people are decently convinced (or at least intrigued enough to withhold immediate doubt) after a rundown of the architecture.
As for why we think "this time is different": it's a combination of good ideas and timing. I 100% agree with you that in the 50 years of von Neumann derivatives, basically all the low hanging fruit (and much of it higher up) has been attempted, and thankfully I can say I've learned from a lot of those attempts. Rather than being an entirely new concept, I think we have gone back to some fairly old ideas from the time before hardware-managed caches, and thought about simplicity when it comes to what it takes to actually accomplish computational goals. A lot of the hardware complexity that started to be added in the mid/late 80s around the memory system (our big focus at REX) predates much of the attention that was later put into the compiler. While I am proud of what we have done on the hardware side, I think most of the credit will go to the compiler and software tools if we are successful, as that is what enables us to have such a powerful and efficient architecture. Ergo, we have the advantage of ~30 years of compiler advancements (plus a good amount of our own), where we have the luxury to remake the decision in favor of software complexity over hardware complexity... plus 30 years of fabrication improvement. Couple that with Intel's declining revenues, the end of easy CMOS scaling, and established portability tools (e.g. LLVM, which we have used as the basis for our toolchain), and I think this is the best time possible for us.
When it comes to virtual memory: why would you need to have your memory space virtualized (which requires address translation) in order to have segmentation? We use physical addresses since it saves a lot of time and energy at the hardware level, but that doesn't mean software can't implement the same features and benefits that virtual memory, garbage collection, etc. provide. The way our memory system as a whole (and in particular our Network on Chip) behaves, and its system of constraints, plays a very large role in this, but I can't/don't want to go into the details of that publicly right now. It may seem a bit hand-wavy, but we do not see this as a limitation/real concern for us, and unless you want to write everything in assembly, the toolchain will make this no different from C/C++ code running on today's machines.
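Not speaking to REX's actual (undisclosed) scheme, the general idea of recovering protection in software on a flat physical address space can be sketched like this — everything here (the `Segment` class, sizes, names) is a hypothetical illustration: the toolchain emits a bounds check in front of any access it can't prove safe, giving segmentation-style isolation without any address translation hardware.

```python
# Hypothetical sketch: per-task segments enforced by compiler-inserted
# checks on a flat physical address space (no MMU, no translation).

class Segment:
    """A task's slice of physical memory: a base and a size."""
    def __init__(self, base, size):
        self.base, self.size = base, size

def checked_access(memory, seg, offset):
    """Compiler-inserted guard: trap instead of corrupting a neighbor."""
    if not (0 <= offset < seg.size):
        raise MemoryError(f"segment violation at offset {offset}")
    return memory[seg.base + offset]       # plain physical address

memory = list(range(1024))                 # flat physical memory
task_a = Segment(base=0,   size=256)       # task A owns [0, 256)
task_b = Segment(base=256, size=256)       # task B owns [256, 512)

print(checked_access(memory, task_a, 10))  # fine: inside task A's segment
try:
    checked_access(memory, task_a, 300)    # would land in task B's memory
except MemoryError as e:
    print("trapped:", e)
```

A static compiler can elide most of these checks when it can prove an access stays in bounds, which is presumably where "enough information to make good compilation decisions" pays off.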
In the case of GPGPUs for HPC, we have the advantage of being truly MIMD rather than a SIMD architecture, plus a big improvement in power efficiency, programmability, and cost. We'd win in the same areas (I guess tie on programmability) against the Xeon Phi for benchmarks like LINPACK and STREAM, but the one benchmark I am especially looking forward to is HPCG (and anything else that stresses the memory system along with compute). While NVIDIA and Intel systems on the TOP500 list struggle to get 2% of their LINPACK score on HPCG[0], we should be performing 25x+ better. Based on our simulations, we should perform roughly equally across all 3 BLAS levels, which has been unheard of in HPC since the days of the original (Seymour Cray-designed) Cray machines.
Of course, my naivety from 2 years ago haunts me now ;) When the linked comment was written, I had yet to "see the light". Only once I understood (through my co-founder, the brilliant Paul Sebexen) the 'magic' that is possible when a toolchain has enough information to make good compilation decisions did I realize that the simplicity of a VLIW decoding scheme made the most sense (and gave us a lot of extra abilities). It was ~3 months after I made that comment that we started down this path, and early prototyping applied to existing VLIW and scratchpad-based systems led to our DARPA and, later, seed funding. It is only because our hardware is so simple (and mathematically elegant in its organization) that the compiler can efficiently schedule instructions and memory movement. While I've only lived through a small fraction of the last 50 years of computer architecture, I think of myself as a very avid historian of it, and it really shocks me that no one has gone about thinking of the memory system quite like we have. I totally agree with my younger self on long pipelines, though.
TL;DR: We think we'll succeed because we are combining old hardware ideas with new software ideas to make (in our opinion) the best architecture, and because this is the best time for a new fabless semiconductor startup. We have actually built the mythical "sufficiently smart compiler", thanks to some very clever (but simple) hardware that enables people to program it effectively. We think we will be more energy efficient, more performant, and easier to program than our competition in our target areas (HPC, high-end DSP).
I wish you and your project all the best. Hardware, and especially CPUs and the like, is tough and rare. We haven't seen many new competitors (if any) in that area, especially relevant ones.
When you say you rest your high hopes on the toolchain, aren't you a bit scared of what happened to Itanium? Intel had the toolchain under their own R&D, and it failed because they couldn't deliver. I'm interested to hear more about the "mythical 'sufficiently smart compiler'" and how it relates to your architecture.
Based on our software results so far, I wouldn't say I'm scared, but am definitely anxious. Since our main focus up to this point has been building the first test chip along with software tool prototyping, our progress in compiling "real" libraries and small applications is fairly early, but we're happy with the results. Now that we've taped out, we can devote more resources, and once we have real hardware, we will be able to test our applications ~1000x faster than the cycle accurate software simulation capabilities we have right now.
All that being said, we have good reason to believe that our approach is valid and won't suffer the flaws of "Itanic" that I've mentioned on this page and many times elsewhere. Unlike any prior VLIW (Intel called their bastardized version implemented in Itanium "EPIC"), our hardware was built with an emphasis on hard real time guarantees and strict determinism at every level of the design, which allows for a level of optimization that is impossible on any other architecture.
Basically, if the compiler has to make worst-case assumptions almost all the time to prevent control and data hazards (as Itanium's compilers did, due to a very convoluted design), how can you expect compiler-generated programs to be at all performant/efficient?
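To make the point concrete (with invented latencies, not REX's real numbers): when every operation's latency is exact and deterministic, a simple earliest-start scheduler produces a tight static schedule; force it to assume a worst case on every edge (say, any load might miss a cache) and the same dependence chain balloons.

```python
# Illustrative sketch: why exposed, deterministic latencies matter.

def schedule(ops, deps, latency):
    """Earliest-start schedule: an op starts once all its deps complete.
    `ops` must be listed in dependency order."""
    start = {}
    for op in ops:
        start[op] = max((start[d] + latency[d] for d in deps.get(op, [])),
                        default=0)
    return start

ops  = ["load", "mul", "add", "store"]
deps = {"mul": ["load"], "add": ["mul"], "store": ["add"]}

exact = {"load": 3, "mul": 2, "add": 1, "store": 1}  # known, fixed latencies
worst = {op: 10 for op in ops}                       # pessimistic bound per op

exact_start = schedule(ops, deps, exact)
worst_start = schedule(ops, deps, worst)
print(exact_start["store"] + exact["store"])  # chain completes in 7 cycles
print(worst_start["store"] + worst["store"])  # padded out to 40 cycles
```

The same toy chain finishes in 7 cycles with exact latencies but 40 with blanket worst-case padding, which is roughly the trap a compiler for a non-deterministic machine falls into.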
Does this mean that users have to recompile the world for every CPU generation because of microarchitectural changes? I.e., is the pipeline exposed? Are you planning a Mill-like intermediate-level bytecode?
Yes, and in certain cases even within the same generation of chip (e.g. same microarchitecture but a smaller number of cores and/or less memory per core; no problem if you compiled for fewer cores/less memory and it runs on a "bigger" chip), since the compiler would need to remap the program and data locations based on the global address map.
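As a toy illustration of why a binary compiled for a "smaller" configuration still runs on a "bigger" chip but not vice versa (the slice size and layout here are invented, not REX's actual global address map): if each core owns a fixed-size slice of one flat physical address space, the addresses a binary bakes in depend only on constants that a larger chip preserves, while a chip with less memory per core invalidates all of them.

```python
SLICE = 128 * 1024   # hypothetical per-core scratchpad size, in bytes

def global_addr(core_id, offset, slice_bytes=SLICE):
    """Flat global address of byte `offset` in `core_id`'s scratchpad."""
    assert 0 <= offset < slice_bytes
    return core_id * slice_bytes + offset

# A binary compiled for a 4-core part only refers to cores 0..3; those
# addresses mean exactly the same thing on a 16-core part with the same
# per-core memory:
print(hex(global_addr(3, 0x10)))                          # 0x60010

# But halve the memory per core and every baked-in address moves, so
# the compiler must remap program and data (i.e., you recompile):
print(hex(global_addr(3, 0x10, slice_bytes=64 * 1024)))   # 0x30010
```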
It is a very simple pipeline, and we expose the exact latencies required for all operations, along with things like branches with delay slots. As I have mentioned ad infinitum, determinism is a key part of our architecture, and having a fixed pipeline is necessary for it. Plus, we want to give anyone crazy and skilled enough to hand-write assembly the freedom to be crazy ;)
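For readers unfamiliar with delay slots: on an exposed pipeline, the instruction after a branch always executes, and the compiler's job is to put something useful there instead of a NOP. A toy sketch of that transformation (invented mnemonics and register names, nothing REX-specific):

```python
# Toy delay-slot filling: if the instruction before a branch doesn't
# feed the branch's operands, hoist it into the delay slot.

naive = [
    "add  r3, r1, r2",   # independent of the branch condition
    "beq  r4, r5, done", # branch takes effect one instruction late
    "nop",               # wasted delay slot
]

def fill_delay_slot(prog):
    out = list(prog)
    for i, ins in enumerate(out):
        if ins.startswith("beq") and i > 0 and out[i + 1] == "nop":
            # registers the branch reads, e.g. {"r4", "r5"}
            branch_srcs = {t.rstrip(",") for t in ins.split()[1:3]}
            prev_dest = out[i - 1].split()[1].rstrip(",")
            if prev_dest not in branch_srcs:      # safe to reorder
                out[i - 1], out[i + 1] = out[i + 1], out[i - 1]
    return [ins for ins in out if ins != "nop"]

print(fill_delay_slot(naive))
# the add now executes in the branch's delay slot; the NOP is gone
```

With fixed latencies and a fixed pipeline, this kind of reordering is statically provable safe, which is exactly the freedom a deterministic design hands to the compiler (or to a hand-written-assembly enthusiast).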
For the applications we are targeting (HPC and DSP-like workloads), source code is always available, there are very long periods between recompiles forced by source code changes, and optimization is a key factor. Our customers aren't just accepting of recompiling for every new generation of hardware; they expect it, and want to be able to take advantage of any new improvements the compiler can make.
Did you stick with the parallel, SerDes-less interfaces for your interchip I/O? 48 GB/s implies a pretty high signalling rate to not have a CTLE, DFE, etc.
Why 3 interchip links? What network topology are you planning to use to scale to large numbers of chips? If you're still using parallel I/O, how are you planning to communicate beyond a single PCB?
What memory interface are you using? The article seems to confuse your interchip links with your memory controller.
We have partnered with a startup (we'll announce who soon enough) who shared a lot of ideas about chip-to-chip I/O with me. While they call it a SerDes, it is in fact a source-synchronous (clock-forwarded) link that runs 5 bits over 6 wires. It is silicon proven, and is capable of up to 125Gb/s over 12mm while being a little over 10x more energy efficient (in terms of pJ/bit) than other available VSR SerDes. Obviously it is short reach over PCB, but we imagine (yet to be tested) we can extend that reach a bit using a more exotic PCB laminate (Megtron, Rogers, etc.), or by going over wire (tested over 6 inches of Huber+Suhner SMA cable). Right now, we are only using it to go between chips in a Multi-Chip Module, or under 12mm on a PCB. Big bonus: as of a month ago, it is a JEDEC standard!
Most of the information in the linked article is very outdated (~16 months old); we have since decided to ditch the idea of separate DRAM and "external I/O" interfaces and just have our chip-to-chip links on all four sides of the chip. The chip-to-chip interface uses the same protocol as our Network on Chip and extends the same 2D mesh. We are also looking into (and have a sketched-out plan for) directly interfacing this I/O with HBM dies in the same MCM package. As for supporting other memories/I/O, we are leaning towards "adapter chips" that would convert our chip-to-chip interface to DDR4, Ethernet, InfiniBand, etc.
As far as bandwidth numbers go, the aggregate bandwidth for the test chip we have just taped out (16 cores + 2 chip-to-chip I/O macros on TSMC 28nm, 12mm^2 in size) is 60GB/s, though for the planned production chip we will be over 256GB/s. I have a good feeling we will be a fair margin higher than that, but I would rather under-promise and over-deliver.
25 Gbps for a very-short-reach interconnect sounds possible, although having to go through an adapter chip is going to kill your latency from a system perspective. If you haven't already, you should check out the D. E. Shaw Research Anton 2 chip. It is on an older process, but it has 66 4-way processor cores running at 1.65 GHz and a roughly comparable network (although 6-way rather than 4-way), in addition to all of the MD-specific hardware. It uses a similar memory hierarchy (although it does use non-coherent caches). Getting good performance out of software-managed caches is very difficult in practice, even if you know your problem extremely well. With very carefully written software (and a sufficiently friendly problem) good performance is possible, but it definitely isn't easy.
I highly doubt that a direct interface would be possible with either of them, though if you really wanted it, you could make an adapter (though fat chance Intel would open up QPI enough to allow for it). We haven't officially announced the partnership, though I can point you at JESD247.
Have you published any white papers detailing any of the following: architecture, instruction set, software availability, benchmarking/application porting and performance, etc.?
I read a couple of times that you got funding from various government agencies. Most of these funding agencies publish RFP responses or slide decks unless you insisted on an NDA and it was approved. I couldn't find any documents talking in depth about your work.
I am in the HPC space (academic, research) and am genuinely interested in learning more about your work.
We'll be releasing a whitepaper by September covering the architectural basics, which will coincide with a public release of an SDK. We did have a paper[0] at last year's MemSys conference that goes over some of the basic ideas of our compiler, though it is pretty vague (due to our reluctance to share prior to having patent protection at that time).
We'll send an announcement on the mailing list when tools (software-based and FPGA-based simulation, along with actual silicon) become available. We will only be getting 200 chips back from this initial test run, so we have to be fairly stringent about who gets hardware eval units in the coming months, but if you have a compelling application idea, feel free to send me an email (in my HN profile) and let me know.
This sounds good. One of the big problems with the Mill CPU is that they don't have working silicon yet. I would say getting as many people as possible to play with it is crucial for a new architecture to gain traction.
Even better than that would be an open architecture like RISC-V, though an open architecture has its own drawbacks.
Also, as a side note: what do you think about the possibility of using genetic algorithms and machine learning to generate more efficient interconnect architectures?
From what I understood, a lot of the software stack would require rewriting. As it is, it doesn't look like it would be friendly to a Linux environment running natively on it, but could be more amenable to a coprocessor-like environment where the host would load programs and the Neo would run them.
In the near term, yes, though that is primarily for business reasons. Supporting Linux is technically possible (old projects such as uClinux were built around running on MMU-less systems like ours, and mainline Linux 4.2 started to have limited support for a couple of MMU-less systems), but our target areas (HPC and DSP-like tasks) don't necessarily need anything more than a microkernel/RTOS. A full OS like Linux kind of gets in the way once the basic stuff like memory allocation, garbage collection, and job scheduling is handled separately (by our software tools). Since we are a small startup focusing on a small area, we want to bite off a part of the problem we can easily chew rather than immediately jumping at Linux, so we chose our target applications/market accordingly.
I am perfectly fine with the idea to have a supercomputer running a specialized OS and a front-end machine running the sysadmin-friendly OS. It feels like a Connection Machine with fewer blinking lights.