The OP is advocating for MESO - magneto-electric spin–orbit logic.
From Wikipedia: Magneto-Electric Spin-Orbit (MESO) is a technology for constructing scalable integrated circuits that utilizes spin–orbit transduction of electrons. It is intended as a replacement for CMOS technology.
Compared to CMOS, MESO circuits switch with less energy, operate at lower voltage, and achieve higher integration density.
The article states that MESO was developed by the author's research group at Intel. To be clear, there's nothing especially new about research into beyond-CMOS tech. But as we all know, it's rare for this sort of research to have real consequences for actual products.
While this is a great improvement in general, it's most needed on mobile devices.
This was especially a problem with games like Ingress and Pokémon Go, where you need the map to be correct and it often isn't.
Even more difficult is using those apps in places like the Arctic or Antarctic: there's no way to accurately place GPS coordinates, deal with game portals, or get directions other than a compass heading.
We're not going to get very far letting machines guess what we want using models we can't interpret.
Explainability is important, and it's critical in applications where lives are on the line.
Explainability that amounts to guessing at what's happening inside a black-box model won't do either. Nothing short of complete transparency will: the model, and why it's doing what it's doing, with source that makes sense to humans. A full model audit. No guessing.
I can think of only one company that's attempting this, and it's not any of the names you hear working on explainability, including DARPA.
If the visual cortex were a machine model, people would be complaining about how we can't explain it and how it's a dangerous black box. They'd probably tout the many optical illusions as demonstrations of this danger.
Yet we don't demand that other humans explain how their visual cortex works. There's a double standard here.
That's a red herring. Either we're talking about the visual cortex, or about intelligence and decision-making/planning; one works very differently from the other.
What's the difference between the output of a network and what an expert says? Unless you can probe mathematically why networks or the human mind work, there's no explanation for either method. I can argue that humans learn from examples the same way neural networks do; you can argue the opposite. But we have no way to show that either claim is true or false.
You can't do a full model audit on a neural net, but there are architectures, such as the Transformer (an attention scheme), that can give a lot of insight into what the net is attending to. We can also visualise which inputs maximally activate a deep neuron in a CNN. Not all DL models are truly black boxes.
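To make the attention point concrete, here's a minimal sketch of a single self-attention head in plain NumPy. The embeddings and weight matrices are random stand-ins, not a trained model; the point is that the attention matrix itself is an inspectable artifact:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8       # embedding dimension
tokens = 5  # sequence length

# Random stand-ins for token embeddings and learned projections.
X = rng.normal(size=(tokens, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv
scores = Q @ K.T / np.sqrt(d)

# Row-wise softmax: each row is a probability distribution over inputs.
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)

# This matrix is the interpretability hook: row i tells you which
# input tokens most influenced output position i.
print(np.round(attn, 2))
output = attn @ V
```

In a real trained Transformer you'd pull these weights out of each head the same way; visualizing them per layer is one of the standard ways to see "what the net thinks."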
The problem is that these models have no reflection capabilities, unlike people.
The explanation is always produced by an entirely separate external system, and sometimes by an actual intelligence.
State models (Markovian ones) can sometimes explain things, but not reliably, especially in complex cases.
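The state-model case is worth spelling out, because a Markov chain is inspectable by construction: the transition matrix *is* the explanation. The states and probabilities below are invented for illustration:

```python
import numpy as np

# Hypothetical 3-state chain; each row sums to 1.
states = ["idle", "loading", "error"]
P = np.array([
    [0.8, 0.2, 0.0],  # from idle
    [0.1, 0.7, 0.2],  # from loading
    [0.5, 0.0, 0.5],  # from error
])

# "Why did we end up in `error`?" -- read it off directly:
# P[1, 2] says loading -> error happens 20% of the time.

# Long-run behaviour is also available in closed form: the stationary
# distribution is the left eigenvector of P with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
stationary = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
stationary /= stationary.sum()
```

Where this breaks down, as the comment says, is complexity: once the state space explodes (or the states themselves are learned embeddings), reading the matrix stops being an explanation a human can follow.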
On the other hand, humans tend to make a decision first and retrofit an explanation to it later, even if the explanation is completely wrong, and we then assume the explanation is the reason rather than the effect.
There are a lot of reasons, some of them cultural. Where failure is seen as shameful, people bury that experience and try completely different things, looking for a success they can feel comfortable sharing.
Not all societies view a shortcoming or failure as a learning opportunity and stepping stone to success.
Self-absorbed folks and VCs often ask this exact question, "why are you the only one with this idea?", which is short-sighted and impossible to answer.
Either everyone else is stupid, or too lazy to come up with a plan to do it, or a myriad of other reasons. It doesn't matter why.
As someone else pointed out, industry timing, the state of the technology, and funding all drive these possibilities.
In the end only execution matters, and that comes with a long list of prerequisites aligning just right to even begin to form a possible positive outcome. It's luck and determination. The rest fail anywhere in-between, only to try again a decade or so later when the cycle repeats.
Funding goes to popularity, not new research. This is true even at DARPA, where new technology gets ignored because it doesn't fit some preconceived notion or there's no framework to evaluate it.
Case in point: XAI, explainable artificial intelligence. The algorithms we use today give us black-box models we can't interpret directly. So instead of fixing the algorithms, they focus on modeling the models and "guessing" which come close enough via simpler, more intuitive stacks of models. Guesses upon guesses.
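That "modeling the models" approach can be sketched in a few lines: sample around an input, fit a simple linear surrogate to the black box's outputs, and read the surrogate's weights as the "explanation". Everything below, the black-box function, the instance, the sampling scale, is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def black_box(x):
    # Stand-in for an uninterpretable model: some nonlinear function.
    return np.tanh(3 * x[..., 0]) + x[..., 1] ** 2

# Instance we want "explained", and perturbed samples around it.
x0 = np.array([0.1, 0.5])
samples = x0 + rng.normal(scale=0.05, size=(200, 2))
y = black_box(samples)

# Fit a linear surrogate (design matrix with a bias column).
A = np.hstack([samples, np.ones((200, 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# w[:2] approximates the black box's local sensitivities -- a guess
# about the model, not the model's actual reasoning.
```

This is essentially what local-surrogate explainers do, which is exactly the complaint above: the "explanation" is a second model fit to the first one, so you're stacking guesses.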
There has been research in new algorithms that generate open models where the weights make sense and are editable. There is one company working on this, but it's not nearly enough.
There's another set of research that has managed to convert black box models into open ones, giving full transparency.
Then there's research into asynchronous circuits, which don't require a clock. These can reduce power usage and boost efficiency on low-power devices. Not much going on here.
There's one group building a RISC-V architecture with these, based on 30+ year-old research, and the inventor still has not seen his life's work commercialized.
Then there are various types of imaging and tracking using signals we use every day, such as BT, Wi-Fi, and cellular among others, to locate devices or people.
You can find several universities doing this; none have made it commercially.
I actually think that's a pretty hot topic right now. Tons of people are working on commercializing it. Heck, a couple of years ago I even did a contract with a major government where I helped them fix their recommender, and it was built with explainability in mind, because explainability allows politicians to understand the risk and to install human-review safeguards.
Just look at this issue, 3 years old and counting: https://github.com/moby/moby/issues/28400