The ideas that I, as a programmer in the traditional sense, want to communicate to computers are not the ideas I want to communicate to humans.
I might ask my friend to move the report he's working on to a shared network location so I can load it into my computer and read it: "Hey Joe, can you move the report to the share?"
Joe might ask the computer to do the same thing: "cp /home/joe/reports/cool_report.pdf /network/share/reports/cool_report.pdf"
The actual ideas that are communicated are very similar, but not the same. English is good for communicating one idea while bash/GNU is good for communicating the other.
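To make the difference concrete: the English request leaves several details implicit (move or copy? which report? where exactly?), while the shell forces each one to be pinned down. A quick sketch, using made-up /tmp paths rather than Joe's actual ones:

```shell
# Stand-in directories for Joe's home and the network share (hypothetical).
mkdir -p /tmp/joe/reports /tmp/share/reports
echo "quarterly numbers" > /tmp/joe/reports/cool_report.pdf

# "cp" copies: Joe keeps his working copy, and I get one on the share.
cp /tmp/joe/reports/cool_report.pdf /tmp/share/reports/cool_report.pdf

# "mv" would instead relocate the file, removing Joe's original:
# mv /tmp/joe/reports/cool_report.pdf /tmp/share/reports/cool_report.pdf
```

The English "move" is ambiguous between these two commands; Joe's `cp` resolves the ambiguity in a way the sentence never had to.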
Just because English has some established formalism doesn't mean it's good at communicating the ideas we want to communicate to computers.
BTW, I don't care which field you put the issue under; it's the same issue and anyone who cares about it might contribute to the discussion.
You very solidly supported the argument you are against.
"Hey Joe, can you move the report to the share?"
is a great way to communicate something you want done. It doesn't matter if it's said to a computer, a person, or a dog. If a system can't operate at that level, then it's not sophisticated enough to actually meet the needs of the user. One day computers will get there; they haven't done so yet not because it's a "bad way to talk to a computer" but because computers have not yet become that sophisticated.
In the March 2014 version of Li Deng and Dong Yu's book on Deep Learning, they briefly relate Hierarchical Temporal Memory (HTM) to the convolutional neural networks that are popular in Deep Learning.
It's worth noting that most people doing Deep Learning aren't trying to replicate the brain, but just want to do a better job at Machine Learning (ML) and Artificial Intelligence. Here's how I see it as someone working on Deep Learning; someone correct me if I'm wrong.
Deep Learning:
Trying to do ML - yes
Trying to replicate brain - no (for the most part)
Numenta (HTM/CLA):
Trying to do ML - yes (not sure how much they succeed)
Trying to replicate brain - yes, but
(i) we don't know exactly how the brain works
(ii) they make approximations
Projects like Nengo (http://nengo.ca/):
Trying to do ML - no
Trying to replicate brain - yes
It seems like there ought to be a level between "simulating the brain" and just coming up with your own algorithm. I would imagine that level as "see what the brain can do at a particular low level, see how close you can come to duplicating that, see what unique approach you can derive there, apply it to other areas, repeat". That level would be "inspired by the brain without trying to simulate it". It seems like in his popular talks Hawkins implies he's doing that, but in his actual software, as you mention, he winds up doing just a variation of standard machine learning.
It would be nice if he had postponed deciding he had a solution and instead kept banging on the problem of what algorithms can be kind of like X or Y thing that the brain appears to do. I'd like to think you could mine a bunch of ideas from this.