Hacker News | tiger10guy's comments


I'll just leave [this](https://www.youtube.com/watch?v=m5tGpMcFF7U) here.

Dan Dennett has said similar things.


Yes!

... but which variables?

The number of combinations grows exponentially as you add dimensions, so you can't have too many.
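A quick sketch of that blow-up: with v possible values per variable and d variables, there are v**d distinct settings to consider (v = 3 below is a made-up number for illustration).

```python
# With v values per variable and d variables, the search space has
# v ** d points -- exponential in the number of dimensions.
v = 3
for d in range(1, 7):
    print(d, v ** d)  # 3, 9, 27, 81, 243, 729
```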


You want Systematic Inventive Thinking: http://en.wikipedia.org/wiki/Systematic_inventive_thinking


Their method for training a shallow network requires that one first train a deep network.
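A minimal sketch of that pipeline, assuming the deep "teacher" network is already trained: the shallow "student" is then fit to the teacher's logits rather than to the original labels. The linear teacher and toy data here are stand-ins for illustration, not the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained deep network: all we need from it is its logits.
W_teacher = rng.standard_normal((10, 3))

def teacher_logits(X):
    return X @ W_teacher

# Step 1 (assumed done above): train the deep teacher.
# Step 2: have the shallow student regress the teacher's logits.
X = rng.standard_normal((200, 10))
Y_soft = teacher_logits(X)  # soft targets from the teacher, not labels

W_student, *_ = np.linalg.lstsq(X, Y_soft, rcond=None)

mimic_mse = np.mean((X @ W_student - Y_soft) ** 2)
print(mimic_mse)  # ~0 here, since the toy teacher is itself linear
```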


The ideas that I, as a programmer in the traditional sense, want to communicate to computers are not the ideas I want to communicate to humans.

I might ask my friend to move the report he's working on to a shared network location so I can load it into my computer and read it: "Hey Joe, can you move the report to the share?"

Joe might ask the computer to do the same thing: "cp /home/joe/reports/cool_report.pdf /network/share/reports/cool_report.pdf"

The actual ideas that are communicated are very similar, but not the same. English is good for communicating one idea while bash/GNU is good for communicating the other.

Just because English has some established formalism doesn't mean it's good at communicating the ideas we want to communicate to computers.

BTW, I don't care which field you put the issue under; it's the same issue and anyone who cares about it might contribute to the discussion.


You very solidly supported the argument you are against.

"Hey Joe, can you move the report to the share?"

is a great way to communicate something you want done. It doesn't matter whether it's to a computer, a person, or a dog. If a system can't operate at that level, it's not sophisticated enough to actually meet the needs of the user. One day computers will get there; they haven't yet, not because it's a "bad way to talk to a computer," but because computers have not yet become that sophisticated.


I'm pretty sure there's no autoencoder involved; it just looks like a vanilla conv net.

This is the implementation: http://torontodeeplearning.github.io/convnet/
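For what it's worth, the core op of a vanilla conv net is just a sliding dot product. A naive sketch of that one operation (not the linked implementation, which is optimized GPU code):

```python
import numpy as np

def conv2d_valid(img, kern):
    """Naive 'valid' 2-D cross-correlation: slide the kernel over the
    image and take a dot product at each position."""
    H, W = img.shape
    kh, kw = kern.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

feat = conv2d_valid(np.ones((4, 4)), np.ones((3, 3)))
print(feat)  # a 2x2 feature map, every entry 9.0
```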


I don't think information about how Vicarious systems work is publicly available. Presumably they're using something similar to HTM.


In the March 2014 version of Li Deng and Dong Yu's book on Deep Learning, they briefly relate Hierarchical Temporal Memory (HTM) to the Convolutional Neural Networks that are popular in Deep Learning.

http://research.microsoft.com/apps/pubs/default.aspx?id=2093...

It's worth noting that most people doing Deep Learning aren't trying to replicate the brain, but just want to do a better job at Machine Learning (ML) and Artificial Intelligence. Here's how I see it as someone working on Deep Learning; someone correct me if I'm wrong.

Deep Learning:
Trying to do ML - yes
Trying to replicate brain - no (for the most part)

Numenta (HTM/CLA):
Trying to do ML - yes (not sure how much they succeed)
Trying to replicate brain - yes, but (i) we don't know exactly how the brain works and (ii) they make approximations

Projects like Nengo (http://nengo.ca/):
Trying to do ML - no
Trying to replicate brain - yes

I'm not very familiar with Nengo.

Edit: formatting


Well,

It seems like there ought to be a level between "simulating the brain" and just coming up with your own algorithm. I would imagine that level as "seeing what the brain can do at a particular low level, seeing how close you can come to duplicating that, seeing what unique approach you can derive there, applying it to other areas, and repeating." That level would be "inspired by the brain without trying to simulate it." It seems like in his popular talks Hawkins implies he's doing that, but in his actual software, as you mention, he winds up doing just a variation of standard machine learning.

It would be nice if he had postponed deciding he had a solution and instead kept banging on the problem of what algorithms can be kind of like X or Y thing that the brain appears to do. I'd like to think you could mine a bunch of ideas from this.


How did you get those numbers?


Answers to real-world questions almost always lie somewhere between 0 and 1. Your question is no exception.

