Hacker News

Hah, nice catch!

It seems to me (though I am very biased!) that most of the landmark papers on the aggregate behavior of neural networks come from physics (e.g., Hopfield networks) and from computer science (e.g., Perceptrons, Minsky).



You're right, but it quickly moved away from physics. McClelland and Rumelhart also did the seminal work in cognitive science in the 80s.

Here's an upcoming review: http://www-psych.stanford.edu/~jlm/papers/McClellandIPTOPiCS...

Then there are folks like Sejnowski (http://www.salk.edu/faculty/sejnowski.html; http://en.wikipedia.org/wiki/Terry_Sejnowski) who pushed the field more into biology.

Still, I'm really surprised by the current chasm, if that map is reflective of the state of the field.

EDIT: For spelling.


Sejnowski pushed it back into physics (sort of) with his information maximization work, very interesting...

http://papers.cnl.salk.edu/PDFs/An%20Information-Maximizatio...


Thanks for the links.


Happy to help. I learn so much around here, it's rare I can return the favor.


Those are artificial neural networks -- not necessarily anything like the real ones.


Some are. Some aren't. It depends on the instantiation details. Folks seem much more willing to implement at a biologically plausible level these days than they were even a few years ago.


My point is that we learn much more about neurobiology from a biological viewpoint than by studying artificial networks from a computer science perspective. If anything, the biology informs the computer science as you suggested.


It's more bidirectional than that. Higher-order cognition (language, memory, even perception and attention) isn't so easily reducible. Take the hippocampus, for instance: we can simulate its circuitry with precision, but that doesn't explain memory formation and retrieval. More "artificial" approaches can help explain systems from the top down, even as biological constraints are more rigid from the bottom up. Both modeling approaches are likely to meet somewhere in the middle. The computational shortcuts in the more abstract models (e.g., backprop) are really just shorthand that lets investigators set aside details that are less biologically driven, or not yet understood in biological terms.

For instance, I know of one group using analytic techniques from social networks to correlate brain regions in fMRI data. Is the brain a massive social network? I don't think anyone would say that literally. But right now, that approach is as good as any other for examining n-dimensional relationships in highly complex data.
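The approach described above can be sketched roughly like this: treat regional fMRI time series as nodes, turn their pairwise correlations into graph edges, then apply a standard network measure. Everything here (sizes, random data, the 0.15 threshold, degree centrality as the chosen metric) is an invented assumption for illustration, not anyone's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_timepoints = 10, 200

# Placeholder "fMRI" data: one random time series per brain region.
timeseries = rng.standard_normal((n_regions, n_timepoints))

# Pairwise Pearson correlation between regional time series.
corr = np.corrcoef(timeseries)

# Threshold into a binary adjacency matrix (graph edges), dropping self-loops.
adjacency = (np.abs(corr) > 0.15) & ~np.eye(n_regions, dtype=bool)

# Degree centrality, a basic social-network measure: how many other
# regions each region is "connected" to above the threshold.
degree = adjacency.sum(axis=1)
print(degree)
```

From here one could compute richer graph metrics (clustering, modularity, hubs), which is presumably where the social-network toolkit earns its keep.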


This discussion reminds me of Paul Krugman's argument for cartoon models. I personally think that we can isolate, and therefore explain, simple parts of aggregate neural behavior by artificial construction more easily than we can by doing careful biology.

Incidentally, you might be interested to know that restricted Boltzmann machines are much more biologically plausible than backprop, and seem to work faster and better.
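For anyone curious what training an RBM looks like, here is a minimal sketch of a single contrastive-divergence (CD-1) update with binary units. The sizes, random data, and learning rate are placeholders, and biases are omitted for brevity; this is an illustration of the update rule, not a production implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 6, 4, 0.1

# Small random initial weights; biases omitted to keep the sketch short.
W = rng.standard_normal((n_visible, n_hidden)) * 0.01

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One binary "training" vector (placeholder data).
v0 = rng.integers(0, 2, size=n_visible).astype(float)

# Positive phase: hidden probabilities and a sample, given the data.
p_h0 = sigmoid(v0 @ W)
h0 = (rng.random(n_hidden) < p_h0).astype(float)

# Negative phase: reconstruct the visibles, then recompute the hiddens.
p_v1 = sigmoid(W @ h0)
v1 = (rng.random(n_visible) < p_v1).astype(float)
p_h1 = sigmoid(v1 @ W)

# CD-1 weight update: positive minus negative correlation statistics.
W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
```

Note the locality: each weight is updated from the activities of only the two units it connects, which is a big part of the biological-plausibility argument relative to backprop's global error signals.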



