Yeah, that's definitely a nice, practical book for machine learning stuff (but not really AI proper). For a practical book on AI itself, Norvig's Paradigms of Artificial Intelligence Programming:
"... Yeah, that's definitely a nice, practical book for machine learning stuff (but not really AI proper). For a practical book on AI itself, Norvig's Paradigms of Artificial Intelligence Programming ..."
It has been well funded, attracted the best minds (Minsky, McCarthy, Newell) at places like MIT and Stanford, and resulted in lots of useful languages & tools. [0] From Cyc to Powerset, companies have tried to use these techniques to give machines human-like intelligence, yet they have failed.
Why?
Bruce Sterling reckons it has a lot to do with the language. [1] We keep assigning intelligence to what is essentially a 21st-century equivalent of the knitting mills or steam engines of yesteryear. Maybe that's why Google has taken the approach it has. Instead of creating intelligent software to find meaning, it taps into users, who have already assigned intelligence to documents, and combines well-known sorting and linking techniques and software engineering with not-so-widely-applied math. That's how Google appears to make sense of what we search for.
I think that some progress in sensory systems would make AGI easier. Obviously, humans spend most of their time structuring the complex, physical world around them, not a stream of text like most AI approaches dictate.
Our genetic code isn't that huge, but the amount of information contained in our brains is. All that information must be coming from somewhere. I refuse to believe that evolution has managed to create compression algorithms _that_ awesome.
Maybe this is what Google does, in a way. They happen to have a huge corpus of machine-readable data, which is in a sense equivalent to what a sophisticated sensory apparatus would give you. I intended to disagree with you when I started writing this comment, but now I'm not so sure. Human intelligence obviously doesn't spend its time reading petabytes of information on other people's search habits, but maybe Google is actually onto something.
I think that what Google is doing is along the same lines as what human brains do. Google is crunching a load of mainly textual data to create an accurate statistical model of context. Humans use a much smaller amount of text but combine it with visual, auditory, and touch sensory input to create context. Humans also have the ability to refine their mental model by doing experiments on the world, whereas Google, as an observer, has to just use what people create. This means that Google (or any AI) needs a lot, lot, lot more data to train to the same accuracy, which is now possible thanks to the internet.
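The "statistical model of context" idea above can be sketched in a toy form: count which words tend to appear together across documents, so that frequently co-occurring pairs stand in for contextual association. This is a minimal illustration, not Google's actual method; the function name and the sample documents are made up for the example.

```python
from collections import Counter
from itertools import combinations

def cooccurrence(docs):
    """Count how often each pair of words appears in the same document.

    A crude stand-in for a statistical context model: pairs with high
    counts are (by this toy measure) contextually associated.
    """
    counts = Counter()
    for doc in docs:
        words = set(doc.lower().split())          # one count per document
        for a, b in combinations(sorted(words), 2):
            counts[(a, b)] += 1
    return counts

docs = ["the cat sat", "the cat purred", "the dog barked"]
counts = cooccurrence(docs)
# ("cat", "the") co-occur in two of the three documents
```

Real systems would weight these counts (e.g. by inverse document frequency) and use vastly more data, which is the commenter's point about the internet making this feasible.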
What is "AI proper", though? I am fairly certain that when true AI is finally built, it won't be an expert system or based on first-order logic.
The classical AI stuff is still interesting, of course, and still has practical applications (e.g., A* search for games). But I tend to count "the other stuff" (e.g., data mining) as AI, too.
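For anyone who hasn't run into it, the A* search mentioned above fits in a few lines. This is a generic sketch (the function names and the grid example are my own, not from any particular game engine); it assumes the heuristic never overestimates the remaining cost, which is what guarantees an optimal path.

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """A* search: return a cheapest path from start to goal, or None.

    neighbors(node) yields (next_node, step_cost) pairs.
    heuristic(node) must not overestimate the true remaining cost.
    """
    # Frontier entries: (g + h, g, node, path-so-far), ordered by g + h.
    frontier = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, cost in neighbors(node):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(
                    frontier, (new_g + heuristic(nxt), new_g, nxt, path + [nxt])
                )
    return None

# Toy 5x5 grid, 4-connected moves, Manhattan-distance heuristic.
def grid_neighbors(p):
    x, y = p
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < 5 and 0 <= ny < 5:
            yield (nx, ny), 1

goal = (4, 4)
path = a_star((0, 0), goal, grid_neighbors,
              lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1]))
# path is an optimal 8-step route from (0, 0) to (4, 4)
```

Games typically use exactly this on a tile or navigation-mesh graph, swapping in terrain-dependent step costs.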
I'd agree that machine learning falls within the purview of AI -- I'm just saying that if all you want to learn is machine learning, then a book on just that subject would probably be more relevant than AIMA. I just got "Pattern Recognition and Machine Learning" by Bishop, and it's a great book.
http://norvig.com/paip.html
is a great book, and a nice complement to Russell and Norvig, which is somewhat more theory-oriented.