So I was thinking: I really like AI, but I don't want to be an academic. So I'm wondering, are there any industries where AI is actually being used? I've heard of some projects like tumor detection and even some use in finance, but do people actually trust it enough to use it?
I'm getting my PhD in Machine Learning right now, so I'll try to give you some ideas.
First of all, Machine Learning will be useful to you because you'll learn a lot about statistics, data mining, optimization and efficient programming. Even if you don't get to work on an ML project in industry directly, some of the enabling techniques might come in handy.
Now, ML is also interesting to industry. Google is essentially a Machine Learning company: they use statistics to evaluate and display data, be it website ranking, recommender systems, or whatnot. The same goes, to some extent, for MS, Facebook and Yahoo.
There is also increasing demand in the computational biology community for ML methods. Pharma companies are producing terabytes of data that will need to be analysed with intelligent algorithms, so that might be an interesting direction to consider.
In times of unlimited data access, studying how to teach computers to learn from data seems like the right thing to do.
Are you kidding me? Statistical AI is the hottest thing in computing now. You might have heard of it as "Machine Learning".
However, you won't see any of the human emulators that were being conjured up in the '70s and '80s. The new AI is in the business of synthesizing massive data into intelligence.
Right, AI isn't recognizable as the sci-fi thing you might have in mind... it's automatic financial trading software, or robotic path finding, or target acquisition.
Thanks for the answers. I guess I was really asking if anyone is actually using the techniques.
E.g. in finance, I'm under the impression most people still use standard pricing models rather than something like an ANN (or another statistical learning algorithm). The reason I was told is that people are afraid of using a generic black box and instead prefer something like Black-Scholes (or whatever) that has an explanation they can understand.
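For reference, the "white box" in question really is compact: the standard Black-Scholes price of a European call fits in a few lines using only the standard library (the parameter names here are my own choice):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF, expressed via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call.
    S = spot price, K = strike, T = years to expiry,
    r = risk-free rate, sigma = annualized volatility."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
```

Every term has a textbook interpretation a risk manager can audit, which is exactly the property a trained ANN lacks.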
Online advertising uses all kinds of machine learning.
When companies are running billions of impressions across multiple ad networks, with different creative variants, it takes serious algorithms to cut through the noise and find the inventory that's performing well. It's a tricky problem because click-through rates are typically tiny, meaning you're looking for a handful of data points among millions of impressions.
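To make the sparsity problem concrete, one common trick is to shrink each variant's observed rate toward a network-wide prior, so a variant with 1 click in 100 impressions doesn't look ten times better than everything else. A minimal sketch, assuming a Beta-prior smoothing with made-up prior numbers (a 0.1% baseline CTR):

```python
def smoothed_ctr(clicks, impressions, prior_clicks=5, prior_impressions=5000):
    # Beta-prior (additive) smoothing: variants with few impressions are
    # pulled toward the assumed baseline CTR of 0.1% instead of swinging
    # wildly on a handful of clicks; with lots of data the prior washes out.
    return (clicks + prior_clicks) / (impressions + prior_impressions)
```

With real data you would estimate the prior from the whole network rather than hard-coding it, but the shape of the fix is the same.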
Contextual analysis (figuring out the context of the page where the ad appears) is a big deal now also. That's a lot of natural language processing on some of the most diverse (read: difficult) content available.
I happen to work for a company that does both of these. :)
Heh. I'm assuming you're calling Black-Scholes the lamp post and the more generic technique the alley.
I suppose it comes from having to justify what you're doing to higher-up managers who don't have the time or inclination to learn something new. I mean, if you can't explain it in 30 seconds, they won't like it.
He wrote a paper on the price of cotton in exchanges obeying a 1/6 power law. When confronted with this, the powers that be said, effectively, "we don't have the tools for that", and went on measuring risk with the same tools they had always used. See The Black Swan for more examples.
There's plenty of applied AI out there, but since the '70s people prefer to call it something that sounds more practical, like 'machine intelligence' or 'soft computing'.
Part of the reason people don't notice it is that many of the applications are 'hidden' in embedded systems, especially control systems and computer vision. Also, any practical application is likely to be limited in its ambitions and there are always disputes over how 'intelligent' something has to be to qualify as AI.
The great practical benefits of AI applications and even the existence of AI in many software products go largely unnoticed by many despite the already widespread use of AI techniques in software. This is the AI effect. Many marketing people don't use the term "artificial intelligence" even when their company's products rely on some AI techniques. Why not? It may be because AI was oversold in the first giddy days of practical rule-based expert systems in the 1980s, with the peak perhaps marked by the Business Week cover of July 9, 1984 announcing, Artificial Intelligence, IT'S HERE.
James Hogan in his book, Mind Matters, has his own explanation of the AI Effect:
"AI researchers talk about a peculiar phenomenon known as the "AI effect." At the outset of a project, the goal is to entice a performance from machines in some designated area that everyone agrees would require "intelligence" if done by a human. If the project fails, it becomes a target of derision to be pointed at by the skeptics as an example of the absurdity of the idea that AI could be possible. If it succeeds, with the process demystified and its inner workings laid bare as lines of prosaic computer code, the subject is dismissed as "not really all that intelligent after all." Perhaps ... the real threat that we resist is the further demystification of ourselves...It seems to happen repeatedly that a line of AI work ... finds itself being diverted in such a direction that ... the measures that were supposed to mark its attainment are demonstrated brilliantly. Then, the resulting new knowledge typically stimulates demands for application of it and a burgeoning industry, market, and additional facet to our way of life comes into being, which within a decade we take for granted; but by then, of course, it isn't AI."
Fair enough. It doesn't really bother me whether it's called "AI" or "statistical learning" or "that thing that math guy over in the corner does", but I'm wondering which industries it's used in.
This started as a general question, but since posting it I've been wondering if there is a marketplace for some generic statistical learning service. Something beyond just least-squares regression, even if it doesn't use cutting-edge ideas.
I'm not really up to date on video games. I realize there is a lot of AI in the sense of smart opponents, but is there a lot going on with learning? I.e. enemies that get better at defeating you by seeing your tactics?
I've thought about this could be applied in RPGs... if you have a recurring opponent that learns how you fight each time you face off and gets better. Is anyone doing something like this (even at a basic level like "he uses magic a lot")?
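Even the basic level you describe ("he uses magic a lot") is just frequency counting across encounters. A hypothetical sketch of a recurring boss that counters your favourite tactic (all the action and counter names here are invented for illustration):

```python
from collections import Counter

class LearningBoss:
    """A recurring opponent that tallies the player's moves across
    encounters and opens with a counter to the most frequent one."""

    # Invented action -> counter table for the sketch
    COUNTERS = {"magic": "silence", "melee": "armor", "ranged": "shield"}

    def __init__(self):
        self.seen = Counter()

    def observe(self, player_action):
        # Called every time the player acts, in any fight
        self.seen[player_action] += 1

    def choose_tactic(self):
        if not self.seen:
            return "attack"  # no history yet: default behaviour
        favourite, _ = self.seen.most_common(1)[0]
        return self.COUNTERS.get(favourite, "attack")
```

A real game would want decay (so the boss adapts when you change style) and a bit of randomness (so the counter isn't perfectly predictable), but the core is that simple.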
I have only programmed a very little game AI (standard alpha-beta pruning), but I think I can already understand why AI in games isn't better. It simply takes a lot of time to create a very good AI. Having a good basic idea (like alpha-beta pruning for chess) isn't enough; you then also have to tweak it endlessly.
I think if you could come up with an effective way to create good AI, game developers would be delighted to use it. It is only time constraints that make most of them stick to "good enough".