What the History of Math Can Teach Us About the Future of AI (scientificamerican.com)
81 points by jkuria on May 17, 2018 | 37 comments


As the founder of the patent troll Intellectual Ventures, the author (Nathan Myhrvold) has hurt our community deeply, and I'm surprised how willing we are to give him a platform.


Because we already do give a platform to patent trolls.

"The new" Microsoft trolls Android manufacturers and makes ~$1 billion from it. Then, they have the nerve to say "Microsoft <3 Linux".

And they are still playing the embrace, extend and extinguish game. For example, according to Microsoft, you should stop using R CRAN and move to MRAN instead. https://mran.microsoft.com/

btw, I don't care if I lose karma because of this, I have karma to spend.


Android is hardly open to begin with. Let's face it: anything that has a stock market symbol, plans to have one, or plans to be bought by something with one is a cannibalistic partner.


Shun him, you say? Then shun Gates for what he did to Apple, Netscape, etc. Shun Googlers for what they're doing to competitors, shun Larry for suing over Java... and so on. Will anyone knowledgeable be left?


Uhh, none of those are patent trolls. Sure, they may sue their competitors, but they actually make products. Patent trolls, by definition, do not.


So "patent troll" is the one thing tech must shun and declare war on? Is crushing your competition by every means available any better?


I can't really understand what you are trying to say.

But I think most people here would agree that running a company that does something useful is better than just sitting on patents and suing people for infringing on them.


I think the point would be that Microsoft certainly makes products for a profit. And let's be honest: they're pretty useful products.

At best, in their case, it's their side projects that we have issues with.


My point is that we disagree with a lot of founders. Now he has written an article; either he has a point in it or he doesn't. If we started shunning smart people for things they did or didn't do (he would say: I bought the patents and I'm monetizing them in legal ways), a lot would be lost.


> Theorists have proved that some mathematical problems are actually so complicated that they will always be challenging or even impossible for computers to solve. So at least for now, people who can push forward the boundary of computationally hard problems need never fear for lack of work.

That applies equally well to humans. Consider the complexity class of Go.
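
For context, generalized Go (played on n-by-n boards, with the ko rule) is known to be EXPTIME-complete, and even a crude count shows why brute force is hopeless for humans and machines alike. A back-of-the-envelope sketch in Python (the 35/80 and 250/150 figures are rough, commonly cited ballpark estimates for branching factor and game length, my addition rather than anything from the thread):

    # Rough game-tree sizes: branching_factor ** typical_game_length.
    # 35/80 (chess) and 250/150 (Go) are common ballpark estimates.
    chess = 35 ** 80
    go = 250 ** 150
    print(f"chess ~ 10^{len(str(chess)) - 1}")  # chess ~ 10^123
    print(f"go    ~ 10^{len(str(go)) - 1}")     # go    ~ 10^359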

> Meanwhile, many of the tasks that seem most basic to us humans—like running over rough terrain or interpreting body language—are all but impossible for the machines of today and the foreseeable future.

Wow no.


For some reason the field of AI has become an area where famous men who have no expertise in the field feel they have an obligation to tell us all about their opinions on it. Equally oddly, prestigious outlets then decide that these opinions deserve to be published so we can all read them.

What the actual opinions are is not important - since they're not based on any actual understanding of the capabilities or limitations of the technology, even if some of them happen to be accidentally correct, that doesn't make them interesting.

(I guess as long as Nathan Myhrvold is pontificating on AI at least he's not spending that time on patent trolling, so there's some upside.)


> > Meanwhile, many of the tasks that seem most basic to us humans—like running over rough terrain or interpreting body language—are all but impossible for the machines of today and the foreseeable future.

> Wow no.

...wow indeed. No research at all. Often I just read the comments here and skip the article, but that made me go and look to be sure.

For anyone somehow still unaware (BigDog's creepiness factor went nearly viral back when I was in college, around 2008), BigDog has been handling rough terrain for over a decade: https://www.youtube.com/watch?v=W1czBcnX1Ww

Here's BigDog getting abused to see what it can recover from (including being kicked in the side): https://www.youtube.com/watch?v=4PaTWufUqqU

Here's a running cheetah robot jumping over unexpected obstacles (2015): https://www.youtube.com/watch?v=_luhn7TLfWU

And here's a two-legged robot walking around outside (2017): https://www.youtube.com/watch?v=Is4JZqhAy-M


My "running on rough terrain" video from 1995.[1] On non-flat surfaces, traction control dominates the problem.[2][3] This was before Boston Dynamics. (There's much more that could be done in this area, but there's no market. BD's machines still don't change speed quickly: they start by walking, running, or trotting in place and then extend the gait. Humans start by falling forward, for a faster start, and go far off vertical for fast direction changes.)
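
A simple bound illustrates why traction dominates (my own gloss, not from the linked papers): for a runner on a slope of angle \theta with friction coefficient \mu, the friction cone caps uphill acceleration at

    a_{\max} \le g\,(\mu\cos\theta - \sin\theta)

so steeper or looser ground directly shrinks the acceleration budget the controller has to work with.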

As for interpreting body language, here's the code on Github.[4]

[1] https://www.youtube.com/watch?v=kc5n0iTw-NU [2] http://animats.com/papers/leggedrun/leggedrun.html [3] http://animats.com/papers/articulated/articulated.html [4] https://github.com/shahqaan/kinect-body-language-analysis


And for the "interpreting body language" part, I recall seeing demos of transcribing sign language, and also of measuring a speaker's stress level from video using all kinds of bodily cues. If there were a widespread commercial application, there's no reason we wouldn't have machines that read body language better than many humans.



The latest video I saw was of a two-legged robot doing a backflip.


> That applies equally well to humans. Consider the complexity class of Go.

Incidentally, my arguments against this general class of attempts to neuter AI via computational complexity arguments: https://www.gwern.net/Complexity-vs-AI


But the article says nothing about the history of math. Whatever technique NASA used to do calculations in the 1960s, it was an engineering concern, not a math concern. And I doubt it has any relevance to the progress of AI today.


The important point is that the "simpler" tasks will be automated early, pushing the human labour that's still required to be generally smarter and better educated.

There are only so many people for any required value of "smart". The historical human computers in the article required less education and base intelligence than the mathematicians who are employed today.*

Sooner or later a significant segment of the population will simply not be able to train for most tasks that still require human labour. And slowly (or quickly) that bar will rise. When I hire and train people, I simply need a certain baseline mental capacity; otherwise they will never get good enough to keep, no matter how long they train.

*) Note, though, that this does not mean they were necessarily less intelligent than the modern workforce.


I kind of disagree. I think automation of "smarter" work is progressing faster and more cheaply than automation of "simple tasks".


Well-made point, and I could be wrong. I assume you're referring to the automation inroads into accounting, legal work, organisation tracking (lower management), etc. I personally see those as specific low-hanging fruit, since they are highly rule-based.

But soon we will reach strong automation of diverse simple manual labour: transportation, stocking/handling, etc. Even if automation only manages to handle 90% of everyday tasks, it will wipe out enormous numbers of jobs, far beyond what we see today in accounting, legal work, etc.


FTA: "Meanwhile, many of the tasks that seem most basic to us humans—like running over rough terrain or interpreting body language—are all but impossible for the machines of today and the foreseeable future."

https://www.zdnet.com/article/boston-dynamics-set-to-sell-it...


This article gives very little evidence to support the claim that AI will never be capable of doing everything a human can do. On the other side of the argument, I'm still very convinced by the fact that our brains are physical things doing some sort of computation, so computers will one day be able to achieve anything that our brain's computation can achieve.


So if the brain is just doing some sort of computation, do you also agree with the statement that any flow of fluid (a river, etc.) is just solving the differential equations of fluid dynamics?


The flow of a fluid is governed by physical laws, for which Navier–Stokes is an approximation. The brain is governed by physical laws too, of course, but its structure facilitates processes that can be described as information processing.
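
For reference, the standard incompressible form being alluded to (textbook equations, not something from the thread):

    \rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) = -\nabla p + \mu\nabla^{2}\mathbf{u} + \mathbf{f}, \qquad \nabla\cdot\mathbf{u} = 0

The river "solves" these only in the sense that its behavior is approximately described by them.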

If water fails to solve Navier–Stokes, we change the equations; if a human fails at reasoning, we correct the human.


My point was that we use mathematics to model physical phenomena, and by definition every model is an approximation of the phenomenon. That means you cannot say the phenomenon is actually doing what the model says; instead, we say the model is useful only for answering a certain set of questions to a certain degree of accuracy.


If a brain isn't doing what formal logic says, then it's doing it wrong. Sometimes models are more important.


> On the other side Of the argument I’m still very convinced by the fact that our brains are physical things doing some sort of computation, so computers will one day be able to achieve anything that our brain computation can achieve.

The author seems to suggest that figuring out this brain computation is an upper bound on possibilities.


The map is not the territory. Just because our models of computation mimic some of what our brains can do, it doesn't necessarily mean that they describe the _essence_ of what our brains are doing.


There's no evidence for a violation of the physical Church–Turing thesis in brains.


> Meanwhile, many of the tasks that seem most basic to us humans—like running over rough terrain or interpreting body language—are all but impossible for the machines of today and the foreseeable future.

I don't think the author has ever run over rough terrain; it is anything but a basic task. And as for interpreting body language, that fails as soon as you move to a different culture, or just to a different animal. The author is vastly overestimating human capabilities.


I think two things go on with automation, one good and one bad:

1) Things get cheaper, and markets tend to be elastic, so demand goes up. Good for the whole economy.

2) Particular jobs disappear completely. Very bad for particular sections of the community for a while.

It is sort of inevitable though.


Apart from the lack of overdose danger, demand for computation is economically like a drug addict's. That's why vendors call their clientele "users".

> It turns out that human intelligence is not just one trick or technique — it is many.

Disagree. Though humans have many varied talents - some inherited from our mammalian line, like walking and seeing, some developed culturally and practiced, like Go and chess - I think strong intelligence is just one trick. It might need quite a lot of background processing and memory in order to do anything useful or even vaguely "intelligent", but the key trick itself might not need much processing power, and could be quite simple.

> people who can push forward the boundary of computationally hard problems need never fear for lack of work

So, full employment!


> interpreting body language

https://arxiv.org/abs/1802.05521


This paper is about lip reading, which is indeed a difficult task (I can't do it well, to be honest) and worthy of study. However, lip reading is not body language. Lip reading maps movements to words in a natural language. Recognizing body language requires knowing, e.g., that crossed arms map to the idea of closedness, and thus that the interlocutor is closing themselves off from the conversation to some degree. Or maybe the interlocutor just has very long arms and has never known what to do with them in a conversation.


Multimodality isn't really my field, but there's also a lot of research on emotion detection. E.g. https://arxiv.org/pdf/1801.07481.pdf is a recent survey of commonly used methods that I found with a quick Google search.

We definitely can combine things like detecting crossed arms (and knowing that it correlates with closedness) with emotion and stress signs in your voice, sentiment analysis of the words you say, micro-movements, and your pulse rate (which a machine can detect from video if the video is sufficiently good), among various other signals, to infer your likely emotional state.

The trouble is that in-depth analysis requires extensive external context and a shared worldview - i.e. "being of the same tribe" and knowing how a particular real-world event "should" make one feel (and why) - which is pretty much a general-AI problem. But simply reading what the body language of the moment says about your emotions is a hard task that is somewhat solvable even right now.
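
As a toy illustration of the "combine the signals" part (every modality name, score, and weight below is made up for illustration; this is a sketch of simple late fusion, not any real system's API):

    # Toy late-fusion sketch: combine per-modality "distress" scores
    # (each already normalized to [0, 1]) into one weighted estimate.
    def fuse(scores, weights):
        total = sum(weights[m] * scores[m] for m in scores)
        return total / sum(weights[m] for m in scores)

    observation = {
        "posture": 0.8,  # crossed arms detected
        "voice":   0.6,  # stress markers in pitch and jitter
        "words":   0.3,  # sentiment of the transcript
        "pulse":   0.7,  # heart rate estimated from video
    }
    weights = {"posture": 1.0, "voice": 1.5, "words": 1.0, "pulse": 0.5}
    print(round(fuse(observation, weights), 2))  # 0.59, still in [0, 1]

A real system would learn the weights (or a full fusion model) from data rather than hand-tuning them.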


You're right. I guess lip reading is quite a stretch to be called body language.



