They let others use and sharpen their tools.


One of the goals of this project is:

"Share the benefits of machine learning with the world"

It doesn't say who shares what :)


Google's stated goals seem very altruistic, and to their credit they do genuinely contribute heavily to good causes and to open science and research. I find it hard to believe a for-profit corporation is purely altruistic, though; their play here seems to be to encourage research and make it accessible to more people, so that more human minds come up with novel ideas that Google itself might some day use. e.g. I can see how investing millions of dollars in funding PhD students pays off if even a single one of them discovers an obscure algorithm that improves the efficiency of some process by just 0.1%; at Google's scale that could still save millions of dollars a year (0.1% of a multi-billion-dollar compute bill is already millions).


I believe Google is playing the long game here -- if they make it really easy for someone to do ML research using their frameworks, they'll collect rent from those researchers (e.g. cloud compute fees) and possibly get something more innovative out of their work later.

Not everything they do is about short-term data plays.


Would it be possible for a pass-through of 'bad' or 'faulty' data to mess up their model at a large enough scale?

If data can be used to improve the model, it seems like it could also be used to damage it, in theory.
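
In principle, yes. As a toy sketch (a made-up scikit-learn classifier, nothing to do with Google's actual pipeline), flipping the labels on a fraction of the training data measurably degrades the resulting model:

    # Toy illustration of label-flipping "data poisoning" (hypothetical
    # setup, not any real production pipeline): corrupting a fraction of
    # training labels degrades a simple classifier's test accuracy.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    def accuracy_with_flipped_labels(flip_fraction):
        # Flip the binary labels of a random subset of the training set.
        rng = np.random.default_rng(0)
        y_poisoned = y_train.copy()
        n_flip = int(flip_fraction * len(y_poisoned))
        idx = rng.choice(len(y_poisoned), n_flip, replace=False)
        y_poisoned[idx] = 1 - y_poisoned[idx]
        model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
        return accuracy_score(y_test, model.predict(X_test))

    for frac in (0.0, 0.1, 0.3, 0.45):
        print(f"{frac:.0%} flipped -> test accuracy {accuracy_with_flipped_labels(frac):.3f}")

As the flipped fraction approaches 50%, accuracy drops toward chance. The catch in practice is scale: an attacker would need to control a meaningful share of the input data, and production pipelines typically filter and validate data before training on it.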


I think ihws2 was using "sharpening tools" to mean filing bug reports and feature requests for TensorFlow and the cloud service.



