Having read the article, I'm surprised that only 54% of Googlers view their compensation as better than what they could get at another company. Who else pays more than Google does? I can only think of Facebook, and even then I'm not sure.
I used to work at another BigCorp (around $150,000/yr) and really thought Google paid competitively, but boy was I wrong. Here are two offers I received recently:
1. Small company: around $160,000/yr + equity
2. Google: no exact number, but I was told that for the level I was approved at, the base is around $120,000/yr. Apparently my coding skills in a Google Doc were equivalent to those of a new grad, and they are really trying to low-ball me. Somehow they're under the assumption that I'm "dying" to work there.
If you interview at Google, make sure to have other competing offers; otherwise you'll be in for a surprise. Also remember that although they'll tell you the hiring committee looks at a candidate holistically, only the onsite interviews dictate your level and compensation. They don't seem to care what products you've built previously, your years of experience, or your education. How you code in a Google Doc is what seems to matter.
Is there a better way of making sure someone can actually code? It’s easy to take credit for past projects when it’s unclear how much help you had or what part you played. Isn’t the best way to judge a candidate to ask them to create/produce something for you?
- Give them a non-trivial take-home exercise and a week or two to complete it. This should carry more weight in the decision than the "Google Doc coding", since it's much closer to the type of code they'll actually be shipping to production.
- Use the onsite interviews to improve upon the exercise and/or to get into the nitty gritty details, and also to make sure this is the type of person people would enjoy working with.
- Allow candidates to run the code and to look things up (even Einstein didn't remember how to do long division; he looked it up).
- Give people the benefit of the doubt and assume they are not liars or thieves. If you end up having certain doubts, ask yourself why you have them and take appropriate steps to resolve them.
And let's be real. Do you think someone with X years of experience, having worked at multiple companies (small and big), would have been hired everywhere if they couldn't code?
> even Einstein didn't remember how to do long division, he looked it up
Nitpick: the idea that Einstein struggled with school-level mathematics is a myth[1]. I would be very surprised if he ever had to look up the procedure for long division.
I personally hate take-home tests that you aren’t compensated for, but I see your point. As to your last point, it depends on your definition of “couldn’t code”. I think we tend to give the hiring and HR at companies like Google too much credit. Things still slip through the cracks, especially at an organization of that size.
One thing I do with take-home exercises, once I'm done interviewing with a company, is adapt them and open-source them (obviously removing any identifying information about the company). This has really helped me, and I have more side projects to show later on. That said, I tend to go above and beyond on these exercises, so many of them are genuinely worth open-sourcing.
People can cheat on take-home exercises. At the scale of Google, a nontrivial number of people will.
If you'll end up having to do a face-to-face evaluation anyway to make sure someone didn't cheat on a big take-home assignment like that, why not just skip straight to the face-to-face evaluation?
I think it's more a reflection of thinking that if a lot of employees complain that they're not paid well, maybe the company will increase their salaries in response. But there's no potential benefit at all to claiming that the pay is good (regardless of whether it is or not).
Netflix (which, from my understanding, just pays very high salaries with minimal stock, or stock you buy at a discount)? Also, a lot of the people boasting of high comp have it because the value of the stock they were initially granted increased over time with rising stock prices. I'll be very, very curious to see how total comp numbers trend during a recession.
A lot of SV-based companies compensate quite generously. The competition for good talent in SV is immense. When there's money on the table, companies _will_ fork it over.
Netflix, or Amazon if you're really top-tier, maybe LinkedIn or Apple (counting RSU packages). There are a good handful of companies offering sky-high comp right now.
Even if Google had the highest median/average salary (which it doesn't), that still wouldn't mean other companies can't pay much more to an individual.
The fact that there are so many "former Google employees" out there, and that people leave Google after about 4 years on average, also hints at this.
As many people have already indicated, even outside FAANG, many smaller companies are generous.
Salesforce pays me much more than what Google talked to me about, so this isn't true. MSFT was also pretty close to the Salesforce offer for me. I also have friends at Netflix, and they make much more than what Google offered them. I'm in Security, so it may be different for devs.
I think Netflix does, and I believe it’s all cash, unlike the cash/stock split that Google does. I got that from levels.fyi, so I'm not sure how accurate the information is.
Many smaller companies do. If you can lead and multiply the existing staff you can earn more. The downside is that you have limited runway and might have to hop around.
I think it's more about transitioning from the founder-type CEO driven by vision to the maintainer-type CEO who only cares about pleasing the board to keep his position as long as possible.
Tbf while Cook doesn't have the product vision of Jobs, he does have a clear strategic/what we want Apple to be vision. He's a post founder CEO for sure, but one of the best and a different league from Pichai by all accounts.
Ballmer was employee #30 at Microsoft.
While not technically a founder, he had been with the company for 20 of its 25 years of existence before becoming MSFT's CEO.
+1. It takes a certain kind of person to rise through the ranks like that, so you automatically filter out people who might be more akin to the original founders. Vision _might_ be an asset when rising through the ranks, but pleasing superiors, hitting metrics, politics, and luck probably overwhelm it.
All corporations grow, decay, and rot on similar trajectories, so in that sense it's similar.
I worked at MS at that time, and I think BigG now is roughly where MS was in 2003. I'm waiting for miniggl to appear; then the circle will be complete. :)
Having worked at both: Google is far, far ahead. Any young dev reading this, you will get good engineering foundations at Google that will stand you in good stead for the future.
People said the same thing about Microsoft in the late 1990s - the Windows NT kernel was supposedly a thing of beauty, while Cairo was way ahead of its time. The problem is, computer science advances, so the type of programming that was state-of-the-art in the 1990s paled in comparison with what Google developed 10 years later. And the Google stack of 2003 is remarkably outdated by 2018 standards.
As a Xoogler I use & value the engineering chops I learned at Google every day, but I'm not naive enough to think that'll last forever. There are some really exciting developments in multiple areas of computer science - notably blockchains, Rust, GPGPU, serverless - that Google is poorly positioned to take advantage of, as well as others (machine learning, search, big data, distributed systems, capability security) that Google has historically been the market leader at but that are rapidly being commoditized by very high quality open-source projects.
Microsoft's hiring pretty much went down the drain because there was no common hiring bar. I can point to different orgs with different levels of devs. This is not the case at Google: no matter how much HN complains about Google, it consistently hires smart, above-average engineers. Google is growing, and it will meet its challenges. Wait and watch.
Right, so is Google losing in ML, or just not caught up to speed?
I don't know when you left, but even outside Google it's pretty much common knowledge that ML innovation is Google's strength.
Two big teams at Google use Rust in production. I guess Google didn't make the TPU or TensorFlow either? Take your pitchforks out, but once you're done, have a look at some facts.
Not losing, but commoditized - in those fields, there are now perfectly good open-source alternatives.
I was at Google from 2009-2014. When I joined, Google was literally the only place you could work if you wanted to do data science on web-scale data sets. Nobody else had the infrastructure or the data.

Now if you want to do Search, ElasticSearch has basically the same algorithms & data structures as Google's search server, with Dremel + some extra features thrown in. (The default ranking algorithm continues to suck, though.) If you want to do deep learning, you reach for Keras, and it'll use TensorFlow behind the scenes but with a much more fluent API. Hadoop was a major PITA to use when I joined Google; now in many ways it's easier & more robust than MapReduce, and the ecosystem has many more tools. Spark compares well with Flume. Zookeeper over Chubby. There are a number of NoSQL databases that operate on the same principle as BigTable, though I'd pick BigTable over them for robustness.

Take your pick of HTML5 parsers (I even wrote and open-sourced one while I was at Google). Google was struggling mightily with headless WebKit for JS execution when I left; now you can stand up a Splash proxy in minutes or use one of the many SaaS versions. Protobufs, Bazel, gRPC, and LevelDB have all been open-sourced, as have many other projects.
The big advantage of big companies like Google is that they have lots (and I mean lots) of data and for them that data is comparatively cheap to store and manipulate.
I mean, I first wrote a text-categorization algorithm using k-NN about 12-13 years ago, and in order to get acceptable results I only needed to manually categorize about 200 articles for each category's training set. That was very doable, both in terms of time spent constructing the training set and in terms of storage costs.

Now, I've been thinking for some time about writing an ML algorithm that would automatically identify forests from present-day satellite images or from some 1950s Soviet maps (which are very good on the details). I'm pretty sure there's already some open-source code that does that, but the training-set requirements would probably “kill” the project for me. I read a couple of days ago (the article was shared here on HN) about some people at Stanford implementing an ML algorithm for identifying ships in satellite images, and I remember they used 1 million high-res images as a training set. For me as a hobbyist, or even for a small-ish company, there's no cheap way to store that training set, never mind the cost of labeling those 1 million training images.

Otherwise I totally agree with you: we live in a golden age of AI/ML code being made available to the general public, but unfortunately it's the data that makes all the difference.
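The k-NN text-categorization approach described above can be sketched in a few lines of plain Python: represent each document as a bag-of-words vector, then label a new document by majority vote among its k most similar training documents under cosine similarity. (The toy training set and function names below are invented for illustration; a real setup would need something like the ~200 labeled articles per category mentioned above.)

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def knn_classify(text, training, k=3):
    """Label `text` by majority vote among its k most similar training docs."""
    vec = Counter(text.lower().split())
    ranked = sorted(training,
                    key=lambda item: cosine(vec, Counter(item[0].lower().split())),
                    reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Tiny illustrative training set (real use needs ~200 docs per category).
docs = [
    ("the forest has pine and oak trees", "forest"),
    ("dense woodland with tall trees and moss", "forest"),
    ("cargo ships docked at the harbor", "ship"),
    ("a naval vessel sailing across the sea", "ship"),
]
print(knn_classify("tall pine trees in the woodland", docs, k=3))  # -> forest
```

This is the whole appeal of k-NN for a hobbyist: there is no training phase at all, just a labeled corpus and a similarity measure, which is why a few hundred hand-labeled examples per category can be enough.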
They kicked off the deep learning trend when they bought DeepMind, I guess. Otherwise, what innovation are you talking about?
Switching from KNN to DL in machine translation is impressive as a technical achievement ... but not really an innovation, and I doubt all this "innovation" impacts their bottom line in any way.
> Otherwise what innovation are you talking about?
Quantity: Google has the highest number of deep learning papers accepted into top conferences among all institutions, even when papers from DeepMind are not counted in Google's.
Quality: Transformer and the recent BERT have, pun intended, transformed the entire NLP field. Batch normalization is now a staple of all neural networks, as are its descendants instance normalization, group normalization, etc.
These are just off the top of my head. Google may have done many things wrong lately, but it definitely has not lost any edge in machine learning.
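For readers who haven't run into it: batch normalization, mentioned above, standardizes each feature across a mini-batch to zero mean and unit variance, then applies a learned scale and shift. A minimal sketch in plain Python, with the scale and shift fixed for illustration:

```python
import math

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a mini-batch of scalar activations to zero mean and
    unit variance, then apply the learned scale (gamma) and shift (beta)."""
    mean = sum(batch) / len(batch)
    var = sum((x - mean) ** 2 for x in batch) / len(batch)
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in batch]

print([round(x, 3) for x in batch_norm([1.0, 2.0, 3.0, 4.0])])
# -> [-1.342, -0.447, 0.447, 1.342]
```

The descendants mentioned above (instance normalization, group normalization) use the same normalize-then-scale-and-shift recipe; they differ only in which axes the mean and variance are computed over.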
While I don't know the real numbers, back-of-the-envelope estimates for hardware costs alone (based on GCP TPU/GPU pricing) give on the order of hundreds of thousands of dollars for BERT, and tens of millions for AlphaGo and friends. Notice how very few organizations in the world are in a position to commit that kind of resources to AI problems, and that only Google and China are choosing to do so.
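The shape of such a back-of-the-envelope estimate can be sketched in a couple of lines. Every figure here is an invented assumption for illustration, not an actual GCP price or a published training time:

```python
# Toy back-of-the-envelope for a single large pre-training run.
# All three inputs are assumptions, not real GCP prices or benchmarks.
price_per_device_hour = 8.0   # assumed accelerator price, USD/hour
devices = 64                  # assumed pod-slice size
days = 4                      # assumed wall-clock training time

cost = price_per_device_hour * devices * 24 * days
print(f"${cost:,.0f}")        # -> $49,152 for one run
```

A single run at these assumed numbers lands around $50k; hyperparameter sweeps, restarts, and larger configurations are what push the total toward the hundreds of thousands.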
And? Most scientific breakthroughs after World War II required expensive equipment and materials. That fact doesn't make the achievements from Google, Bell Labs, CERN, or Fermilab any less innovative.
There's a lot of interesting stuff going on with computational blockchain platforms (Ethereum, Stellar, EOS). Basically they make it possible to write and deploy code - with nobody's permission, no approvals or policies or corporation necessary - that can ensure that when a user performs an action, they receive something of value. And they can do this without the user needing to trust that the terms of the transaction won't change later.
One of the hardest parts in many software markets is designing incentives: making sure the user has a reason to perform the action you want them to perform. And for startups, there's the added problem of getting users to trust that the incentives you advertise will actually hold. I might trust Stripe or Google to actually deliver the money they say they're collecting on my behalf, because they are big established companies, but I'm certainly not going to trust a random payment processor who just started up and is advertising on a forum somewhere. But once platforms like Ethereum actually have decent UIs and reasonable transaction processing rates, you can just inspect the code of the smart contract that collects Ether (or Dai, the new payment hotness) from users and disburses it to the parties that were involved in producing whatever service they use.
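To make the mechanism concrete, here is a toy model in plain Python of the escrow logic such a smart contract would encode. This is a sketch with invented names, not actual Solidity or Ethereum code; on-chain, the rules below would be immutable once deployed:

```python
class EscrowContract:
    """Toy model of a smart-contract escrow: funds collected from users
    are disbursed to the provider by fixed rules no party can change."""

    def __init__(self, provider, price):
        self.provider = provider
        self.price = price       # on-chain this would be immutable at deploy time
        self.balances = {}       # provider address -> accumulated payout

    def pay_for_service(self, user, amount):
        """User pays; the contract guarantees disbursement to the provider."""
        if amount < self.price:
            return False         # underpayment: the transaction reverts
        self.balances[self.provider] = self.balances.get(self.provider, 0) + amount
        return True

contract = EscrowContract(provider="0xServiceProvider", price=100)
contract.pay_for_service("0xAlice", 100)
print(contract.balances["0xServiceProvider"])  # -> 100
```

The point of the on-chain version is exactly that `price` and the disbursement rule cannot be changed after deployment, which is what lets users extend trust to an otherwise unknown party.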
The permissionless aspect of this whole system is very similar to the early WWW, where you could just stand up a website to do something useful and if it was good users would flock to it. That's why I'm excited. The cryptocurrency world gets a lot of bad press because a lot of the early users were quite gullible and a lot of the early use cases were in finding better ways to scam them, but there's real, fundamental technological innovation behind it.