If you’re not providing secure MFA as an option and invalidating breached credential pairs via HIBP, you’re negligent as an IdP. 23andMe failed hard, and they should own it.
FFS, default to magic link login via email if you have to. At least then you're relying on Google, Apple, or someone else for auth (in most cases of unsophisticated users).
It is an option. Every user has the option to set up MFA when they create their account. The fact that people reused their passwords and chose not to set up MFA is not 23andMe’s fault.
As a 23andMe user who has filed a complaint with the FTC, purposely opted out of arbitration, intends to join a class action, and is responsible for customer IAM at a fintech, I politely disagree. Poor IAM and AAA decisions are a choice, and there must be consequences for the resulting harm.
I absolutely have an axe to grind against consumer harm incurred by lazy and/or negligent technology companies (all companies, really, just scoping for this convo). Guilty as charged. When good behavior is not forthcoming, spin up regulators and the legal framework.
EDIT: I do not believe this is an unreasonable position to take. About 12 years ago, I interviewed with the CTO of 23andMe and almost took an infra job there (the comp was too low). I am a customer. I have mostly good things to say about them as an org. That is not a free pass when you do harm. Do better; it is not hard.
It’s not lazy or negligent on the part of the website when they offer additional security and users choose not to use it. 23andMe asks multiple times for users to set up 2FA and apps like 1Password and Bitwarden recognize that it’s available and prompt users to set it up.
It is when those users' passwords unlock not just their own data, but that of millions of other users as well.
Alice could have set up 2FA and adhered to all the best practices, but she still got her data stolen because Bob used "hunter2" and was hacked.
14,000 accounts compromised, 7 million users' data taken. There's no way 23andMe should be able to offload their responsibilities to Alice's cousin Bob.
That's not what happened. The 7 million users didn't have their data stolen. The compromised accounts had access to data that those users opted-in to share with those accounts.
Imagine that you have a bank account and you share access to it with a family member. If they use "Password1" for their password and someone gets into their account and then, by extension, has access to whatever level of access you've provided them to your account, is that the bank's fault? Is it yours? Is it your family member's?
Your analogy doesn't fit here. There is no scenario where accessing the accounts of 14,000 banking clients would then blow up to several million clients' accounts. Any bank that even offered this "feature" would, yes, be at fault.
There seems to be some transitivity being assumed here. Let's go with the banking scenario: I give my son access to my checking account, and I also give my business partner access. My son is a dumbass and uses the same password for everything. Now my business partner's info is taken, and his parents get hacked as well.
From 14,000 to 7,000,000 is quite the amplification. That's on 23andMe and nobody else.
The analogy does fit; you're just mischaracterizing it. To continue with your example: that's not what happened with 23andMe. If you gave your son access to your checking account via some account-info-sharing feature and someone got access to his account, they would have access to the same accounts he does, and only those. Your business partner's info is safe unless he also shared his account with your son, and his parents' info is safe unless they also shared with him.
The only info that was available from the 7 million accounts was the specific info that they chose to share with the compromised accounts. If they chose to share everything, then everything would be available. 23andMe can't prevent their users from being idiots.
> The new NIST recommendations mean that every time a user gives you a password, it’s your responsibility as a developer to check their password against a list of breached passwords and prevent the user from using a previously breached password.
This assumes the breached credential pair was known in advance which, from what I have read so far, was not the case with the 23andMe accounts.
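For reference, the check the quoted NIST guidance describes is usually done against HIBP's Pwned Passwords range API with k-anonymity: hash the password with SHA-1, send only the first five hex characters of the digest, and match the 35-character remainder locally against the response. A minimal sketch (the helper names and parsing are mine; the actual HTTP call is left as a comment):

```python
import hashlib

def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into the 5-char prefix
    sent to the API and the 35-char suffix matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix: str, range_response: str) -> int:
    """Parse a range response ("SUFFIX:COUNT" per line) and return how
    many times the password appeared in known breaches (0 if absent)."""
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

# The real lookup would be something like:
#   requests.get(f"https://api.pwnedpasswords.com/range/{prefix}")
# with the response body fed to breach_count(); the full password or
# full hash never leaves the server doing the check.
```

Note this only catches credentials that have already landed in a public corpus, which is exactly the gap being pointed out above: privately traded combo lists won't show up.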
You won’t have to. They could have forced MFA and been done with it. That doesn’t make it their fault that they didn’t. It just means they could have done better and assumed that at least some users (read: most) are ignorant about best practices with sensitive data. It’s not something they would be legally culpable for, though.
I agree that is a good idea, but that doesn't lay the blame of this so fully at their users' feet. This won't always catch password reuse attacks (now called "credential stuffing", I think), and is only a partial mitigation.
Unfortunately the majority of people aren't very tech literate. We have to remember HN is far from average. The company I work for forces MFA and I think if you have sensitive data like this, yes, you should force MFA. Truth be told, it's not going to enter the public lexicon until some big players start forcing adoption. Rule of thumb: if my grandma wouldn't know to do it, I shouldn't expect my users to do it. If you expect your users to use bad practices, then you're not doing your job well. Idk if we should say it's somebody's fault when that somebody is a non-expert and is making a reasonable choice.
But they did provide secure MFA as an option, and it seems the credential pairs hadn't shown up in HIBP because they had been privately purchased via the hack of a different site. The logins were even using locations that matched previous ones.
So how did 23andme fail so hard here? Literally nothing you've suggested would have prevented this.
> So how did 23andme fail so hard here? Literally nothing you've suggested would have prevented this.
They made MFA mandatory after getting popped, at the same time they changed their Terms of Service to try to evade liability. Why did they wait until after the breach? Either negligence, or an active decision to avoid the support costs and engineering time of mandatory MFA. Also, the magic link I suggested would have solved for this, unless attackers were going to get into everyone's inbox with leaked creds to retrieve the link and the session token. That's definitely more effort than credential spraying 23andMe's login endpoints.
A magic link is just a form of 2FA. And the reason not to make 2FA mandatory isn't about engineering costs -- they'd already built it. It's because a lot of users don't like it. I personally despise sites that require a magic link rather than a password, because it takes me 30s to log in instead of 1s.
There are lots of commenters here on HN in this story saying they don't think sites should make 2FA mandatory. There are lots of usability problems with 2FA as well -- if you lose a device or when traveling.
You're basically saying that sites that allow you to log in with just a password, if you choose, shouldn't be allowed to exist. That seems unreasonable to me.
> You're basically saying that sites that allow you to log in with just a password, if you choose, shouldn't be allowed to exist. That seems unreasonable to me.
I'm saying sites that host information of value, such as genetic information, should not be allowed to support login with just a password. That seems reasonable to me, and a regulatory gap to be closed. If you don't want to use MFA or other secure auth systems on Reddit or Twitter, by all means; I'd agree that secure auth for low-value systems might be overly burdensome to a user population. There are well-worn paths if you lose MFA (remote identity proofing, mailing an OTP to known addresses, dinging a credit card $1, etc.) that are all reasonable and affordable to implement.
Is your argument that the data 23andme hosts is not of value or sensitive and it should not matter if their security story is lacking ("just passwords are fine, yolo")?
EDIT: I think we fundamentally disagree on the issue.
> such as genetic information, should not be allowed to support login with just a password. That seems reasonable to me
But that isn't obviously reasonable to me, that we need a law for that.
What if I don't think a bunch of estimates based on a bunch of my gene readings is all that valuable? Why not let me choose to use just a password?
But if I do think it's super valuable, then I can use 2FA. (And also obviously choose not to share any of my information with anyone else on the site.)
Why should it be the government's job to remove that choice from me?
How about a middle ground, where if I set up MFA on my account, I automatically disable the access from "distant relative" who haven't setup MFA, even if I want to share my data with them. Because fundamentally this incident is not serious if such transitive access was not employed in the first place.
And since this is a specific access pattern for 23andme, I agree we shouldn't involve government here.
Google defaults to Passkeys now [1], and has very aggressive heuristics around logging in [2]. They also maintain their own version of HIBP internally [3], and will force a password change [4] under certain circumstances.
They are doing this because when they have high assurance of your identity (and your account hasn't been taken over), that is the best time to issue the cryptographic credential (the Passkey), which improves the go-forward security of the account. Over time, accounts should migrate over to Passkeys, and at some point, they will likely deprecate passwords (or require high confidence you are you to log in with just username and password, vs a Passkey). I've had a discussion with someone on the project at Google, and they could only say "stay tuned" about what comes next. To be clear, I'm not divulging anything beyond what Google made public in their blog post and a bit of speculation on my part.
> Do you think google is deactivating people based on HIBP? If not why do you think everyone else should?
TLDR "password resets and account lockouts vs deactivating users" and "because it is good practice to protect your users and their data from compromise"
[4] https://support.google.com/accounts/answer/98564?hl=en ("If there’s suspicious activity in your Google Account or we detect that your password has been stolen, we may ask you to change your password. By changing your password, you help make sure that only you can use your account.")
I just created a new gmail account to test this - it asked me to create a password (minimum 8 characters, I used lowercase letters and numbers only) and didn't say anything about MFA or passkeys. I'm not going to fact check every other claim since the first one failed so utterly.
> This means the next time you sign in to your account, you’ll start seeing prompts to create and use passkeys, simplifying your future sign-ins. It also means you’ll see the “Skip password when possible” option toggled on in your Google Account settings.
Did you even look at their provided links? You took the time to create a new account, why not actually look at the provided links to see what is being claimed in the first place?
I read the link - I don't think "we will hassle people about this eventually but not even give them the option at signup" is the traditional definition of "default" though. Do you?
Prompting on first sign in is pretty “default” to me.
I highly doubt you read the link, otherwise you wouldn’t have gone through the whole sign up process just to prove something isn’t a “default” according to you. You’d have just referenced the article and made the exact same point.