> Cox seem to have acted like the very model of responsible security response in this kind of situation
It's hard to imagine, but I wish they had taken advantage of him walking in with the compromised device in the first place.
I once stumbled upon a really bad vulnerability in a traditional telco provider, and the amount of work it took to get them to pay attention when only having the front door available was staggering. Took dedicated attempts over about a week to get in touch with the right people - their support org was completely ineffective at escalating the issue.
Cox's support organization was presented with a compromised device being handed to them by an infosec professional, and they couldn't handle it effectively at all.
>Cox's support organization was presented with a compromised device being handed to them by an infosec professional, and they couldn't handle it effectively at all.
I can't really blame them. The number of customers able to qualify that a device has actually been hacked is nearly zero. But do you know how many naive users out there will call/visit because they think they've been hacked? It's unfortunately far larger than the former. And that'll cost the business money, when in 99.9% of those cases the user is wrong. They have not been hacked. I say this as someone who supported home users in the 2000s. Home users who often thought they'd been "hacked".
I work for a support org for a traditional telco. We have "contacts" but they're effectively middlemen.
If you dropped this in my lap, and I'm pretty savvy for a layman, I wouldn't know how to get past my single channel. I think it would require convincing the gatekeeper.
Some people truly believe the computer is hacked every time there is behaviour they didn't expect. Only the craziest, least capable ones show up to scream at you like you caused the whole thing.
That is the problem. He should have contacted them like he did the second time. When he went into their shop, it all depended on that particular employee, and you can't blame that person for not recognizing the issue.
yeah the false positive problem is huge here. For every legitimate security professional there are probably 10-100 schizos who believe they are “hacked”
I was mentioned in the media once for an unrelated internet protocol vulnerability and I had people contacting me about their "hacked" internet connections.
For a major cable ISP, I can't imagine how many customers walk in to replace their "hacked" boxes on a daily basis.
> Cox's support organization was presented with a compromised device being handed to them by an infosec professional, and they couldn't handle it effectively at all.
He probably should have gone the responsible disclosure route with the modem too. Do you really expect a minimum wage front desk worker to be able to determine what’s a potential major security flaw, and what’s a random idiot who thinks his modem is broken because “modern warfare is slow”?
He wasn’t off the internet. He just determined his modem was hacked. Given it had been hacked for who knows how long, what’s one more day? They responded to his api submission in 6 hours.
Have you ever worked as a front-line support agent? I'm guessing not. I have many years ago, and for an ISP too. If I bought an Amazon share back then for every time a customer called support because they were "hacked", I'd not be posting here during a boring meeting because I'd own my own private island.
The two best conversations I can recall were when we changed a customer's email address about a half dozen times over a year because "hackers were getting in and sending them emails" (internal customer note: stop signing up for porn sites), and a customer's computer could barely browse the web because they were running about 5 software firewalls because they were "under surveillance by the NSA" (internal customer note: schizophrenia).
The expected value of processing requests like this in any way other than patting the reporter on the head, assuring them the company will look into it, and sending them along with a new device while chucking the old one in the "reflash" pile isn't just zero, it's sharply negative.
The author's mistake was not posting somewhere like NANOG or Full-Disclosure with a detailed write-up. The right circles would've seen it, the detailed write-up would've revealed that the author wasn't an idiot or paranoid, and the popped device might've been researched.
> The author's mistake was not posting somewhere like NANOG or Full-Disclosure with a detailed write-up.
This is an organizational equivalent of a code smell. Something is off when support people aren't writing up the anomalies and escalating them.
Some of the most serious security issues I've ever had to deal with started with either a sales rep getting a call or a very humble ticket with a level one escalating it up. Problem is for every serious security issue that gets written up, forty-two or so end up getting ignored because the support agent is evaluated on tickets per hour or some other metric that incentivizes ignoring anything that can't be closed by sending a knowledge base article.
> Something is off when support people aren't writing up the anomalies and escalating them.
What is described in the article is a fantastic hack. Given my organization's structure and skills, you'd need to send it straight past three layers of support and several layers of engineering before you find someone who'd be able to assemble a team to analyze the claims. We'd spend four figures an hour just to confirm the problem actually exists - then we'd all go "oh shit, how do we get in touch with the FBI, because this is miles above our paygrade."
An average cable internet user walks into a retail ISP location, sets a cable modem on the counter, and says "this is hacked". What is the probability you'd assign to them being correct? How much of your budget are you willing to spend to prove/disprove their theory? How often are you willing to spend that - remembering Cox has 3.5 million subscribers.
Friction is good. Hell, it's underrated! Introduce it to filter out fantastic claims: the stupid and paranoid are ignored quickly, leaving the ones that make it through as more likely to be real.
"Code smell" as a programming term is often a red herring that causes conflict within development teams (I've seen this happen too many times), because anyone can label anything they don't like about a coworker's code a "code smell". Your comment is a "code smell". See how easy that was?
And "code smell" doesn't apply in a similar or metaphorical way to cable modem support personnel. Those people aren't supposed to know how to escalate a case of a customer bringing in a suspected hacked modem. If they did that for every idiot customer who brought in a "suspicious" modem, the company's tech support staff wouldn't be able to get anything done. 99.999999999999% of the cases would not in fact be a hacked modem, so there really shouldn't be any pathway to escalate this as a serious issue.
I’ve been in this industry for 15 years and I’ve never had to deal with the code smell situation, in that I don’t use that term and I’ve never interacted with anyone at work who uses that term.
I think after reading this I’ll continue that habit. Putting the phrase “code smell” in a review is like using the dark side of the force: you’re just being an ass
If "code smell" is being weaponized to attack a coworker's code, then you are using the phrase incorrectly and have cultural issues on your development team. The phrase came from Martin Fowler's book "Refactoring" https://martinfowler.com/bliki/CodeSmell.html and is intended to indicate that you might benefit greatly from refactoring that code. What I was saying is that the support organization needs to be worked on if it isn't collecting anomalies and reporting them up the line.
The support staff probably wouldn't get anything done with any given bad-modem ticket, but the analyst looking at support data for the week/month might notice that we've had 82 reports of defective modems of a specific model in a short time frame, and that this is a new problem - one where we should probably grab a defective modem, pull in the vendor, and take a look to make sure we don't have a big problem (the assumption might be defective hardware, but that's why you gather evidence and investigate further).
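That kind of aggregation is a few lines of code, not a research project. A minimal sketch (the model names, ticket shape, and flat threshold are all illustrative assumptions, not anything Cox actually runs):

```python
from collections import Counter

def flag_anomalous_models(tickets, baseline=10):
    """Count 'defective' tickets per modem model in the current window
    and flag models whose volume exceeds a baseline threshold.

    `tickets` is a list of (model, issue) tuples; both the data shape
    and the flat threshold are illustrative assumptions.
    """
    counts = Counter(model for model, issue in tickets if issue == "defective")
    return {model: n for model, n in counts.items() if n > baseline}

# 82 reports of one model in a week stands out against a handful
# of reports for everything else.
tickets = [("CGM4331COM", "defective")] * 82 + [("TG1682G", "defective")] * 3
print(flag_anomalous_models(tickets))  # {'CGM4331COM': 82}
```

A real version would compare against each model's historical rate rather than a fixed cutoff, but the point stands: the signal is cheap to surface if the tickets are being written up at all.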
Oh, I'm well aware of where "code smell" came from.
The description in the link you provided is pretty wishy-washy:
>"smells don't always indicate a problem."
Okay, so does it smell like flowers or does it smell like shit?? It's a stupid term, and yes, it's been abused by programmers who think it makes them seem smarter than they are.
Maybe "code stink" would be a better designation for code that's actually a problem. But even that would be stupid and I'd never use it to describe code. Putting down someone else's code as "smelly" is a great way to make a team dysfunctional. And code is often messy for plenty of good reasons (PoC code is perfectly fine if it's messy, no reason to call it "smelly"), there's no reason to anthropomorphize it and assign it a "smell". It's just a rude way to talk about code.
I haven't even dealt with the general public in a support role but I have enough examples just in my, not very large, social circle.
The aunt who is convinced she has a stalker who is hacking all her devices, moving icons around, and renaming files to mess with her (watching her use the computer, she has trouble with clicking/double-clicking and keeps brushing against the mouse/trackpad; call her out on it and she says she didn't do it).
The coworker who was a college football player, who now has TBI-induced paranoia. He was changing his passwords about 3 times a day. Last thing I heard about him before he got cut out of my social circle was he got in a car accident because he was changing his password while he was driving.
Meanwhile I know zero people who have found any real vulnerabilities.
I have escalated customer security issues while working as a support agent. I have also found and been paid what could be considered a bounty (in the form of a bet made by the lead dev to another person) while working support.
Admittedly, this is anecdotal, and it was a small company, and my skillset was being very underutilized at the time. However, I don't think it's hard to imagine a me that would have been closed minded enough to normalize my experiences and expect it of others. In fact, I'd say I still fight with it regardless of having seen it.
Every third person who comes in has their router "hacked"; that's the problem. We know that Sam is good at what he does and isn't wrong about this, but Cox can't rely on everyone being that good, nor on their very poorly paid front-desk workers having the ability to tell an idiot from an expert.
Source: was a volunteer front-desk person at a museum. Spent a lot of my life dealing with people. They were sure of incorrect things all the time and could not be relied on to know.
In retrospect, Sam should definitely have hit the responsible disclosure page (if such a thing even existed in 2021) but I don't fault anyone for the choices they made here.
We really need to work on this definition of "expect". They're expected to have such training, but we know that in practice that's not what happens. So we "expect" them to be trained, but what we "expect" will happen in practice is very different.
> the amount of work it took to get them to pay attention when only having the front door available was staggering.
I've seen this across most companies I've tried reporting stuff to, two examples.
Sniffies (NSFW - gay hookup site) was at one point blasting their internal models out over a websocket; this included IP, private photos, salt + password [not plaintext], reports (who reported you, their message, etc.), and internal data such as your ISP and push notification certs for sending browser notifications. First-line support dismissed it. Emails to higher-ups got it taken care of in < 24 hours.
Funimation back in ~2019(?) was using Demandware for their shop and left its API basically wide open, allowing you to query orders (with no info required) and get the last 4 of the CC, address, email, etc. for every order. Again, frontline support dismissed it. This one took messaging the CTO over LinkedIn to get it resolved in under a week (Thanksgiving week, at that).
> Took dedicated attempts over about a week to get in touch with the right people - their support org was completely ineffective at escalating the issue.
Sounds to me like their support org was reasonably effective at their real job, which is keeping the crazies away from the engineers.
It's even harder for me to imagine them saying "Oh, gee, thanks for discovering that! Please walk right into the office, our firmware developer Greg is hard at work on the next-gen router but you can interrupt him."
I have a cloned key from a spare modem that I use with my router (Unifi) to allow it to connect directly to the ONT, minimizing devices in my rack.
I’ve found that this usually confuses first line support enough that they’ll listen to me if I need them to do some specific action.
To be clear, I’m not stealing internet access or anything of the sort. I didn’t want a useless modem / AP that I’d end up bridging anyway, so I extracted a key from another one, and my router uses it to auth with my ISP.
It quickly becomes the Service Animal problem. When no one can or is allowed to verify your infosec credentials, everyone becomes a three star infosec General with a simple purchase from Amazon/Alibaba.
> Cox's support organization was presented with a compromised device being handed to them by an infosec professional, and they couldn't handle it effectively at all.
They were presented with some random person who wanted a new modem on their rental but also to keep the old one, for free. They had no way of knowing whether he was an actual security professional.