It's a severe discredit to the major operating system vendors that plugging in a USB stick can still compromise a system.
If a USB device identifies itself as a keyboard, the system shouldn't accept its keystrokes until that keyboard has typed the user's login password (EDIT: or the user explicitly authorizes the device using a different keyboard). If it identifies itself as a storage device, the filesystem driver should be hardened. If it identifies itself as an obscure 90s printer with a buggy driver written in C, it should prompt the user to confirm the device type before it loads the driver.
It's 2019. Why the f* haven't Windows, MacOS and Linux all implemented these basic precautions?
Recently I tried out some USB temperature sensors. They present as both a proprietary temperature sensor and as a USB keyboard. If you don't have a driver for the sensor, you can still get your readings by toggling Caps Lock: the host sends a "turn on caps lock lamp" signal to the "keyboard", which responds by "typing" the temperature data.
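The round trip is plain HID: the host toggles the Caps Lock LED via an output report, and the device answers with ordinary key-press input reports. A rough sketch of the host-side decoding, assuming the standard 8-byte boot-keyboard report layout (the sample reports and the 23.5-degree reading are made up):

```python
# Decode 8-byte HID boot-keyboard reports back into the "typed" text.
# Usage IDs per the USB HID Usage Tables: 0x1E-0x27 are '1'..'9','0',
# 0x2D is '-', 0x37 is '.' -- enough for a temperature reading.
USAGE_TO_CHAR = {0x1E: '1', 0x1F: '2', 0x20: '3', 0x21: '4', 0x22: '5',
                 0x23: '6', 0x24: '7', 0x25: '8', 0x26: '9', 0x27: '0',
                 0x2D: '-', 0x37: '.'}

def decode_reports(reports):
    """reports: lists of 8 ints [modifiers, reserved, key1..key6].
    A key release is an all-zero report; emit a char on each key-down."""
    text, prev = [], set()
    for r in reports:
        keys = {k for k in r[2:8] if k}
        for k in keys - prev:            # newly pressed keys only
            if k in USAGE_TO_CHAR:
                text.append(USAGE_TO_CHAR[k])
        prev = keys
    return ''.join(text)

# "2" press/release, then "3", ".", "5": the sensor "typed" 23.5 degrees.
reports = [[0, 0, 0x1F, 0, 0, 0, 0, 0], [0] * 8,
           [0, 0, 0x20, 0, 0, 0, 0, 0], [0] * 8,
           [0, 0, 0x37, 0, 0, 0, 0, 0], [0] * 8,
           [0, 0, 0x22, 0, 0, 0, 0, 0]]
print(decode_reports(reports))  # -> 23.5
```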
I'd rather this device presented itself as a drive containing various virtual files with the temperature data in them, but the cat's out of the bag, so to speak.
The keyboard trick is quite a hack, but a creative one. That said, AFAIK most barcode scanners also act as keyboards: you scan a number, and it "types in" those digits.
I can't see how the filesystem hack would work: if the OS has the drive mounted, it caches files in memory and won't notice the file contents changing. You can't even modify the metadata, because most of that may also be in memory.
Don’t think that’s a vector per se. The ATM accepts untrusted USB keyboard input (THAT is the bug)—the barcode reader is just a product that happens to make it easy to type in the right series of characters. You could have done the same thing with a normal keyboard (or an Arduino, if you wanted the convenience)
I wouldn't be terribly surprised if you could create a barcode that caused a barcode reader to send <windows key>+r and run some arbitrary command. So perhaps it wasn't a vector for an ATM, but maybe some other barcode reader where workers scan in arbitrary things they are handed...TSA maybe?
Also, perhaps folks working in data centers can confirm or deny, but from what I know it's usually strictly forbidden to bring any USB devices into a data center area.
We use USB drives as installers and, in some cases, as boot volumes. (And of course keyboards and mice on crash carts and USB serial ports for laptops.)
We’re not a cloud provider, but I’ve been in lots of DCs and seen plenty of USB devices.
Before chip-embedded credit/debit cards were prevalent, most magnetic stripe reader (MSR) peripherals operated as USB keyboards. That let them work with web-app-based POS systems without requiring things like ActiveX.
Same but different... I was working to get a hotel property management web application running on an iPad so the host could check in guests away from the desk. The web application supported MSR swipe keyboard entry, but you can't plug a generic USB MSR device into an iPad.
I wrote a custom iOS keyboard that interfaced with a Lightning MSR via its API and then "typed" the characters into Safari.
It was nice to be able to use plain Safari and not some app wrapper. And it wasn't too difficult for the host to switch keyboards to take a swipe.
In high school we messed around with this in the crappy P.O.S. system at the place I worked (windowed app, running on Windows) to see what data was recorded on all our various student ID cards, gift certificates etc.
I was delighted about this when a client wanted a barcode scanner integrated with a web app. I envisioned major difficulties but instead it only took 5 minutes to implement.
Yep! I have a webapp that's been handling physical print-based photography awards for 7 years now. It generates a PDF label with barcode that the entrant sticks on the back of their print, and then they're shipped to the judging location and the award staff scan 4000+ entries over a couple of days. The barcode scanning was the easiest part of the whole project.
Emulate an MTP device (often used by cameras) and mount it with a FUSE driver. Since the content on the remote device can change, the driver shouldn't cache it.
Or emulate a network, generate a DHCP response for your favorite /31 and don't send a router, and point a public domain name at the other address in that /31.
I'm not sure if you are serious. My goal was to use a USB device standard for which open source drivers exist and which doesn't open security holes by allowing it into your system.
MTP devices are ~mostly harmless~ and relatively easy to trust. Network devices are not.
Can you elaborate? I understand the concept of RNDIS or CDC devices, but if you've sent an IP address only without a router, how is any traffic going to make it back to the other IP in that subnet? I figure it will go back over the default route, but how are you in control of the traffic itself?
FTDI chips have their driver deployed by default on most Windows, Mac and Linux installs. It's nice being able to buy a USB/serial cable and have it just work, without needing to deploy any drivers at all. Check it out!
IIRC the FTDI chips use a non-standard COM driver on Win 7. If you want your device (with your VID and PID) to show up as a serial port, you need to associate your device with the usbserial driver, which at the very least requires a custom .inf.
That's kind of ingenious, but is sending temperature data over USB really such a hard problem in the first place? I'm not really familiar with the USB protocol.
USB doesn't work without a driver and sometimes you don't want to (or can't) install a driver. This sounds like the kind of hack that a clever (but arguably unwise) engineer would shove in to help them remotely troubleshoot a device.
"Sensor not detected? OK, open up Notepad and hit Caps Lock three times quickly. Did some text appear? The sensor is fine, the problem is with your computer."
It's not that hard to pick some standard class, like CDC and have a userspace app that uses it just like a serial device. You can get info on which serial device to use via sysfs on Linux.
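For the sysfs route on Linux, the host-side app can locate the gadget's serial node by matching USB IDs. A small sketch: the function is pure for clarity, and the VID/PID pair (16c0:05dc) and scanned triples are hypothetical; in practice they'd be read from files like `/sys/class/tty/ttyACM0/device/../idVendor`:

```python
# Given (tty_name, idVendor, idProduct) triples gathered from sysfs,
# pick the serial node belonging to our CDC device. The triples would
# normally be read from /sys/class/tty/*/device/.. on a real system.
def find_sensor_tty(devices, vid, pid):
    """devices: iterable of (tty_name, idVendor, idProduct) hex strings."""
    for name, v, p in devices:
        if (v.lower(), p.lower()) == (vid.lower(), pid.lower()):
            return '/dev/' + name
    return None

scanned = [('ttyS0', '0000', '0000'),       # onboard UART, not ours
           ('ttyACM0', '16c0', '05dc')]     # hypothetical sensor IDs
print(find_sensor_tty(scanned, '16C0', '05DC'))  # -> /dev/ttyACM0
```

Once the node is found, a userspace app just opens it like any other serial port, no custom kernel driver needed.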
It was a long time ago, but I’m pretty sure CDC doesn’t auto-enumerate on Windows. Mac and Linux are fine. I think you still need a .inf for Windows, and for it to work generally, you need WHQL signing ($$, time).
The free (money-wise) approach we ended up doing was to use WinUSB and marking the device as “vendor specific”, and using libusb to talk directly to it. That was a bit awkward, but covered Windows, Linux, and OSX for us.
This was 5 years ago though. Windows 10 might directly support generic CDC devices, but Win 8.1 didn’t.
Edit: sibling mentions HID. HID does work like this, but we needed more bandwidth than HID provided. CDC was perfect for what we were doing but it didn’t auto install. Mass storage auto-installs but didn’t fit what we were doing.
Because it could lead to all manner of weirdness that the user doesn't expect. Imagine someone typing data into a spreadsheet: every time they hit the Caps Lock key, a strange text string appears in the cell, possibly corrupting the data. Or worse, it happens in a program where the typed characters are interpreted as hotkeys and instantly perform some unknown combination of actions, which the user may not even realize occurred.
You can actually use the USB HID class to present pretty much any data in any way you want. The reason they present as keyboards is probably so they don't need to worry about drivers. With newer versions of Windows I think you can work with such HID devices without special drivers, though.
Seems like the core problem is a single standard for many different kinds of devices, which makes it possible for a device to behave totally differently from what it physically appears to be.
Maybe we should have stuck to PS/2 keyboards after all.
Sure, you can fix it so devices don't appear as unauthorized keyboards... you still leave yourself open to a near-infinite number of other attacks. What stops me from creating a USB device that appears as a storage medium, yet contains a transmitter which slowly exfiltrates any data written? What about a USB-powered microphone or camera posing as a flash drive? Hell, it would be of great value to just have a software-defined radio which could execute arbitrary Bluetooth and WiFi attacks while allowing remote control via RF.
Am I the only one old enough to remember 'disk bombs' from the 90s where you filled 3.5" floppies with paste made from strike anywhere match heads so when the disk spun up it melted? You could do similar things with a USB stick. You could have a high voltage converter which fries your PC the second you plug it in.
Basically, it is always a bad idea to plug in unknown peripherals to your computers. The OS isn't going to save you in all cases.
I suspect you're arguing from the point of view of a determined attacker against a specific target, in which case, I agree -- there's an infinite number of different attacks you can try, with the caveat that any failed attempt is possibly going to tip your target off and make them up their opsec game, becoming a much more difficult target.
I took the OP to be talking more about general case. Random people plugging into a public recharge station, using (shady) Amazon/Ebay USB drives, plugging in a "found" USB stick, etc. The OS can at least help thwart simple attacks here.
In the worst case, the device contains a GSM modem which is powered by USB but otherwise only appears to the host as a USB drive -- and if you can get the target to write useful data to it, I guess you have something? That's an awfully expensive attack that I assume has a relatively low chance of yielding anything useful. (Unless maybe you market it as a "secure cryptocurrency wallet", and hope you can sell enough of them that the cryptocurrency you steal, before anyone notices there's a modem in it and sounds the alarm, makes up for the significant manufacturing expense.)
> You could do similar things with a USB stick. You could have a high voltage converter which fries your PC the second you plug it in.
While it is obnoxious and costs one (random?) person some money (presumably they will destroy or throw out this USB drive afterward), it doesn't really get you anything. There are many other, cheaper ways to destroy someone's computer, as there are many other things you can destroy to cause a person expense and/or inconvenience.
> Basically, it is always a bad idea to plug in unknown peripherals to your computers. The OS isn't going to save you in all cases.
100% agree, but that doesn't mean it shouldn't try at all.
> I suspect you're arguing from the point of view of a determined attacker against a specific target
Not necessarily a specific target (although maybe in a sense). If I were, say, the Chinese intelligence apparatus, I'd be sprinkling exfiltration devices around D.C., military bases, and defense contractor offices (especially the small ones, who don't always seem to have their shit together).
You can fit a lot of smarts in a small form factor these days. I could, with the budget of an intelligence agency, cheaply mass-produce USB storage controllers which only activate when specific files of interest (say, OrCAD schematics, or source code) are saved to the device. I could sprinkle them around, or even just strongarm one of my country's manufacturers so that the bug goes into wide distribution. Then I'd use sniffer vans, like those used to execute the Tempest attacks against military bases in the 80s, to find my beacons and exfiltrate.
GSM modems might be expensive, although it would be a great way to get data out. You could also add GPS and use a small geofencing database to activate when you're within a target radius.
Keep in mind this is just the musings of a bored idiot (me). I suspect an intelligence agency could find more useful things to do with a USB stick.
>Am I the only one old enough to remember 'disk bombs' from the 90s where you filled 3.5" floppies with paste made from strike anywhere match heads so when the disk spun up it melted?
Damn dude that really worked? I remember reading about it in the anarchist cookbook but didn't go through with the effort after getting thoroughly punked re: smoking banana peels and trying out pressure points on older kids
> 1. Obtain 15 lb. of ripe yellow bananas.
> 2. Peel the bananas and eat the fruit. Save the skins.
> 3. With a sharp knife, scrape off the insides of the skins and save the scraped material.
> 4. Put all scraped material in a large pot and add water. Boil for three to four hours until it has attained a solid paste consistency.
> 5. Spread this paste on cookie sheets and dry it in an oven for about 20-30 minutes. This will result in a fine black powder (bananadine). Usually one will feel the effects of bananadine after smoking three or four cigarettes.
It just made a little fire, it didn't "explode". It would melt your floppy drive and make it useless but wouldn't come close to doing enough damage to hurt anyone unless they had their face a few inches from the front of the PC.
I'm guessing that there are plenty of things that would fit in the floppy and cause serious damage. Mercury fulminate maybe, or ammonium triiodide (?), assuming they didn't just self-detonate.
> USB device that appears as a storage medium, yet contains a transmitter which slowly exfiltrates any data written
I won't copy my data onto an unknown device. Mics and cameras trigger prompts in MacOS. The keyboard device, on the other hand, can be used for a five-second walk-by attack, running install scripts (BadUSB).
You won't, but many people will. They'll plug it in, figure the device is fine, and begin to trust it.
Mics and cameras trigger prompts if they present themselves as USB devices. I'm saying they do not need to do that. They can draw power from the port and send captured data out wirelessly.
That's assuming it presents itself as a mic or camera. What's to say it can't have the hardware embedded in the device but not present it to the host machine? Then any exfiltration technique can get a direct look into audio/video of the area.
That's different. Car crashes are unpreventable, unexpected events that we can prepare for. Plugging random USB stick into your computer is preventable, and adding these safety features may cause people to think it is safe to plug in random USB sticks into their computer.
Most car crashes are extremely preventable. And don't some people drive more dangerously precisely because they believe themselves safe thanks to things like seat belts?
You are so right.
I never understood the prevalent idea that traffic accidents are somehow random rolls of the dice. Seemingly the vast majority of them are not. Adjust your speed, not too fast, not too slow; stay focused on road, mirrors, and other traffic; keep your distance; don't be drunk; don't fall asleep; know and follow the rules, and you will hugely reduce your risk of harm.
> I never understood the prevalent idea that traffic accidents are somehow random rolls of the dice.
It's pushed by the auto manufacturers and insurance companies to normalize driving and make you pay for more expensive safety features. If people drive irresponsibly enough to wreck their cars, but not enough to kill themselves (modulo the safety level of their car), they buy more cars and spend more money on car insurance.
Definitely a good start, but in a targeted-attack scenario that's pretty trivial to bypass; if someone brags about having the latest Das Keyboard or something, that's all it'd take. We need cryptographic authentication in the USB specification, or at least a randomized serial unique per device, so an attacker would need physical access to clone your keyboard.
I believe modern Thunderbolt already has this sort of cryptographic device authentication, which means not only physical access but at least a bit of reverse engineering skill, a much higher barrier than knowing their keyboard model.
It's particularly frustrating because of how trivial the solution appears to be. Trust on first use is more than sufficient in this case, so asymmetric cryptography with a randomized key would be fine. I realize mass produced electronics can be very cost sensitive and that a PKI chip might add a whole $0.70 to your product (https://www.digikey.com/en/product-highlight/a/atmel/atsha20...), but still. I paid ~$50 for my keyboard! I would not have begrudged the manufacturer an extra dollar or two in order to ensure my system's security.
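For what it's worth, chips in that price class typically do symmetric challenge-response rather than asymmetric signatures. A toy sketch of the idea only, not the actual ATSHA204 wire protocol:

```python
import hashlib
import hmac
import os

# Symmetric challenge-response, as done by cheap authentication chips:
# both sides share a factory-provisioned secret; the host sends a fresh
# random challenge and checks the device's keyed digest of it.
def device_respond(secret, challenge):
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def host_verify(secret, challenge, response):
    # Constant-time compare avoids leaking how many bytes matched.
    return hmac.compare_digest(device_respond(secret, challenge), response)

secret = b'per-device-secret-provisioned-at-factory'  # hypothetical
challenge = os.urandom(32)          # fresh nonce defeats replay attacks
resp = device_respond(secret, challenge)
print(host_verify(secret, challenge, resp))   # -> True
```

A cloned keyboard without the secret can't answer a fresh challenge, which is exactly the property a randomized serial alone doesn't give you.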
As far as I understand it, this already is on by default for ChromeOS. The kernel patches make it possible to utilize internal USB devices during the boot process without disabling protection, i.e., there's no vulnerability window before user space is up and running.
I believe the major missing piece for desktop Linux at this point is that many input devices (including my own) are USB based. Without a way for the device to cryptographically attest its identity, you either have to accept vulnerability from wired external devices during boot or do without input until user space has been started.
Edit: My mistake. It appears that it was opt-in as of January, will become on-by-default at some point in the future, and only blocks devices during boot and while the screen is locked. It appears to trust all devices plugged into it once you've logged in. (https://www.forbes.com/sites/leemathews/2019/01/07/google-sh...)
The only reason your laptop is trusted is because you trust the person who gave you the laptop. The same threat model applies to the first keyboard you get for your desktop. Neither laptop/desktop nor keyboard is inherently more trustworthy.
I'm not worried about the keyboard I purchased or my hardware vendor. Well I am, but far less so than the prospect of a foreign USB device being plugged in and managing to execute malicious code. Think someone quickly inserting a device as they walk by, secretly swapping out one of my peripherals while I'm not around, or similar. Authorization on first use is more than sufficient to mitigate this type of attack, and if you add end-to-end encryption you can also prevent USB keyloggers.
At this point in 2019, intelligence gathering and government/corporate security vulnerabilities are much more in the digital realm than the physical. WiFi-enabled cameras/microphones, cell phones, servers, consumer computers, USB devices, and IoT devices are all used to that end.
We need to hold OS vendors' feet to the fire over basic security precautions. It's not like the US government doesn't have contract negotiations with them large enough to force the issue.
It's also unacceptable for the security around the most protected person on the planet to be ignorant of common attack vectors and procedures.
It’s largely shortcomings of “modern” OS designs and hardware. Things like kernel-space drivers and DMA for peripherals make it very hard to have any reasonable level of protection.
I am just saying that there should be an intermediary screening device, so that the USB stick is never attached directly to the machine and can be screened off first...
Yubikeys can pretend to be keyboards to type your password. It's a simple way to get maximum compatibility for a hardware key. I imagine there's other legitimate use-cases for non-keyboards to act like keyboards.
Still, requiring one to type a password on a newly connected keyboard is a pretty good idea, as long as it's a configuration option. I imagine you'd also want something similar for the mouse, maybe having to type a password on a virtual keyboard. It's annoying to have to do something like that every time a computer is woken up, though: you're talking about typing a password three times. Once to log the keyboard in, then to log the mouse in, then to select a user and log the user in.
Your other suggestions are vague, so I'm not sure what you mean by "basic". I mean, if one knows a driver is buggy, those bugs would be taken care of (from the developer's point of view; the administrator might not update the software, but what can the developer do?).
And what does it mean to "harden" a filesystem driver when a device identifies itself as a storage device? A filesystem driver should be "hard", period. All the time. That's something done when the driver is written, not something deferred until a device identifies itself.
You only need to authenticate a device once, when you first acquire it, or after it is tainted due to loss of physical control. This is how Bluetooth works today.
Maybe I'm wrong, but there's currently no sort of authentication protocol for devices in USB, right? I (and, I think, jimrandomh too) was thinking of USB as-is: something that OSes can do right now, without having to wait for whoever controls the USB spec. As it is, how can an OS know that the mouse it sees on waking up is the same mouse that was connected before it slept or powered off? I don't think there's any sort of cryptographic authentication specified for USB devices.
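If devices did carry keys, the Bluetooth-style trust-on-first-use pinning discussed above could be modeled roughly like this (entirely hypothetical: real USB devices present no public key, and VID/PID/serial are freely spoofable today):

```python
import hashlib

class DevicePinStore:
    """Trust-on-first-use identity pinning, as in Bluetooth pairing or
    SSH known_hosts. The 'pubkey' a device would present is the
    hypothetical part; USB has no such field."""
    def __init__(self):
        self.pins = {}  # (vid, pid, serial) -> public-key fingerprint

    def check(self, vid, pid, serial, pubkey):
        key = (vid, pid, serial)
        fp = hashlib.sha256(pubkey).hexdigest()
        if key not in self.pins:
            self.pins[key] = fp          # first sighting: pin and trust
            return 'pinned'
        return 'trusted' if self.pins[key] == fp else 'mismatch'

store = DevicePinStore()
print(store.check('046d', 'c077', 'SN123', b'mouse-key-A'))   # -> pinned
print(store.check('046d', 'c077', 'SN123', b'mouse-key-A'))   # -> trusted
print(store.check('046d', 'c077', 'SN123', b'evil-clone-B'))  # -> mismatch
```

On a 'mismatch' the OS would re-prompt exactly as it did on first use, which is the "authenticate once, or after loss of physical control" behavior described above.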
> If a USB device identifies itself as a keyboard, the system shouldn't accept its keystrokes until that keyboard has typed the user's login password.
Wireless presenters often identify themselves as keyboards so that they can "press" the arrow keys to move forward or backward. How are you going to type your password using such a device?
Yes, there are corner cases (another commenter mentioned a temperature sensor, and this is also common among barcode scanners). These corner cases are not hard to work out: just prompt the user and require them to confirm that the device is, in fact, allowed to act like a keyboard.
(Which would mean you can still have malware-download-command-typers pretending to be barcode-scanners pretending to be keyboards, but you can't have malware-download-command-typers pretending to be storage devices pretending to be keyboards, because the "Allow typing with this keyboard?" dialog will give it away.)
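A toy model of the proposed policy, in which a new keyboard's keystrokes are swallowed until it has typed the login password (hypothetical OS behavior sketched in userspace terms, not a real API):

```python
class KeyboardGate:
    """Swallow keystrokes from a newly attached keyboard until it has
    typed the login password; only then deliver input to applications."""
    def __init__(self, check_password):
        self.check = check_password   # callback: does this string match?
        self.authorized = False
        self.buffer = ''

    def keystroke(self, ch):
        if self.authorized:
            return ch                 # delivered to applications
        if ch == '\n':
            self.authorized = self.check(self.buffer)
            self.buffer = ''
        else:
            self.buffer += ch
        return None                   # swallowed until authorized

gate = KeyboardGate(lambda pw: pw == 'hunter2')
for ch in 'curl evil|sh\n':           # malicious "typing" goes nowhere
    assert gate.keystroke(ch) is None
for ch in 'hunter2\n':                # the real user authorizes it
    gate.keystroke(ch)
print(gate.keystroke('a'))            # -> a
```

The point of the sketch: a BadUSB stick that blindly "types" commands never gets a single character through, while a legitimate keyboard is authorized by the same password entry the user was going to perform anyway.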
Because the overwhelming majority of computer users in the world are not sophisticated, and want things to "just work" once those things are plugged in. I don't think it's an unreasonable expectation/desire, despite the risks.
Keep in mind that autoplay is not unique to USB drives either. CD-ROM drives have had that feature forever.
>It's 2019. Why the f* haven't Windows, MacOS and Linux all implemented these basic precautions?
Because up until 10 years ago, developing your own USB device was generally expensive, and malicious devices ended up being out of scope in threat modelling. In addition, some threat models these days still assume 'physical access == game over'...
I imagine he means before the advent of 3d printing, services like PCBWay, products like Arduino, and online stores like DigiKey. It's probably much easier to make one's own devices today than it was when USB was first being designed.
He's also right about the physical access thing. Fundamentally, it doesn't make much sense to add protections against scenarios where the attacker apparently needs physical access, because there's no way to protect against all things he could possibly do then. It's not really obvious that the user needs protecting from himself as he plugs in a device of doubtful origins. We used to hold the user to higher standards.
The key thing to realize is that malicious USB devices get to choose which device they identify themselves as to the operating system, but have much less control over what they physically look like to the user.
If you plug in an old printer, you know you just plugged in an old printer; you can load the old-printer device driver, and the printer probably won't exploit it. But if you plug in a USB stick you found in the parking lot, and it asks you whether you just plugged in an old printer, then the game is up; you know it's a tricky device, pretending to be something it's not in order to target a security vulnerability.
You are putting way, way too much faith in the average user. See, for instance, TLS exceptions. Also, realize that all the adversary needs to do is some trivial social engineering. A label on the thumb drive with a picture of the prompt and a mouse over "ok" would probably do it.
And because the 99% use case is: "I plug it in and I want it to just work"
The type of protection the parent is referencing is "endpoint protection", and there are many industry-standard solutions. Why should an OS be more limiting? If you have physical access to a machine that stores things you shouldn't have access to, it's already compromised in my opinion. Why the eff people are overlooking physical security in 2019 is the better question.
> It's a severe discredit to the major operating system vendors that plugging in a USB stick can still compromise a system.
Universal plug'n'play is USB's reason for existence; if it can't do that safely, then maybe we should step away from USB itself. Back when keyboards were plugged into PS/2 ports I didn't have to worry that a floppy disk would emulate one (ignoring autorun). I'm sure it's possible to have a malicious PS/2 device, but having it plug into the keyboard port would at least indicate what it's going to do.
I'd like to point out that nearly every USB barcode scanner shows up as a keyboard to the operating system. Your point-of-sale system has focus on the field awaiting input, and when you scan a barcode it just "types in" the scanned number. What you are suggesting would immediately break compatibility with a huge number of devices out there.
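One common compromise keeps scanner-as-keyboard compatibility but distinguishes scans from human typing by timing, since scanners emit a whole code in one fast burst. A sketch of that heuristic (the 30 ms gap threshold and 6-character minimum are assumptions, not a standard):

```python
def classify_input(events, max_gap=0.03, min_len=6):
    """events: (char, timestamp-in-seconds) pairs for one input burst.
    Scanners 'type' an entire code with tiny inter-key gaps; humans
    can't sustain sub-30 ms keystrokes over a long run."""
    if len(events) < min_len:
        return 'human'
    gaps = [t2 - t1 for (_, t1), (_, t2) in zip(events, events[1:])]
    return 'scanner' if max(gaps) <= max_gap else 'human'

# Same EAN-13 digits, entered by a scanner burst vs. hand-typed.
scan = [(c, i * 0.01) for i, c in enumerate('4006381333931')]
typed = [(c, i * 0.12) for i, c in enumerate('4006381333931')]
print(classify_input(scan), classify_input(typed))  # -> scanner human
```

POS software sometimes uses exactly this kind of timing window to route scans to the right field regardless of focus, without breaking the keyboard-emulation convention.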
This is likely not enough to secure a system against a sufficiently skilled adversary. An OS has limited control over many of the side-channels available to the USB stick once it is inserted into the system (e.g., fluctuations in the voltage rails that give away what the processor is doing).
If you are thinking in terms of "if it identifies itself as...," then there is a good chance that something lower in the stack may be compromised.
This gets even more troublesome once we consider that people sometimes forget that seemingly "dumb" dongles such as display adapters can be very similar to USB sticks from an implementation and vulnerability point of view (e.g., "Thunderclap").
I think the overhead of hardening systems for each of these scenarios would be immense.
Yes, there will likely still be ways for a malicious USB device to use electrical side-channels to attack a connected computer. But devices like that will be much harder to develop. And more importantly: compromised devices which weren't originally designed to do that, won't be able to rewire themselves into side-channel-exploiters. So if my USB storage device has a firmware vulnerability, and a malicious computer reprograms it, it won't be able to use electrical side-channels to attack my other computers because it doesn't have a suitable DAC and ADC.
> If a USB device identifies itself as a keyboard, the system shouldn't accept its keystrokes until that keyboard has typed the user's login password
Probably easier/safer to display a random number on-screen and then ask the user to retype it into the device. I figure numbers are less likely to run into problems when the keyboard isn't US-standard QWERTY.
For more paranoia/portability, show the user a repeating rhythm-game and wait for them to hit any keys they want as long as it is close enough to the correct pattern. ("Shave and a haircut... two bits!")
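The rhythm check could be as simple as comparing normalized inter-key gaps against the expected pattern within a tolerance. A sketch (the 15% tolerance and the timings are arbitrary choices):

```python
def _norm_gaps(ts):
    """Inter-key gaps normalized by total duration, so tempo doesn't matter."""
    total = ts[-1] - ts[0]
    return [(b - a) / total for a, b in zip(ts, ts[1:])]

def matches_pattern(presses, pattern, tol=0.15):
    """presses/pattern: keypress times in seconds. True if the user's
    rhythm is close enough to the expected one."""
    if len(presses) != len(pattern) or len(presses) < 2:
        return False
    if presses[-1] <= presses[0]:
        return False                  # no duration to normalize against
    return all(abs(g - h) <= tol
               for g, h in zip(_norm_gaps(presses), _norm_gaps(pattern)))

# "Shave and a hair-cut... two bits": 7 hits, long pause before the last two.
shave = [0.0, 0.4, 0.6, 0.8, 1.0, 1.8, 2.2]
human = [0.0, 0.38, 0.62, 0.81, 1.02, 1.79, 2.21]
print(matches_pattern(human, shave))                  # -> True
print(matches_pattern([0, 1, 2, 3, 4, 5, 6], shave))  # -> False
```

Because only timing matters, this works on any key of any layout, which is the portability argument made above.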
You could also use audio output for the user to hear, but then the attacker could embed a tiny microphone in the USB stick...
Confirming a new device should always be separate from the dozens of unique and distinctive scenarios where a user might (or might not) need to authenticate themselves.
The reason for it to be the user's login password is that, in the common case where you plug a keyboard into a computer that's just booted or which has been unattended for awhile, you're already typing a login password, so it isn't making you do anything you weren't doing already.
One small example: KVM switches would become incredibly cumbersome to use. However, I agree, there should be a much higher security standard for USB devices on the OS-level.
There are some implementation details that the KVM maker would have to get right, but if they don't screw it up, it all works as expected.
Good KVMs already look at the keyboards they have connected, present separate virtual keyboards to connected computers, and route keystrokes explicitly based on state. You just need them to count the keyboards connected to them, and present a separate virtual keyboard for each downstream connected keyboard, so that the connected computers can tell which keystrokes came from which keyboard.
> It's a severe discredit to the major operating system vendors that plugging in a USB stick can still compromise a system.
Well, just the one OS vendor comes to mind, and a particular chip maker also shares the blame. Just how difficult can it be to design total isolation into a 'computer'?
You misunderstand. Malicious USB devices often present themselves to computers as keyboards, which type malicious commands. But they don't look like keyboards, or have keys on them; they usually look like USB storage devices.
They don't drop them but instead ship them to arrive for Friday delivery. Over the course of the weekend the malicious keyboard cuts its way out of the shipping envelope and scans the target office for the nearest USB port. More recent models will shove the existing keyboard behind the desk, like a Cuckoo chick does with any remaining eggs after hatching.
You misunderstand. Start forcing me to type my password as the first thing into a new keyboard, and now malicious keyboards can be certain that the first characters up to <ENTER> are a valid password for the device in question.
Buggy drivers are a problem, but if you control the hardware, it's your responsibility to vet what you plug into it. It's like with door locks: if you need protection from advanced thieves you'll need to go through some extra hoops anyway.
You could petition OS manufacturers to focus more on physical security, but there are limits to what you can do without piles of abstractions (a la smartphone security).
> it's your responsibility to vet what you plug into it
Okay, please explain a little more. I'll give you a concrete example of a device to work with.
Last week I accidentally left my USB flash drive, with some important files on it, at a coffee shop. When I went back, the coffee shop had it in the lost and found. It looks the same on the outside, but it's a mass-produced model.
How do I vet this hardware before plugging it into my computer? I do need to access the files on it, but attackers may have had access to it for several hours.
If security is so important to you, you buy a $100 laptop, put the stick in it, get the files and upload them somewhere then burn the laptop and stick.
A human cannot vet an electronic device. We can only interface to it from another electronic device.
The same argument applies to the Internet -- we don't say that it's the human's responsibility to vet every website or email message before we let our computer connect to it. We expect our computer to do that. That's why it was wrong for Outlook to automatically execute every program sent to you via email.
Your computer isn't vetting things it gets from the internet at all, with the exception of TLS certs and anti-virus scanning. Virtually all other operations done with remote content are unvetted; it's play & pray. You clicking a button is the only vetting process.