Anyone have a sense of how much of a problem this is?
It's not surprising that a network can be fooled by small input changes, but if some image preprocessing is enough to solve this, it's not a big problem.
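To make the preprocessing idea concrete, here's a minimal sketch (my own illustration, not from the slides) of one such defense: quantizing pixel values to a coarse grid, which rounds away any perturbation smaller than half a quantization step.

```python
import numpy as np

LEVELS = 16  # number of quantization bins; an illustrative choice

def quantize(img, levels=LEVELS):
    """Snap pixel values in [0, 1] onto a coarse grid, discarding
    perturbations smaller than half a quantization step."""
    return np.round(img * (levels - 1)) / (levels - 1)

rng = np.random.default_rng(0)
# Start from an "image" that already sits on the quantization grid.
clean = quantize(rng.random((8, 8)))
# Add a small adversarial-style perturbation, well under half a step.
perturbed = np.clip(clean + 0.01 * rng.choice([-1.0, 1.0], size=(8, 8)), 0.0, 1.0)

# The perturbed input differs from the clean one, but quantizing both
# maps them back to the same image, so the network sees no difference.
defended_clean = quantize(clean)
defended_perturbed = quantize(perturbed)
```

Of course, this only handles perturbations below the quantization threshold; an attacker who knows the defense can aim for larger, still-inconspicuous changes, which is part of why the effectiveness question is hard to answer in general.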
On the other hand, if I can make a sign that looks like a stop sign to people but looks like a road work sign to a Tesla, that's obviously a big deal.
These slides touch on the difference by saying that physical examples of adversarial inputs are harder to produce, and they mention some mitigation techniques, but they don't seem to quantify how effective those mitigations actually are in real-world scenarios.