The biggest attack vector I see here is embedded fonts; perhaps allowing browsers to download and use embedded fonts should be restricted to trusted sites too, since some font formats do contain an embedded Turing-complete language. It gives a whole new meaning to the term "web-safe fonts"...
Turing-completeness isn't a problem per se (lots of people get this wrong), because no attacker is trying to compute something: they want to run shellcode, redirect an HTTP request, display something on screen, or the like, none of which a Turing machine can do.
The worst a Turing machine can do is run forever, which is easily prevented by limiting the number of steps it can run.
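For concreteness, here's a minimal sketch of what such a step budget looks like, in Rust; the toy opcodes and the 10,000-step limit are invented for illustration, not taken from any real font format:

```rust
// A toy stack-machine interpreter with a hard step budget.
// The opcodes and the limit are illustrative, not any real font format.
#[derive(Debug)]
enum Op {
    Push(i64),
    Add,
    JumpIfNonZero(usize), // jump target = instruction index
}

#[derive(Debug)]
enum ExecError {
    StepLimitExceeded,
    StackUnderflow,
    BadJumpTarget,
}

fn run(program: &[Op], max_steps: u64) -> Result<Vec<i64>, ExecError> {
    let mut stack: Vec<i64> = Vec::new();
    let mut pc = 0usize;
    let mut steps = 0u64;

    while pc < program.len() {
        steps += 1;
        if steps > max_steps {
            // The "runs forever" problem is solved right here.
            return Err(ExecError::StepLimitExceeded);
        }
        match program[pc] {
            Op::Push(v) => {
                stack.push(v);
                pc += 1;
            }
            Op::Add => {
                let b = stack.pop().ok_or(ExecError::StackUnderflow)?;
                let a = stack.pop().ok_or(ExecError::StackUnderflow)?;
                stack.push(a.wrapping_add(b));
                pc += 1;
            }
            Op::JumpIfNonZero(target) => {
                if target >= program.len() {
                    return Err(ExecError::BadJumpTarget);
                }
                let cond = stack.pop().ok_or(ExecError::StackUnderflow)?;
                pc = if cond != 0 { target } else { pc + 1 };
            }
        }
    }
    Ok(stack)
}

fn main() {
    // An infinite loop: push 1, then jump back to instruction 0 forever.
    let looping = [Op::Push(1), Op::JumpIfNonZero(0)];
    println!("{:?}", run(&looping, 10_000)); // Err(StepLimitExceeded)
}
```

A hostile program can still waste the whole budget, but "runs too long" degrades into an error the caller handles, not a hung renderer.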
this really misses the point entirely: the problem with parsing a Turing-complete format is that verifying data encoded in it (as valid/non-malicious/useful) is, in the general case, equivalent to the halting problem.
They are dangerous to parse not because they can potentially run forever, but because their expressiveness makes it very easy to trigger arbitrary bugs, aided by the massive complexity that most parsers necessarily have.
Those two things together make logic errors triggered by malicious data, and the exploits that follow from them, essentially impossible to prevent.
Another factor is that it is impossible, in general, to formally verify something that executes Turing-complete code.
Would a primitive recursive language be less dangerous? Well, it depends on the language.
> They are dangerous to parse not because they can potentially run forever, but because their expressiveness makes it very easy to trigger arbitrary bugs, aided by the massive complexity that most parsers necessarily have.
> Those two things together make logic errors triggered by malicious data, and the exploits that follow from them, essentially impossible to prevent.
Actually, the parsers for these languages are not too complex -- the problem is that they're written in a language ill-suited to writing parsers. The interpreters are complex, but all of that exploitability is easily avoided by writing the interpreter in a memory-safe language.
This problem domain might be bug-prone, but a bug in a font interpreter should lead to wacky-looking fonts, and never to remote code execution. The fact that it routinely does just shows we're using the wrong tools for the job.
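As a sketch of what I mean (the glyph_bytes helper and its offset/length fields are invented, not any real font format's table layout): in a memory-safe language a hostile offset can only take the error path, never an out-of-bounds read.

```rust
// Sketch: reading a glyph outline from an untrusted glyph table.
// Field names and layout are made up for illustration; the point is
// that a hostile offset produces an error, never a wild memory read.
fn glyph_bytes(table: &[u8], offset: usize, len: usize) -> Option<&[u8]> {
    let end = offset.checked_add(len)?; // overflow -> None, not a wrapped pointer
    table.get(offset..end)              // out of range -> None, not an OOB read
}

fn main() {
    let table = vec![0u8; 64];

    // A malicious font claims the glyph lives far past the end of the table.
    match glyph_bytes(&table, 1_000_000, 16) {
        Some(bytes) => println!("glyph data: {:?}", bytes),
        None => println!("malformed glyph record, drawing a .notdef box instead"),
    }
}
```

The caller decides what "malformed" means -- draw a .notdef box, skip the glyph, reject the font -- but in no case does attacker-controlled data choose an address to read from.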
there are more classes of exploits than simply memory management errors. For instance, look at the laundry list of issues surrounding verification of X.509 certificates. Deciding that a forged certificate is valid is catastrophic, and involves no memory-related exploit at all.
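To make that concrete, here's a hypothetical, heavily simplified validation routine (the Cert struct and is_trusted logic are invented, nothing like a real X.509 implementation); the bug is pure logic, and memory safety does nothing to stop it:

```rust
// Hypothetical, heavily simplified certificate check -- not real X.509 code.
// The bug below is a pure logic error; every access is bounds-checked and
// the forged certificate is still accepted.
struct Cert {
    subject: String,
    signature_ok: bool, // stand-in for "signature verified against issuer"
    expired: bool,
}

fn is_trusted(chain: &[Cert], hostname: &str) -> bool {
    for cert in chain {
        if cert.expired {
            return false;
        }
        // BUG: the signature is only checked on the certificate whose subject
        // matches the hostname; a forged intermediate is never verified.
        if cert.subject == hostname && !cert.signature_ok {
            return false;
        }
    }
    true
}

fn main() {
    // A forged intermediate with a bad signature, plus a plausible-looking leaf.
    let chain = vec![
        Cert { subject: "Evil Intermediate CA".into(), signature_ok: false, expired: false },
        Cert { subject: "example.com".into(), signature_ok: true, expired: false },
    ];
    // Prints "trusted: true" -- catastrophic, and no memory error in sight.
    println!("trusted: {}", is_trusted(&chain, "example.com"));
}
```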
X.509 certificates don't contain any Turing-complete language, so the fact that X.509 parsers suffer from the same class of bugs as font interpreters supports my point that Turing-completeness itself is not the problem.