One interesting and largely unnoticed property of security is that it often contains provably unsolvable problems. Generally, we tend to split problems into those that have been solved and those we don't yet know how to solve. Nobody knows right now how to build a 100+ mpg internal combustion engine, but that's because building one is hard. We imagine that if we throw enough smart people and technology at a problem, it will inevitably be beaten down and solved, or we'll reach a point where it's not worth the effort to solve. Nobody imagines that building fuel-efficient engines is impossible.
Translating that same thinking to security, we imagine security problems are a matter of effort: if only we were willing to expend enough resources, they could be solved. The TSA takes this approach to airline security. On this view, breaches occur only because of a lack of political will, and with enough regulations, screeners, X-ray backscatter machines and cameras, airport security would become a solved problem.
However, the fundamental flaw with airport security is that what makes a good "dangerous" is how you use it, not what it's made of, so it's impossible to develop an effective screening process that ignores the context of use. A laptop battery is pretty much just an explosive designed not to explode (sometimes unsuccessfully). That planes aren't being brought down every day by laptop explosions is not because the batteries can't explode but because nobody wants them to. With all the technology you care to imagine, it's still impossible to look at a laptop battery sitting in a scanning machine and decide whether somebody will want it to explode.
Convincing people that security problems can be provably unsolvable is the hardest step, even though the actual proofs of unsolvability are often fairly simple. Normally, we assume that an explanation of why something can't be done is comprehensible only to experts, because it's more accurately a proof of why it can't be done yet, which requires you to understand what can be done now. As a result, we take explanations of infeasibility on a certain degree of faith and deference to expert opinion; we rely on zero-knowledge claims rather than first-order proofs.
Security flips this around. Proving something secure is hard because it requires knowing all the ways it can be attacked, whereas proving something can never be secure is easy because it requires only a simple application of first principles. This is an important consideration in policy debates, because one common tactic for bamboozling an opponent is to force them into using first-order proofs where zero-knowledge proofs would have been more appropriate (the Intelligent Design movement uses this to great effect with its "teach the controversy" and "let the children decide" messages). Unless your opponent is aware of this curious inversion in the structure of a security debate, arguments about security can often seem seedy and underhanded because they so closely resemble debates in other, less reputable areas.
The result of all this is that security is one of those areas with a disproportionate amount of astoundingly bad, poorly thought out policy, and a large part of this can be explained by the communication mismatch between security experts and managers, where "it can't be done" means "it's impossible to do" but is interpreted as "I don't know how to do it and I'm too lazy to find out".