Jul 5th, 2006 by ravi
NSA monitoring and Bayes Theorem
At CounterPunch, Floyd Rudmin (whom I hope to quote often, judging from what I have seen of his writing) provides a great lesson on Bayes' Theorem, demonstrating the ineffectiveness of NSA monitoring at identifying terrorists. I have some comments, which can be found after the quote below.
Floyd Rudmin: the Politics of Paranoia and Intimidation
[…]
The US Census shows that there are about 300 million people living in the USA.
Suppose that there are 1,000 terrorists there as well, which is probably a high estimate. The base-rate would be 1 terrorist per 300,000 people. In percentages, that is .00033% which is way less than 1%. Suppose that NSA surveillance has an accuracy rate of .40, which means that 40% of real terrorists in the USA will be identified by NSA's monitoring of everyone's email and phone calls. This is probably a high estimate, considering that terrorists are doing their best to avoid detection. There is no evidence thus far that NSA has been so successful at finding terrorists. And suppose NSA's misidentification rate is .0001, which means that .01% of innocent people will be misidentified as terrorists, at least until they are investigated, detained and interrogated. Note that .01% of the US population is 30,000 people. With these suppositions, then the probability that people are terrorists given that NSA's system of surveillance identifies them as terrorists is only p=0.0132, which is near zero, very far from one. Ergo, NSA's surveillance system is useless for finding terrorists.
Suppose that NSA's system is more accurate than .40, let's say, .70, which means that 70% of terrorists in the USA will be found by mass monitoring of phone calls and email messages. Then, by Bayes' Theorem, the probability that a person is a terrorist if targeted by NSA is still only p=0.0228, which is near zero, far from one, and useless.
[…]
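Rudmin's arithmetic can be checked with a short sketch. The numbers here are his hypothetical assumptions (1,000 terrorists in a population of 300 million, a 0.01% misidentification rate), not real NSA figures:

```python
# Bayes' Theorem: P(terrorist | flagged) =
#   P(flagged | terrorist) * P(terrorist) / P(flagged)
# expressed below as true hits over all hits.

def posterior(hit_rate, false_pos_rate,
              terrorists=1_000, population=300_000_000):
    """Probability that a person flagged by surveillance is a terrorist."""
    true_hits = hit_rate * terrorists                      # terrorists correctly flagged
    false_hits = false_pos_rate * (population - terrorists)  # innocents wrongly flagged
    return true_hits / (true_hits + false_hits)

print(round(posterior(0.40, 0.0001), 4))  # Rudmin's first case:  0.0132
print(round(posterior(0.70, 0.0001), 4))  # Rudmin's second case: 0.0228
```

Both results match the figures in the quoted passage: even a 70% detection rate leaves the posterior probability near zero, because the roughly 30,000 false positives swamp the few hundred true hits.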
I believe this is honest and valid reasoning. However, it has to be read closely, because Rudmin does not use the more familiar terms 'false positive' and 'false negative'.
He points out that the chance that a person is actually a terrorist, given that the NSA has identified them as one, is very low. The if-then order here is important to note. Another way to say it: simply because of the extremely low incidence rate of terrorists, there will be a lot of false positives. A lot of people who are not terrorists will be wrongly labelled as such by the NSA.
What he does not say or imply, and what is not clear (at least in my reading, as a layperson), is that given a high accuracy rate (of the NSA's test for terrorists), the chance of a false negative is quite low. In other words, the NSA monitoring (if accurate) will rarely miss a real terrorist. The if-then order here is reversed.
IMHO, this is a crucial difference for two reasons:
- A high false positive rate, combined with a low false negative rate, is an acceptable outcome for a screening test. Further tests/filters can be applied to narrow the count and eliminate false positives. The monitoring here serves as a first, coarse red flag.
- To the public (to whom I assume Rudmin is addressing his argument), this is of utmost relevance. Their concern is not so much with being swept up as a false positive (for they are confident they can easily exonerate themselves in further tests), but with making sure that no terrorist slips through unnoticed (a false negative).
The public has demonstrated many times over that it is willing to swallow the fear-mongering and sacrifice significant chunks of liberties (especially if it believes them to be those of others) in return for perceived security and toughness. While Rudmin makes a powerful argument in pointing out that the monitoring does a poorer job than the toss of a coin (given his assumptions about accuracy rates, etc.), that argument falls on mostly deaf ears.