Jul 15th, 2006 by ravi
One for the bookmarks: Antimeta

Antimeta — Found this very interesting math and philosophy blog while searching for something entirely unrelated (the history and politics of the name change of Bombay). Posting it for my own record, but perhaps it may be of interest to those, if any, who read my blog? Kenny Easwaran (the author of the blog) has a very useful collection of links to other philosophy/logic/math blogs at his site.

Read the full post and comments »
Jul 5th, 2006 by ravi
NSA monitoring and Bayes' Theorem

At CounterPunch, Floyd Rudmin (whom, from what I have seen of his writing, I hope to quote a lot more of) provides a great lesson in Bayes' Theorem, demonstrating the ineffectiveness of NSA monitoring at identifying terrorists. I have some comments, which can be found after the quote below.

Floyd Rudmin: the Politics of Paranoia and Intimidation


The US Census shows that there are about 300 million people living in the USA.

Suppose that there are 1,000 terrorists there as well, which is probably a high estimate. The base-rate would be 1 terrorist per 300,000 people. In percentages, that is .00033% which is way less than 1%. Suppose that NSA surveillance has an accuracy rate of .40, which means that 40% of real terrorists in the USA will be identified by NSA's monitoring of everyone's email and phone calls. This is probably a high estimate, considering that terrorists are doing their best to avoid detection. There is no evidence thus far that NSA has been so successful at finding terrorists. And suppose NSA's misidentification rate is .0001, which means that .01% of innocent people will be misidentified as terrorists, at least until they are investigated, detained and interrogated. Note that .01% of the US population is 30,000 people. With these suppositions, then the probability that people are terrorists given that NSA's system of surveillance identifies them as terrorists is only p=0.0132, which is near zero, very far from one. Ergo, NSA's surveillance system is useless for finding terrorists.

Suppose that NSA's system is more accurate than .40, let's say, .70, which means that 70% of terrorists in the USA will be found by mass monitoring of phone calls and email messages. Then, by Bayes' Theorem, the probability that a person is a terrorist if targeted by NSA is still only p=0.0228, which is near zero, far from one, and useless.
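As a quick sanity check on Rudmin's arithmetic, here is a minimal Python sketch of the Bayes' Theorem calculation. The numbers come from the quote above; the function and variable names are mine, not Rudmin's:

```python
# Rudmin's Bayes' Theorem calculation, using the figures quoted above.
population = 300_000_000
terrorists = 1_000
base_rate = terrorists / population  # 1 terrorist per 300,000 people

def posterior(hit_rate, false_positive_rate, prior):
    """P(terrorist | flagged by NSA), by Bayes' Theorem."""
    flagged = hit_rate * prior + false_positive_rate * (1 - prior)
    return hit_rate * prior / flagged

print(round(posterior(0.40, 0.0001, base_rate), 4))  # 0.0132
print(round(posterior(0.70, 0.0001, base_rate), 4))  # 0.0228
```

Both of Rudmin's posteriors check out: even at 70% accuracy, a flagged person is a terrorist only about 2% of the time, because innocents outnumber terrorists by 300,000 to 1.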


I believe this is honest and valid reasoning. However, it has to be read closely, because Rudmin does not use the more familiar terms 'false positive' and 'false negative'.

He points out that the chance is very low that a person is actually a terrorist if identified as one by the NSA. The if-then order here is important to note. Another way to say it is that (simply because of the extremely low incidence rate of terrorists) there will be a lot of false positives: a lot of people who are not terrorists will be wrongly labelled as such by the NSA.

What he does not say or imply, and what is not clear (at least in my reading, as a layperson), is that given a high accuracy rate for the NSA's 'terrorist test', the chance of a false negative is quite low. In other words, the NSA monitoring (if accurate) will rarely miss a real terrorist. The if-then here is reversed.

IMHO, this is a crucial difference for two reasons:

  1. A high false positive rate, given a low false negative rate, is an acceptable outcome for screening tests. Further tests/filters can be applied to narrow the count and eliminate false positives. The monitoring here serves as a first, coarse, red flag.
  2. To the public (to whom I assume Rudmin is addressing his argument), this is of utmost relevance. Their concern is not so much with being swept up as a false positive (for they are sure they can easily exonerate themselves in further tests), but with making sure that no terrorist gets away unnoticed (false negative).
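The two error counts contrasted above can be made concrete with a short sketch, again using Rudmin's assumed figures (at a higher assumed accuracy, say 0.70, the missed-terrorist count shrinks accordingly):

```python
# Expected error counts under Rudmin's assumptions (quoted above).
population = 300_000_000
terrorists = 1_000
hit_rate = 0.40               # fraction of real terrorists flagged
false_positive_rate = 0.0001  # fraction of innocents flagged

false_negatives = terrorists * (1 - hit_rate)                      # terrorists missed
false_positives = (population - terrorists) * false_positive_rate  # innocents flagged

print(int(false_negatives))    # 600 terrorists slip through at 40% accuracy
print(round(false_positives))  # 30000, matching Rudmin's ~30,000 figure
```

So the same assumptions yield both numbers at once: roughly 30,000 innocents swept up, and 600 of the 1,000 terrorists missed entirely.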

The public has demonstrated many times over that they are willing to swallow the fear-mongering and sacrifice significant chunks of liberties (especially if they believe them to be those of others) in return for perceived security and toughness. While Rudmin makes a powerful argument in pointing out that the monitoring does a poorer job than the toss of a coin (given his assumptions on accuracy rates, etc.), this argument falls on mostly deaf ears.

Read the full post and comments »
Jan 25th, 2006 by ravi
You might be a (mathematical) Platonist?

[with apologies to Jeff Foxworthy]

Among other eminent bits, Karlis Podnieks has an interesting test you can use to see if you are a Platonist. As psychologists like to say, denial is more than just a river in Egypt, and the first step to recovery is accepting the problem. So, take the test… it's for your health! ;-)

Foundations of Mathematics. Mathematical Logic. By K.Podnieks

Suppose, someone has proved that the twin prime conjecture is unprovable in set theory. Do you believe that, still, the twin prime conjecture possesses an “objective truth value”? Imagine, you are moving along the natural number system:

0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, …

And you meet twin pairs in it from time to time: (3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71,73), … It seems there are only two possibilities:

a) We meet the last pair and after that moving forward we do not meet any twin pairs (i.e. the twin prime conjecture is false),

b) Twin pairs appear over and again (i.e. the twin prime conjecture is true).

It seems impossible to imagine a third possibility…

If you think so, you are, in fact, a Platonist.
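For the curious, the twin pairs Podnieks lists can be enumerated with a few lines of Python (a naive trial-division sketch, fine for small numbers):

```python
# Generate the twin prime pairs listed in the quote above.
def is_prime(n):
    """Trial division; adequate for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

twins = [(p, p + 2) for p in range(2, 80) if is_prime(p) and is_prime(p + 2)]
print(twins)
# [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73)]
```

Whether pairs like these keep appearing forever is, of course, exactly the question the twin prime conjecture asks — and no amount of enumeration settles it.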

Read the full post and comments »