In a well-reasoned piece titled:
Can humans escape Goedel? A review of "Shadows of the Mind" by Roger Penrose
Daryl McCullough provides a non-paradoxical version of the Liar's Paradox to illustrate how inconsistency could creep into human reasoning. In doing so, he addresses a particular aspect of how belief and truth are interpreted in the debate over Gödel's incompleteness theorem (the first, for the picky): a better understanding of human fallibility, and of its relationship to the claim that "there are some sentences we know to be true". Perhaps Wittgenstein can also be read as exploring this in his [in]famous commentary on Gödel's Theorem, but more on that later.
McCullough writes:
6. How Could Inconsistency Creep Into Human Reasoning?
6.1 As I discussed in the last section, Penrose's arguments, if taken to their logical conclusion, show us not that the human mind is noncomputable, but that either the human mind is beyond all mathematics, or else we cannot be sure that it is consistent. If we reject the "mysterian" position that mind is beyond science, we are left with the conclusion that we can't know that we are consistent. This seems very counter-intuitive. If we are very careful, and only reason in justified steps, why can't we be certain that we are being consistent?
6.2 Let me illustrate with a thought experiment. Suppose that an experimental subject is given two buttons, marked "yes" and "no", and is asked by the experimenter to push the appropriate button in response to a series of yes-no questions. What happens if the experimenter, on a lark, asks the question "Will you push the 'no' button?". It is clear that whatever answer the subject gives will be wrong. So, if the subject is committed to answering truthfully, then he can never hit the "no" button, even though "no" would be the correct answer. There is an intrinsic incompleteness in the subject's answers, in the sense that there are questions that he cannot truthfully answer.
6.3 Now, there is no real paradox in this thought experiment. The subject knows that the answer to the experimenter's question is "no", but he cannot convey this knowledge. Thus there is a split between the public and private knowledge of the subject. But now, let's extend the thought experiment.
6.4 Someday, as science marches on, we will understand the brain well enough that we can dispense with the "yes" and "no" buttons (which are susceptible to lying on the part of the subject). Instead of these buttons, we assume that the experimenter implants probes directly into the subject's brain, and we assume that these probes are capable of directly reading the beliefs of this subject. If the probes detect that the subject's brain is in the "yes" belief state, it flashes a light labeled "yes", and if it detects a "no" belief state, it flashes a light labeled "no". Now, in this improved experiment, the subject is asked the question "Will the 'no' light flash?"
6.5 In this improved set-up, there is no possibility of the subject having knowledge that he can't convey; the probe immediately conveys any belief the subject has. If the subject believes the "no" light will flash, then the answer to the question would be "yes", and the subject's beliefs would be wrong. Therefore, if the subject's beliefs are sound then the answer to the question is "no". Therefore, since the subject cannot correctly believe the answer to be "no", he similarly cannot correctly believe that he is sound. If the subject reasons from the assumption of his own soundness, he is led into making an error.
6.6 As can be seen from this thought experiment, the inability to be certain of one's own soundness is not a deficiency of intelligence. There is no way that the subject in the experiment can correctly answer the question by just "thinking harder" about it.
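To see why no amount of cleverness helps in the first experiment (6.2), it can be rendered in a few lines of Python. This is a minimal sketch of my own; the function name and the string encoding of answers are invented for illustration and are not McCullough's. The point is only that the question's truth value is a function of the answer itself, so both possible answers come out false:

```python
# The button experiment of 6.2: the truth of "Will you push the 'no'
# button?" depends on which button the subject actually pushes.

def true_answer(pushed: str) -> str:
    """The correct answer to "Will you push the 'no' button?",
    evaluated after the subject has pushed `pushed`."""
    return "yes" if pushed == "no" else "no"

for pushed in ("yes", "no"):
    verdict = "truthful" if pushed == true_answer(pushed) else "wrong"
    print(f"pushes {pushed!r} -> correct answer was {true_answer(pushed)!r} ({verdict})")

# Output:
#   pushes 'yes' -> correct answer was 'no' (wrong)
#   pushes 'no' -> correct answer was 'yes' (wrong)
# A subject committed to truth can push neither button: an incompleteness
# in what he can answer, not an inconsistency in what he believes.
```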
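The probe version (6.4–6.5) compresses a lot into one sentence; I read "believes the 'no' light will flash" as "is in the belief state that makes the 'no' light flash, i.e., believes the answer is no". With that reading, the argument can be formalized sketchily. Write $B\varphi$ for "the probes detect the belief that $\varphi$" and let $Q$ abbreviate "the 'no' light flashes"; the notation is mine, the steps are 6.5's:

$$
\begin{aligned}
(1)\;& Q \leftrightarrow B\lnot Q && \text{the "no" light flashes exactly when a "no" belief is detected}\\
(2)\;& B\lnot Q \rightarrow \lnot Q && \text{soundness of the subject's beliefs}\\
(3)\;& \lnot B\lnot Q && \text{from (1) and (2): believing "no" would make "no" both true and false}\\
(4)\;& \lnot Q && \text{from (1) and (3): the true answer is "no"}
\end{aligned}
$$

So a sound subject can never believe the correct answer "no"; and since his soundness entails that answer, he cannot soundly believe in his own soundness either. This is exactly the shape of Gödel's sentence, with belief in place of provability.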
He then provides this conclusion:
8. Conclusion
8.1 Penrose's argument that our reasoning can't be formalized is in some sense correct. There is no way to formalize our own reasoning and be absolutely certain that the resulting theory is sound and consistent. However, this turns out not to be a limitation on what computers or formal systems can accomplish relative to humans. Instead, it is an intrinsic limitation in our abilities to reason about our own reasoning process. To the extent that we understand our own reasoning, we can't be certain that it is sound, and to the extent that we know we are sound, we don't understand our reasoning well enough to formalize it. This limitation is not due to lack of intelligence on our part, but is inherent in any reasoning system that is capable of reasoning about itself.
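That conclusion is, on my reading, the epistemic twin of Gödel's second incompleteness theorem: for any consistent, recursively axiomatizable theory $T$ extending Peano arithmetic,

$$
T \text{ consistent} \;\Longrightarrow\; T \nvdash \mathrm{Con}(T).
$$

Replace "theory" with "reasoner" and "provable" with "believed" and you get 8.1's trade-off: a reasoner who can be fully formalized cannot, from the inside, certify that the formalization is sound.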
I think it's a refreshing angle on the old debate, and one that does not get as much attention as it deserves.
P.S.: When talking about truth above, I am hopefully not mystifying it in a way that ignores the deflationary theory of truth.