There are two swordsmen, A and B; A is attacking B. A knows that B's defense is weaker on the left side, but B also knows that A knows this. A further knows that B knows that he knows about B's weaker left side, and B knows that A has this knowledge: the weakness is, in effect, common knowledge between them.
At first glance, it would be rational for A to attack B on the left side. However, because B knows that A knows his weakness, it is rational to assume that B will concentrate his defense on his left side. But A knows that B will focus on his left side, so at second glance it would be better to attack him on the right side after all. Yet B also knows that A is making this judgment, and thus will not concentrate especially on the left side; the regress continues indefinitely.
The paradox consists in this: A and B both hold two highly relevant pieces of information, and yet the most rational course available to them is to act as if they did not possess that information, i.e., to act at random. Thus, from a purely behavioral point of view it is impossible to tell whether or not A and B have these two pieces of information. So there exist beliefs, even beliefs highly salient to the situation at hand, that cause an agent to act as if the agent did not hold those beliefs. As long as such beliefs exist (and indeed they do), it is theoretically impossible for a purely behavioral test to discover all the beliefs of a given agent. Or, to put it differently, there are cases in which it is impossible to tell whether someone is lying. Even taking the goals as given and publicly known, the relation between beliefs and behavior is not one-to-one but many-to-one.
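The regress above can be made concrete by modeling the standoff as a 2x2 zero-sum game and solving for its mixed-strategy equilibrium. A minimal sketch, in which the payoff matrix (each entry being A's probability of landing a hit) is a purely illustrative assumption:

```python
# The standoff modeled as a 2x2 zero-sum game. The payoff entries are
# illustrative assumptions: each is A's probability of landing a hit.
# Rows: A attacks (left, right). Columns: B defends (left, right).
PAYOFFS = [[0.3, 0.9],   # attack left:  blocked on the weak side vs. unguarded
           [0.6, 0.2]]   # attack right: left guarded vs. right guarded

def mixed_equilibrium(m):
    """Solve the indifference conditions of a 2x2 zero-sum game
    that has a fully mixed (interior) equilibrium."""
    (a, b), (c, d) = m
    denom = a - b - c + d
    p = (d - c) / denom          # P(A attacks left)
    q = (d - b) / denom          # P(B defends left)
    value = a * q + b * (1 - q)  # A's expected hit probability
    return p, q, value

p, q, v = mixed_equilibrium(PAYOFFS)
print(f"A attacks left with p={p:.2f}, B defends left with q={q:.2f}, value={v:.2f}")
```

Under these assumed payoffs both players end up randomizing (here A attacks the weak left side with probability 0.4 and B defends it with probability 0.7), so no single observed attack or parry deterministically reveals what either swordsman knows.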
Can we say that all beliefs that elicit the same behavior are in fact equivalent, despite their apparent divergence? The swordsmen paradox seems to imply a "no", for otherwise we would have to equate beliefs that are obviously relevant to the decision with a lack of knowledge altogether.