Joseph W. asked for a separate and new thread to discuss this subject, which arose in our discussion of problems of creation.
Even a summary of this problem -- indeed, even a book-length summary -- would necessarily compress a massive amount of careful argument. What I am hoping to provide here is more like a sketch of a summary of the problem; to tackle the problem with the seriousness it deserves is the work of years, not a few hours. The basic problem is twofold: how can I have knowledge about the world, and how can I usefully communicate that knowledge to other minds?
Note that this is different from the question of "how/why did communication between intelligent beings arise?" One can accept an evolutionary response to that kind of question: it arose because, when 'tried' by animals who happened into it, it proved valuable. This is a different question, which is about how (and indeed whether) such a thing is possible at all. If evolutionary utility were the only criterion, why do animals not teleport themselves or engage in other sorts of fantastic behavior? They do not because they cannot. They communicate because they can: but why can they? It's a very difficult problem.
Let's start with Kant's idea of the transcendental unity of apperception. He was responding to some difficulties raised by Hume -- Hume is still today a powerful source for difficulties -- about how the mind can work. Kant argues that when we take in our sense perceptions -- sight, hearing, touch, and so forth -- we must mentally mold the data of our various senses into a single object that can serve as an object of thought. This is called representation (that is, we are re-presenting the sense data as an object of thought rather than as data per se). It's not just the object that has to be represented as a whole, though: we must also represent all of our disparate experiences as a kind of unity, the unity we take to be ourselves (for what are we if not the sum total of our experiences?).
One consequence of this approach is that we end up being unable to have any knowledge at all about anything in the world. Those things are not what our minds represent to us: the unity imposed upon them is artificial, for one thing. Thus, what we have "knowledge" about is only our representations, not the things themselves. Kant calls the things themselves "noumena" and our representations "phenomena," and argues that noumena are completely unknowable by human beings.
That's going to be a problem for communication about the world -- for science, say. We think that we are engaged in learning about the world through the scientific method, which involves experiments, measurements, and then communication of our results to see if others can reproduce them. If Kant is right, no part of that approach works the way we think it does. Our experiments are not of the world, but of mental phenomena that are different from the world in ways we not only cannot know but cannot conceive. So, likewise, are our measurements. Our theories about the meaning of these results are thus doubly disconnected from reality, because they are theories about theories about what things are really like. That's problematic enough, but now I need to convey my results to you so that you can try to reproduce them.
You've got your own set of representations. Since neither you nor I have access to the things in the world, but only our individually constructed representations, we have absolutely no way of knowing if we are talking about the same objects. When I communicate my ideas to you, what I think I'm saying to you is being filtered as sound impulses and then re-presented by your mind to you according to your own unity of apperception: thus, I have no idea what you're hearing when I tell you something.
We might be satisfied to say, "Well, my own unity will represent all input in a coherent way, so while I don't really know if you're agreeing with me or not, it will appear to me that we agree on the basic facts." That would make sense, but it doesn't explain why science appears to give us ever-increasing new capacities to do physical things: we can work together to produce rockets that fly to the moon, for example. That capacity suggests that we really are cooperating: there's nothing in my pre-existing unity that should predict it. It is a capacity that arises only from the cooperation, which suggests that the cooperation is real.
We might say, "Well, let's stick with the evolutionary explanation. Our brain structures are similar enough that we can 'understand' each other to a certain degree, because similar structures produce similar representations." Even if this were fully adequate, which it isn't, it doesn't make sense of the problem of why we can understand things that aren't like us. I usually use horses as a model for examining the question of a unitary order of reason across species (an idea also rooted in Kant, via Sebastian Rödl's explorations); but we have a similar capacity with animals of any kind. We seem to be able to distinguish between animals that are merely reacting to a pre-programmed instinct and those that seem to have a capacity to reason and learn, for example, even if we don't share much evolutionary history with them.
The explanation is also inadequate because it simply doesn't answer the depth of the problem. Kant's argument gives us a world in which we can have no knowledge whatsoever of the reality around us, including the minds of others. To argue that our brain structures are 'mostly similar' is thus to argue facts not in evidence. We can't know any facts about the structures of our brains, only about the phenomena of the structures of our brains -- and these are likely being represented according to a pre-existing internal order that makes them accord to some degree with what we expect from them.
It also just doesn't make sense to leap from "it is impossible to have any knowledge whatsoever about the things themselves" to "nevertheless, we seem to do a pretty good job." You can't jump from "impossible" to "a pretty good capacity," in the same way that you can't build a line out of points. The points have no extension, so no number of them added together will give you an extended line. Likewise, no amount of phenomena can be combined into a noumenon: no phenomenon contains any noumenal content.
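(For those who like the analogy in formal dress, measure theory makes the point precise -- this is my gloss, not Kant's. A single point has length zero, and countable additivity guarantees that no countable collection of points can ever sum to a segment of positive length:

```latex
\mu(\{x\}) = 0
\quad\Longrightarrow\quad
\mu\!\left(\bigcup_{i=1}^{\infty}\{x_i\}\right)
\;\le\; \sum_{i=1}^{\infty} \mu(\{x_i\})
\;=\; 0
\;<\; 1 \;=\; \mu([0,1]).
```

Strictly speaking a line segment *is* an uncountable set of points, so the formal result covers only countable aggregation; that is why this is an analogy rather than a proof. The parallel claim is that each phenomenon carries zero noumenal content, so no aggregation of phenomena yields noumenal knowledge.)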
This has led people to question, well, everything: it has led otherwise serious people to wander around speculating about philosophical zombies (which set of arguments, by the way, Joe, is very similar to the ones you cited to me re: whether AIs would have real consciousness), or about mad scientists keeping our brains in a vat.
Or it has led people -- particularly practical-minded people -- simply to ignore the problem and pretend it doesn't exist. This science stuff seems to work; why worry too much about why it works?
I suppose I will stop here, and call this "part one," because there remains a great deal to be said about what I think the right way to resolve the problem happens to be. For now, though, maybe we should stop and take a moment to appreciate the problem.