
Society of Mind

6.12 internal communication

If agents can't communicate, how is it that people can — in spite of having such different backgrounds, thoughts, and purposes? The answer is that we overestimate how much we actually communicate. Instead, despite those seemingly important differences, much of what we do is based on common knowledge and experience. So even though we can scarcely speak at all about what happens in our lower-level mental processes, we can exploit their common heritage. Although we can't express what we mean, we can often cite various examples to indicate how to connect structures we're sure must already exist inside the listener's mind. In short, we can often indicate which sorts of thoughts to think, even though we can't express how they operate.

The words and symbols we use to summarize our higher-level goals and plans are not the same as the signals used to control lower-level ones. So when our higher-level agencies attempt to probe into the fine details of the lower-level submachines that they exploit, they cannot understand what's happening. This must be why our language-agencies cannot express such things as how we balance on our bicycles, distinguish pictures from real things, or fetch our facts from memory. We find it particularly hard to use our language skills to talk about the parts of the mind that learned such skills as balancing, seeing, and remembering, before we started to learn to speak.
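
To see the mismatch in miniature, here is a small Python sketch; it is my own illustration, not anything from the text. A "balancing" skill is reduced to a bare numeric control rule: raw signal in, motor correction out. Everything inside it is arithmetic, so there is nothing word-shaped for a language-agency to find.

    def balance_controller(lean_angle, gain=4.2):
        # A low-level skill as a numeric rule: sensor signal in, motor
        # correction out. The gain value is an arbitrary assumption.
        return -gain * lean_angle

    # A higher-level agency can use the skill without understanding it:
    correction = balance_controller(lean_angle=0.03)
    print(correction)  # about -0.126: a control signal, nothing sayable in words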

Meaning itself is relative to size and scale: it makes sense to talk about a meaning only in a system large enough to have many meanings. For smaller systems, that concept seems vacuous and superfluous. For example, Builder's agents require no sense of meaning to do their work; Add merely has to turn on Get and Put. Then Get and Put do not need any subtle sense of what those turn-on signals mean — because they're wired up to do only what they're wired up to do. In general, the smaller an agency is, the harder it will be for other agencies to comprehend its tiny “language”.
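
Here is a minimal sketch of that wiring in Python. Only the names Builder, Add, Get, and Put come from the text; the Agent class and its turn_on signal are illustrative assumptions. The point is that the signal carries no content at all: each agent simply does what it is wired up to do.

    class Agent:
        # An agent is nothing but a name and some fixed wiring to subagents.
        def __init__(self, name, wired_to=()):
            self.name = name
            self.wired_to = list(wired_to)

        def turn_on(self):
            # The turn-on signal has no meaning; it only triggers the wiring.
            print(self.name, "activated")
            for sub in self.wired_to:
                sub.turn_on()

    get = Agent("Get")
    put = Agent("Put")
    add = Agent("Add", wired_to=[get, put])    # Add merely turns on Get and Put
    builder = Agent("Builder", wired_to=[add])

    builder.turn_on()

Run it and Builder activates Add, which activates Get and Put; nowhere does any agent interpret anything.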

The smaller two languages are, the harder it will be to translate between them. This is not because there are too many meanings, but because there are too few. The fewer things an agent does, the less likely that what another agent does will correspond to any of those things. And if two agents have nothing in common, no translation is conceivable.
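
A toy illustration of why fewness is the obstacle; the construction is mine, and the agent names and repertoires are invented for the example. Treat each agent's "language" as its repertoire of actions, and a translation as a mapping between repertoires: it can pair up only what the two repertoires share, so the smaller they are, the smaller the possible overlap.

    def translation_table(repertoire_a, repertoire_b):
        # A translation can only pair up whatever the two repertoires share.
        return {act: act for act in repertoire_a if act in repertoire_b}

    grasper = {"open-hand", "close-hand", "lift"}
    mover = {"lift", "lower", "turn"}
    looker = {"fixate", "track", "blink"}

    print(translation_table(grasper, mover))   # {'lift': 'lift'}: one shared act
    print(translation_table(grasper, looker))  # {}: nothing in common, no translation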

In the more familiar difficulty of translating between human languages, each word has many meanings, and the main problem is to narrow them down to something they share. But in the case of communication between unrelated agents, narrowing down cannot help if the agents have nothing in common from the start.