Meaning and the meeting of minds
Imagine yourself listening to someone who is talking in a language very different from any you know. Most probably, it will be just a series of meaningless noises to you. If, instead, you are listening to someone speaking your own native language, the meaning of his or her utterances will usually be so clear to you that you will probably not even notice the peculiarities of his or her phonetics, unless these happen to be very striking or you are interested in them for some reason. It seems as if, when we listen or read in a language we are familiar with, sounds and graphs magically transform themselves into meanings, or as if it were the other people’s meanings themselves that we were reading or listening to, rather than their voices or their writings. This is, of course, a trick of our brains, for, whatever meanings happen to ‘really’ be, what other people mean does not reside in our heads, eyes or ears.
Traditionally, the meaning of linguistic expressions has been considered a relation between words and the ‘world’ (objects, events, properties, etc.), a relation in which the respective weight of individual minds, on the one hand, and of ‘external’ facts (whether physical or cultural-institutional), on the other, has been the object of a fierce philosophical debate; Hilary Putnam famously asserted1 that ‘meanings are not in the head’, which basically meant that the meaning of a symbol is rather a kind of public, intersubjective institution. But this publicity is of little help to the child who is learning what on earth his mother can mean by ‘dream’ or by ‘breakfast’, or to us when we are learning a foreign language, or simply when someone uses an expression with a meaning different from the one we thought: after all, if meanings are not wholly ‘in the heads’ of speakers, they are there to a large extent, and it is precisely because of that that we use a language, i.e., an external medium, to transmit what we mean.
Cognitive scientists Massimo Warglien and Peter Gärdenfors2 have presented an application of the latter’s theory3 of conceptual spaces in order to illuminate the nature of meaning, under the assumption that what is most fundamental to it is not the relation between the mind, the words and the world, but the relation between words and different minds in the process of communication, which they call ‘the meeting of minds’. As the authors put it:
“So long as communication is conceived as a process through which the mental state of one individual affects the mental state of another, then a “meeting of the minds” will be that condition in which both individuals find themselves in compatible states of mind, such that no further processing is required. Just as bargainers shake hands after reaching agreement on the terms of a contract, so speakers reach a point at which both believe they have understood what they are talking about. Of course, they may actually mean different things, just as the bargainers might interpret the terms of the contract differently. It is enough that, in a given moment and context, speakers reach a point at which they believe there is mutual understanding.” (p. 3).
In a nutshell, I use a verbal expression to represent for you something within my own ‘conceptual space’ (more on this below), and you translate the expression into something within your conceptual space. For communication to be possible, it is not necessary that our two conceptual spaces be ‘one and the same’; it is enough that, from your own speech or behaviour, I can re-translate your expressions back into my own conceptual space in a way that does not give rise to inconsistencies. Combining insights from research on joint attention (e.g., Tomasello, 19994), experimental coordination games (e.g., Selten and Warglien, 20075), and communication games (e.g., Lewis, 19696), the authors present a model in which the meaning of an expression (or of a set of expressions) emerges as the set of fixpoints of the functions that map one conceptual space onto itself through the back-and-forth process of communication and translation exemplified at the beginning of this paragraph. If the function does not reach a fixpoint (i.e., if something makes you think that you and the other speaker were not understanding the same thing), then you will have a reason to modify your meanings, or your guesses about what other people mean.
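The back-and-forth process can be sketched in a toy simulation (all names, lexicons and values below are illustrative assumptions of mine, not the authors’ actual model): two agents share the same words but attach slightly different prototype points to them in a one-dimensional ‘conceptual space’. A round trip of communication maps a concept to the nearest word and back to the hearer’s prototype; iterating this map settles, in this simple case, on a fixpoint.

```python
# Hypothetical toy model of the communication-translation round trip.
# A's and B's lexicons attach different prototype points (say, degrees
# Celsius) to the same words; iterating the round trip finds a fixpoint.

WORDS_A = {"cool": 10.0, "warm": 25.0}   # speaker A's prototypes
WORDS_B = {"cool": 12.0, "warm": 24.0}   # hearer B's prototypes

def express(concept, lexicon):
    """Pick the word whose prototype lies closest to the concept."""
    return min(lexicon, key=lambda w: abs(lexicon[w] - concept))

def round_trip(concept):
    """A expresses the concept; B interprets it as B's own prototype."""
    return WORDS_B[express(concept, WORDS_A)]

def find_fixpoint(concept, max_steps=20):
    """Iterate the round trip until the concept no longer moves."""
    for _ in range(max_steps):
        nxt = round_trip(concept)
        if nxt == concept:      # a 'meeting of minds': no further change
            return concept
        concept = nxt
    return None                 # no agreement reached within the limit

print(find_fixpoint(18.0))     # 24.0: the process settles on B's 'warm'
```

The point of the sketch is only structural: communication converges not because the agents share identical prototypes, but because the composed map has a fixpoint at all.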
Gärdenfors’ theory of conceptual spaces represents these as logical spaces whose dimensions are, for example, primitive qualities; every point of such a space could count as a possible concept, and a relation of ‘closeness’ or ‘similarity’ exists between different points. People tend, however, not to use just any point of the space, but to identify prototypical cases within it, whether because of the typical experiences one is subjected to during learning, or because of psychological salience, or a combination of both. In more recent work, this approach has been extended to more complex concepts, like actions and verbs7. One essential property of these spaces is convexity: a subset S of the space is convex when, if a point A lies ‘between’ two elements of S (according to the notion of ‘closeness’ mentioned above), then A also belongs to S. Convexity allows a neat division of a whole conceptual space into regions ‘around’ prototypical cases, such that the non-prototypical points (i.e., possible different concepts) are assimilated to prototypical ones, i.e., considered as ‘cases’ of the same concept. In this new paper, the authors also show that convexity and continuity are sufficient to entail the existence of one or more fixpoints in the mappings of one conceptual space onto itself.
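A small illustration of this division into regions (the prototypes and points below are made-up numbers, not data from the paper): carving a two-dimensional space by nearest prototype yields a Voronoi tessellation, and under the Euclidean metric each resulting region is convex, so any point between two points of a region stays in that region.

```python
# Hypothetical 2-D 'colour' space with three made-up prototypes.
# Categorizing by nearest prototype assimilates every point to one of them.
import math

PROTOTYPES = {"red": (1.0, 0.0), "orange": (0.8, 0.5), "yellow": (0.5, 1.0)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def categorize(point):
    """Assimilate a point to its nearest prototype."""
    return min(PROTOTYPES, key=lambda name: dist(point, PROTOTYPES[name]))

# Convexity check on one segment: if a and b fall in the same region,
# every point between them should fall in that region too.
a, b = (0.9, 0.1), (1.1, 0.2)
assert categorize(a) == categorize(b) == "red"
for t in (0.25, 0.5, 0.75):
    between = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
    assert categorize(between) == "red"
print("the 'red' region behaves convexly along this segment")
```

This is, of course, only a spot check on one segment; the general fact is a geometric theorem about Voronoi cells under the Euclidean metric.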
Where does ‘external reality’ enter the picture? According to Warglien and Gärdenfors, only through the success of the actions to which communication leads. In my opinion, this could also cover cases in which one single agent is trying to introduce new concepts (think, for example, of a scientist trying to construct new categories to understand some phenomena): the ‘back-and-forth’ process that leads to the relevant automorphism their argument needs can depend simply on the success or failure of the predictions made with the help of the new concepts. In any case, the conclusion of this model is that the conceptual spaces in different people’s minds need not be identical (nor identical to the ‘things-in-themselves’); they only need to be structurally similar enough for successful communication to take place.
Lastly, one critical comment: it is not clear to me where the ‘dimensions’ on which the conceptual spaces are based come from, for, after all, they are concepts themselves; does this mean that they are in their turn based on a more primitive conceptual space? Nothing in the model prevents there being different levels of spaces built on more primitive ones, but the process has to start somewhere, and the dimensions of the ‘most primitive’ conceptual spaces would be a peculiar type of ‘concepts’ within this approach, if they are ‘concepts’ at all. But, if not, what are they?
- Putnam, H. (1975). The meaning of ’meaning’. In K. Gunderson (Ed.), Language, mind and knowledge (pp. 131–193). Minneapolis: University of Minnesota Press. ↩
- Warglien, M., & Gärdenfors, P. (2013). Semantics, conceptual spaces, and the meeting of minds. Synthese, forthcoming (published online). ↩
- Gärdenfors, P. (2000). Conceptual spaces: The geometry of thought. Cambridge, MA: MIT Press. ↩
- Tomasello, M. (1999). The cultural origins of human cognition. Cambridge, MA: Harvard University Press. ↩
- Selten, R., & Warglien, M. (2007). The emergence of simple languages in an experimental coordination game. Proceedings of the National Academy of Sciences, 104(18), 7361–7366. ↩
- Lewis, D. (1969). Convention. Cambridge, MA: Harvard University Press. ↩
- Warglien, M., & Gärdenfors, P. (2012). Using conceptual spaces to model actions and events. Journal of Semantics, 29(4), 487–519. ↩
I would say that the bottom-level concepts, or “primitive concepts” as you say, are what in machine learning are known as features. These are low-level sensory information patterns that the lower levels of the neocortex respond to, and whose output is composed and aggregated to form higher and higher level concepts, all the way up to abstract ideas further removed from the sensory realm.
This model is used in deep learning and unsupervised feature learning applied to vision. These techniques are believed to match what’s going on in the brain. See Gabor filters as a specific example, for vision, of what could be at the bottom of the hierarchy.
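To make the Gabor-filter example concrete, here is a minimal sketch (the parameter values are arbitrary choices for illustration): a Gabor kernel is just the product of a Gaussian envelope and a sinusoidal carrier, the kind of oriented, localized edge detector thought to resemble receptive fields in early visual cortex.

```python
# Minimal Gabor kernel: Gaussian envelope times cosine carrier.
# Parameter values here are illustrative, not tuned to any dataset.
import math

def gabor_kernel(size=7, sigma=2.0, theta=0.0, wavelength=4.0):
    """Return a size x size Gabor kernel as nested lists of floats."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate coordinates by the filter's orientation theta
            xr = x * math.cos(theta) + y * math.sin(theta)
            envelope = math.exp(-(x * x + y * y) / (2 * sigma * sigma))
            carrier = math.cos(2 * math.pi * xr / wavelength)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

k = gabor_kernel()
# The centre responds maximally (envelope = 1, carrier = cos 0 = 1);
# values oscillate and decay moving outward.
print(round(k[3][3], 3))  # 1.0
```

Convolving an image with a bank of such kernels at several orientations and wavelengths gives exactly the sort of low-level feature layer the comment describes.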
I like this.