A minimal theory of ideology for the post-COVID-19 world (1)
Ideology is a concept that has always left me a bit perplexed. I have thought a lot about it in the last few weeks, while comparing the reactions and suggestions of different people to the shocking events the world is now caught up in. Why do people have such disparate ideas about the facts, and about the best policy in the face of them? Why are people so ‘sentimentally’ attached to their opinions? Why is ‘ideology’ so influential even when thinking about something so terrible and world-encompassing? I am referring to the sense in which we typically talk about someone’s ‘ideology’ as something that (a) distorts the image of the world he or she has, but (b) the ’cause’ or ‘reason’ why he or she has that ideology is that having it somehow ‘promotes his or her material interests’. For, after all, wouldn’t it be systematically better for you to have an undistorted image of the facts, no matter what your interests are? Of course, there can be cases in which having a false belief happens to be beneficial for you, like when, say, having faith in a medicine that objectively has a low rate of success helps you recover with a slightly higher probability. But it is suspicious to claim that there can be cases in which having a totally distorted view of the world that surrounds you is better for your interests than knowing it more accurately.
Another important, and not unrelated, aspect of ideologies is that (c) they consist of ‘views of the world’ in which the factual and the evaluative are intimately intermingled; in a sense, one sees the world through the lens of his or her values (political, social, moral, aesthetic, religious, etc.). Point (c) clashes to some extent with point (b), because it may happen that having the values you have is in some cases detrimental for you (like when the Greenland Norse’s contempt for the ‘savage’ Inuit prevented the former from benefiting from the latter’s know-how about living in a colder climate, if we accept Jared Diamond’s narrative in Collapse [1]). Hence, there is also the possibility of having something like an inefficient ideology from one’s own point of view. This would bring the concept of ideology closer to the more scientifically reasonable concept of ‘cognitive bias’: ideology would just be a cluster of biases, and its treatment would be the same, i.e., trying to de-bias our minds, both individually and collectively.
But I think that, very frequently, we do not assume that somebody’s ideology is ‘just a bias’, a kind of optical illusion, but rather that ideologies play an important causal role in defending or attacking certain political and economic interests, and so ideologies are usually fueled in totally intentional ways (through education, mass media, culture, social networks, etc.), to an extent that ‘mere biases’ are not. And so we come back to our original quandary: how can it be that having wrong ideas is better for people than having objective information?
In order to clarify my perplexity, let me sketch very briefly the question of knowledge and belief as I see it. We humans live thanks to our coordinated actions; our life is a continuous answering, or trying to answer, of questions that have the general form ‘what to do?’. We live within a sea of what-to-dos, so to speak. A what-to-do contains three essential elements: circumstances, action and valuation, for it can be analysed as having the form ‘how will circumstances change if we do such and such, and how good will that be?’. An action is, amongst other things, something that changes our circumstances in some way (or so we intend). So, in order to guide our actions efficiently, we need four types of knowledge (which I suggest calling ‘descriptive’, ‘praxeological’, ‘predictive’ and ‘evaluative’): first, we need to know what the current circumstances are; second, we need to know what actions are available to us; third, we need to know what will probably happen (how circumstances will change) if we perform each of those possible actions; and fourth, we need to know how good or bad the new circumstances will be. Having wrong beliefs about any of these four types of knowledge will lead us to circumstances that are not the ones we want (or will do so more frequently than when we act on the basis of right beliefs; by the way, by ‘wrong’ beliefs I just mean believing that X is the case when X is not the case, and accordingly for ‘right’ beliefs). (Another marginal note: perhaps many people would prefer not to include my ‘evaluative knowledge’ as a kind of cognition, but just as a statement of our ‘preferences’ or ‘values’; I am happy with that, if you prefer it so, but I also think that one may be wrong about his or her evaluations, particularly the prospective ones: it can be the case that you do not enjoy something as much as you guessed you would.)
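To make this fourfold division a bit more concrete, here is a minimal toy sketch in Python. Everything in it (the states, the actions, the probabilities and the values) is invented purely for illustration, not meant as a model of any real decision: descriptive knowledge appears as the current state, praxeological knowledge as the set of available actions, predictive knowledge as a table of probabilities over new states, and evaluative knowledge as a value assigned to each possible new state.

```python
# Toy "what-to-do" chooser; all states, actions and numbers are invented.
from typing import Dict, List

def available_actions(state: str) -> List[str]:
    # Praxeological knowledge: what we can do in these circumstances.
    return {"at_home": ["stay_home", "go_out"]}.get(state, [])

def predict(state: str, action: str) -> Dict[str, float]:
    # Predictive knowledge: how circumstances will probably change if we act.
    table = {
        ("at_home", "stay_home"): {"healthy_but_bored": 0.95, "ill": 0.05},
        ("at_home", "go_out"):    {"healthy_and_happy": 0.70, "ill": 0.30},
    }
    return table[(state, action)]

def value(state: str) -> float:
    # Evaluative knowledge: how good or bad each possible new circumstance is.
    return {"healthy_but_bored": 0.6, "healthy_and_happy": 1.0, "ill": -1.0}[state]

def choose(current_state: str) -> str:
    # Pick the action whose predicted consequences are best on average.
    def expected_value(action: str) -> float:
        return sum(p * value(s) for s, p in predict(current_state, action).items())
    return max(available_actions(current_state), key=expected_value)

# Descriptive knowledge: we believe the current circumstances are "at_home".
print(choose("at_home"))  # -> "stay_home" with these invented numbers
```

A wrong belief in any of the four slots (mistaking the current circumstances, overlooking an available action, mispredicting the probabilities, or misjudging how much we would value an outcome) can change which action comes out on top, and that is the sense in which wrong beliefs tend to lead us to circumstances we do not want.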
There is another important concept that I have mentioned but about which I have said nothing yet: actions are (most usually) coordinated. What this means is, in the first place, that the consequences (the changes in circumstances) that are relevant to take into account are (most usually) not the consequences of my activity alone, but of the way in which the actions of many people at approximately the same time affect the current circumstances. And, in the second place, that the evaluation of the new circumstances is a plural evaluation: it is how I, and you, and you, and they… evaluate the situation, and most often the preferences of one individual will not coincide with those of others. There is also, by the way, the fact that the beliefs of different individuals about the other three types of knowledge can be, and usually will be, different. As you know, this blog has dealt several times with the problem of combining different and conflicting preferences, so I will not dwell too much on it now.
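These two points about coordination can also be illustrated with a small toy sketch (again with invented people, actions, outcomes and numbers): the outcome depends on the whole profile of what everyone does, and the same outcome is evaluated differently by each person.

```python
# Toy illustration of coordinated action and plural evaluation; all invented.
from itertools import product

people = ["ana", "ben"]
actions = ["stay_home", "go_out"]

def outcome(profile):
    # The relevant consequences depend on what *everyone* does, not on one action alone.
    if all(a == "stay_home" for a in profile.values()):
        return "low_contagion"
    if all(a == "go_out" for a in profile.values()):
        return "high_contagion"
    return "medium_contagion"

# Plural evaluation: each person values the same circumstances differently.
values = {
    "ana": {"low_contagion": 1.0, "medium_contagion": 0.2, "high_contagion": -1.0},
    "ben": {"low_contagion": 0.3, "medium_contagion": 0.6, "high_contagion": -0.5},
}

for combo in product(actions, repeat=len(people)):
    profile = dict(zip(people, combo))
    result = outcome(profile)
    print(profile, "->", result, {p: values[p][result] for p in people})
```

How those individual evaluations should be aggregated into a single collective one is precisely the problem of combining conflicting preferences mentioned above, which I leave aside here.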
Hence, where can something like ‘ideology’ enter the fourfold division of knowledge I have suggested? One obvious place is ‘evaluative knowledge’: people simply have different values and preferences. But I wouldn’t say this makes that type of knowledge necessarily ‘ideological’, because it could just amount to objective (i.e., non-ideological, non-distorting) knowledge of your own values and preferences (and of those of other people). One may have, of course, an ‘ideologically distorted’ knowledge of his or her own values, as the Marxist-Sartrean concepts of ‘false consciousness’ or ‘bad faith’ might entail; I mean, one may think, for example, that he or she is pursuing noble goals, while in fact only pursuing his or her own greed. And perhaps this is one of the main domains of our slippery concept: ideology is something that helps you to justify your actions before yourself or others, even if a more objective analysis and evaluation of them would show that they are not as justifiable as you claim.
But I guess this is not enough: when we speak of someone’s ideology, we are talking not only about his or her values (real or fake, conscious or unconscious), but also about his or her factual beliefs, i.e., about the other three types of knowledge presented above. Your ideology does not only make you value things differently; it also makes you see the world in a different way than others with different ideologies do. I shall explain in the next entry how I think this can be possible (to some extent).
References
- [1] Diamond, J. (2005), Collapse: How Societies Choose to Fail or Succeed, New York: Viking Press.