I’ve gotten into a few arguments recently about the notion of intellectual or epistemic humility – with a group of law professors regarding whether and when judges should consider other judges’ votes (my contribution here) and with some philosophers about the general rationality of humility and its relationship to free speech. The latter debate centers on a rarely fully articulated epistemological stance which I’ll call humilitantism. As I say, it’s rarely fully articulated, but I think we can sum it up as the thesis that humans are almost always overconfident in the beliefs they hold, and that it is therefore almost always epistemically rational – that is to say, it will almost always improve the ratio of true to false beliefs they hold – for them to make a conscious effort to lower their credence in their own beliefs and to consider alternative beliefs (including meta-beliefs such as worldviews, theories, explanations, and so forth).

To be clear, humility is a topic on which I have competing intuitions. On the one hand, I’m a Popperian pluralist and a committed contrarian; I believe things proceed best when different people pursue different lines of thinking and check each other’s work for mistakes rather than coming together for consensus or confirmation. On the other hand, I’m deeply opposed to tribalism and censorship; I despise out-of-hand rejections of ideas that are new, counterintuitive, or offensive to individuals, groups, or some established order. The conundrum concerns the suspension of disbelief, in either direction, that’s required to commit fully to working out the consequences of one’s own instincts and theories while respecting and remaining open to the instincts and theories of others. I haven’t solved the conundrum, but one point I can clarify, apt for the recurring themes of this blog, concerns the mechanisms that instantiate overconfidence in our beliefs, especially our beliefs about politics and issues relevant to group identity and virtue.


There is an obvious problem with humilitantism. Consider Henry the humilitant. One of Henry’s beliefs is P, and Henry guesses he’s about 90% sure of P. Applying epistemic humility, Henry downgrades his certainty to, let’s say, 72%, having estimated his overconfidence at about 20%. But humilitantism says that we’re almost always too sure of our beliefs. So Henry thinks “I’d better apply epistemic humility again.” This takes his certainty about P down to 57.6%. The problem is that Henry can just keep going and going. Nothing can stop him from humbly downgrading all his beliefs until he barely has any left.
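To make the regress concrete, here is Henry’s discounting as a toy loop – a minimal sketch in Python, where the 20% discount rate is simply the figure from the example above:

```python
credence = 0.90     # Henry's initial confidence in P
discount = 0.20     # his estimated overconfidence

for round_number in range(1, 11):
    credence *= 1 - discount    # one application of "epistemic humility"
    print(f"after round {round_number}: {credence:.1%}")
# round 1: 72.0%; round 2: 57.6%; ... round 10: ~9.7%.
# Iterated discounting drives any credence toward zero.
```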

It’s easy to stipulate ways to avoid this problem. But what it shows us is that it’s difficult to know when exactly (and how often, to what degree, etc.) one should apply humility. There may be “natural” humilitants in our midst who have already applied humility to their beliefs. Humilitantism would render such people underconfident. Now, one of the philosophers with whom I talk about this stuff suggested that it’s an empirical question whether people are generally overconfident or underconfident, and by how much. That seems obviously right. But I don’t think the formal question of overconfidence and underconfidence really gets at what most humilitants are concerned about. One could be a very humble reasoner in some sense while failing, in the actual social context in which reasoning and believing take place, to exhibit the virtues the humilitants want.


The natural way to think about humility is as a retention of some level of uncertainty in our beliefs and a responsiveness to other people’s beliefs when they run counter to ours. Something like:

  • for some epistemic agent A, the more certain A is in their own beliefs, the less humility A has, and the less certain, the more humility;
  • further, given some belief Q of A’s and some other agent B who expresses the belief that not-Q, the less this causes A to downgrade their certainty about Q, the less humility A has, and the more it causes A to downgrade their certainty, the more humility.

But this formulation must be incomplete. The reason is that it is never just A and B. It’s always A1, A2, A3, …, An; B1, B2, B3, …, Bn; and, in all likelihood, C, D, and so forth. So a broader view of A’s humility must consider: what happens when a group of people disagree with A?

One classic perspective on belief aggregation is Condorcet’s Jury Theorem. This theorem is really just a simple probabilistic calculation. It tells us that, as long as certain conditions obtain, the reliability of a majority vote among people, each of whom is individually more than 50% likely to be correct, will quickly approach 100% as the group grows. Say, for instance, that there are five people who are each correct 60% of the time. A majority vote among them is right about 68% of the time – already better than any one of them – and if all five independently agree that R, then, starting from even odds, we can estimate the chance that R as (60%)^5 / [(60%)^5 + (40%)^5], or about 88%, a figure that climbs toward certainty as we add voters.
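For the curious, here is a quick sketch of the arithmetic in Python (assuming fully independent voters and, for the unanimity case, even prior odds on R; `majority_reliability` is just an illustrative name):

```python
from math import comb

p = 0.60   # each voter's individual reliability

def majority_reliability(n, p):
    """Probability that a majority of n independent voters is correct."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

print(f"majority of 5:  {majority_reliability(5, p):.1%}")  # ~68.3%

# posterior that R given all five assert R, starting from even odds
unanimous = p**5 / (p**5 + (1 - p)**5)
print(f"unanimous five: {unanimous:.1%}")                   # ~88.4%

# the theorem's point: majority reliability climbs with group size
for n in (5, 25, 101, 501):
    print(f"n = {n:3d}: {majority_reliability(n, p):.1%}")
```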

Condorcet’s Jury Theorem is an intuitive epistemic justification for democracy. However, the conditions that must obtain for the math to work are pretty difficult to engineer. In particular, the math requires statistical independence among voters – roughly, their votes must not be correlated. It’s easy to see why: if five people were each correct 60% of the time, but also all held the same beliefs, then a majority vote among them would be correct 60% of the time, too, and wrong 40% of the time. The group couldn’t possibly be any more reliable than the individual; indeed, statistically, it acts as though it simply is one of its individual constituents.
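A small simulation makes the failure mode vivid – a sketch in Python contrasting fully independent voters with the extreme case of voters who all copy a single draw:

```python
import random

def majority_correct(n, p, correlated):
    """One trial: does a majority of n voters get the right answer?"""
    if correlated:
        votes = [random.random() < p] * n       # everyone copies one draw
    else:
        votes = [random.random() < p for _ in range(n)]
    return sum(votes) > n / 2

random.seed(0)
trials = 100_000
for correlated in (False, True):
    wins = sum(majority_correct(101, 0.6, correlated) for _ in range(trials))
    label = "perfectly correlated" if correlated else "independent"
    print(f"{label:>20}: majority right {wins / trials:.1%}")
# independent: ~98%; perfectly correlated: ~60% --
# the group is exactly as reliable as any single member.
```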


It’s a commonplace that one of the ways in which humans “other” their outgroups is by treating them as uniform, monolithic – as a horde, a mass, etc. By contrast, our ingroups are just full of diverse individuals and interrelations among subgroups. Many leftists, for example, will be able to explain to you in detail the differences among Maoists, Stalinists, Leninists, Marxists, communists, anarcho-communists, left-libertarians, socialists, social democrats, progressives, left-liberals, and mainstream Democrats; many of the same leftists, however, will opine very eloquently about how they don’t see a real difference between white nationalists and Donald Trump or between Donald Trump and the mainstream conservative movement – how it’s all the same, deep down. By the same token, the same rightists who can recognize various levels and forms of libertarianism, traditionalism, capitalism, etc. will tell you that Hillary Clinton was a socialist.

But this ingroup/outgroup dynamic has huge consequences for how we think about epistemic humility. The more variegated my ingroup is, dividing into subgroups or even a large number of discrete freethinking individuals, the more I can rationally treat its members as being independent. And the more monolithic my outgroup is, the more I can rationally treat its members’ beliefs as being correlated. This is often made explicit in groups’ mythologies of their “others”: that the outgroup is indoctrinated, brainwashed, “asleep” (as opposed to “woke”), conducting a “war on noticing”, and so on. My beliefs can be reached through a variety of methods, as the diverse individuals around me demonstrate; yours, however, can be reached through only one (usually a bad one!), as your groupish horde demonstrates.


Of course, what this means is that even if our groups are the same size, mine has Condorcet on its side and yours does not; that is to say, my one group is really a hundred individuals, whereas your hundred individuals are really just one group. If that’s the case, what looks like individual epistemic humility is actually exactly what the humilitant doesn’t want, because it will always take the individual toward the position held by the ingroup (made up of many epistemic agents) and away from the outgroup (made up, ultimately, of only one epistemic agent, who is multiply instantiated). So in the social context we need an additional concept of group epistemic humility. One is humble in this way precisely to the extent that one construes the outgroup as being just as epistemically diverse as the ingroup. This counsels us against broad stereotypes and categorizations, but also against the sort of grand theories of our enemies (“genealogies”, I think they’re sometimes called) that are so popular among smart people in politics – myself included.
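To see how lopsided the arithmetic gets, here is a minimal log-odds sketch (assuming, purely for illustration, equally reliable witnesses at p = 0.6 and even prior odds; `posterior` and its parameters are hypothetical names, not anyone’s official model):

```python
from math import exp, log

def posterior(n_for, n_against, p=0.6):
    """Posterior that P is true, from even prior odds, given n_for
    independent witnesses asserting P and n_against asserting not-P,
    each witness correct with probability p."""
    log_odds = (n_for - n_against) * log(p / (1 - p))
    return 1 / (1 + exp(-log_odds))

# 100 ingroup members agree with me; 100 outgroup members disagree.
# Construing the outgroup as one correlated voice, it counts once:
print(f"{posterior(100, 1):.4f}")    # ~1.0000 -- near-certainty
# Granting the outgroup the same diversity as the ingroup:
print(f"{posterior(100, 100):.2f}")  # 0.50 -- a genuine standoff
```

On this toy model, whether disagreement registers at all depends almost entirely on how many independent sources one credits the other side with – which is the sense in which construal of the outgroup, not individual modesty, does the real work.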

There may be some good news in this. Although epistemic humility is often trumpeted these days as part of an opposition to political correctness and censorship on college campuses, ideas about outgroups and “others”, and about the way we ought to treat them, stand firmly within the social justice lexicon. Remembering that the people with whom we disagree even about very important issues remain people is essential to good epistemic practice. We diminish ourselves as reasoners when we dismiss interlocutors as being under the sway of one or another ideology or as merely defending their interests in an exercise of bad faith. But to avoid such dismissals requires a difficult acknowledgment of the basic similarities of all human groups – an acknowledgment that one is in a group just like the outgroup and not just in a collection of clever independent thinkers. Unfortunately, deference to one’s group, and faith that it’s the best group, is often imagined as the essence of humility. We may want a new word entirely.
