This week at The Junkyard we’re hosting a symposium on Franz Berto’s recent book Topics of Thought: The logic of knowledge, belief, and imagination (OUP 2022). See here for an introduction from Franz. Commentaries and replies will follow Wednesday through Friday. Derek Lam’s commentary proceeds in two parts; we will run part 2 tomorrow.
* * *
Modeling the Internal Chaos (I)
Topics of Thought is a mighty ambitious book and a thought-provoking (cringy pun intended) piece of philosophy. It's ambitious in trying to offer an overarching framework, built on the idea of topic mereology, for all the propositional attitudes we call thoughts. And it's thought-provoking in the way the approach manages to bring apparently different psychological phenomena under the single idea of the topics of thoughts.
Berto confronts the fundamental question right from the start: what is the normative status of the formal semantics he proposes? Is it meant to provide prescriptive rules characterizing the thoughts of a perfect agent, or is it meant to describe the thoughts of non-ideal actual people? Berto's formal semantics is descriptive with a disclaimer. His logic provides an idealized model of how actual people believe, imagine, know, etc. The idealization here is not meant to be normative; it is just a simplified description for scientific purposes (p.8).
So, when Berto speaks of modeling in the book, he's using the word "model" in two senses. The idea of a model is at the heart of any formal semantics. Semantic models give symbolic syntax semantic values; they can just be a piece of machinery for assigning truth-values without pretending to represent any real-world phenomena. The sentences that are true according to Berto's semantic models, however, describe a fictional/abstract thinking agent, which is an idealized model of actual thinking agents: a model in the sense of a scientific model, not a semantic model. My comments below will focus on Berto's logic as a description of a scientific model.
The first key idea in Berto's semantics is to model thoughts as fundamentally conditional. Take beliefs, for example. Instead of taking a belief that p as the basic unit of his logic, Berto makes a conditional belief, a belief that p given a belief that q, the basic building block, expressed with the syntax Bqp. (This is true until Berto and Özgün introduce a different language to model working memory and long-term memory in chapter 7.) The second key idea in his semantics is that the truth-conditions for Bqp don't only require the typical possible-world analysis for conditionals but also require that the topic of p is contained in the topic of q: Berto calls this semantic rule SX (p.65). These two key ideas together help him capture the elusive phenomenon of hyperintensionality by representing thoughts holistically: via the connections among thoughts (represented exclusively in conditional form), where these conditional connections are, in turn, informed and constrained by the topics of the contents of those thoughts. Thoughts about propositions that are co-necessary can behave differently because they have different topic-informed connections with other thoughts.
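To fix ideas, here is a schematic rendering of that clause; the notation is mine, and it glosses over the details of the possible-worlds component, which the book spells out with more care: Bqp is true at a world w just in case (i) p is true at every world that is relevantly accessible from w given q, and (ii) t(p) ⊑ t(q), where t(·) assigns each sentence its topic and ⊑ is topic parthood. Clause (ii) is what does the hyperintensional work: two sentences can be true at exactly the same possible worlds and yet differ in topic, so Bqp and Bqr can come apart even when p and r are co-necessary.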
By letting us prove theorems about thoughts, formal systems about thoughts can be useful in helping us untangle puzzles and paradoxes. But proving theorems can reward us this way only if the formal system is normative, because these puzzles and paradoxes are themselves the results of intuitive norms of thought. Here's an analogy. We cannot solve the Sorites paradox by describing the fact (even with simplified formal languages) that actual people do not believe that a person with a full head of hair is bald. We cannot idealize our way out of the paradox. The paradox is not a psychological puzzle and cannot be solved by a psychological model. As we have seen, Berto's logic offers a scientific model, not a system of norms. With no normative commitments, I suspect there are things Berto's logic cannot do. Take the non-closure of knowledge, for example. We may be interested in non-closure as an empirical fact about the psychology of knowledge attribution. Or we may be interested in non-closure as a fact about epistemic norms that helps us address puzzles like the Kripke-Harman dogmatism paradox (p.87-88) or Cartesian skepticism (p.93). In light of what I said about the Sorites paradox, I suspect Berto's non-normative logic can satisfy the former interest, about human psychology, but not the latter, about puzzles pertaining to epistemic norms. Consider the dogmatism paradox, for instance. In the end, it is a paradox about what we should do with evidence we know to be misleading:
“If I know that h is true, I know that any evidence against h is evidence against something that is true: so I know that such evidence is misleading. But I should disregard evidence that I know is misleading. So, once I know that h is true, I am in a position to disregard any future evidence that seems to tell against h.” (Harman 1973, p.148; my italics)
And this appears equally evident in the way Berto presents the upshot of the paradox:
“[…] presumably one is rational, in the face of E [i.e., the misleading evidence], to continue believing P, ignoring the ‘usual implications’ of E. […] This sounds bad.” (p.88)
What sounds bad is the idea that a person ignores the usual implications of a piece of evidence; they shouldn't do so. The same goes for skepticism. Understood as a fact about epistemic norms, non-closure offers us a way (perhaps!) out of these puzzles that stem from intuitive epistemic norms. By contrast, understood as a fact of human psychology, non-closure seems irrelevant.
All this could very well be me overlooking something as a careless reader/thinker. But I'm hoping that Berto can say a bit more to help me wrap my head around the non-normative status of his logic as a scientific model, on the one hand, and the apparently normative status of some of the problems his logic is meant to help tackle, on the other. Or perhaps knowledge/knowability, as a state, is just fundamentally different from belief and imagination, so that we are supposed to switch back and forth in our interpretation of the TSIMs (topic-sensitive intentional modals): normative when we apply them to knowability, but non-normative when we apply them to other kinds of thoughts like belief or imagination?
Finally, assuming that Berto's logic, being non-normative, doesn't aim to solve normative puzzles but offers a scientific model of human psychology: why do we want formal models of human psychology? What do they offer beyond the empirical models psychologists build? For example, psychologists posit models of working memory and long-term memory to account for observations about human behavior. Why would someone who seeks to understand human psychology want to model this psychological model with a further model, described specifically in formal languages interpreted with TSIMs?
These questions aren't presented as objections. I have a Quinean answer to them on behalf of Berto, and I wonder whether he finds it an accurate representation of his stance. Berto's scientific models are empirical models. Their formulation is driven by empirical observations, like the hyperintensionality of human thought and the fact that, as an empirical matter, the topics of thoughts play a key role in shaping our thoughts. In other words, Berto is doing empirical psychology; he just prefers to build and describe his empirical models with a particular kind of formal language, one interpreted by TSIMs, the same way different psychologists prefer different languages for describing their models: some use mathematical languages other than Berto's, some use natural languages, etc. That Berto prefers to describe his models with a particular kind of language doesn't make his modeling activities different in kind. What Berto purports to show in the book is that his empirical models of thought-topics have so much predictive power that not only do they accurately predict our behaviors related to hyperintensionality, they also predict many other phenomena that other psychological models predict in an isolated, piecemeal fashion: phenomena about imagination, belief, framing, memory, knowledge, etc. Berto's models are better than these piecemeal psychological models because they provide a stronger unification of our empirical data. (Surely, conjectures are made, but all empirical models make them.)
I have an inkling of Berto's potential disagreement with how I portray his project above, partly because my portrayal appears to push much of his technical work to the background of his project, but also because he occasionally presents his models as different in kind from the empirical models psychologists develop. For instance, he cites the armchair nature of his modeling as a justification for remaining neutral in certain debates about the psychology of memory (p.150-151). (To me, depending on the goal of modeling, an idealized empirical model doesn't need to take a stance on every empirical issue.) If Berto disagrees with my Quinean portrayal of his work (perhaps his logic is to be understood in a Kantian way instead), I hope he can elaborate a little on how he views his models' place in the broader study of the psychology of human thought, to help me see the forest better.
REFERENCES
Berto, Franz. 2022. Topics of Thought: The Logic of Knowledge, Belief, and Imagination. Oxford University Press.
Harman, Gilbert. 1973. Thought. Princeton University Press.
Reply to Derek’s Commentary, Part 1
Derek wonders whether the TSIM approach is descriptive or normative. He takes it to be better interpreted as descriptive, then wonders how come I sometimes interpret it differently, and asks for clarification. Good point!
I think Derek's remarks have to do with what epistemic logicians in general are doing. For, sure, the TSIM semantics describes agents that are in various senses less mighty than those of Hintikka's original epistemic logic; but so do tons of other logics on the market. And, unlike Derek, I don't think the issue of the descriptive or normative status of the TSIM semantics is 'the fundamental question', for the reason I mentioned at the end of the précis.
For a trivial example of how focusing on the descriptive/normative distinction may not help one understand what we epistemic logicians are doing: suppose truth is the normative aim of belief. Then the normatively perfect believer is the one who believes all and only truths. So if the agent believes that P, then P. But although the agents of Hintikkan logics are logically omniscient, the logics don't make their beliefs factive. Aren't these supposed to be idealized agents representing a normative ideal? Well, we don't impose factivity because we think it's intrinsic to the nature of human beliefs that they can be false. And we want our models to be realistic, in the following plain sense: they capture core features of the real notion. A logic where the belief that P entails P would grossly miss its target concept.
A less trivial example. Tim Williamson has famously argued against the KK, or positive introspection, principle for knowledge: for him, one can sometimes know without knowing that one knows. That's because, in his view, knowledge requires safety from error, and combining safety with one's limited powers of discrimination delivers failures of KK.
Who’s ‘one’? Surely it’s a less mighty agent than the perfectly introspective ones represented by S4-S5 modal-epistemic logics validating KK. So, is Williamson giving a ‘descriptive, not normative’ account? Well, the agent of the epistemic logic KT Williamson has in Knowledge and Its Limits, pp. 227-8, is still logically omniscient, thus supremely idealized with respect to us humans. What is Williamson presenting to us then?
The reply, I think, is that he's investigating the very nature of knowledge: what knowledge is for us humans, of course, folks with limited powers of discrimination. Assuming logical omniscience, then, allows one to factor out failures of KK that may be due to bounded deductive capacities rather than to limited powers of discrimination. To say that the KT agent represents a 'normative, not descriptive' ideal wouldn't be very illuminating either. It may well be that logically omniscient agents represent a normative ideal, but that's not Williamson's main focus. He is idealizing, in the sense of working with a simplified setting in one respect, to isolate and investigate another: how our mental states fail to be 'luminous', due to our limited powers of discrimination. Surely he's modelling a realistic notion of knowledge, insofar as he's capturing something about how knowledge works for us humans.
This way of doing things is widespread in logical approaches to the investigation of attitudes. When epistemic logicians address the phenomenon of unawareness as lack of conception rather than lack of information, in awareness logics widely used from economics to game theory, they start by representing logically omniscient and perfectly introspective agents (see Schipper's intro): they idealize in these respects because they want to focus on modeling how unawareness works independently of cognitive limitations of other kinds. They'll surely claim that their models are realistic, insofar as they capture something about how (un)awareness works for us humans.
ToT does take a stance on what counts as normative, sometimes. Some TSIM invalidities are naturally marketed as describing what we often do though we probably shouldn't (e.g., framing effects: we shouldn't believe that we have a 60% chance of success without believing that we have a 40% chance of failure). Others describe what we mostly do or don't do and what we should or shouldn't be doing (e.g., in focused mental simulation, we shouldn't think about 2's primeness while we wonder what will happen if we jump the river; and, I guess, we mostly don't). This involves taking possibly controversial stances on rational normativity. For instance, when I say it's a mistake to imagine that two is either prime or composite while one is trying to predict whether one will make it to the other side of the river, I'm appealing to a Harmanian view of what is normative for us, given our limited cognitive powers. But Hintikkan logically omniscient agents already think that two is either prime or composite, as they think all necessary truths. And one may doubt that Harmanian normativity captures the correct view of rationality better than what is represented by the Hintikkan logically omniscient agents; a recent wave of pushback against Harman does just that: see e.g. Christensen, Smithies, Titelbaum. Admittedly, I don't track that debate in the ToT book. We epistemic logicians are a bit like this: we won't wait for normative theorists to come to an agreement on the principles of rationality before we start building our models.