Synopsis

The volume is devoted to the analysis of the work of the distinguished British philosopher L. Jonathan Cohen. In Part I, Cohen delineates the development of his thought in various areas, including political philosophy, philosophy of language, philosophy of law, the application of psychological theory, and deductive as well as inductive rationality. In the three parts that follow, other authors discuss the relevant issues and Cohen’s contribution to them.

Part II concerns Cohen’s “method of relevant variables” and other modes of scientific reasoning. L. NOWAK investigates the interrelationships between Cohen’s method of relevant variables and the method of idealization, which Nowak considers central to the methodology of both the natural sciences and the humanities. Next, M. FISCH examines the method of relevant variables critically. He argues that the method has its place in the evaluation of a kind of “higher-order” hypothesis in “dynamical” theoretical settings, not in the evaluation of “first-order” hypotheses in more “static” theoretical settings. Concluding Part II, M.A. FINOCCHIARO discusses some of the main themes of one of Cohen’s most recent books, “The Dialogue of Reason: An Analysis of Analytical Philosophy.” He focuses on Cohen’s idea of the “inductive-intuitive method,” on Cohen’s emphasis on the importance of norms of reasoning for understanding analytical philosophy, and on Cohen’s analysis of the role of intuition in analytical philosophical reasoning.

Part III contains articles focussing on the topic of subjective probability and on Cohen’s distinction between Pascalian and Baconian conceptions of probability. D.A. SCHUM first describes different conceptions of probability, and then elaborates a model for applying them to the problem of assessing the credibility of human testimony when the relevant inference is not direct but instead involves a chain of reasoning with multiple intermediate stages. J. LOGUE’s paper examines the idea of the “weight” of subjective probabilities: roughly, the degree of confidence we have in our subjective probability assessments, which is, or presumably should be, a function of the amount of evidence on which those assessments are based. Logue analyzes weight as a function of second-order probabilities, and applies his analysis to Cohen’s arguments for pluralism, especially to the issue of whether a Baconian, rather than a Pascalian, conception of probability is appropriate in judicial contexts.
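Two standard formulas may help fix these ideas; the notation is illustrative only and is not drawn from either paper. In a cascaded inference of the kind Schum models, evidence $E$ bears on a hypothesis $H$ only through intermediate propositions $I_1, \ldots, I_n$; assuming the $I_i$ form a partition and screen $H$ off from $E$, the law of total probability gives

$$P(H \mid E) = \sum_{i=1}^{n} P(H \mid I_i)\, P(I_i \mid E).$$

A second-order treatment of weight, of the sort Logue considers, regards the first-order probability itself as uncertain, represented by a density $f$ over its possible values:

$$P(A) = \int_0^1 p\, f(p)\, dp,$$

with greater weight corresponding, roughly, to a more concentrated $f$: further evidence narrows the spread of the second-order distribution even when it leaves the first-order value $P(A)$ unchanged.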

Also in Part III, Cz.S. NOSAL focuses on the possibility of neurological correlates of subjective probability assessments, which may be causally responsible for subjects’ evaluations of the frequencies of events. It is difficult to extract the principles behind such evaluations from the data obtained in various animal studies, but Nosal suggests that a broad construal of Cohen’s idea of the “norm extraction method” is at least a principled way of tracking the data, and more precise than other methods. Finally, R. STACHOWSKI’s investigation concerns the measurement of various psychological magnitudes: it turns out that there are interesting parallels between certain standard and nonstandard psychological approaches, on the one hand, and Cohen’s proposals concerning the Baconian interpretation of probability, on the other.

Part IV deals with the controversies concerning human rationality and with methodological pluralism in the evaluation of these debates. L.L. LOPES and G.C. ODEN begin by arguing that the conventional notion of rationality, built around the idea of expected utility, implies a conception of intelligence that involves competence in deductive logic, in probability and expected-utility calculations, and in following “linear,” rule-oriented decision procedures. Empirical evidence seems to show that we are not rational in a way that is consistent with this kind of intelligence; rather, we seem to use fallible heuristics, such as “representativeness,” “anchoring and adjustment,” and “availability,” which have been described by Kahneman and Tversky (and are summarized in Lopes and Oden’s paper). Lopes and Oden question certain assumptions behind the empirical research; they make comparisons with artificial intelligence (AI) research (which sometimes strives to incorporate heuristics with properties that the authors argue are shared with the three main heuristics Tversky and Kahneman have identified); and they argue, drawing on recent ideas in AI and in psychology, that using the relevant heuristics embodies a more valuable kind of intelligence than the kind that succeeds only under the special conditions that obtain in the empirical research in question.
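As a point of reference for the conventional standard Lopes and Oden criticize (the formula is textbook decision theory, not taken from their paper): an expected-utility maximizer is supposed to choose the act $a$ that maximizes

$$EU(a) = \sum_{s} P(s)\, U(a, s),$$

where $P(s)$ is the probability of state $s$ and $U(a, s)$ the utility of performing $a$ in $s$. The heuristics-and-biases findings are standardly read as documenting systematic departures from this norm.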

Next in Part IV, G. GIGERENZER addresses the idea that there is an analogy between visual illusions and purported cognitive illusions. He points to three features of visual illusions: 1) there is only one way things really are; 2) the illusion persists even when the operative principles are known to the subject; and 3) what is perceived depends on certain contextual factors and on prior knowledge. Gigerenzer argues that the analogy fails for features 1) and 2). But he accepts the analogy for feature 3), and he develops it into an interesting way of seeing how optimal cognitive function can be consistent with systematic cognitive illusions. This conclusion is similar to those drawn by Lopes and Oden, though the analyses are different. Next, J.E. ADLER comes to conclusions similar to those of Lopes and Oden and of Gigerenzer, though for different reasons. Adler argues that considerations of “conversational implicature” suggest that subjects may be misled by informal features of the instructions given in tests. In some tests subjects are given information the very supplying of which would seem to “conversationally imply” its relevance to the solution of the problem at hand, whereas this information is actually irrelevant to the problem understood in a formally standard way. Adler also explores the limitations of his conversational approach, and he argues that the connection between subjects’ responses and the assessment of their beliefs and rationality is far from straightforward. In line with some of the ideas urged above, Adler takes rationality to be testable only in the long run; and in line with Lopes and Oden, he regards cognition as having multiple “objectives,” truth being just one of them.

The next two contributions focus on Cohen’s norm extraction method. T. MARUSZEWSKI’s essay stresses that, according to this method, we may distinguish different kinds of human rationality. The author’s aim is to carry the distinction between competence and performance over into the analysis of rationality, extending an idea Cohen had earlier applied in his analysis of subjective probability. Finally, I.T. SCIGALA applies the norm extraction method to the question of the nature of mental health. Most previous attempts to understand the idea of mental health have been postulative in nature, often combining prescriptive and descriptive elements. It is argued that the norm extraction method avoids the pitfalls of those earlier attempts.

The volume concludes with Part V, in which L.J. COHEN offers commentary on the main themes discussed in the preceding contributions: methodology, induction, and rationality.