|Edited by Andoni Ibarra and Thomas Mormann
Amsterdam-Atlanta, GA: Rodopi, 1997
ISBN 90-420-0324-3 (b)
Attempts from the Iberian Peninsula to contribute to the philosophical study of science during our century, and to engage with it critically, have largely been a failure. To explain this, we propose a chronology for the evolution of the philosophy of science in Spain: I. First Signs, II. Conjunctural Reception, III. Academic Reception, and IV. Normalization. We show that the history of the philosophy of science in Spain in the 20th century has been one of a series of ruptures established on the basis of a latent stylistic and thematic continuity. Finally, an interpretation of the normalization linked to the shift of the nineties is given.
In this paper we argue for the thesis that theories are to be considered as representations. The term 'representation' is used in a sense inspired by its mathematical meaning. Our main thesis asserts that theories of empirical science can be conceived of as geometrical representations. This idea may be traced back to the very beginnings of Western science, to wit, Galileo. The geometric format of empirical theories cannot be considered simply as a clever device for displaying a theory. Rather, the geometric representation deeply influences the theory's ontology. Embedding the representational approach in the framework of a Peircean semiotics makes it possible to take explicitly into account the role of the cognizing subject in the representational constitution and development of empirical theories. Finally, we address the recently much-debated problem of whether or not the concept of representation is a philosophically respectable notion. We argue that it would be disastrous for philosophy if it followed Rorty's "neo-pragmatic" proposal to discard the concept of representation from philosophical discourse.
Measurement, which is par excellence the justification of experimental science, requires in turn its own justification. Almost all indirect measurements depend on experimental laws that relate the measured magnitude to directly measurable magnitudes. And direct measurement depends on empirical generalizations in virtue of which the measured property has the structure of a magnitude. But both the laws and the empirical generalizations that structure measurement must themselves be tested by means of measurement. This circularity disappears thanks to the diversity of methods for measuring the same magnitude. And, in the genesis of magnitudes, when there is only one method of measurement, the circularity disappears through the repetition of that single method of measuring the magnitude.
Some cognitive scientists, attracted by Kant's theory of space, and thinking it straightforward to give a naturalistic reading of the Kantian notion of "subjective constitution" in terms of brain structures, have in recent times tried to derive neurophysiological underpinnings for the alleged uniquely intuitive character of Euclidean geometry, which would then characterize "psychological space." From this, it has also been claimed that we can infer the non-existence of physical space. In this paper, we discuss various reasons for believing that the derivation approach is not viable, and we point out the paradoxical character of the radical non-existence claim. We also criticize further arguments for attributing an extremely subjective character to space based on considerations of innateness, and, as an alternative, we make some tentative proposals for attributing a minimally objective character to spatial representations.
Measurement theories study different types of qualitative empirical systems and different groups of conditions that give rise to different kinds of numerical representations. Some measurement systems are characterized by the fact that the objects under qualitative comparison are pairs expressing magnitude intervals or differences. Interval Measurement Theory (IMT) studies the different groups of conditions that these systems must satisfy for numerical representation to be possible. The aim of this contribution is to reconstruct IMT in the structuralist framework as a set of theory-elements making up a theory-net. The paper begins with a conceptual and historical introduction in order to fix some concepts, especially those of measurement, metrization and measurement theories, and to set out the general traits of the reconstruction. Secondly, IMT is identified as a tree-like theory-net and, after some comments on the nature of IMT and the strategy of its reconstruction, the basic theory-element is defined. Thirdly, the first line of specialization, algebraic interval systems, is reconstructed with its further specializations. Finally, we reconstruct the other main line of specialization, which deals with absolute interval systems, also with its further branches. Representation theorems are given formally, with introductory comments but without complete proofs.
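The kind of representation theorem IMT is concerned with can be illustrated with a toy sketch (the objects, scale values, and qualitative judgments below are invented for illustration, not taken from the paper): a numerical assignment f represents a qualitative ordering of intervals when the pair (x, y) is judged at least as large as (u, v) exactly if f(x) - f(y) >= f(u) - f(v).

```python
# Hypothetical candidate scale f over four objects a..d.
f = {"a": 0.0, "b": 2.0, "c": 3.0, "d": 7.0}

def diff(pair):
    """Numerical difference assigned to a qualitative interval (x, y)."""
    x, y = pair
    return f[x] - f[y]

# Invented qualitative judgments: each left interval is judged at least
# as large as the right one. The sketch checks only one direction of the
# biconditional required by a full representation theorem.
judgments = [(("d", "a"), ("c", "a")),   # d-to-a exceeds c-to-a
             (("b", "a"), ("c", "b"))]   # b-to-a exceeds c-to-b
for left, right in judgments:
    assert diff(left) >= diff(right)
```

A full IMT representation theorem would also state uniqueness: under which transformations (e.g., positive affine) the scale f is determined.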
In economic theory, no one usually expects theoretical models to correspond directly to empirical facts for the specific cases to which they are to be applied. This paper has two goals related to this situation. Firstly, it attempts to make the situation more understandable, underlining certain features of the strategy used in the construction of the theory. Secondly, it analyzes several aspects of the mechanism by which these models induce and support beliefs and expectations in relation to these empirical cases.
H. A. Simon, Nobel laureate in 1978, offers "bounded rationality" as an alternative to classical and neoclassical conceptions of rationality. For him, the aim of economic decision-making is satisficing certain goals instead of maximizing utilities or profits. In his economic theory he presents a new conception of the role of prediction in economics. He rejects the primacy of prediction in economics, a view held by some neoclassical authors, since he does not consider prediction in economics to be a basic element of the set of characteristics that make economics a science. His focus is on understanding the mechanisms that explain past and present economic phenomena rather than on predictability.
This paper analyzes the philosophical basis of Simon's "bounded rationality" and makes explicit the view of economic prediction which derives from it. This analysis focuses on the characteristics of his conception of "economic predictions" and evaluates its adequacy in characterizing what economic prediction is and ought to be. After this critical analysis of his conception, a final section outlines an alternative concept of prediction based on Action Theory, where economics as "activity" replaces economics as "behavior."
Scientific researchers can be seen to act as economic agents, who try to maximize an "epistemic utility function." Several such functions are defined that allow methodological rules to be derived as criteria which determine the "utility" of several theories under given circumstances.
In addition to this, researchers have to take decisions about what to do with their limited resources (money, staff, instruments, and so on). The maximization of "epistemic utility" also makes it possible to explain these decisions, and justifies the creation of "epistemic institutions" as sets of norms that try to make the decisions of individual scientists optimal from the collective point of view.
The neologism "methodonomics" is introduced to designate this kind of study.
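The "methodonomic" idea of theory choice as utility maximization can be sketched minimally as follows. The utility function, its weights, and the theory scores are all hypothetical, invented purely for illustration, not drawn from the paper:

```python
def epistemic_utility(accuracy, content, cost,
                      w_acc=0.6, w_cont=0.3, w_cost=0.1):
    """Hypothetical utility: weighted epistemic gains minus weighted cost."""
    return w_acc * accuracy + w_cont * content - w_cost * cost

# Invented scores (accuracy, content, cost) for two rival theories.
theories = {"T1": (0.9, 0.4, 2.0), "T2": (0.7, 0.8, 1.0)}

# A researcher acting as an economic agent picks the theory with
# maximal epistemic utility under the given circumstances.
best = max(theories, key=lambda name: epistemic_utility(*theories[name]))
# Here T2 wins: its lower accuracy is offset by higher content and lower cost.
```

Different methodological rules then correspond to different choices of weights: a rule privileging predictive accuracy sets w_acc high, while a resource-constrained institution raises w_cost.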
A prominent concern of the structuralist conception of metascience is the specific ontology of empirical theories. Given that the ontology of empirical theories is subject to semantic constraints, structuralist analyses constitute an interesting starting point for the design of a formal semantics specifically intended as an account of scientific expressions. However, the structuralist movement itself has paid little attention to the semantic analysis of linguistic formulations of scientific theories within the formal semantic framework, that is, to giving an account of the semantic relationships between scientific expressions and extralinguistic entities.
In Montague's semiotic program, we find a paradigm for formal analysis of the semantic problems of language. The aim of this paper is to establish the basis of a framework for formal ontosemantic analysis of the expressions of an empirical theory in the light of the insights provided by the structuralist conception, and to specify what modifications must be made to Montague's general program for this purpose.
We analyze to what extent fuzzy logic can help to pose some problems in the philosophy of science, or to look at existing traditional problems from a new point of view. We start by examining whether vagueness can be eradicated from scientific language, by way of two different outlooks: those of Frege and Parikh. According to Frege, vague predicates have sense but not reference, and therefore cannot play a role in scientific theory. Parikh shows that, under certain but frequent conditions, the vagueness of observational predicates is ineradicable. Subsequently, we show how fuzzy set theory can help to model and analyze some forms of imprecise knowledge. By means of fuzzy sets it is possible to define new forms of transitivity which enable us to model some aspects of equality in physics and psychophysics from a new perspective, as well as linguistic relations for which there was until now no mathematical model, such as synonymy.
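One such "new form of transitivity" can be sketched with a standard construction from fuzzy set theory (the construction and the numbers are illustrative, not taken from the paper itself). The crisp relation "x and y differ by less than a threshold d" is notoriously non-transitive, the classic paradox of physical equality, but its graded counterpart is transitive with respect to the Łukasiewicz t-norm:

```python
def fuzzy_eq(x, y, d=1.0):
    """Degree to which x and y are indistinguishable at resolution d."""
    return max(0.0, 1.0 - abs(x - y) / d)

def t_lukasiewicz(a, b):
    """Lukasiewicz t-norm: the conjunction used in the transitivity law."""
    return max(0.0, a + b - 1.0)

# Crisp indistinguishability fails transitivity with d = 1:
# 0 ~ 0.6 and 0.6 ~ 1.2, yet 0 and 1.2 are distinguishable.
# The graded relation satisfies E(x, z) >= T(E(x, y), E(y, z)).
for x, y, z in [(0.0, 0.6, 1.2), (0.0, -0.5, 0.3), (2.0, 2.4, 2.9)]:
    lhs = fuzzy_eq(x, z)
    rhs = t_lukasiewicz(fuzzy_eq(x, y), fuzzy_eq(y, z))
    assert lhs >= rhs - 1e-9  # tolerance for floating-point rounding
```

The transitivity inequality follows from the triangle inequality for |x - y|, which is why the graded relation can model chains of "indistinguishable" measurements without collapsing all values into one equivalence class.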
Once mathematics has been approached as a praxis that creates models of the real, different types of Doing appear in it: the Figural, the Global and the Computational, which are reflected in different styles. Each Doing has its demonstrative ways, each with its own characteristics. In Figural Doing, the given element is the object, e.g., a geometric figure, a differential equation, a polynomial, and its properties are constructed. In Global Doing, demonstrative ways such as bijection, chain, diagonal and exponentiation appear; these are mainly existential and are generally combined with reductio ad absurdum. In Computational Doing, recursive repetition is linked to the notion of algorithm. The mathematical Doings, with their demonstrative ways, different in each case, induce an epistemological inversion with regard to the preceding ones.
In this paper, I explore the relationship between three theories: the Sexual Theory of Reproduction (SR), Mendelian Genetics (MG) and Mendel's Hybridization Theory (MH). As a preliminary step, each of these must be stated with a minimum of clarity and precision. Subsequently the relationships between them are shown. Insofar as the theoretical-semantic aspect is concerned, MH and MG are strongly incomplete theorizations on SR, and although MH and MG are equivalent, their theoretical specializations are not; the theoretical specializations of MH can be reduced to those of MG, but those of MG cannot be reduced to those of MH. By contrast, with reference to the technical-experimental aspect, the techniques employed by MH and MG are constructed upon the foundations provided by the techniques of SR.
The concept of the universe makes sense, in principle, but only if the universe is viewed as an entity that can be conceived as the model of a fundamental scientific theory. Since theories normally have different models, the question arises whether we have good reasons to assume that there is just one universe. Four a priori possibilities have to be envisaged: A) there is just one fundamental theory and it is categorical; B) there is just one fundamental theory and it is not categorical; C) there are several different but mutually compatible fundamental theories; D) there are several mutually incompatible fundamental theories. Only in cases A) or B) would there be any warrant for the uniqueness of the universe. However, both synchronically and diachronically, case D) is the most plausible one, so that we should admit a plurality of (real) universes.