Edited by Martti Kuokkanen. Amsterdam-Atlanta, GA: Rodopi, 1994.

The paper gives a systematic introduction to the basic ideas of the structuralist reconstruction of empirical theories. It starts from a number of global ideas about the nature and structure of empirical theories. According to the structuralist view, an axiomatized theory defines a class of structures, and the conditions imposed on the components of the structures are the axioms of the theory. The link with reality is made by the claim, associated with the theory, that the set of set-theoretic representations of the so-called intended applications forms a subset of the class of structures of the theory. The main advantage of the structuralist approach is that its representations and analyses of theories and their relations are as close to their actual presentations in textbooks as is formally possible. However, even theoretical questions concerning, for instance, idealization and concretization as truth approximation strategies can be treated relatively easily in structuralist terms. I will present the main general aspects and will not go into technical details which are not of primary importance for actual practice. My main goal is to make clear what kind of entities one may be looking for in theory formation and how standard questions about these entities can be explicated. Moreover, in passing I will explicate some standard Popperian concepts in structuralist terms.

There are at least three things in philosophy of science we have learnt since the heroic times of logical positivism: 1) there are no such things as brute empirical facts upon which a theory may be built or with which it may be confronted, 2) a scientific theory is a cultural product which essentially contains irreducibly pragmatic components, and 3) in general, no theory works unless a certain measure of “idealization” with respect to its “outer world” is allowed. We call these results the principles of “theory-ladenness,” “praxis-ladenness,” and “approximation-ladenness” of science. The first two are, it seems, widely accepted and taken into account in present-day philosophy of science. The third is less popular, though it has increasingly become a matter of study from different perspectives in recent years.

The three principles mentioned are tied together in a deep, though not entirely obvious, way. To make the point very briefly, our thesis is that, on the one hand, empirically meaningful approximations and idealizations are only possible because of theory-ladenness and, on the other hand, the concepts of approximation and idealization we need for empirical theories are essentially constituted by some irreducibly pragmatic components in addition to semantic ones.

Recently, the scheme of idealization and concretization has become the subject of properly methodological investigation, mainly through the work of Nowak and Krajewski. Attempts are made to explicate the concepts, to apply them to cases from the history of science, and to evaluate them in the context of other methodological issues. In this paper we investigate the notions of idealization and concretization as intertheoretical relations in structuralist terms. Krajewski’s and Nowak’s explications use a rather narrow syntactic format. We generalize their concept of idealization by using the structuralist format. Our generalization makes these notions applicable to various reconstructed examples as they can be found in the literature. On the other hand, we do not generalize the treatment of concretization, for this would lead us to a general notion which has been studied in detail under the label of approximative reduction. On our account, idealization and concretization are seen as special cases of approximative reduction which can be clearly distinguished from the general notion and studied in their own right. In addition to this general point, we address three more special ones. First, we elaborate on a feature neglected by Nowak and Krajewski, namely that the laws of the two theories (the idealized and the concretized one) have the same form. Second, we stress that in the examples from the natural sciences an additional requirement of continuity is satisfied. Third, we suggest that quasi-metrical spaces are to be used for the treatment of “real-life” examples of idealization and concretization.

One possible way to explain the idealized character and indirect applicability of scientific theories is provided by possible worlds semantics. The framework of possible worlds semantics enables us to understand the counterfactual character of scientific laws. However, it remains incomplete as long as the relation between the actual and the “ideal” worlds is not elucidated. Another approach that has contributed to a deeper understanding of the problem of the applicability of empirical theories is structuralism, which has developed a highly sophisticated description of the structure of empirical theories. However, until today, structuralism has hardly taken any notice of possible worlds semantics. Finally, there is a third approach to philosophy of science which has explicitly dealt with the problem of idealization, exemplified by the Poznan School. However, although Nowak repeatedly emphasizes the counterfactual character of economic laws, he never refers to any kind of possible worlds semantics or to any other account of modal logic. In this paper we introduce counterfactual (or idealizing) deformation procedures following some recent ideas of Nowak. We then study counterfactual deformation operators in the structuralist approach and introduce the complementary concepts of idealization and concretization. Finally, we apply the framework of structuralism cum idealization structure to the elucidation of the counterfactual character of empirical laws.

I shall discuss the structuralist theory of verisimilitude developed by T.A.F. Kuipers. I concentrate mostly on applying the definition of verisimilitude to the values of a finite number of quantitative variables. A theorem is proved which shows that, in an important special case, Kuipers’s definition makes use of only a small part of the information contained in the hypotheses or theories compared. I shall also discuss in detail a physical system which provides several examples of another kind of counterintuitive result to which the definition leads.

The Galilean revolution consisted in making evident the misleading nature of the world image produced by the senses. We only see phenomena which are the joint effects of the relevant forces. As a result, the senses do not contribute in the slightest to the understanding of the facts. To understand phenomena we must take into account the work of reason, for reason is needed to select some features of the objects through idealization. These idealized models differ a great deal from their sensory prototypes. What is more, they present images of hidden relationships which could not be captured in experience at all. This gap between the abstract world of laws and the world of the senses can be crossed by means of concretization, which takes into account what has previously been abstracted from. Because of this, abstract laws become more and more realistic and the distance between them and the actual facts diminishes. Idealization and concretization constitute the essence of the method whose adoption in physics Galileo initiated. This method was systematically applied by Newton, and our understanding of it was deepened in his Principia.

Applied science exists in two forms: predictive and design science. The former tries to establish dynamic regularities that help to predict the future state of a natural or social system; the latter attempts to establish technical norms or conditional rules of action. It is typical of both cases that idealized theoretical descriptive models are combined with empirical information. When the idealized model is concretized in Nowak’s sense, the derived predictions and technical norms can likewise be improved, so that their degree of approximate truth or truthlikeness increases. These methodological ideas can be illustrated by the history of exterior ballistics.

If human goal-directed behavior is to be analyzed from an action-theoretical perspective, a theory is required which comprises all relevant processes between the deliberation of potential action goals and the final evaluation of action outcomes. The Rubicon theory of action phases offers an adequate framework for a sequential description of goal-directed activities. A concise version of the Rubicon theory will be outlined in structuralist terms, comprising only those assumptions that are essential for either theoretical or empirical reasons. An assumption is classified as essential if it uses fundamental explanatory concepts of the Rubicon theory, if it has been empirically corroborated, or if it is at least expected to underlie future applications. All other assumptions, especially those which refer only to different operationalizations of higher-order concepts, will be omitted. On the basis of this concise structuralist version of the theory, it is possible to illustrate the idealizing assumptions of the theory in more detail.

During the last twenty years, the interdisciplinary research efforts of synergetics have produced methods that make new and powerful tools available to the social scientist. It would be fruitful to study these theoretical tools from a structuralist point of view. We distinguish between two approaches. On the first approach, we have models in which individuals do not interact directly, but change the state of the whole system by their behavior and react to changes of this collective with individual changes in their behavioral states. In the beginning we have an “aggregate” of initially unconnected individuals which turns by itself into a system. This process is due to the fact that the individuals are endowed with the ability to move in a potential which is built up as a result of the sheer existence of these individuals. On the second approach, we have models in which individuals interact directly, mostly in a network or, as it were, in a cellular automaton. Here too, the population is taken to be homogeneous in the beginning. Then, stochastically influenced interactions change the states of the interacting individuals as well as the relations between them. As a result we find stable clusters, groups, subnets, or strata in the whole population. The aim of the paper is to give a logical reconstruction of such approaches to self-organization in the social sciences. I shall try to show that computer simulation supports the structuralist reconstruction of these approaches if it is done in a certain way.

It is suggested by Smolensky that in cognitive science the symbolic is an idealization of the subsymbolic. They may be seen as complementary ways to understand and explain cognition, and therefore we should not try to eliminate one of them in favor of the other. Rather, we should think of their relationship as providing a cognitive correspondence principle, a principle analogous to the correspondence principle much discussed in the philosophy of physics. But Smolensky’s proposals are vague, and there seem to be no attempts in the literature to give them a more definite form. For this reason it is somewhat difficult to see their real significance. In this paper, we shall evaluate the proposals by relating them to the work recently done in the philosophy of science, and offer a case study of the relationship between symbolic and subsymbolic representations. It turns out that good sense can be made of the suggestion that a particular kind of symbolic representation (or a theory of symbolic representation) is a limiting case of a particular kind of subsymbolic representation (or a theory). Due to the method used, the case study also sheds some light on the notion of idealization in cognitive science.

The basic ideas of the Poznan School are highly suggestive, and the motivation laudable. However, we shall argue for some amendments. First, it is not clear that a theory as a whole has a structure which is susceptible to treatment along Poznan lines. Secondly, the notion of idealizational laws and their application process does not do full justice to the model-constructing activities found in many sciences. We shall single out one crucial feature of model building, viz. the interplay between general laws, on the one hand, and model-specific assumptions, on the other. We shall try to show that this is more complex than Poznan philosophers have assumed. The example of theory construction and model building we have chosen comes from the domain of evolutionary biology. We shall start with a brief account of evolutionary theory and the Poznan view and distinguish several senses in which a theory may be idealized and needs to be concretized. We shall show that evolutionary theory at large does not fit the view in which theories have a unique core, and that if we want to adopt a realistic picture of model building in evolutionary subtheories such as population genetics we have to make alterations in the Poznan account.

There has been much dispute concerning the use of the method of idealization in ecology. Its use has been defended especially by those ecologists who, following the example of mechanistic physical science, believe that the quantitative, mathematical and analytical methods of the physical sciences are also applicable in ecology. On the other hand, its use has been criticized by those who have abandoned the mechanistic approach, arguing that it is incompatible with the view of the holistic, unique, and historically changing nature of ecological entities. We believe, however, that a more viable intermediate position between the extremes of mechanism and anti-mechanism exists. We call it the approach of interactive particularism.

Interactive particularism, developed in detail in the paper, is our ontological and methodological starting point. Two important additional theses are presupposed: the realistic conception of theory and the theory-ladenness of all data. First, the theoretical and the empirical contents of a theory are separated. Second, we claim that all observation, measurement, and experimentation in ecology are founded on theoretical ideas about the processes of data generation. Theory construction is analyzed as a process in which the theoretical content of a theory is gradually developed and explicated by using the methods of isolation, idealization, and concretization or specification. Theory construction which proceeds in this way typically results in a hierarchy of theoretical models with varying degrees of generality and realism. For empirical tests, the empirical content of a theory in some designed test situation must be specified via a theory of data generation. The relevant properties of the process of data generation are also defined step by step by using the methods of isolation, idealization, and concretization.

We shall reconstruct some basic ideas of Early Utilitarian Theorizing using the framework of the structuralist theory of science. We first divide the core assumptions of Early Utilitarianism into three groups: the Greatest Happiness Principle, the Impartiality Principle, and assumptions about the quantitative and qualitative aspects of pleasures. Historically, several substantially different versions of the Greatest Happiness Principle have been distinguished. We consider only two of them explicitly: Strict Universal Altruism and Classical Utilitarianism. There are also several formulations of the Impartiality Principle. We formulate two of them, a weak and a strong one. We study four different positions concerning the assumptions about the qualitative and quantitative aspects of pleasures. It is argued that the three core assumptions of Early Utilitarianism constitute three mutually independent (and compatible) sets. They make it possible to specialize the core assumptions in the structuralist sense. There are two basic theory-elements, a quantitative and a qualitative one, both constituting a theory-net. First, it is shown that three of the four positions on the relation of the qualitative and quantitative aspects of pleasures contain idealizing assumptions. Second, it is shown that applying utilitarian theorizing presupposes idealizations. The general form of idealizations can be crystallized using the Poznan School Theory of Idealizations. It turns out that the idealizations about the qualitative and quantitative aspects of pleasures are special, stronger cases of the general form of idealizations presupposed in applications of utilitarian theorizing.

In this paper I will discuss the question of how substantial theories can be connected with measurement theories in order to adequately specify the theoretical basis of the scores and numbers used in empirical research. The main problem dealt with here is that, although there is no empirical research without some form of measurement, in analyzing and reconstructing actual scientific theories and research processes it becomes apparent that there is little or no manifest connection to a theory of measurement. The most basic problem lies in the fact that measurement theories usually involve idealizations that make it difficult to apply them in actual empirical research practice. Measurement theory primarily deals with an ideal form of measurement, “representational” measurement, whereas the vast majority of variables, at least in the social and behavioral sciences, are amenable only to non-ideal forms of measurement, which are called “derived” and “quasi-representational.” These three types of measurement are explained with the use of specific examples, and some suggestions are given as to how they can be incorporated into theory-holons to form an adequate basis for deriving scale values for non-theoretical terms and assessing their scale type. In addition, most measurement-theoretical structures are strictly deterministic and pertain to ideal, error-free situations. This idealization can be dealt with by probabilistic formulations and specific error theories or, more simply, by testing derived statistical hypotheses. A short explanation of this latter approach, analyzing quasi-representational measurement, is also given.

It is common scientific experience that deterministic empirical laws are seldom exactly valid. In nearly every non-trivial empirical application there are some data which are not in complete correspondence with the law. The whole conceptual framework of structuralism has been developed with regard to deterministically formulated theories. The formulation and empirical application of probabilistic models has hardly ever been treated within the structuralist framework. In order to counter this deficiency, an attempt is made to apply the structuralist conception to the reconstruction of a probabilistic model together with common strategies of its empirical application. The probabilistic model which is to be reconstructed is the so-called BTL-model, which is of great importance in psychology.

Two alternative reconstructions are presented. The first reconstruction is nearer to the usual expositions of the model in textbooks of mathematical psychology. It will, however, be argued that this reconstruction does not provide an adequate conceptual basis for reconstructing the empirical application of the model. The second reconstruction is less similar to the usual expositions, but it is adequate in giving an account of how the model is empirically applied. The core idea of the latter reconstruction consists in interpreting probability as an idealized relative frequency.
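The BTL (Bradley-Terry-Luce) model referred to above has a standard form in mathematical psychology: each stimulus receives a positive scale value, and the probability of choosing one stimulus over another is the ratio of its scale value to the sum of the two. The following minimal sketch (scale values hypothetical, code illustrative of the model's form only, not of either reconstruction) shows the deterministic structure that the probabilistic reconstruction must idealize over:

```python
def btl_prob(v_a, v_b):
    """BTL choice probability: P(a chosen over b) = v(a) / (v(a) + v(b))."""
    return v_a / (v_a + v_b)

# Hypothetical scale values for three stimuli (for illustration only).
v = {"a": 3.0, "b": 2.0, "c": 1.0}

def odds(x, y):
    """Choice odds of x over y under the BTL model."""
    return btl_prob(v[x], v[y]) / btl_prob(v[y], v[x])

# A characteristic constraint of the model: odds multiply along a chain,
# since odds(x, y) = v(x)/v(y), so odds(a,b) * odds(b,c) = odds(a,c).
assert abs(odds("a", "b") * odds("b", "c") - odds("a", "c")) < 1e-12

print(btl_prob(v["a"], v["b"]))  # prints 0.6
```

The multiplicative odds constraint is what empirical choice frequencies are seldom in exact agreement with, which is precisely the gap between deterministic model structure and data that the abstract above addresses.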

In an earlier paper by Suck, the basic ingredients of mathematical probability were incorporated into the structuralist formulation of theories. In this paper we investigate the empirical claims of probability statements in some simple cases. We base our reconstruction on frequencies in the following special sense: the standard mathematical definitions of probability are used on the theoretical level, but frequencies are used in determining probabilities and in interpreting probability statements, i.e., in all kinds of empirical and practical issues in which probabilities play a part. Mathematically we base our considerations on the various laws of large numbers, which provide the connection between probabilities and frequencies. These theorems are not subject to dispute or conflicting opinions, and are therefore safe to start with. Relative frequencies are, in principle, easy to determine. Problems arise with the limiting process which the laws of large numbers entail and with the condition of independence. For the limit problem we draw on the topological construction of uniformities. As to the independence problem, we push the problem around a bit until it is connected to another problem for which no practical solution is known.
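The connection the abstract relies on, probabilities on the theoretical level and relative frequencies on the empirical level, can be illustrated by a small simulation (the probability value, trial counts, and code are illustrative assumptions, not taken from the paper). By the weak law of large numbers, the relative frequency of successes in independent Bernoulli trials converges in probability to the underlying probability; the passage to the limit itself is exactly the step that never occurs in finite empirical practice:

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def relative_frequency(p, n):
    """Relative frequency of successes in n independent Bernoulli(p) trials."""
    return sum(random.random() < p for _ in range(n)) / n

# As n grows, the relative frequency settles near p = 0.3, but any finite
# run only approximates the limit the law of large numbers speaks about.
for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(0.3, n))
```

Note that the simulation also presupposes independence of the trials, the second of the two problems the abstract identifies.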

Quantum theory of measurement is the part of quantum mechanics which investigates the measurement process as a physical process subject to the laws of quantum physics. This theory has both practical and conceptual dimensions, ranging from a study of the measurement accuracy limitations in instrumentation technology to an investigation of the fundamental issues of quantum mechanics. Recent advances in ultrahigh technology have brought these two divergent subjects nearer to each other, to a quite surprising extent. Extremely well-controlled experimentation on individual objects, like atoms, neutrons, electrons, or photons is a daily practice in experimental quantum physics. To say the least then, this progress has made it highly desirable to develop an interpretation of quantum mechanics which goes beyond the phenomenological level of a statistical interpretation. Progress on the mathematical and conceptual foundations of quantum mechanics, especially on its theory of measurement, together with some new ideas on the interpretation of the theory, like the modal interpretations, have made it possible to formulate in a systematic fashion an interpretation of quantum mechanics which exceeds a purely statistical level, and which comes closer to present day experimental practice. In this paper I shall attempt to review some of that development and to show, in particular, how an increase in the degree of idealization in measurement leads to a further structural specification of a measurement, thereby opening perspectives for enriching the statistical interpretation of quantum mechanics to its realistic interpretation as a theory of individual objects and their properties.