### Reviewed by Alexander Reutlinger and Stephan Hartmann

*Reconstructing Reality: Models, Mathematics, and Simulations*

Margaret Morrison

New York: Oxford University Press, 2015, £44.49 (hardback)

ISBN 9780199380275

In her new book *Reconstructing Reality* (henceforth RR), Margaret Morrison’s main target is the kind of information about the world (or, more specifically, about physical and biological systems) one can extract from the ‘reconstructive methods and practices of science’ (p. 1). To address this, Morrison focuses on three central kinds of interrelated strategies for ‘recasting nature’ (p. 2) by using reconstructive methods and practices: (i) abstract mathematical explanations and understanding (Part 1 of the book), (ii) scientific models (Part 2), and (iii) computer simulations (Part 3). Hence, RR is the ambitious attempt to connect and analyse three major topics in current philosophy of science.

Morrison does not approach these topics by developing a general philosophical theory of (various aspects of) explanation, understanding, models, and simulations in the sciences—where a general philosophical theory consists in the explication of concepts in the form of providing necessary and sufficient conditions. Instead, we think that it is fair to describe Morrison’s project as ‘particularist’ or even ‘therapeutic’—at least in spirit, in the sense of philosophers such as Wittgenstein and Rorty. (Morrison sometimes chooses terms such as ‘naturalist’, ‘minimalist’, or ‘deflationist’ to describe her own views.) Her major goal is to show that a general philosophical account of explanations and understanding, models, and simulations cannot be had. Or more cautiously put, Morrison argues that the general accounts currently on the table are all flawed and unsatisfactory in that these accounts do not cover all (or even the majority of) paradigmatic examples of explanations and understanding, models, and simulations in the sciences. Either way, RR is a book with provocative meta-philosophical claims and goals. Throughout the book, Morrison supports this particularist or therapeutic view by guiding the reader through a wealth of detailed case studies, each of which is intended to help undermine some aspect of some general account proposed in the literature.

Due to the sheer complexity and number of the case studies discussed in RR, it is impossible to summarize the content of the book in a review. What we will do instead is to highlight selected examples and claims from the three main parts of the book in order to present central aspects of Morrison’s particularist and therapeutic project.

In Part 1 (‘Mathematics, Explanation, and Understanding’), Morrison turns to the first kind of reconstructive method: an analysis of the role that ‘abstract’ mathematical assumptions play in the explanation and understanding of physical and biological phenomena. Morrison’s goal is not to argue for any general philosophical theories of explanation and of understanding. Following her particularist and therapeutic approach regarding both explanation and understanding in the sciences, Morrison instead adopts a minimalist view according to which ‘neither understanding nor explanation is capable of being successfully codified into a philosophical theory’ (p. 19). We take this to suggest that Morrison believes that no single account of explanation (such as the covering law account, the unification account, causal accounts, and so forth) both avoids serious counterexamples and covers all kinds of causal and non-causal (‘structural’) explanations.

Morrison’s focus is on ‘abstract’ mathematical assumptions that are key to explaining and understanding certain phenomena and for which ‘deidealization [of the abstract mathematical assumptions] isn’t possible’ (p. 21). Here, it is important to highlight how Morrison draws a line between idealization and abstraction. Morrison uses the abstract/idealized distinction in a rather idiosyncratic way: ‘abstraction is a process whereby we describe phenomena in ways that cannot possibly be realized in the physical world’ and ‘the mathematics associated with the description is necessary for modeling the system’ (p. 20). By contrast, idealization ‘typically involves a process of approximation whereby the system can become less idealized by adding correction factors’ (p. 20).

Morrison presents two main examples illustrating explanation and understanding based on abstract mathematical assumptions. Both examples concern limit theorems. The first example is the renormalization group explanation of phase transitions and universality involving the abstract mathematical assumption that ‘a system contains an infinite number of particles’ (p. 27), the so-called thermodynamic limit. Her second example is from biology and concerned with abstract mathematical assumptions involved in the application of the Hardy–Weinberg law, such as the assumption that a population of organisms is infinitely large.

Morrison defends such a reliance on mathematical assumptions against critics (including Callender and Earman, among others) who are portrayed as dismissing any idealization or abstraction as bad science if it cannot be de-idealized (p. 28). Morrison’s main line of argument against the critics is that their ‘line of reasoning quickly rules out explanations of the sort we deem acceptable in other contexts’ (p. 28). One example that Morrison uses to support this claim is the case of inter-theory relations. It is uncontroversial that characterizing the relations between theories (in physics) often involves limit theorems. However, the critics of limit theorems in the context of renormalization group explanations do not seem to be critical of limit theorems in the context of inter-theoretical relations, or so Morrison argues (p. 29). Furthermore, Morrison provides a stimulating and detailed discussion of how abstract mathematical assumptions are linked with statistical analyses in R. A. Fisher’s model in population ecology (pp. 40–8).

Morrison also tackles another big issue regarding so-called mathematical explanations. A large part of the recent literature on this subject has revolved around making a sensible distinction between ‘purely’ or ‘distinctively’ mathematical explanations of physical phenomena and merely mathematized explanations thereof (p. 50). Morrison proposes to side-step this issue in a particularist and therapeutic fashion: ‘the worry isn’t whether the explanation is truly mathematical but rather whether there is any genuine physical information over and above the calculation or prediction of numerical values for specific parameters and effects’ (p. 55). To address this extremely interesting question, Morrison returns to one of her main examples of an abstract mathematical assumption figuring in the explanation and understanding of a physical phenomenon—renormalization group explanations of universality involving the thermodynamic limit. Morrison defends the claim that the abstract mathematical assumptions (such as the thermodynamic limit) within renormalization group explanations provide genuine physical information, because—without the limit assumption—it is not possible to ‘show *how* the coupling constants at different length scales could be computed; *how* critical exponents could be estimated; and, finally, that the universality followed, in a sense, from the fact that the process [of applying renormalization group transformations] can be iterated’ (p. 78).

In Part 2 (‘Where Models Meet the World: Problems and Perspectives’), Morrison lays out several extended case studies of models, including Maxwell’s ether model and the Fisher–Wright model in population genetics. Morrison uses these case studies to support her particularist and therapeutic stance. First, she argues against fictionalism about models as being ‘at once too broad and too narrow’ (p. 90), targeting views by Cartwright, Frigg, Godfrey-Smith, and others. Morrison articulates her general line of critique as follows: ‘The general characterization of models as fictions seems unable to do justice to the way models are used in unearthing aspects of the world we want to understand’ (p. 97). Drawing on a complex historical case study of Maxwell’s ether model (pp. 98–112), Morrison attempts to show how a toy model (containing certain fictional elements) can be used to do precisely what fictionalists fail to capture—namely, to ‘unearth aspects of the world’ (including mechanical aspects, pp. 110–11) we wish to explain and understand.

The second target of Morrison’s particularist and therapeutic approach is the debate on scientific representation in the context of models. Morrison focuses on accounts of scientific representation by Cohen and Callender, Frigg, Giere, Suarez, and van Fraassen. Although Morrison is explicitly sympathetic to various features of these accounts, she argues that general philosophical accounts of representation—referring to truth, approximate truth, isomorphism, and so on—are wrong-headed. Morrison advocates a ‘deflationary attitude’ (p. 125) towards scientific representation. No general philosophical account of scientific representation is needed to state why some particular model is useful for representing a target: ‘for example, Maxwell’s ether model was a useful representation, not because it was true or approximately true but because it could be used to generate hypotheses about how electromagnetic waves were propagated through space’. This ‘deflationary’ case study-based approach is, according to Morrison, not an ‘anything goes’ view, since particular models are subject to specific modeling constraints, such as: ‘But that model [that is, Maxwell’s ether model] needed to represent the ether as a mechanical system if it was to yield useful results’ (p. 128). Morrison’s more general lesson regarding scientific representation is this: ‘we don’t need a metaphysics of representation to understand how specific examples of representational models can convey information about their target’ (p. 129). Furthermore, Morrison provides a detailed and critical discussion of the (alleged) role of inconsistencies in science (see Chapter 5). Once more, the upshot of Morrison’s discussion regarding (alleged) inconsistencies in science is particularist and therapeutic: ‘What is significant for philosophical purposes is that this is not a situation that is resolvable using strategies like partial structures, paraconsistent logic, or perspectivism. No amount of philosophical wizardry can solve what is essentially a scientific problem of theoretical consistency and coherence’ (p. 192).

In Part 3 (‘Computer Simulation: The New Reality’), Morrison investigates philosophical issues arising in the context of computer simulations. One such issue concerns a central question in the recent epistemology of computer simulation: the question of whether computer simulations (or computer simulation studies) provide the same kind of knowledge as laboratory experiments do. A plausible reason for believing that computer simulations do not yield the same kind of knowledge as laboratory experiments is expressed by the ‘materiality condition’, that is, ‘the idea that experimental inquiry typically involves not only a concrete physical system but typically one that is made of the “same stuff” as the target’ (p. 211). Morrison does not accept the materiality condition and puts more emphasis on ‘formal similarity’ between the model underlying the simulation and its target (p. 211). Simulations sometimes have the epistemic function of a traditional lab experiment (especially, but not only, in situations in which we cannot perform traditional lab experiments): ‘for instance, measuring the values of critical exponents in second-order phase transitions is typically done via simulations. Given that simulation is capable of generating enough data to take account of an extremely large number of scenarios and conditions, it would seem that it can provide a much more reliable way to determine or measure the values of parameters than experiment, even when the latter might be possible’ (p. 238). And in the same vein, Morrison writes: ‘the computational power provided by CS [computer simulations] allows us to uncover features of physical systems that may be implicit in the equations but would otherwise remain unknown’ (p. 250).

However, she also points out that it is too simple-minded (and ultimately not interesting) to focus exclusively on the question of whether computer simulations are just like ordinary lab experiments, and to then search for a general philosophical account of experiments applying to both traditional lab experiments and computer simulation studies. The lesson of her particularist and therapeutic approach to the epistemology of computer simulations is summarized in two quotes. First: ‘my intention was not to argue that simulation is just like experiment but rather to emphasize that the conditions under which we have faith in the experimental data are sometimes also replicated by CS [computer simulations]’ (p. 249). Second: ‘the claim that simulation is, in essence, epistemically inferior to experiment is simply not true’ (p. 316). Morrison supports and supplements this view by (a) a detailed analysis of the validation and verification of computer simulations, especially engaging with Winsberg’s work (pp. 253–84), and by (b) analysing the role of computer simulations in the discovery of the Higgs boson.

In sum, Morrison’s new book is a rich and provocative contribution to several core debates in current philosophy of science such as explanations and understanding, idealizations, models, and computer simulations. Even if one is not convinced by all of the points that Morrison makes in (what we call) a ‘particularist’ and ‘therapeutic’ style, one thing is for sure: if one wants to defend a ‘general philosophical account’ of some of the discussed topics, then one has to respond to the criticisms forcefully presented in RR.

*Alexander Reutlinger and Stephan Hartmann
Munich Center for Mathematical Philosophy, LMU Munich
Munich, Germany
Alexander.Reutlinger@lmu.de
S.Hartmann@lmu.de*