Edited Volumes:
1) Why Trust a Theory? Epistemology of Fundamental Physics (2019). Cambridge University Press (with R. Dawid and K. Thébault)
Abstract: Do we need to reconsider scientific methodology in light of modern physics? Has the traditional scientific method become outdated, does it need to be defended against dangerous incursions, or has it always been different from what the canonical view suggests? To what extent should we accept non-empirical strategies for scientific theory assessment? Many core aspects of contemporary fundamental physics are far from empirically well-confirmed. There is controversy over the epistemic status of the corresponding theories, in particular cosmic inflation, the multiverse, and string theory. This collection of essays is based on the high-profile workshop 'Why Trust a Theory?' and provides interdisciplinary perspectives on empirical testing in fundamental physics from leading physicists, philosophers and historians of science. Integrating different contemporary and historical positions, it will be of interest to philosophers of science and physicists, as well as anyone interested in the foundations of contemporary science.
The workshop has been discussed, e.g., here, here and here; a video of my talk can be found here.
2) Symmetries and Asymmetries in Physics (2020). Synthese Topical Collection (with Mathias Frisch and Giovanni Valente).
Papers:
- "Unsharp Humean Chances in Statistical Physics: A Reply to Beisbart." (with Luke Glynn, Karim Thébault and Mathias Frisch). In: Maria Carla Galavotti, Dennis Dieks, Wenceslao J. Gonzalez, Stephan Hartmann, Thomas Uebel, Marcel Weber (Eds.), New Directions in the Philosophy of Science, Springer, 531-542. (Draft Version, 869 KB)
Beisbart argues that the claim that the probabilities of statistical mechanics (SM) are Best System chances runs into a serious obstacle: there is no one axiomatization of SM that is robustly
best, as judged by the theoretical virtues of simplicity, strength, and fit. Beisbart takes this 'no clear winner' result to imply
that the probabilities yielded by the competing axiomatizations simply fail to count as Best System chances. In this reply, we
express sympathy for the 'no clear winner' thesis. However, we argue that an importantly different moral should be drawn
from this. We contend that the implication for Humean chances is not that there are no SM chances, but rather that SM
chances fail to be sharp.
- "Confirmation via Analogue Simulation: What Dumb Holes Could Tell Us About Gravity" (with Karim Thébault and Eric Winsberg). The British Journal for the Philosophy of Science, 68 (1), 55–89. (Draft Version, 247 KB)
We argue that analogue simulation constitutes a novel form of scientific inference with the potential to be confirmatory. This notion is distinct from the modes of analogical reasoning detailed in the literature, and
draws inspiration from fluid dynamical ‘dumb hole’ analogues to gravitational black holes. For that case, which is considered
in detail, we defend the claim that the phenomenon of gravitational Hawking radiation could be confirmed if its
counterpart is detected within experiments conducted on diverse realizations of the analogue model. A prospectus is given
for further potential cases of analogue simulation in contemporary science.
- "On the Empirical Consequences of the AdS/CFT Duality" (with Richard Dawid, Sean Gryb and Karim Thébault). In: Huggett et al. (Eds.), Beyond Spacetime, Cambridge University Press.
We consider the AdS/CFT duality in a fundamental theory, effective theory and instrumental context. Analysis of the first two contexts is intended to serve as a
guide to the potential empirical and ontological status of gauge/gravity dualities as descriptions of actual physics at the
Planck scale. The third context is directly connected to the use of AdS/CFT to describe real quark-gluon plasmas. In the
latter context, we find that neither of the two duals is confirmed by the empirical data.
- Physics Without Experiments? In: Epistemology of Fundamental Physics: Why Trust a Theory? (Draft Version)
The scarcity of empirical data in parts of fundamental physics has motivated alternative methods of theory assessment that do not rely on experiments. For instance, Dawid [2013] has proposed a non-empirical method of theory assessment which strongly relies on the concept of theory space. We will argue that the lack of empirical data, as well as this newly proposed methodology, require a change in scientific practice, namely towards an active search for alternative competing theories. We further argue that this change in practice would face at least three challenges, which illustrate the difficulty of implementing this change of focus in practice.
- Symmetry Breaking. A brief introduction to the physics and philosophy of symmetry breaking.
- Assessing Scientific Theories: A Bayesian Analysis (with Stephan Hartmann). In: Epistemology of Fundamental Physics: Why Trust a Theory?
Scientific theories are traditionally assessed by empirical evidence, but other more controversial methods have been proposed, especially in fundamental physics. Amongst these methods are the use of analogue experiments and so-called non-empirical ways of theory assessment such as the no-alternatives argument. But how can these methods themselves be assessed? Are they reliable guides to the truth, or are they of no help at all when it comes to assessing scientific theories? In this chapter, we develop a general Bayesian framework to scrutinize these new (as well as standard empirical) methods of assessing scientific theories and illustrate the proposed methodology by two detailed case studies. This allows us to explore under which conditions non-traditional ways of assessing scientific theories are successful and what can be done to improve them.
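The no-alternatives argument mentioned above lends itself to a small Bayesian illustration. The following toy model is my own sketch, not the chapter's actual formalism: all priors, likelihoods and the model structure (a uniform prior over how many adequate theories exist, and a fixed per-alternative "miss" probability for the search) are illustrative assumptions.

```python
# Toy sketch of the no-alternatives argument (NAA) in Bayesian terms.
# All numbers and the model itself are illustrative assumptions.

# A_j: there are exactly j empirically adequate theories of the domain.
priors = {j: 0.2 for j in range(1, 6)}  # assumed uniform prior over j = 1..5

def p_h_given_j(j):
    """Assumption: given j adequate theories, our theory H is true with prob 1/j."""
    return 1.0 / j

def p_f_given_j(j, miss=0.5):
    """F: a sustained search found no alternative to H. Assumption: each of
    the j-1 alternatives is independently missed with probability `miss`."""
    return miss ** (j - 1)

# Prior probability of H, marginalizing over j.
p_h = sum(priors[j] * p_h_given_j(j) for j in priors)

# Bayes: posterior over j after learning F, then marginalize to get P(H|F).
norm = sum(priors[j] * p_f_given_j(j) for j in priors)
post = {j: priors[j] * p_f_given_j(j) / norm for j in priors}
p_h_given_f = sum(post[j] * p_h_given_j(j) for j in priors)

print(round(p_h, 3), round(p_h_given_f, 3))  # failure to find alternatives confirms H
```

Under these assumptions, learning that no alternative was found shifts weight toward small j, and hence raises the probability of H — the qualitative core of the no-alternatives argument.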
- "Hawking Radiation and Analogue Experiments: A Bayesian Analysis" (with Stephan Hartmann, Karim Thébault and Eric Winsberg). Studies in History and Philosophy of Modern Physics, 67, 1-11.
We provide a Bayesian analysis of the epistemic value of analogue experiments. First, we prove that such experiments can be confirmatory in Bayesian terms based upon appeal to ‘universality arguments’.
Second, we provide a formal model for the scaling behaviour of the confirmation measure for multiple distinct realisations of
the analogue system and isolate a generic saturation feature. Finally, we demonstrate that different potential analogue
realisations could provide different levels of confirmation. Our results provide a basis both to formalise the epistemic value of
analogue experiments that have been conducted and to advise scientists as to the respective epistemic value of future
analogue experiments.
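The saturation feature described in the abstract can be illustrated with a deliberately simple Bayesian update. This is a toy sketch, not the paper's formal model: the prior and the likelihoods of a detection under H and not-H are assumed numbers, and each analogue realisation is treated as an independent, identically weighted piece of evidence.

```python
# Toy Bayesian sketch (not the paper's actual formal model) of how repeated
# detections of analogue Hawking radiation in distinct analogue systems
# confirm a hypothesis H, with diminishing increments ("saturation").

def update(prior, p_e_h, p_e_nh):
    """One Bayesian update on a single detection event e."""
    return p_e_h * prior / (p_e_h * prior + p_e_nh * (1 - prior))

prior = 0.5                # assumed prior for H
p_e_h, p_e_nh = 0.9, 0.5   # assumed likelihoods of a detection given H / not-H

posteriors, p = [], prior
for _ in range(6):         # six hypothetical analogue realisations
    p = update(p, p_e_h, p_e_nh)
    posteriors.append(p)

increments = [b - a for a, b in zip([prior] + posteriors, posteriors)]
print([round(x, 3) for x in posteriors])   # posterior rises toward 1 ...
print([round(x, 3) for x in increments])   # ... while each realisation adds less
```

With these numbers the posterior climbs monotonically while the confirmation gained per additional realisation shrinks — a generic saturation pattern of iterated Bayesian updating on same-weight evidence.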
- "The Ethical Dilemma when (not) Setting up Cost-based Decision Rules in Semantic Segmentation" (with Robin Chan, Matthias Rottmann, Fabian Hüger, Peter Schlicht, Hanno Gottschalk). Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshop 2019.
Semantic segmentation networks output, for each pixel, a probability distribution on predefined classes. The predicted class is then usually obtained by the maximum a-posteriori probability (MAP), which is known as the Bayes rule in decision theory. From decision theory we also know that the Bayes rule is optimal with respect to the simple symmetric cost function; consequently, it weights each type of confusion between two different classes equally. E.g., given images of urban street scenes, there is no distinction in the cost function if the network confuses a person with a street or a building with a tree. Intuitively, however, some confusions of classes are more important to avoid than others. In this work, we want to raise awareness of the possibility of explicitly defining confusion costs, and of the associated ethical difficulties when it comes down to providing actual numbers. We define two cost functions from different extreme perspectives, an egoistic and an altruistic one, and show how safety-relevant quantities like precision/recall and the (segment-wise) false positive/negative rate change when interpolating between the MAP, egoistic and altruistic decision rules.
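The contrast between the MAP rule and a cost-based decision rule can be sketched for a single pixel as follows. The class names, posterior values and cost matrix are hypothetical illustrations of the general idea, not numbers from the paper.

```python
# Hypothetical single-pixel sketch: MAP rule vs. cost-based decision rule.
# All class names, probabilities and costs are illustrative assumptions.

CLASSES = ["road", "person", "building"]

# Assumed softmax output of a segmentation network for one pixel.
posterior = {"road": 0.55, "person": 0.35, "building": 0.10}

# cost[true][predicted]: overlooking a person (predicting road/building when
# the true class is person) is far more costly than a false person alarm.
cost = {
    "road":     {"road": 0.0,  "person": 1.0, "building": 1.0},
    "person":   {"road": 50.0, "person": 0.0, "building": 50.0},
    "building": {"road": 1.0,  "person": 1.0, "building": 0.0},
}

def map_rule(posterior):
    """Maximum a-posteriori: pick the most probable class."""
    return max(posterior, key=posterior.get)

def cost_based_rule(posterior, cost):
    """Pick the class that minimizes the expected confusion cost."""
    def expected_cost(pred):
        return sum(posterior[true] * cost[true][pred] for true in CLASSES)
    return min(CLASSES, key=expected_cost)

print(map_rule(posterior))               # 'road'
print(cost_based_rule(posterior, cost))  # 'person' — the cautious decision
```

The two rules disagree exactly when a less probable class carries a high confusion cost; here a 35% chance of a person is enough to override the MAP verdict, which is why the choice of cost numbers is ethically loaded.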
- "No-Go Theorems: What Are They Good For?" Studies in History and Philosophy of Science Part A, 86, 47-55.
No-go theorems have played an important role in the development and assessment of scientific theories. They have stopped whole research programs and have given rise to strong ontological commitments. Given the importance they have obviously had in physics, and the huge amount of literature on the consequences of specific no-go theorems, relatively little attention has been paid to the more abstract assessment of no-go theorems as a tool in theory development. We here provide this abstract assessment of no-go theorems and conclude that the methodological implications one may draw from no-go theorems are in disagreement with the implications that have often been drawn from them in the history of science.
- "What should AI see? Using the Public's Opinion to Determine the Perception of an AI" (with Robin Chan, Meike Osinski, Matthias Rottmann, Dominik Brüggemann, Cilia Rücker, Peter Schlicht, Fabian Hüger, Nikol Rummel, Hanno Gottschalk)
Deep neural networks (DNNs) have made impressive progress in the interpretation of image data, so that it is conceivable, and to some degree realistic, to use them in safety-critical applications like automated driving. From an ethical standpoint, the AI algorithm should take into account the vulnerability of objects or subjects on the street, which ranges from "not at all", e.g. the road itself, to the "high vulnerability" of pedestrians. One way to take this into account is to define the cost of confusing one semantic category with another and to use cost-based decision rules for the interpretation of probabilities, which are the output of DNNs. However, it is an open problem how to define the cost structure, who should be in charge of doing so, and thereby of defining what AI algorithms will actually "see". As one possible answer, we follow a participatory approach and set up an online survey asking the public to define the cost structure. We present the survey design and the data acquired, along with an evaluation that also distinguishes between perspective (car passenger vs. external traffic participant) and gender. Using simulation-based F-tests, we find highly significant differences between the groups. These differences have consequences for the reliable detection of pedestrians at a safety-critical distance from the self-driving car. We discuss the ethical problems related to this approach, and also discuss, from a psychological point of view, the problems emerging from human-machine interaction through the survey. Finally, we include comments from industry leaders in the field of AI safety on the applicability of survey-based elements in the design of AI functionalities in automated driving.
In Progress:
- Understanding Scientific Problems
- What is this thing called theory space?
- The empirical progress of non-empirical research
- Testing Individual Assumptions
- A multi-layered approach to trust in science