The proponents of Marxism, however, neither abandoned the theory as falsified nor introduced any new, falsifiable auxiliary hypotheses that might account for the failed predictions. Instead, they adopted ad hoc hypotheses that immunized Marxism against any potentially falsifying observations whatsoever.
For example, the continued persistence of capitalism might be blamed on the actions of counter-revolutionaries, without any account of which specific actions these were or of what new predictions about society we should expect instead. Popper concludes that, while Marxism had originally been a scientific theory:

It broke the methodological rule that we must accept falsification, and it immunized itself against the most blatant refutations of its predictions. Ever since then, it can be described only as non-science—as a metaphysical dream, if you like, married to a cruel reality.

A second complication for the simple theory of falsification just described concerns the character of the observations that count as potential falsifiers of a theory.
The problem here is that decisions about whether to accept an apparently falsifying observation are not always straightforward. For example, there is always the possibility that a given observation does not accurately represent the phenomenon but instead reflects theoretical bias or measurement error on the part of the observers. In any specific case in which bias or error is suspected, Popper notes that researchers might introduce a falsifiable auxiliary hypothesis that allows them to test this possibility.
And in many cases, this is just what they do: students redo the test until they get the expected results, or other research groups attempt to replicate the anomalous result obtained. Popper argues that this technique cannot solve the problem in general, however, since any auxiliary hypotheses researchers introduce and test will themselves be open to dispute in just the same way, and so on ad infinitum.
If science is to proceed at all, then, there must be some point at which the process of attempted falsification stops. In order to resolve this apparently vicious regress, Popper introduces the idea of a basic statement: an empirical claim that can be used both to determine whether a given theory is falsifiable (and thus scientific) and, where appropriate, to corroborate falsifying hypotheses. More specifically, basic statements must be both singular and existential (the formal requirement) and testable by intersubjective observation (the material requirement). Every test of a theory, whether resulting in its corroboration or falsification, must stop at some basic statement or other which we decide to accept.
If we do not come to any decision, and do not accept some basic statement or other, then the test will have led nowhere… This procedure has no natural end. Thus if the test is to lead us anywhere, nothing remains but to stop at some point or other and say that we are satisfied, for the time being. Finally, if the scientific community cannot reach a consensus on what would count as a falsifier for the disputed statement, the statement itself, despite initial appearances, may not actually be empirical or scientific in the relevant sense.
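The logical role that basic statements play can be made explicit with a schematic example (the formalization below is our illustration, not a quotation from Popper). A universal law has the form of a strictly universal statement, whereas a basic statement is singular and existential, and the two can stand in outright contradiction:

\[
\forall x\,\bigl(S(x) \rightarrow W(x)\bigr)\ \text{(all swans are white)}
\qquad\text{is contradicted by}\qquad
\exists x\,\bigl(S(x) \wedge \neg W(x) \wedge L(x,k)\bigr)\ \text{(there is a non-white swan at spatio-temporal region } k\text{)}.
\]

By contrast, no finite conjunction of positive instances, $S(a_1) \wedge W(a_1), \ldots, S(a_n) \wedge W(a_n)$, logically entails the universal law. An accepted basic statement can therefore falsify a law by modus tollens, while no accumulation of favourable instances can verify it.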
Popper agrees with Hume that inductive reasoning, in the sense of inferring general laws from particular observed instances, cannot be justified, and he thus rejects the idea that empirical evidence regarding particular individuals, such as successful predictions, is in any way relevant to confirming the truth of general scientific laws or theories. Popper argues that there are in fact two closely related problems of induction: the logical problem of induction and the psychological problem of induction. The first problem concerns the possibility of justifying belief in the truth or falsity of general laws based on empirical evidence that concerns only specific individuals.
However, Popper claims that while a successful prediction is irrelevant to confirming a law, a failed prediction can immediately falsify it. In contrast to the logical problem of induction, the psychological problem of induction concerns the possibility of explaining why reasonable people nevertheless have the expectation that unobserved instances will obey the same general laws as did previously observed instances. While the technical details of Popper's account of theory preference evolve throughout his writings, he consistently emphasizes two main points.
First, he holds that a theory with greater informative content is to be preferred to one with less content. Here, informative content is a measure of how much a theory rules out; roughly speaking, a theory with more informative content makes a greater number of empirical claims, and thus has a higher degree of falsifiability.
The question of theory choice is tightly tied to that of confirmation: on the view Popper rejects, scientists should adopt whichever theory is most probable in light of the available evidence, and a successful prediction, subject to certain caveats, provides evidence that the theory in question is actually true.
Instead, a corroborated theory has shown merely that it is the sort of theory that could be falsified and thus can be legitimately classified as scientific. While a corroborated theory should obviously be preferred to an already falsified rival (see Section 2), the real work here is being done by the falsified theory, which has taken itself out of contention.
While Popper consistently rejects the idea that we are justified in believing that non-falsified, well-corroborated scientific theories with high levels of informative content are either true or likely to be true, his work on degrees of verisimilitude explores the idea that such theories are closer to the truth than were the falsified theories that they had replaced.
The basic idea is as follows: one theory is closer to the truth than another if it has greater truth content while not having greater falsity content. With this definition in hand, it might seem that Popper could incorporate truth into his account of theory preference: non-falsified theories with high levels of informative content are closer to the truth than either the falsified theories they replaced or their unfalsified but less informative competitors.
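Stated a little more formally (this is the standard reconstruction of Popper's comparative proposal, not a quotation), where $\mathrm{Ct}_T(A)$ and $\mathrm{Ct}_F(A)$ are the sets of true and false consequences of a theory $A$:

\[
\mathrm{Vs}(A) > \mathrm{Vs}(B) \iff \mathrm{Ct}_T(B) \subseteq \mathrm{Ct}_T(A)\ \text{and}\ \mathrm{Ct}_F(A) \subseteq \mathrm{Ct}_F(B),
\]

with at least one of the two inclusions being proper.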
While Popper explores ways of modifying his proposal to deal with the formal problems it faces (most notably, the result that on this definition no false theory can ever be closer to the truth than any other), he is never able to provide a satisfactory formal definition of verisimilitude.
His work in this area is nevertheless invaluable in identifying a problem that has continued to interest many contemporary researchers. Popper's falsificationism has attracted a number of criticisms and rival proposals; while a comprehensive list of these is beyond the scope of this entry, interested readers are encouraged to consult Kuhn, Salmon, Lakatos, Putnam, Jeffrey, Feyerabend, Hacking, and Howson and Urbach. One criticism of falsificationism involves the relationship between theory and observation.
Because of this, those holding different theories might report radically different observations, even when they are both observing the same phenomena. For example, Kuhn argues that those working within the paradigm provided by classical, Newtonian mechanics may genuinely have different observations than those working within the very different paradigm of relativistic mechanics.
Popper's solution to this problem, however, crucially depends on the ability of the overall scientific community to reach a consensus as to which statements count as basic and thus can be used to formulate tests of the competing theories. This remedy looks less attractive to the extent that advocates of different theories consistently find themselves unable to reach agreement on which sentences count as basic.
Instead, the results of any such potentially falsifying experiment would be interpreted by one part of the community as falsifying a particular theory, while a different section of the community would demand that these reports themselves be subjected to further testing.
In this way, disagreements over the status of basic sentences would effectively prevent theories from ever being falsified. A second, related criticism contends that falsificationism fails to provide an accurate picture of scientific practice. Specifically, many historians and philosophers of science have argued that scientists only rarely give up their theories in the face of failed predictions, even in cases where they are unable to identify testable auxiliary hypotheses.
Instead, scientists will generally hold on to such theories unless and until a better alternative emerges; conversely, it has been suggested that scientists routinely adopt and make use of theories that they know are already falsified. For example, Lakatos describes a hypothetical case where pre-Einsteinian scientists discover a new planet whose behavior apparently violates classical mechanics. Lakatos argues that, in such a case, the scientists would surely attempt to account for the observed discrepancies in the way that Popper advocates—for example, by hypothesizing the existence of a hitherto unobserved planet or dust cloud.
In contrast to what he takes Popper to be arguing, however, Lakatos contends that the failure of such auxiliary hypotheses would not lead the scientists to abandon classical mechanics, since they would have no alternative theory to turn to.
In a similar vein, Putnam argues that the initial widespread acceptance of Newtonian mechanics had little or nothing to do with falsifiable predictions, since the theory made very few of these. Finally, Hacking argues that many aspects of ordinary scientific practice, including a wide variety of observations and experiments, cannot plausibly be construed as attempts to falsify or corroborate any particular theory or hypothesis.
Instead, scientists regularly perform experiments that have little or no bearing on their current theories and measure quantities about which these theories do not make any specific claims. When considering the cogency of such criticisms, several things are worth noting. First, Popper defends falsificationism as a normative, methodological proposal for how science ought to work in certain sorts of cases, and not as an empirical description intended to accurately capture all aspects of historical scientific practice.
Second, Popper does not commit himself to the implausible thesis that theories yielding false predictions about a particular phenomenon must immediately be abandoned, even if it is not apparent which auxiliary hypotheses must change. This is especially true in the absence of any rival theory yielding a correct prediction. This being said, Popper himself argues that the methodology of falsificationism has played an important role in the history of science and that adopting his proposal would not require a wholesale revision of existing scientific methodology.
For example, Popper explicitly rejects the idea that corroboration is intended as an analogue to the subjective probability or logical probability that a theory is true, given the available evidence. Urbach argues that, insofar as Popper is committed to the claim that every universal hypothesis has zero probability of being true, he cannot explain the rationality of adopting a corroborated theory over an already falsified one, since both have the same probability, namely zero, of being true.
While the sorts of objections mentioned here have led many to abandon falsificationism, David Miller provides a recent, sustained attempt to defend a Popperian-style critical rationalism.
For more details on debates concerning confirmation and induction, see the entries on Confirmation and Induction and on Evidence. While Popper grants that realism is, according to his own criteria, an irrefutable metaphysical view about the nature of reality, he nevertheless thinks we have good reasons for accepting realism and for rejecting anti-realist views such as idealism or instrumentalism.
In particular, he argues that realism is both part of common sense and entailed by our best scientific theories. Once one accepts the impossibility of securing certain knowledge, as Popper contends we ought to do, the appeal of these sorts of anti-realist arguments is considerably diminished.
Popper consistently emphasizes that scientific theories should be interpreted as attempts to describe a mind-independent reality. Because of this, he rejects the Copenhagen interpretation of quantum mechanics, in which the act of human measurement is seen as playing a fundamental role in collapsing the wave-function and randomly causing a particle to assume a determinate position or momentum.
In particular, Popper opposes the idea, which he associates with the Copenhagen interpretation, that the probabilistic equations describing the results of potential measurements of quantum phenomena are about the subjective states of the human observers, rather than about mind-independently existing physical properties such as the positions or momenta of particles.
It is in the context of this debate over quantum mechanics that Popper first introduces his propensity theory of probability. Popper proposes his propensity theory as a variant of the relative frequency theories of probability defended by logical positivists such as Richard von Mises and Hans Reichenbach. According to simple versions of frequency theory, the probability of an event of type e can be defined as the relative frequency of e in a large, or perhaps even infinite, reference class.
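In schematic form (a textbook rendering rather than a quotation from Popper or the frequency theorists), if $n$ members of the reference class have been examined and $n_e$ of them are of type $e$, then the frequency theorist defines

\[
P(e) \;=\; \lim_{n \to \infty} \frac{n_e}{n}.
\]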
The main alternatives to frequency theory that concern Popper are logical and subjective theories of probability, according to which claims about probability should be understood as claims about the strength of evidence for or degree of belief in some proposition. Like other defenders of frequency theories, Popper argues that logical or subjective theories incorrectly interpret scientific claims about probability as being about the scientific investigators, and the evidence they have available to them, rather than the external world they are investigating.
However, Popper argues that traditional frequency theories cannot account for single-case probabilities. While such theories can handle claims about the relative frequency of rainy days within a large class of August days, questions about the probability that it will rain on one particular, future August day raise problems, since each particular day occurs only once.
At best, frequency theories allow us to say that the probability of rain on that specific day is either 0 or 1, though we do not know which. To resolve this issue, Popper proposes that probabilities be treated as propensities of experimental setups to produce certain results, rather than as being derived from the reference class of results produced by running those experiments. On the propensity view, the results of experiments are important because they allow us to test hypotheses concerning the values of certain probabilities; the results are not, however, themselves part of the probability.
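The contrast can be illustrated with a toy sketch (the class, the numeric propensity of 0.3, and the variable names are invented for illustration; nothing here is drawn from Popper's own examples):

```python
import random

class ExperimentalSetup:
    """A minimal stand-in for an 'experimental setup' with a fixed propensity.

    On the propensity reading, the probability (here 0.3) is a property of the
    generating conditions themselves, whether or not the experiment is ever run.
    """

    def __init__(self, propensity: float):
        self.propensity = propensity

    def run(self) -> bool:
        # A single trial: the outcome is produced by the setup's propensity.
        return random.random() < self.propensity


setup = ExperimentalSetup(propensity=0.3)

# A frequency theorist identifies the probability with the relative frequency
# of outcomes in a (large) reference class of repeated trials:
trials = [setup.run() for _ in range(100_000)]
print("relative frequency over many trials:", sum(trials) / len(trials))

# On the propensity view, repeated trials merely test a hypothesis about the
# setup's propensity; the propensity itself is well defined even for a setup
# that is run only once (or never).
one_off = ExperimentalSetup(propensity=0.3)
print("single-case propensity:", one_off.propensity)
print("one-off outcome:", one_off.run())
```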
Popper argues that this solves the problem of single-case probability, since propensities can exist even for experiments that only happen once. Importantly, Popper does not require that these experiments utilize human intervention—instead, nature can itself run experiments, the results of which we can observe. For example, the propensity theory should, in theory, be able to make sense of claims about the probability that it will rain on a particular day, even though the experimental setup in this case is constituted by naturally occurring, meteorological phenomena.
Popper argues that the propensity theory of probability helps provide the grounds for a realist solution to the measurement problem within quantum mechanics. As opposed to the Copenhagen interpretation, which posits that the probabilities discussed in quantum mechanics reflect the ignorance of the observers, Popper argues these probabilities are in fact the propensities of the experimental setups to produce certain outcomes.
Interpreted this way, he argues that they raise no interesting metaphysical dilemmas beyond those raised by classical mechanics and that they are equally amenable to a realist interpretation. Consider, by way of a classical analogue, the toss of a penny: the setup has some propensity to yield heads before anyone looks at the result. If the experimental setup, however, is expanded to include the results of our looking at the penny, and thus includes the outcome of the experiment itself, then the probability will be either 0 or 1.
This does not, though, involve positing any collapse of the wave-function caused merely by the act of human observation. Instead, what has occurred is simply a change in the experimental setup. Once we include the measurement result in our setup, the probability of a particular outcome will trivially become 0 or 1. This picture becomes somewhat more complicated, however, when we consider methodology in social sciences such as sociology and economics, where experimentation plays a much less central role.
This stands in stark contrast to disciplines such as physics, where the formulation and testing of laws plays a central role in making progress. If the relevant theories are falsified, scientists can easily respond, for instance, by changing one or more auxiliary hypotheses, and then conducting additional experiments on the new, slightly modified theory.
By contrast, a law that purports to describe the future progress of history in its entirety cannot easily be tested in this way. Even if a particular prediction about the occurrence of some particular event proves incorrect, there is no way of altering the theory and then retesting it: each historical event occurs only once, ruling out the possibility of carrying out further tests regarding that event.
Popper also rejects the claim that it is possible to formulate and test laws of more limited scope, such as those that purport to describe an evolutionary process that occurs in multiple societies, or that attempt to capture a trend within a given society.
The plans of utopian social engineers cannot be tested piecemeal, because of the holism of such plans, which involve changing everything at the same time. This lack of testability, in turn, means that there is no way for the utopian engineers to improve their plans. In place of historicism and utopian holism, Popper argues that the social sciences should embrace both methodological individualism and situational analysis, on which social institutions are largely the unplanned outcomes of the actions of individuals. Scientific hypotheses about the behavior of such unplanned institutions, then, must be formulated in terms of their constituent participants.
For both Popper and Hayek, the defense of methodological individualism within the social sciences plays a key role in their broader argument in favor of liberal, market economies and against planned economies. While Popper endorses methodological individualism, he rejects the doctrine of psychologism , according to which laws about social institutions must be reduced to psychological laws concerning the behavior of individuals.
Popper objects to this view, which he associates with John Stuart Mill, on the grounds that it ends up collapsing into a form of historicism. Since individual motives always operate within a particular social environment, eliminating the reference to the social institutions that make up this environment forces us to demonstrate how these institutions were themselves the product of individual motives operating within some other, previously existing social environment.
This, though, quickly leads to an unsustainable regress, since humans always act within particular social environments, and their motives cannot be understood without reference to these environments. The only way out for the advocate of psychologism is to posit that both the origin and evolution of all human institutions can be explained purely in terms of human psychology.
Popper argues that there is no historical support for the idea that there was ever such an origin of social institutions. He also argues that this is a form of historicism, insofar as it commits us to discovering laws governing the evolution of society as a whole. As such, it inherits all of the problems mentioned previously. In place of psychologism, Popper endorses a version of methodological individualism based on situational analysis.
On this method, we begin by creating abstract models of the social institutions that we wish to investigate, such as markets or political institutions. In keeping with methodological individualism, these models will contain, among other things, representations of individual agents.

Viewing truth in terms of a commitment to natural realism is not so clearly pragmatic, though some parallels still exist. Because natural realism allows for different types of truth-conditions—some but not all statements are true in virtue of correspondence—it is compatible with the truth-aptness of normative discourse: that ethical statements, for example, do not correspond in any obvious way to ethical states of affairs is no reason to deny that they can be true (Putnam). In addition, like earlier pragmatic theories of truth, this neo-pragmatic approach redefines correspondence: in this case, by taking a pluralist approach to the correspondence relation itself (Goodman). These two approaches—one tending toward relativism, the other tending toward realism—represented the two main currents in late twentieth-century neo-pragmatism.
Both approaches, at least initially, framed truth in terms of justification, verification, or assertibility, reflecting a debt to the earlier accounts of Peirce, James, and Dewey. Subsequently they evolved in opposite directions. The first approach, often associated with Rorty, flirts with relativism and implies that truth is not the important philosophical concept it has long been taken to be.
Here, to take a neo-pragmatic stance toward truth is to recognize the relatively mundane functions this concept plays: to generalize, to commend, to caution, and not much else. The second approach, by contrast, takes truth more seriously: on this account truth points to standards of correctness more rigorous than simply what our peers will let us get away with saying. A more recent group of theories, often called new pragmatic theories of truth, has since emerged. Like neo-pragmatic accounts, these theories often build on, or react to, positions besides the correspondence theory: for example, deflationary, minimal, and pluralistic theories of truth.
Unlike some of the neo-pragmatic accounts discussed above, these theories give relativism a wide berth, avoid defining truth in terms of concepts such as warranted assertibility, and treat correspondence theories of truth with deep suspicion. However, while classical pragmatists were responding primarily to the correspondence theory of truth, new pragmatic theories also respond to contemporary disquotational, deflationary, and minimal theories of truth (Misak).
As a result, new pragmatic accounts aim to show that there is more to truth than its disquotational and generalizing function (for a dissenting view see Freedman). In asserting something to be true, speakers take on an obligation to specify the consequences of their assertion, to consider how their assertions can be verified, and to offer reasons in support of their claims (Misak). On this view, truth is not just a goal of inquiry, as Dewey claimed, but a norm of inquiry that sets expectations for how inquirers conduct themselves.
More specifically, without the norm of truth assertoric discourse would be degraded almost beyond recognition. The norm of truth is a condition for genuine disagreement between people who speak sincerely and with, from their own perspective, good enough reasons. In sum, the concept of truth plays an essential role in making assertoric discourse possible, ensuring that assertions come with obligations and that conflicting assertions get attention.
Without truth, it is no longer clear to what degree assertions would still be assertions, as opposed to impromptu speculations or musings. Correspondence theorists should find little reason to object: they too can recognize that truth functions as a norm.
It is important that this account of truth is not a definition or theory of truth, at least in the narrow sense of specifying necessary and sufficient conditions for a proposition being true. The proposal to treat truth as a norm of inquiry and assertion can be traced back to both classical and neo-pragmatist accounts.
In this respect, these newer pragmatic accounts are a response to the problems facing neo-pragmatism. In another respect, new pragmatic accounts can be seen as a return to the insights of classical pragmatists updated for a contemporary audience. This pragmatic elucidation of the concept of truth attempts to capture both what speakers say and what they do when they describe a claim as true.
In a narrow sense the meaning of truth—what speakers are saying when they use this word—is that true beliefs are indefeasible. However, in a broader sense the meaning of truth is also what speakers are doing when they use this word, with the proposal here that truth functions as a norm that is constitutive of assertoric discourse.
As we have seen, pragmatic accounts of truth focus on the function the concept plays: specifically, the practical difference made by having and using the concept of truth. These earlier accounts focus on the function of truth in conversational contexts or in the context of ongoing inquiries. By viewing truth as a norm of assertion and inquiry, these more recent pragmatic theories make the function of truth independent of what individual speakers might imply in specific contexts.
Truth is not just what is assertible or verifiable (under either ideal or non-ideal circumstances), but sets objective expectations for making assertions and engaging in inquiry.
Unlike neo-pragmatists such as Rorty and Putnam, new pragmatists such as Misak and Price argue that truth plays a role entirely distinct from justification or warranted assertibility. These theories often disagree significantly with each other, making it difficult either to define pragmatic theories of truth in a simple and straightforward manner or to specify the necessary conditions that a pragmatic theory of truth must meet.
As a result, one way to clarify what makes a theory of truth pragmatic is to say something about what pragmatic theories of truth are not. One way to differentiate pragmatic accounts from other theories of truth is to distinguish the several questions that have historically guided discussions of truth: roughly, a metaphysical project, a justification project, and a speech-act project (these projects also break into distinct subprojects; for a similar approach see Frapolli). The metaphysical project often takes the form of identifying what makes a statement true. The justification project, by contrast, often takes the form of giving a criterion of truth that can be used to determine whether a given statement is true, while the speech-act project asks what speakers are doing when they describe a statement as true. Unfortunately, truth-theorists have not always been clear on which project they are pursuing, which can lead to confusion about what counts as a successful or complete theory of truth. It can also lead to truth-theorists talking past each other when they are pursuing distinct projects with different standards and criteria of success.
In these terms, pragmatic theories of truth are best viewed as pursuing the speech-act and justification projects. As noted above, pragmatic accounts of truth have often focused on how the concept of truth is used and what speakers are doing when describing statements as true: depending on the version, speakers may be commending a statement, signaling its scientific reliability, or committing themselves to giving reasons in its support.
Likewise, pragmatic theories often focus on the criteria by which truth can be judged: again, depending on the version, this may involve linking truth to verifiability, assertibility, usefulness, or long-term durability. With regard to the speech-act and justification projects pragmatic theories of truth seem to be on solid ground, offering plausible proposals for addressing these projects. They are on much less solid ground when viewed as addressing the metaphysical project.
As we will see, it is difficult to defend the idea, for example, that either utility, verifiability, or widespread acceptance are necessary and sufficient conditions for truth or are what make a statement true. This would suggest that the opposition between pragmatic and correspondence theories of truth is partly a result of their pursuing different projects. From a pragmatic perspective, the problem with the correspondence theory is its pursuit of the metaphysical project that, as its name suggests, invites metaphysical speculation about the conditions which make sentences true—speculation that can distract from more central questions of how the truth predicate is used and how true beliefs are best recognized and acquired.
Pragmatic theories of truth are not alone in raising these concerns (David). From the standpoint of correspondence theories and other accounts that pursue the metaphysical project, pragmatic theories will likely seem incomplete, sidestepping the most important questions (Howat). But from the standpoint of pragmatic theories, projects that pursue or prioritize the metaphysical project are deeply misguided and misleading. This supports the following truism: a common feature of pragmatic theories of truth is that they focus on the practical function that the concept of truth plays.
Thus, whether truth is a norm of inquiry (Misak), a way of signaling widespread acceptance (Rorty), a stand-in for future dependability (Peirce), or a designation for the product of a process of inquiry (Dewey), among other things, pragmatic theories shed light on the concept of truth by examining the practices through which solutions to problems are framed, tested, asserted, and defended—and, ultimately, come to be called true.
Pragmatic theories of truth can thus be viewed as making contributions to the speech-act and justification projects by focusing especially on the practices people engage in when they solve problems, make assertions, and conduct scientific inquiry. Of course, even though pragmatic theories of truth largely agree on which questions to address and in what order, this does not mean that they agree on the answers to these questions, or on how to best formulate the meaning and function of truth.
Another common commitment of pragmatic theories of truth—besides prioritizing the speech-act and justification projects—is that they do not restrict truth to certain topics or types of inquiry. That is, regardless of whether the topic is descriptive or normative, scientific or ethical, pragmatists tend to view it as an opportunity for genuine inquiry that incorporates truth-apt assertions.
This broadly cognitivist attitude—that normative statements are truth-apt—is related to how pragmatic theories of truth de-emphasize the metaphysical project. As a result, from a pragmatic standpoint one of the problems with the correspondence theory of truth is that it can undermine the truth-aptness of normative claims. If, as the correspondence theory proposes, a necessary condition for the truth of a normative claim is the existence of a normative fact to which it corresponds, and if the existence of normative facts is difficult to account for (normative facts seem ontologically distinct from garden-variety physical facts), then this does not bode well for the truth-aptness of normative claims or the point of posing, and inquiring into, normative questions (Lynch). If the correspondence theory of truth leads to skepticism about normative inquiry, then this is all the more reason, according to pragmatists, to sidestep the metaphysical project in favor of the speech-act and justification projects.
As we have seen, pragmatic theories of truth take a variety of different forms. To begin with, and unlike many theories of truth, these theories focus on the pragmatics of truth-talk: that is, they focus on how truth is used as an essential step toward an adequate understanding of the concept of truth (indeed, this comes close to being an oxymoron).
More specifically, pragmatic theories look to how truth is used in epistemic contexts where people make assertions, conduct inquiries, solve problems, and act on their beliefs.
By prioritizing the speech-act and justification projects, pragmatic theories of truth attempt to ground the concept of truth in epistemic practices, as opposed to the abstract relations between truth-bearers (such as propositions or statements) and truth-makers (such as states of affairs) appealed to by correspondence theories (MacBride). Pragmatic theories also recognize that truth can play a fundamental role in shaping inquiry and assertoric discourse—for example, by functioning as a norm of these practices—even when it is not explicitly mentioned.
In this respect pragmatic theories are less austere than deflationary theories which limit the use of truth to its generalizing and disquotational roles.
And, finally, pragmatic theories of truth draw no limits, at least at the outset, to the types of statements, topics, and inquiries where truth may play a practical role. If it turns out that a given topic is not truth-apt, this is something that should be discovered as a characteristic of that subject matter, not something determined by having chosen one theory of truth or another (Capps). Pragmatic theories of truth have faced several objections since first being proposed.
Some of these objections can be rather narrow, challenging a specific pragmatic account but not pragmatic theories in general (this is the case with objections raised by other pragmatic accounts). This section will look at more general objections: either objections that are especially common and persistent, or objections that pose a challenge to the basic assumptions underlying pragmatic theories more broadly.
Some objections are as old as the pragmatic theory of truth itself. While James offered his own responses to many of these criticisms, versions of these objections often apply to other and more recent pragmatic theories of truth (for further discussion see Haack; Tiercelin). One classic and influential line of criticism is that, if the pragmatic theory of truth equates truth with utility, this definition is (obviously!) refuted by the existence of beliefs that are useful but false and of beliefs that are true but useless.
In short, there seems to be a clear and obvious difference between describing a belief as true and describing it as useful (Russell). So whether truth is defined in terms of utility, long-term durability, or assertibility (etc.), the same objection arises: whatever concept a pragmatic theory uses to define truth, there is likely to be a difference between that concept and the concept of truth itself. A second and related criticism builds on the first. Perhaps utility, long-term durability, and assertibility (etc.) are better treated as criteria for recognizing truth than as definitions of it. This seems initially plausible and might even serve as a reasonable response to the first objection above.
Falling back on an earlier distinction, this would mean that appeals to utility, long-term durability, and assertibility (etc.) address the justification project rather than the metaphysical project. However, without some account of what truth is, or what the necessary and sufficient conditions for truth are, any attempt to offer criteria of truth is arguably incomplete: we cannot have criteria of truth without first knowing what truth is. If so, then the justification project relies on and presupposes a successful resolution to the metaphysical project; the latter cannot be sidestepped or bracketed, and any theory which attempts to do so will give at best a partial account of truth (Creighton; Stebbing). And a third objection builds on the second.
Putting aside the question of whether pragmatic theories of truth adequately address the metaphysical project (or address it at all), there is also a problem with the criteria of truth they propose for addressing the justification project.
Pragmatic theories of truth seem committed, in part, to bringing the concept of truth down to earth, to explaining truth in concrete, easily confirmable terms rather than the abstract, metaphysical correspondence of propositions to truth-makers, for example.
The problem is that assessing the usefulness (etc.) of a belief may be no more straightforward than assessing its truth. Far from making the concept of truth more concrete, and the assessment of beliefs more straightforward, pragmatic theories of truth thus seem to leave the concept as opaque as ever. These three objections have been around long enough that pragmatists have, at various times, proposed a variety of responses. One response to the first objection, that there is a clear difference between utility (etc.) and truth, is to question whether pragmatic theories were ever meant to offer a strict definition in the first place.
It has been argued that pragmatic theories are not about finding a word or concept that can substitute for truth but that they are, rather, focused on tracing the implications of using this concept in practical contexts. It is even possible that James—the main target of Russell and others—would agree with this response. To be sure, pragmatic theories of truth have often been framed as providing criteria for distinguishing true from false beliefs.
The distinction between offering a definition as opposed to offering criteria would suggest that criteria are separate from, and largely inferior to, a definition of truth. However, one might question the underlying distinction, as Haack does: if meaning is related to use, as pragmatists generally claim, then explaining how a concept is used, and specifying criteria for recognizing that concept, may provide all one can reasonably expect from a theory of truth.
Deflationists have often made a similar point, though, as noted above, pragmatists tend to find deflationary accounts excessively austere.

Given that we are mostly interested in the relative difference between the CDM and sDAO models, however, this difference is not critical; the comparison with observations serves mainly as a consistency check of our procedure for generating mock spectra from our simulations.
This is expected, considering that the linear theory cut-off in the sDAO model is similar to that of the WDM model used for comparison. However, as we have remarked in Section 1, constraining models against observed data by means of their relative normalization is fraught with uncertainties due to the assumed thermal history of the IGM.
We are therefore cautious in our interpretation of this comparison. The figure discussed next reveals the defining characteristic of the sDAO model: the imprint of the DAO in the gas distribution at these early times. As a result, the first DAO peak moves towards smaller scales. For clarity, we do not show the observational data in this figure. A secondary feature is also present that is not sourced by DAOs. This may be because the 1D flux spectrum, which can be qualitatively understood as an integrated version of the 3D power spectrum along the line of sight, weighted by velocity moments, is more sensitive to small-scale features in the linear power spectrum than the 3D clustering.
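The qualitative claim about integrating along the line of sight can be illustrated with the textbook relation between a 1D power spectrum and an isotropic 3D power spectrum (a schematic identity that ignores redshift-space distortions and the nonlinear mapping from density to flux, and is not necessarily the expression used in the analysis itself):

\[
P_{\mathrm{1D}}(k_\parallel) \;=\; \frac{1}{2\pi} \int_{k_\parallel}^{\infty} P_{\mathrm{3D}}(k)\, k \, \mathrm{d}k .
\]

Because the integral runs over all wavenumbers $k \geq k_\parallel$, small-scale features in the 3D spectrum leak into the 1D spectrum even at larger scales, which is consistent with the greater sensitivity described above.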
This is somewhat reminiscent of modified theories of gravity [e.g. Jennings et al.]. We leave a full understanding of the comparison between 1D and 3D power spectra to future work. It is illuminating to consider the difference in structure between the sDAO and WDM models at these early times in greater detail.
In this calculation, halo mass is defined as M_200, the mass contained within r_200, the radius interior to which the mean density is equal to 200 times the critical density of the universe at that redshift. Both the sDAO and WDM models then peel away from the CDM curve at an identical mass scale; this is a direct consequence of the fact that the linear power spectra of these two models also deviate from CDM at identical scales. The excess of power in the sDAO model relative to WDM is sourced by the DAO, whereas the initial density fluctuations are suppressed indefinitely in the case of WDM.
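For reference, and assuming the conventional overdensity threshold of 200 implied by this notation, the definition corresponds to

\[
M_{200} \;=\; \frac{4}{3}\pi\, r_{200}^{3} \times 200\,\rho_{\mathrm{crit}}(z),
\]

where $\rho_{\mathrm{crit}}(z)$ is the critical density of the universe at redshift $z$.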
The left-hand panel of the corresponding figure makes this point explicit: while power continues to be suppressed at small scales in the WDM case, the sDAO model regains power there, which is the signature of the DAO. In practice, this may prove to be difficult to observe, since the largest signal is expected at the highest redshifts, where the UV background starts to be inhomogeneous due to incomplete reionization.
This feature is not observed in Ethos-4, which also exhibits DAOs in the linear power spectrum, but of smaller amplitude than in the sDAO case. We also show predictions for the Ethos-4 model, in which the cut-off is on a smaller scale than in the sDAO case, and in which the first DAO peak is of lower amplitude than in sDAO and is pushed to smaller scales.
Regardless, this comparison highlights the potential of 1D flux spectrum measurements to distinguish not only non-CDM models from CDM, but also different non-CDM models from each other. The right-hand panel of the same figure helps to diagnose this, and is consistent with the picture described above. The effects of noise in the flux power spectrum are manifested more strongly in the models with suppressed small-scale power.
This comparison also illustrates how the noise level shifts as a function of resolution (see also Viel et al.). At the lower resolution, the numerical bump is shifted to larger scales by a factor of 2, as expected, since the low-resolution simulation retains the same number of particles in a box that is twice as big as the high-resolution simulation. Moreover, the DAO bump, which is just starting to develop, blends with the numerical bump and is therefore unresolved in the low-resolution simulation; with increased resolution, the numerical bump moves to smaller scales and the DAO bump can be separated from it.
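A rough way to see the quoted factor of 2 (a back-of-the-envelope estimate, not a calculation taken from the paper) is to note that the discreteness-noise scale is set by the mean interparticle separation $\bar{d} = L / N^{1/3}$:

\[
k_{\mathrm{noise}} \;\sim\; \frac{2\pi}{\bar{d}} \;=\; \frac{2\pi\, N^{1/3}}{L},
\]

so doubling the box size $L$ at fixed particle number $N$ doubles $\bar{d}$ and halves the wavenumber at which the numerical bump appears.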
As in the case of the cut-off in the small-scale flux spectrum, it may be that the quantitative details of this figure are sensitive to the assumptions made about the thermal history of the IGM. While varying these assumptions may certainly smear the prominence of the DAO feature, it is not clear that such bumps could be replicated by baryonic mechanisms. In particular, the scale at which these features are manifested, if induced by the nature of the DM, will be set by processes intrinsic to the DM model.
We leave a detailed investigation of degeneracies between DAOs and thermal histories to future work.

We have performed detailed hydrodynamical simulations of non-standard dark matter species in which the DM is coupled to a relativistic component in the early universe. Early structure formation in these models is therefore modified considerably relative to standard cold dark matter, principally in the form of a delay in the formation of the first stars and a suppression in the abundance of low-mass galaxies (e.g. Lovell et al.). The structure of DM haloes may be modified as well through strong DM self-interactions at late times that reshape the phase-space density profiles of galactic haloes (e.g. Vogelsberger et al.). The extent to which these processes impact galaxy formation is, of course, sensitive to parameters specific to the DM theory, such as the duration of DM-radiation coupling or the self-interaction cross-section.
While it is impossible to explore this parameter space fully, various permutations of these model parameters will predict largely similar galactic populations; the Ethos framework (Cyr-Racine et al.) provides a systematic way to classify such models. In this paper, we focus our attention on an atomic DM model, which we refer to as sDAO, in which DM is composed of two massive fermions that are oppositely charged under a new, unbroken U(1) dark gauge force (see Section 2).
While models as extreme as these may already be strongly constrained, our goal in this paper was to investigate if DAOs may be, in principle, detectable in the Lyman-alpha forest, rather than to present a model that matches the available data. A priori, it is not obvious that DAOs would persist in the Lyman-alpha flux spectrum.
In particular, we sought to identify observational proxies that are able to distinguish the different small-scale behaviour of these DAO models from that of WDM. Our main conclusions from the current study are as follows. A random line of sight through the sDAO simulation box reveals far less structure in absorption than the equivalent line of sight in the CDM simulation.
This faster growth of structure is a fairly generic phenomenon observed in models with a cut-off in the linear power spectrum, including WDM. In our work, this is manifested in the form of the transmitted flux PDFs. The probability that a given line of sight intersects a region with high transmitted flux increases as the universe transitions from neutral to ionized, due to the ionizing radiation from high-redshift galaxies.
In fact, present data at these redshifts already place the sDAO model in significant tension with observations. The appearance and disappearance of the DAO at different redshifts therefore offer an opportunity to disentangle small-scale features in the flux power spectrum induced by the nature of DM from astrophysical effects.
While there is a vast parameter space of well-motivated non-standard CDM models, the predictions they make for the formation of structure and the properties of galaxies can be challenging to differentiate. Of fundamental importance is the need to identify sets of statistics that allow the identification of physical scales that are characteristic of these theories. DM models in which there is a coupling to a relativistic species in the early Universe are characterized in the linear regime by a cut-off at the scale of dwarf galaxies followed by a series of dark acoustic oscillations towards smaller scales.
We thank the anonymous referee for providing suggestions that have improved this manuscript. We are very grateful to Volker Springel for allowing us access to Arepo, which was used to run all the simulations presented in this paper. This work was made possible in part by usage of computing resources at the University of Southern Denmark through the NeIC Dellingr resource sharing pilot.
We note that this power-law index is the same as that used to classify DM models within the Ethos framework (Cyr-Racine et al.). We also note the a_4 amplitudes given in Vogelsberger et al. We have checked explicitly that our results are converged for this choice of the number of sightlines. As the small-scale power is suppressed heavily in the sDAO simulation, the large-scale power is boosted somewhat in order to achieve the same mean flux in the two models.
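The standard way to impose a common mean flux on two sets of simulated spectra is to rescale the optical depths by a constant factor; the sketch below illustrates that generic procedure (the function names, the mock optical depths, and the target value 0.7 are hypothetical choices of ours, and we are not asserting that this is exactly how the rescaling was implemented here):

```python
import numpy as np
from scipy.optimize import brentq

def rescale_to_mean_flux(tau, target_mean_flux):
    """Rescale optical depths tau -> scale * tau so that <exp(-scale * tau)> = target_mean_flux."""
    def mismatch(scale):
        return np.mean(np.exp(-scale * tau)) - target_mean_flux

    # <exp(-scale * tau)> falls monotonically from ~1 (scale -> 0) towards 0
    # (scale large), so a bracketing root-finder suffices for 0 < target < 1.
    scale = brentq(mismatch, 1e-6, 1e6)
    return scale * tau

# Illustrative mock optical depths for two models (values are invented):
rng = np.random.default_rng(0)
tau_cdm = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
tau_sdao = rng.lognormal(mean=-0.3, sigma=1.0, size=100_000)

target = 0.7  # hypothetical observed mean transmitted flux at some redshift
flux_cdm = np.exp(-rescale_to_mean_flux(tau_cdm, target))
flux_sdao = np.exp(-rescale_to_mean_flux(tau_sdao, target))
print(flux_cdm.mean(), flux_sdao.mean())  # both ~0.7 by construction
```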
Ackerman L., D, 79
Albert A.
Altay G.
Aprile E.
Bartels R.
Baur J.
Benitez-Llambay A.
Bode P., D, 66
Boera E.
Bolton J.
Bond J.
Bose S.
Boylan-Kolchin M.
Bozek B.
Brinckmann T.
Bringmann T., D, 94
Buckley M., D, 90
Buen-Abad M., D, 92
Bullock J.
Carlson E.
Chacko Z., High Energy Phys.
Chu X.
Cole S.
Colless M.
Creasey P.
Croft R.
Cyr-Racine F., D, 87; D, 89; D, 93
Das S.
Daylan T., Dark Universe, 12, 1
Di Cintio A., D, 98
Dooley G.
Dubois Y.
Eisenstein D.
Elbert O.
Feng J.
Fitts A.
Flores R.
Garzilli A.
Gnedin N.
Hahn O.
Hooper D.
Ibata N.