De Re Beliefs and Evidence in Legal Cases

Description
For the past half-century, both jurisprudence and epistemology have been haunted by questions about why individual evidence (i.e., evidence which picks out a specific individual) can sufficiently justify a guilty or liable verdict while bare statistical evidence (i.e., statistical evidence which does not pick out a specific individual) cannot. This thesis examines three popular justifications for this disparity in verdicts (Judith Jarvis Thomson's causal account, Enoch et al.'s sensitivity account, and Sarah Moss's knowledge-first account) and critiques each in turn. The thesis then defends the claim that legal verdicts require the factfinder (e.g., the judge or jury) to have a justified de re belief (i.e., a belief about a specific object, namely the defendant), and that this doxastic requirement justifies the disparity in rulings, since bare statistical evidence alone is epistemically insufficient to justify a de re belief. A brief account of how these beliefs are formed and spread is also given. Having drawn this distinction, the thesis then formalizes the burdens of proof of the preponderance of the evidence and beyond a reasonable doubt using the de re/de dicto distinction. Finally, the thesis pre-empts possible objections, namely by providing an account of DNA evidence as individual evidence and an account of how false convictions can occur on the de re view of legal proof.
Date Created
2021

A consequentialist model for just social contracts

Description
The paper reviews some of the models of consequentialist justice, the nature of social contracts, and the social coordination of behaviors through social norms.

The challenge of actualizing justice in many contemporary societies is the broad and often conflicting set of individual beliefs about rights and responsibilities that each member of a society maintains to describe the opportunities and compensations they attribute to themselves and others. This obscurity is compounded by a lack of academic or political alignment on the definition and tenets of justice.

This deficiency of a common definition and shared tenets of justice often results in myopic decisions by individuals and discontinuity within a society, reducing the rights, obligations, opportunities, and/or compensations that could be available through alternative modalities.

The paper begins by assessing the challenge of establishing mutual trust in order to achieve cooperation. I then examine utility enhancement strategies available through cooperation. Next, I turn to models that describe natural and artificial sources of social contracts, game theory, and evolutionary fitness to produce beneficial results. I then examine social norms, including dual inheritance theory, as models which can selectively reinforce certain cooperative behaviors and reduce others. In conclusion, a possible connection among these models to improve the overall fitness of society, defined as the net average increase in available utility, rights, opportunities, and compensations, is offered.
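The utility gains that cooperation makes available can be illustrated with a standard game-theoretic toy model. The sketch below uses hypothetical payoff values in the conventional Prisoner's Dilemma ordering (not figures from the paper) to compare total utility under sustained mutual cooperation and sustained mutual defection:

```python
# A standard Prisoner's Dilemma payoff table (hypothetical values in the
# conventional ordering: temptation > reward > punishment > sucker's payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),  # reward for mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation to defect
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # punishment for mutual defection
}

def total_utility(moves_a, moves_b):
    """Sum each player's payoff over a sequence of paired moves."""
    a_total = b_total = 0
    for a, b in zip(moves_a, moves_b):
        pa, pb = PAYOFFS[(a, b)]
        a_total += pa
        b_total += pb
    return a_total, b_total

# Ten rounds of sustained cooperation vs. ten rounds of mutual defection:
print(total_utility(["C"] * 10, ["C"] * 10))  # (30, 30)
print(total_utility(["D"] * 10, ["D"] * 10))  # (10, 10)
```

Although defection dominates in any single round, the repeated game shows why mechanisms that establish mutual trust, such as enforceable social contracts or internalized norms, can leave every participant better off.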

Examining the concepts that inform individual choice and coordination with others (social coordination, the nature of social contracts, and consequentialist justice as a means of coordinating behavior through social norms) may illustrate an integrated perspective and, through additional examination, produce a comprehensive model describing how societies could identify and foster just human coordination.
Date Created
2019

Perspectives on Inductive Inference

Description
There is no doubt that inductive logic and inductive arguments are vital to the formation of scientific theories. This thesis questions the use of inductive inferences within the sciences. Specifically, it will examine various perspectives on David Hume's famed "problem of induction". Hume proposes that inductive inferences cannot be logically justified. Here we will explore several assessments of Hume's ideas and inductive logic in general. We will examine the views of philosophers and logicians Karl Popper, Nelson Goodman, Larry Laudan, and Wesley Salmon. By comparing the radically different views of these philosophers, it is possible to gain insight into the complex nature of making inductive inferences. First, Popper agrees with Hume that inductive inferences can never be logically justified. He maintains that the only way around the problem of induction is to rid science of inductive logic altogether. Goodman, on the other hand, believes induction can be justified in much the same way as deduction is justified. Goodman sets up a logical schema in which the rules of induction justify the particular inductive inferences. These general rules are then in turn justified by correct inferences. In this way, Goodman sets up an explication of inductive logic. Laudan and Salmon go on to provide more specific details about how the particular rules of induction should be constructed. Though both Laudan and Salmon are completing Goodman's logical schema, their approaches are quite different. Laudan takes a more qualitative approach while Salmon uses the quantitative rules of probability to explicate induction. In the end, it can be concluded that it seems quite possible to justify inductive inferences, though there may be more than one possible set of rules of induction.
Date Created
2016-05

Carnap and Conventionality

Description
One of the central ideas in Rudolf Carnap's philosophy is that of convention. For Carnap, conventionality holds as long as there is some latitude of choice for which theoretical reasoning (correctness vs. incorrectness with regard to the facts) is insufficient and practical reasoning is needed to decide between the alternatives. Carnap uses this understanding of convention to show how one can circumvent the problem of justification for areas such as physical geometry and logic, and he also uses it to propose a new paradigm for philosophy, namely his proposal of the Principle of Tolerance. I maintain that such an understanding of conventionality is helpful and that it ought to be more widely adopted. I also believe that it would be difficult to apply this understanding of conventionality to the realm of religion, but it can be easily and helpfully applied to the realm of politics.
Date Created
2016-05

Is interactive computation a superset of Turing computation?

Description
Modern computers interact with the external environment in complex ways — for instance, they interact with human users via keyboards, mice, monitors, etc., and with other computers via networking. Existing models of computation — Turing machines, λ-calculus functions, etc. — cannot model these behaviors completely. Some additional conceptual apparatus is required in order to model processes of interactive computation.
Date Created
2013-05

The Genesis Mechanism: an explorative undertaking across academic disciplines in the effort to synthesize a more comprehensive understanding of complexity and the role it has served in the genesis of life

Description
The field of biological research is particularly concerned with understanding nature's complex dynamics. From deducing anatomical structures to studying behavioral patterns, evolutionary theory has developed greatly beyond the simple notions proposed by Charles Darwin. However, because it rarely considers the concept of complexity, modern evolutionary theory retains some descriptive weakness. This project represents an explorative approach to considering complexity and whether it plays an active role in the development of biotic systems. A novel theoretical framework, titled the Genesis Mechanism, was formulated by reconsidering the major tenets of evolutionary theory to include complexity as a universal tendency. Within this framework, a phenomenon referred to as "social transitioning" occurs between higher orders of complexity. Several potential properties of social transitions were proposed and analyzed in order to validate the theoretical concepts proposed within the Genesis Mechanism. The successful results obtained through this project's completion help demonstrate the scientific necessity of understanding complexity from a more fundamental biological standpoint.
Date Created
2013-05

Problem Class Dominance in Predictive Dilemmas

Description
One decision procedure dominates a given one if it performs well on the entire class of problems the given decision procedure performs well on, and then goes on to perform well on other problems that the given decision procedure does badly on. Performing well will be defined as generating higher expected utility before entering a problem. In this paper it will be argued that the timeless decision procedure dominates the causal and evidential decision procedures. It will also be argued in turn that the updateless decision procedure dominates the timeless decision procedure. The difficulties of formalizing a modern variant of the "smoking gene" problem will then be briefly examined.
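How causal and evidential procedures can diverge in expected utility is standardly illustrated with Newcomb's problem, a classic predictive dilemma. The sketch below uses hypothetical numbers (a 99%-accurate predictor, prizes of $1,000,000 and $1,000); the function names and figures are illustrative assumptions, not drawn from the paper:

```python
# Newcomb's problem with hypothetical numbers: a predictor of accuracy 0.99
# puts $1,000,000 in an opaque box iff it predicts the agent will one-box;
# a transparent box always contains $1,000.
ACCURACY = 0.99
MILLION, THOUSAND = 1_000_000, 1_000

def evidential_eu(action):
    """Expected utility when the chosen action is treated as evidence
    about what the predictor predicted."""
    if action == "one-box":
        return ACCURACY * MILLION
    return ACCURACY * THOUSAND + (1 - ACCURACY) * (MILLION + THOUSAND)

def causal_eu(action, p_filled):
    """Expected utility when the (already fixed) prediction is held
    causally independent of the action; p_filled is the agent's credence
    that the opaque box was filled."""
    base = p_filled * MILLION
    return base if action == "one-box" else base + THOUSAND

# Evidential reasoning favors one-boxing by a wide margin (~990,000 vs ~11,000)...
print(evidential_eu("one-box") > evidential_eu("two-box"))  # True
# ...while causal reasoning favors two-boxing for any fixed credence.
print(causal_eu("two-box", 0.5) > causal_eu("one-box", 0.5))  # True
```

An agent running the causal procedure two-boxes and so, against an accurate predictor, predictably collects less before entering the problem — the sense of "performing well" at issue in the dominance comparison above.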
Date Created
2014-05

The Aims and Structures of Research Projects That Use Gene Regulatory Information with Evolutionary Genetic Models

Description
At the interface of developmental biology and evolutionary biology, the very criteria of scientific knowledge are up for grabs. A central issue is the status of evolutionary genetics models, which some argue cannot coherently be used with complex gene regulatory network (GRN) models to explain the same evolutionary phenomena. Despite those claims, many researchers use evolutionary genetics models jointly with GRN models to study evolutionary phenomena.

How do those researchers deploy those two kinds of models so that they are consistent and compatible with each other? To address that question, this dissertation closely examines, dissects, and compares two recent research projects in which researchers jointly use the two kinds of models. To identify, select, reconstruct, describe, and compare those cases, I use methods from the empirical social sciences, such as digital corpus analysis, content analysis, and structured case analysis.

From those analyses, I infer three primary conclusions about projects of the kind studied. First, they employ an implicit concept of the gene that enables the joint use of both kinds of models. Second, they pursue additional epistemic aims besides mechanistic explanation of phenomena. Third, they don't work to create and export broad synthesized theories. Rather, they focus on phenomena too complex to be understood by a common general theory, they distinguish parts of the phenomena, and they apply models from different theories to the different parts. In such projects, seemingly incompatible models are synthesized largely through mediated representations of complex phenomena.

The dissertation closes by proposing how developmental evolution, a field traditionally focused on macroevolution, might fruitfully expand its research agenda to include projects that study microevolution.
Date Created
2017

Epistemic norms and permissive rationality

Description
This dissertation consists of three essays, each of which closely relates to epistemic norms for rational doxastic states. The central issue is whether epistemic rationality is impermissive or not: For any total evidence E, is there a unique doxastic state that any possible agent with that total evidence E should take (Uniqueness), or not (Permissivism)?

“Conservatism and Uniqueness”: Conservatism is the idea that an agent’s beliefs should be stable as far as possible when she undergoes a learning experience. Uniqueness is the idea that any given body of total evidence uniquely determines what it is rational to believe. Epistemic Impartiality is the idea that you should not give special treatment to your beliefs solely because they are yours. I construe Epistemic Impartiality as a meta-principle governing epistemic norms, and argue that it is compatible with Conservatism. Then I show that if Epistemic Impartiality is correct, Conservatism and Uniqueness go together; each implies the other.

“Cognitive Decision Theory and Permissive Rationality”: In recent epistemology, philosophers have deployed a decision theoretic approach to justify various epistemic norms. A family of such accounts is known as Cognitive Decision Theory. According to Cognitive Decision Theory, rational beliefs are those with maximum expected epistemic value. How does Cognitive Decision Theory relate to the debate over permissive rationality? As one way of addressing this question, I present and assess an argument against Cognitive Decision Theory.
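The idea that rational beliefs maximize expected epistemic value can be made concrete with one common choice of epistemic value measure, the negative Brier score. The sketch below is a minimal illustration of this general decision-theoretic approach, not of any particular account assessed in the chapter; the function names and numbers are hypothetical:

```python
# Epistemic value measured as the negative Brier score: a credence c in a
# proposition scores -(1 - c)**2 if the proposition is true and -c**2 if false.
def expected_epistemic_value(credence, chance):
    """Expected accuracy of a credence, computed relative to a hypothesized
    objective chance that the proposition is true."""
    return chance * -((1 - credence) ** 2) + (1 - chance) * -(credence ** 2)

# Relative to a chance of 0.7, the credence 0.7 itself maximizes expected
# value, since the Brier score is a strictly proper scoring rule.
candidates = [i / 10 for i in range(11)]
best = max(candidates, key=lambda c: expected_epistemic_value(c, 0.7))
print(best)  # 0.7
```

Because a strictly proper measure singles out exactly one credence as maximizing expected value, this kind of account sits naturally with Uniqueness, which is one way the decision-theoretic framework bears on the permissiveness debate.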

“Steadfastness, Deference, and Permissive Rationality”: Recently, Benjamin Levinstein has offered two interesting arguments concerning epistemic norms and epistemic peer disagreement. In his first argument, Levinstein claims that a tension between Permissivism and steadfast attitudes in the face of epistemic peer disagreement generally leads us to conciliatory attitudes; in his second argument, he argues that, given an ‘extremely weak version of a deference principle,’ Permissivism collapses into Uniqueness. However, in this chapter, I show that both arguments fail. This result supports the following claim: we should treat steadfast attitudes and at least some versions of a deference principle as viable positions in the discussion about several types of Permissivism, because they are compatible with any type of Permissivism.
Date Created
2016

Advancing the causal theory of natural selection

Description
The Modern Synthesis embodies a theory of natural selection where selection is to be fundamentally understood in terms of measures of fitness and the covariance of reproductive success and trait or character variables. Whether made explicit or left implicit, the notion that selection requires that some trait variable cause reproductive success has been deemphasized in our modern understanding of exactly what selection amounts to. The dissertation seeks to advance a theory of natural selection that is fundamentally causal. By focusing on the causal nature of natural selection (rather than on fitness or statistical formulae), certain conceptual and methodological problems are seen in a new, clarifying light and avenues toward new, interesting solutions to those problems are illustrated. First, the dissertation offers an update to explicitly causal theories of when exactly a trait counts as an adaptation upon fixation in a population and draws out theoretical and practical implications for evolutionary biology. Second, I examine a case of a novel character that evolves by niche construction and argue that it evolves by selection for it and consider implications for understanding adaptations and drift. The third contribution of the dissertation is an argument for the importance of defining group selection causally and an argument against model pluralism in the levels of selection debate. Fourth, the dissertation makes a methodological contribution. I offer the first steps toward an explicitly causal methodology for inferring the causes of selection—something often required in addition to inferring the causes of reproductive success. The concluding chapter summarizes the work and discusses potential paths for future work.
Date Created
2016