Awareness revision and belief extension

Roussos, Joe | 2024

The Australasian Journal of Philosophy

Abstract

What norm governs how an agent should change their beliefs when they encounter a completely new possibility? Orthodox Bayesianism has no answer, as it takes all learning to involve updating prior beliefs. A partial proposal is Reverse Bayesianism, which mandates the preservation of ratios of prior probabilities, but it faces counterexamples introduced by Mahtani (2021). I propose to separate awareness growth into two stages: awareness revision and belief extension. I argue that Mahtani’s cases highlight that we need to theorize awareness revision before we can define a proposal for belief extension, such as Reverse Bayesianism. I provide a formal model of awareness revision which makes explicit how propositions are distinguished within awareness states and identified across them. Reformulating Reverse Bayesianism to take input from my model allows it to navigate Mahtani-style cases. My model leaves open how agents choose to identify propositions across awareness states, and I propose that they ought to do so conservatively: preserving undisturbed prior reasoning about the structure of their awareness. I then spell out this proposal in a special case. This is a partial proposal, and I close with a discussion of how to elaborate on it and how to advance research into awareness revision.

Read more here >