Thanks, I’m glad you enjoyed it! I don’t have many math-y notes as math plays a small role in the philosophy (& other things) I focus on – so maybe someone with more experience can jump in to help. That being said: I would think that processing here just means adding annotations to the formulas. I assume the goal of these notes is to help you remember & understand & apply the formulas, and not to generate original ideas about them for papers or such – so processing will be more minimal.
An aside: I haven’t used this myself, but this plugin might be useful to you if you have lots of formulas in your vault: GitHub - RyotaUshio/obsidian-latex-theorem-equation-referencer: A powerful indexing & referencing system for theorems & equations in your Obsidian vault.
Here is a copy-paste of four of my more math-y notes (they tend to be more encyclopedic than other notes, more focused on allowing me to quickly understand than on my own ideas):
Bayes’ Theorem
Type:: #note
Status: #
Tags: #statistics #bayesian_epistemology #bayesian
Overview
[!abstract] In a Nutshell
Bayes’ Theorem is about updating probabilities (/credences) based on new evidence. So: [[Prior Probability (Distribution)|priors]] + evidence = posterior probability distribution (/ updated credence).
Bayes’ Theorem connects $P(H \mid E)$ (the direct probability of a hypothesis (or: “uncertain quantity”) given evidence) with $P(E \mid H)$ (the inverse probability of evidence given hypothesis).
- “Though a mathematical triviality, Bayes’ Theorem is of great value in calculating conditional probabilities because inverse probabilities are typically both easier to ascertain and less subjective than direct probabilities.” SEP
- “the Theorem’s central insight[:] that a hypothesis is confirmed by any body of data that its truth renders probable” SEP
[!info] Bayes’ Theorem (simple version):
$P(A \mid B) = \frac{P(B \mid A) \cdot P(A)}{P(B)}$
To understand the formula above, see [[Conditional Probability]] ($P(A \mid B)$) and [[Prior Probability (Distribution)]] ($P(A)$). Also: [[Probability]].
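A quick worked example (numbers made up for illustration): suppose $P(A) = 0.3$, $P(B) = 0.5$, and $P(B \mid A) = 0.8$. Then:
$P(A \mid B) = \frac{0.8 \cdot 0.3}{0.5} = 0.48$
Learning B thus raises the probability of A from 0.3 to 0.48.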
[!info] Bayes’ Theorem (SEP version)
I add this because it uses credences (Cr), evidence (E) and hypothesis (H) and is thereby closer to the epistemological use of the theorem.
" Suppose that Cr is probabilistic and assigns nonzero credences to H and E, and that the Ratio Formula holds. Then we have:
$Cr(H \mid E) = \frac{Cr(E \mid H) \cdot Cr(H)}{Cr(E)}$
[…] This theorem is often useful for calculating credences that result from conditionalization on evidence E, which are represented on the left side of the formula."
Expanded formula (Bayes’ Theorem, 2nd form)
$P(A \mid B) = \frac{P(B \mid A) \cdot P(A)}{P(B \mid A) \cdot P(A) + P(B \mid \neg A) \cdot P(\neg A)}$
Normally use the simpler version. If the denominator (here: $P(B)$) is unknown, use the expanded formula, which computes it via the law of total probability (worked example below).
- also used for medical [[Sensitivity and Specificity]]
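A worked example in this medical vein (all numbers made up): let A = “patient has the disease” with prior $P(A) = 0.01$, and B = “patient tests positive”, with sensitivity $P(B \mid A) = 0.99$ and false-positive rate $P(B \mid \neg A) = 0.05$. Then:
$P(A \mid B) = \frac{0.99 \cdot 0.01}{0.99 \cdot 0.01 + 0.05 \cdot 0.99} = \frac{0.0099}{0.0594} \approx 0.17$
Even after a positive test, the probability of disease is only about 17% – the low prior (base rate) dominates.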
Application in Epistemology
Key idea:
- Probability as expressing the [[Credence]] in an event.
- Bayesian epistemology studies norms governing credences, including how one’s credences should update in response to evidence.
For more information, see [[Bayesian Epistemology]].
References
Conditional Probability
Type:: #note
Status: #
Tags: #statistics #bayesian_epistemology #probability #bayesian
Overview
[!abstract] In a Nutshell
$P(A \mid B)$ is a conditional [[probability]]. This means that it is the probability of event A occurring given that B is true. It is sometimes also called the posterior probability (of A given B).
The conditional probability can be calculated as follows:
$P(A \mid B) = \frac{P(A \land B)}{P(B)}$
($P(A \land B)$ is the joint probability of A and B)
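A concrete example: roll a fair die, and let A = “the result is a 2” and B = “the result is even”. Then $P(A \land B) = 1/6$ and $P(B) = 1/2$, so:
$P(A \mid B) = \frac{1/6}{1/2} = \frac{1}{3}$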
The conditional probability of A given B can equivalently be read as the likelihood of B given A:
$P(A \mid B) = L(B \mid A)$
[[Bayes’ Theorem]] is about [[Conditional Probability]] (see [[Bayesian Epistemology]]).
See [[Probabilistic Coherence Playground]] for examples of conditional probability calculations (requires the Numerals plugin).
References
Probability
Type:: #note
Status: #
Tags: #statistics #probability
Overview
[!abstract] In a Nutshell
Notation: $P()$. Probabilities are expressed as real numbers between 0 (minimum probability) and 1 (maximum probability).
More information about probability is found at [[Statistics]].
See [[Bayesian Epistemology]] for a philosophical application of probability calculus.
Probability Calculus
Some rules of probability calculus:
- $P(\neg A)$: [[Four Basic Rules of Descriptive Statistics#1 Complement Rule|Complement Rule]]
- $P(A \lor B)$: [[Four Basic Rules of Descriptive Statistics#3 Addition Rule|Addition Rule]]
- A&B mutually exclusive: $P(A \lor B) = P(A) + P(B)$
- A&B not mutually exclusive: $P(A \lor B) = P(A) + P(B) - P(A \land B)$
- $P(A \land B)$: [[Four Basic Rules of Descriptive Statistics#4 Multiplication Rule|Multiplication Rule]]
- A&B independent: $P(A \land B) = P(A) \cdot P(B)$
- A&B not independent: $P(A \land B) = P(A) \cdot P(B \mid A)$
- Independence means that $P(A \mid B) = P(A)$ (see [[Statistical Independence]])
- $P(A \mid B)$: [[Conditional Probability]]
- $P(A \mid B) = \frac{P(A \land B)}{P(B)}$
- [[Bayes’ Theorem]] is an equivalent transformation of this formula (see the derivation below).
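The derivation takes only two lines: since $P(A \land B) = P(B \land A)$, the definition of conditional probability gives
$P(A \mid B) \cdot P(B) = P(A \land B) = P(B \mid A) \cdot P(A)$
and dividing both sides by $P(B)$ yields Bayes’ Theorem.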
An important question is: How do we determine the prior?
- $P(A)$: [[Prior Probability (Distribution)]]
- If we have n equally likely outcomes: [[Four Basic Rules of Descriptive Statistics#2 Rule for Equally Likely Outcomes]]
- Can also determine it empirically.
- According to [[Miller’s Principle]], our [[Subjective Probability]] should match [[Objective Chance]] whenever possible.
- According to [[Bayesian Epistemology#Objective Bayesians|objective Bayesianism]], priors should be both coherent and free from bias. One way to encode this freedom from bias is the principle of indifference:
- “A person’s credences in any two propositions should be equal if her total evidence no more supports one than the other (the evidential symmetry version), or if she has no sufficient reason to have a higher credence in one than in the other (the insufficient reason version)” SEP
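For example: for a die about which we know nothing beyond its having six faces, the principle of indifference assigns a credence of 1/6 to each outcome.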
Probability vs Likelihood
From https://www.statology.org/likelihood-vs-probability/:
- Probability refers to the chance that a particular outcome occurs based on the values of parameters in a model.
- Likelihood refers to how well a sample provides support for particular values of a parameter in a model.
From https://www.psychologicalscience.org/observer/bayes-for-beginners-probability-and-likelihood:
- “Probability attaches to possible results; likelihood attaches to hypotheses.”
- Possible results are mutually exclusive and exhaustive; hypotheses are often neither.
- “To decide which of two hypotheses is more likely given an experimental result, we consider the ratios of their likelihoods. This ratio, the relative likelihood ratio, is called the ‘Bayes Factor.’”
From somewhere else:
The conditional probability of A given B can equivalently be read as the likelihood of B given A:
- $P(A \mid B) = L(B \mid A)$
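A standard illustration (a coin example of mine, not from the sources above): for a coin with unknown heads-probability $\theta$, observing one head gives
$L(\theta \mid \text{heads}) = P(\text{heads} \mid \theta) = \theta$
Read as a function of the outcome (with $\theta$ fixed), this is a probability; read as a function of the hypothesis $\theta$ (with the outcome fixed), it is a likelihood – and likelihoods, unlike probabilities, need not sum to 1 across hypotheses.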
Notes
For more on probability & randomness (including examples), see [[Mlodinow, Leonard (2008) - “The Drunkard’s Walk”]]
References
Gibbard’s Proof
Type:: #note
Status: #
Tags: #philosophy #philosophy_of_language #conditionals
Overview
[!abstract] In a Nutshell
Seems to demonstrate that:
“any [[Conditionals|conditional]] operator $\to$ satisfying (i) and the two additional rather obvious principles (ii) and (iii) reduces to material implication.” ([[Kratzer, Angelika (2012) - “Modals and Conditionals”|Kratzer 2012]], 87)
“if indicative conditionals have truth conditions, they cannot be stronger than material implication” ([[Khoo, Justin (2013) - “A Note on Gibbard’s Proof”|Khoo 2013]], 153)
[!info] Assumptions
(i) $p \to (q \to r)$ and $(p \land q) \to r$ are logically equivalent. (This is also called [[Import-Export]].)
(ii) $p \to q$ logically implies the corresponding material conditional. That is, $p \to q$ is false whenever $p$ is true and $q$ is false.
(iii) If $p$ logically implies $q$, then $p \to q$ is a logical truth.
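A compressed reconstruction of how (i)–(iii) force the collapse (my paraphrase of the usual presentation): by (ii), $p \to q$ entails $p \supset q$. Conversely, $(p \supset q) \land p$ logically implies $q$, so by (iii) $((p \supset q) \land p) \to q$ is a logical truth; by (i) it is equivalent to $(p \supset q) \to (p \to q)$, which is therefore also a logical truth; and by (ii) so is the material conditional $(p \supset q) \supset (p \to q)$. Hence $p \supset q$ entails $p \to q$, and $\to$ collapses into material implication.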
Objections
[[Kratzer, Angelika (2012) - “Modals and Conditionals”|Kratzer 2012]]: Gibbard assumes that “if…then in English corresponds to a two-place propositional operator.” According to [[Kratzer’s restrictor analysis of conditionals]], this is not the case.
It follows from [[Stalnaker’s ‘close-possible-worlds’ Account of Conditionals#Some consequences of the theory|Stalnaker’s account of conditionals]] that (i) is false. I believe that [[Lewis’ ‘modal’ account of conditionals]] also invalidates (i).
References
#AllanGibbard (Gibbard 1981)
[[Khoo, Justin (2013) - “A Note on Gibbard’s Proof”]]
[[Kratzer, Angelika (2012) - “Modals and Conditionals”]]