# Learning some new theory: here is my current system using Dataview to track concepts from source to notes

When learning a new field, concepts are defined, referenced, and explained across many different sources. We can use backlinks etc. to keep track of these (e.g., all key terms get [[concept-note|linked]]). But sometimes these explanations are definitions, sometimes just perspectives, clarifications, or extensions. I think there are plugins out there that augment links with styles, but I want to capture semantics.

So my constantly-evolving/WIP system has now stabilized to the following:

In my literature note:

## Chapter 5: Entropy of Continuous Variables

### The Entropy of a Discretised Continuous Random Variable Diverges to Infinity

- Let $X^{\Delta}$ be continuous random variable $X$ that has been discretised using bins of width $\Delta x$.
%%
[concept-domain:: information theory]
[concept-term:: discretized variable]
[concept-notation:: $X^{\Delta}$]
[concept-is-defined:: true]
[concept-is-notationed:: true]
[pdf-page:: 113]
%%
^j6m6on
- The [[knowledge/notes/information-theory-entropy|entropy]] of a continuous random variable, $X$, that has been discretized to bins of width $\Delta x$, $X^{\Delta}$, is given by (Eqn 5.8, pg 112):
$$H(X^{\Delta}) = \sum_{i} P_i \log \frac{1}{P_i},$$
where $P_i$ is the probability mass of $x_i$, i.e. the probability that a random sample or realization of $X$, $x$, is in the $i^{th}$ bin.
%%
[concept-domain:: information theory]
[concept-term:: entropy (discretized)]
[concept-notation:: $H(X^{\Delta})$]
[concept-is-defined:: true]
[concept-is-notationed:: true]
[pdf-page:: 112]
%%
^oi7qpu


A bit messy in source view, but cleans up really well in reading mode:

And the pay-off is the Dataview query table:

This can, of course, be filtered by a specific term (e.g., `concept-term: "entropy"`), by domain, by `concept-is-defined == true`, etc.

```dataviewjs
await dv.view(
  "__annex__/resources/obsidian/plugins/dataview/views/definitions.dataview", {
    "concept-domain": "information theory",
    "concept-is-defined": true,
    isGroupByCallout: false,
    renderAs: "table",
    isContainedInCallout: false
  }
)
```
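For anyone curious how those hidden `[key:: value]` pairs become queryable: Dataview indexes each one as an inline field attached to the enclosing list item. Here is a minimal sketch of that extraction in plain JavaScript — a simple regex illustration, not Dataview's actual parser (which also handles `key:: value` lines, parenthesized fields, type coercion, and more):

```javascript
// Extract Dataview-style inline fields ([key:: value]) from a block of text.
// Illustrative only; keys may contain letters, digits, and hyphens.
function extractInlineFields(text) {
  const fields = {};
  for (const [, key, value] of text.matchAll(/\[([\w-]+)::\s*([^\]]*)\]/g)) {
    fields[key] = value.trim();
  }
  return fields;
}

const block = `%%
[concept-domain:: information theory]
[concept-term:: entropy (discretized)]
[concept-is-defined:: true]
[pdf-page:: 112]
%%`;

console.log(extractInlineFields(block));
// → { 'concept-domain': 'information theory',
//     'concept-term': 'entropy (discretized)',
//     'concept-is-defined': 'true',
//     'pdf-page': '112' }
```

(Dataview would additionally coerce `true` and `112` to a boolean and a number, which is why `concept-is-defined = true` works as a filter.)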


Here I can keep all the definitions in the source, plus add/extend/analyze/discuss them anywhere. Using various filters on the same query, summary notes can pull in concepts from everywhere, filtered as needed. It is easy to add and filter other metadata (e.g., “is-preferred-notation”, “see-also”, “is-obsolete”, “is-personal-insight”, etc.).
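As an illustration, a summary note could also inline such a query directly, without going through the shared `definitions.dataview` view. A rough sketch (the folder path is an assumption about my vault layout; Dataview exposes each list item's inline fields as properties on the item):

```dataviewjs
// Table of all defined information-theory concepts, pulled from list items
// in the literature folder (adjust the path to your own vault).
const rows = dv.pages('"literature/notes"').file.lists
  .where(l => l["concept-is-defined"] === true
           && l["concept-domain"] === "information theory")
  .map(l => [l["concept-term"], l["concept-notation"], l.link]);
dv.table(["Term", "Notation", "Source"], rows);
```

The `l.link` column gives a clickable block link back to the exact list item in the literature note, which is the atomic-referencing pay-off described below.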

Crucially, everything here is a list item, which allows atomic referencing of list items both in terms of metadata and in terms of block links (so the concept explanations can be linked or embedded elsewhere). The block links (which I’ve previously called ugly) actually don’t look so bad in this context, and, of course, work very well.

For example, here is the entropy block from above, embedded elsewhere:

![[literature/notes/@stone-2015-information-theorya#^oi7qpu]]


As a side note, I typically wrap verbatim extracts in some sort of quote indicator or callout, so that the provenance of the text/idea is clearly external:

- > The mutual information $I(X; Y)$ is a measure of the dependence between the two random variables.
> $$I(X;Y) = H(X) - H(X|Y) = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)p(y)}$$
> It is symmetric in $X$ and $Y$, always nonnegative, and equal to zero if and only if $X$ and $Y$ are independent.
%%
[concept-domain:: information theory]
[concept-notation:: $I(X;Y)$]
[concept-term:: mutual information]
[concept-is-defined:: true]
[concept-is-notationed:: true]
%%