It took some time, but I got it working, and the ending table looks like this:
The primary sort is descending on the `type-sb` column (2) to shift all the strange files to the end, then descending on the `start-begin` date column, and finally ascending on the `end-finish` column just to make a difference.
Files used to create this table
WorkingExample77566.zip (6.0 KB)
The `dateRanges.js` in the archive is expected to be in `_js/dateRanges.js`, and all the rest of the files are in the `ForumStuff/f77/f77566` folder. If you want them in other places, do remember to update the queries accordingly.
One vital change to the original setup is that I’m using the field set of `start-begin`, `start-finish`, `end-begin` and `end-finish`, for a few reasons:
- I needed to work with two sets of these fields, as I also explored the option of using inline queries, and those kept the original field names (they’re still present in the test files)
- I kind of liked this variant a little better, as it helps me focus on the fact that these variables define date ranges for when something started or ended. I felt `begin` and `finish` conveyed that range aspect better
- I was confused by using `start` and `end` in various combinations, to such a degree that I wanted different names
Nothing in the logic depends on these names, except for the `doLocal()` function, and that could probably be done better, either avoiding the names altogether or passing them to the script.
Some comments on the test files themselves
The `A`, `B` and `C` files have been defined as discussed earlier in this thread. At the end I duplicated the `C` files, adding 1, 2, or 3 days to the `end-finish` date, so I’d have something to sort on. The `D` file defines all the fields in a circle…
The javascript is the engine, and the `Full query test` is just a random query to display all files having the `start-begin` field (plus a folder requirement). Note that the logic shouldn’t depend on which columns actually are dates or not; cells are recognised either by being dates, or by being an array of 2 or 3 elements, where the first two need to be a link and a string.
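To make that detection rule concrete, here is a minimal sketch of it (my own code, not lifted from `dateRanges.js`): a cell counts as a date variable if it is a date, or an array of 2–3 elements whose first two are a link and a string. I’m duck-typing the checks here; Dataview dates are Luxon DateTimes (which expose `isLuxonDateTime`), and Dataview links carry a `path` property.

```javascript
// Sketch of the detection rule described above. "Date" and "link" are
// duck-typed; the real script can rely on Dataview's own value types.

function looksLikeDate(value) {
  // Luxon DateTimes expose an `isLuxonDateTime` flag set to true.
  return value != null && value.isLuxonDateTime === true;
}

function looksLikeLink(value) {
  // Dataview links carry a `path` property naming the target note.
  return value != null && typeof value.path === "string";
}

function isDateVariable(cell) {
  if (looksLikeDate(cell)) return true;
  return Array.isArray(cell)
    && (cell.length === 2 || cell.length === 3)
    && looksLikeLink(cell[0])
    && typeof cell[1] === "string";
}
```

Anything failing both tests (plain strings, short arrays, and so on) is left alone by the script.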
The variable format
There are three valid formats, all shown using this example from the `C3` file:
```
start-begin:: 2024-04-10
start-finish:: [[t77566 C1]], "start-begin", "1 week"
end-begin:: [[t77566 C1]], "start-finish"
end-finish:: [[t77566 C1]], "end-begin", "3 days"
```
And they are:
- A pure date field, the best of the best fields…
- Or an array of 2 elements, where the first is a link to a note, and the second element is a field name
- Or an array of 3 elements, where the last added element is a positive or negative duration
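For the third format, the duration string needs to be turned into something that can be added to a date. The real script can lean on Dataview’s `dv.duration()` (Luxon under the hood); purely as an illustration, here is a tiny stand-in that only understands a few units and converts everything to a day count:

```javascript
// Minimal sketch of parsing duration strings like "1 week" or "3 days".
// Only an illustration: the actual script would use dv.duration()/Luxon,
// which understand far more units than this table does.

const UNIT_DAYS = { day: 1, days: 1, week: 7, weeks: 7 };

function durationToDays(text) {
  const match = /^(-?\d+)\s+(\w+)$/.exec(text.trim());
  if (!match) throw new Error(`Unrecognised duration: "${text}"`);
  const [, amount, unit] = match;
  const factor = UNIT_DAYS[unit.toLowerCase()];
  if (factor === undefined) throw new Error(`Unknown unit: "${unit}"`);
  return Number(amount) * factor;
}
```

Note that negative amounts are allowed, matching the “positive or negative duration” rule above.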
The script incantations
The script has two variants to be called. The first is used for the local note, to display the calculated dates (this is the `doLocal()` function):
`$= await dv.view("_js/dateRanges")`
Currently this takes no parameters, and the end result is something like:
The first part is the definition, and the last two lines are the output of the script. Note that this form of describing the dates also accommodates file renaming, as the links are proper links handled by Obsidian when renaming and so on. And they appear in backlinks too.
The other variant is the query variant, and it could look like this:
```dataviewjs
await dv.view("_js/dateRanges", {
  "query": `
    TABLE WITHOUT ID link(file.link, substring(file.name, 6)) as event
      , start-begin, typeof(start-begin) as type-sb
      , start-finish, typeof(start-finish) as type-sf
      , end-begin, typeof(end-begin) as type-eb
      , end-finish, typeof(end-finish) as type-ef
    FROM "ForumStuff/f77/f77566"
    WHERE start-begin `,
  "displayTypeInNextColumn": true,
  "sort": [ "2 desc", "1 desc", "7 asc" ] // 0 is first column
})
```
The query itself could be any query that lists the field values of our variables, and the script will loop through all columns and rows, detecting which cells are actually date variables to be expanded.
This example query also lists the type of the preceding column, controlled by the optional parameter `"displayTypeInNextColumn": true`. If it is set and the previous column was a calculated date, the script changes the next column to the type of the calculated value. In most cases you can leave this line out altogether, but it was very useful when debugging, and if you set it to `false` instead of `true`, you can see which fields have been calculated, since the type of those fields should then be `array`.
Finally a note on the `"sort"` array. It can be left out, in which case the table stays in whatever order the query returned it, not respecting any columns holding date variables. If you specify it, you need to follow the syntax indicated above, which means:
- “2 desc” – The primary sort is on column 2 (where 0 is the first column), descending. In my test query this means that all the “date” values come before the “array” value of that pesky “D” note.
- “1 desc” – The secondary sort is on column 1, also descending. In my test query this is the `start-begin` date column, and we list the newest values first
- “7 asc” – The last sort priority (if the first two are equal) is the `end-finish` column, sorted ascending. In my query that means the oldest dates come first
You can have as many or as few sort columns as you want, but if present they need to be in the format shown above. That is an array, where each sort column is specified with a 0-based column number, a space, and either `desc` or `asc`.
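That key syntax is strict enough that a small sketch of the check may help (my own names, not taken from the script): a valid key is a 0-based column number, one space, and either `asc` or `desc`, and anything else makes the script fall back to the original order.

```javascript
// Sketch of validating one sort key. Returns null for a non-conforming
// key, so the caller can bail out and keep the rows in their query order,
// as the post describes.

function parseSortKey(key) {
  const match = /^(\d+) (asc|desc)$/.exec(key);
  if (!match) return null;
  return { column: Number(match[1]), descending: match[2] === "desc" };
}
```

So `"2 desc"` parses to column 2, descending, while `"desc 2"` or a double space would silently disable sorting.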
The logic of the script
I can’t really go through the entire script. Look through it and ask about particularities if needed.
The overall sequential structure though is as follows:
- Starting comment – An attempt at explaining what this is, and how to use it
- Main logic – Here we set various parameters based on the `input` given to `dv.view()`, and choose whether we should do the local variant, `doLocal`, or the query-based variant, `doQuery`
- `doLocal` – This sets up a local loop over the four variables, and builds an array of the calculated values of these dates. If some variables misbehave, a warning is printed to the console. At the end it builds a paragraph using the calculated date values.
- `doQuery` – This function executes the query, and loops through each cell, mutating the cell value if it is identified as a date variable. After this mutation/processing of all the cells, it sorts the values according to any sort keys, and presents the table to the user
- Various helper functions:
- `evaluateArray` – This is the main function for doing the calculation, and it is called from both of the functions above.
- It starts off by pushing a (hopefully) unique key onto a stack, so we can keep track of whether we’ve already tried to calculate this field. This is to avoid never-ending loops from circular definitions (like in the D note).
- We then look up the field and store it in `tmp`, and calculate the `newDuration` to add, if that is present in the array variable
- Now we check whether `tmp` is an actual date, in which case we happily return it up the chain after adding the duration to its value
- If `tmp` on the other hand is an array, we call ourselves recursively to further evaluate the fields and duration. We return the value from this call after adding the duration, as before
- If `tmp` is neither a date nor an array, we throw an exception, which is listed in the console, but the originating definition of this date variable is left untouched, as shown in the last line of the example table above
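The steps above can be condensed into a self-contained sketch. To keep it runnable outside Obsidian I’ve replaced Dataview links with plain note names, dates with day numbers, and duration strings with day counts; the structure (stack of seen keys, the date/array/error three-way split) mirrors the description, but none of the names come from `dateRanges.js`:

```javascript
// Condensed sketch of the evaluateArray idea: recurse through
// [note, field, duration?] references, track visited keys to break
// circular definitions (the D note), and throw when a value is neither
// a date nor a reference. Fields live in a plain object keyed "note!field".

function evaluate(fields, note, field, seen = []) {
  const key = `${note}!${field}`;
  if (seen.includes(key)) throw new Error(`Circular definition at ${key}`);
  seen.push(key);

  const tmp = fields[key];
  if (typeof tmp === "number") return tmp; // an actual "date": return it
  if (Array.isArray(tmp)) {
    const [refNote, refField, duration = 0] = tmp;
    return evaluate(fields, refNote, refField, seen) + duration;
  }
  throw new Error(`${key} is neither a date nor a reference`);
}

// Mirrors the C3 example above, with day numbers instead of dates:
const fields = {
  "C1!start-begin": 0,
  "C3!start-finish": ["C1", "start-begin", 7], // "1 week"
  "D!end-begin": ["D", "end-finish"],          // circular, like note D
  "D!end-finish": ["D", "end-begin"],
};
```

Evaluating `C3!start-finish` resolves to day 7, while asking for either of the `D` fields throws the circular-definition error instead of looping forever.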
- `sortValues` – This is just a simple function which loops over every value of the `sortKeys` array. If a key doesn’t conform to the syntax, we bail out and leave the entire array in its original order. If the key is recognized, we sort on key after key, before finally returning the fully sorted table to the caller.
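One way to sketch that “sort on key after key” step (again my own code, not the script’s): apply the keys in reverse priority order with a stable sort, so the primary key ends up deciding first. `Array.prototype.sort` is guaranteed stable in modern JavaScript engines, which is what makes this trick work.

```javascript
// Sketch of multi-key sorting with the "N asc|desc" syntax. A single
// non-conforming key makes us bail out with the rows untouched, as the
// post describes.

function sortValues(rows, sortKeys) {
  const parsed = sortKeys.map((key) => /^(\d+) (asc|desc)$/.exec(key));
  if (parsed.some((m) => m === null)) return rows; // bail out: keep order

  const sorted = rows.slice();
  // Least significant key first; stability preserves earlier orderings.
  for (const m of parsed.slice().reverse()) {
    const col = Number(m[1]);
    const sign = m[2] === "desc" ? -1 : 1;
    sorted.sort((a, b) =>
      a[col] < b[col] ? -sign : a[col] > b[col] ? sign : 0);
  }
  return sorted;
}
```

An alternative is a single comparator that walks the key list per comparison; the repeated stable sort is just easier to follow.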
In conclusion
All in all, this took far more time than I anticipated, but it does seem to work. It does require a change to your variable definitions (and possibly a rename of the variables, either in the script or in your vault). The definition change is sadly needed, as otherwise you’d need to implement a parser, and then the script would be a lot larger.
I also kind of like that all the logic is now gathered in that one script, and you can use it both within the local notes to display the calculated values (and even calculate durations of the start and/or end ranges if you so want), and in queries looking at larger groups of events, persons and/or manuscripts. I also find it rather neat that it doesn’t need to know which columns actually are date variables; it just happily processes the values presented to it in the table.
The only thing I’m not fully satisfied with is that it doesn’t cache already calculated values. This could/should be added to `evaluateArray()` through the use of a global store keyed on the `queryKey`, but I didn’t have the energy to do so. If one would like to do so, we’d need to set that value before returning it out of the function, and set it for just the field (not including the duration). Then one could also add a check at the start of the function to see if there is already a `queryKey` value in the global store, and if so just return it instead of doing the calculation all over again.
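The caching idea is small enough to sketch. This wrapper shows the shape of it under my assumptions: a global `Map` keyed on the per-field key, consulted at the top and filled just before returning the bare field value (without any duration added). All names here (`cache`, `evaluateCached`, `compute`) are illustrative, not from `dateRanges.js`:

```javascript
// Sketch of memoising the per-field result, as suggested above. The
// compute callback stands in for the existing evaluateArray() work.

const cache = new Map();

function evaluateCached(queryKey, compute) {
  if (cache.has(queryKey)) return cache.get(queryKey); // already calculated
  const value = compute();        // do the real evaluation once
  cache.set(queryKey, value);     // store the bare field value only
  return value;
}
```

With this in place, repeated references to the same field (common when several notes chain off one anchor date) would cost a single evaluation per query run.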
Phew, I think that is the end of the work on this (at least for today). I sincerely hope this is useful to you, and that you’re able to implement it into your workflow and that your workflow will benefit from it!