Things I have tried
Found an ‘Unknown directive’ problem with the site’s robots.txt when examining it via Google PageSpeed Insights: https://pagespeed.web.dev/report?url=https%3A%2F%2Fprojectbubbleburst.com%2F&form_factor=desktop
Initially the site was password protected and the option for crawling was disabled.
Subsequently, it has been made public and crawling has been enabled.
I have toggled the crawling switch again recently, to no avail; the ‘Unknown directive’ problem still shows.
The robots.txt file doesn’t resemble a standard one (compare https://forum.obsidian.md/robots.txt, for example).
Of course, the second I posted this, I realised I could probably create my own robots.txt to override any auto-generated one; a minimal sketch is below. Will see what happens and get back…
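For reference, this is the kind of fully permissive robots.txt I have in mind — a minimal sketch, assuming the whole site should be crawlable (the Sitemap line is my assumption; drop it if the site doesn’t actually generate one at that path):

```
# Minimal permissive robots.txt: let every crawler index everything
User-agent: *
Disallow:

# Assumed sitemap location — only keep this if the site serves a sitemap here
Sitemap: https://projectbubbleburst.com/sitemap.xml
```

An empty `Disallow:` under `User-agent: *` is the long-standing convention for "block nothing", and sticking to just these standard directives should avoid whatever non-standard line is triggering the ‘Unknown directive’ warning.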
What I’m trying to do
Enable search engine crawlers to index the site.