Very possible!
Pandoc converts it properly and defaults to UTF-8. My PowerShell defaults to UTF-8 as well (this must be the default now, because I never changed it), and I also tried UTF-16.
The problem is that any of the functions that do a search/replace within the text (renaming the images and inserted files, and cleaning up some of the symbols/formatting) break on Chinese (or Korean or Japanese) characters, even when I explicitly set UTF-8 encoding when reading from and writing back to the file. More specifically, the issue seems to be with any function or read/write involving a variable derived from the OneNote XML schema. When I simply write a line to the file with something like $var = "事件基本信息", it writes properly, but $var = $page.name (which contains the same Chinese characters, pulled from the XML) doesn't. Interestingly, if I print that variable to the console, it prints fine. But, again, even a search/replace for a symbol like "\" breaks when it lands on a line with Asian characters. It's gotta be an encoding mismatch between PowerShell and the actual document.
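For anyone who wants to poke at it, this is roughly the direction I was trying: bypass the cmdlet defaults and force UTF-8 (no BOM) on both the read and the write via .NET, since Windows PowerShell 5.1's Get-Content/Set-Content default to ANSI. A sketch only - $md and $page are stand-ins for whatever path and XML page node the script is working with, not actual variable names from the script:

```powershell
# $md   = hypothetical path to a converted markdown file
# $page = hypothetical page node pulled from the OneNote XML

# Read and write with explicit BOM-less UTF-8 instead of the cmdlet defaults:
$utf8 = [System.Text.UTF8Encoding]::new($false)
$text = [System.IO.File]::ReadAllText($md, $utf8)

$text = $text.Replace('\', '')       # the kind of symbol cleanup that was breaking
$text = $text -replace 'PAGE_TITLE', $page.name  # $page.name may be Chinese/Korean/Japanese

[System.IO.File]::WriteAllText($md, $text, $utf8)

# 5.1 also keys pipe/redirection encoding off these, which can silently
# mangle non-ASCII text passed between native programs like pandoc:
$OutputEncoding = [Console]::OutputEncoding = $utf8
```

No guarantee this is the actual fix - it's just where I'd start looking, since .NET string .Replace itself is Unicode-safe, which points the finger at the file I/O rather than the replace.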
Anyway, I spent a few hours on it, couldn't get it to work, and gave up: the "fix" I supplied just skips the changes that break the conversion, which should get you 90+% of the way there. I'm a hacker in the hackiest sense, so if you or someone else wants to/can fix it properly, please go for it and share the results/fix here or on the GitHub.
You could even try posting an issue at the original GitHub repo - that guy is a wizard, but I suspect he has heard more than enough from me (which is what spurred me to fork my own version in the first place).