Qt 6 adds the defines that put the Win32 API into UTF-16 mode (UNICODE and _UNICODE) to anything that links against it.
That means components (whose PCH we now reuse across multiple targets) and the targets reusing that PCH had incompatible defines, so the PCHs wouldn't work.
Before CMP0204, CMake wouldn't set these defines at all except with the Visual Studio generators, so this didn't cause a compile error when I tested a Ninja-based build.
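A minimal sketch of one way this can be resolved (my inference about the shape of the fix, not necessarily the actual change): put the defines on components itself with PUBLIC visibility, so the PCH and everything reusing it get compiled under identical settings.

    # Hypothetical sketch: make the Win32 UTF-16 defines part of components'
    # interface so its PCH and all targets reusing it agree on them.
    if(WIN32)
        target_compile_definitions(components PUBLIC UNICODE _UNICODE)
    endif()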
OPENMW_DOC_BASEURL is only used in a CMake-configured file, so it only needs to be a CMake variable, which it already is.
There's no benefit to making it visible to every TU in components.
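For reference, a variable consumed only via configure_file never needs to become a compile definition; something along these lines (the file names and value here are illustrative, not the real ones):

    # Expanded at configure time; no TU ever sees OPENMW_DOC_BASEURL.
    set(OPENMW_DOC_BASEURL "https://example.invalid/docs/")
    configure_file(docbaseurl.hpp.in
        "${CMAKE_CURRENT_BINARY_DIR}/docbaseurl.hpp" @ONLY)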
MYGUI_DONT_USE_OBSOLETE should be visible in everything that transitively includes MyGUI just in case.
This should really be set up by MyGUI's CMake config or embedded in a generated MyGUI header rather than being our responsibility. While we're forced to deal with it ourselves, though, it's closer to right to make it a PUBLIC define on components than a directory-scoped define in the components directory.
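To illustrate the difference (a sketch, not the literal diff):

    # Directory-scoped (old): applies to everything compiled under this
    # directory, but not to other targets that include MyGUI headers.
    # add_definitions(-DMYGUI_DONT_USE_OBSOLETE)

    # PUBLIC on the target (new): propagates to everything that links
    # components, so all transitive MyGUI includes see the same define.
    target_compile_definitions(components PUBLIC MYGUI_DONT_USE_OBSOLETE)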
Precompiled headers avoid duplicate work.
If you've only got a single TU using a particular PCH, there's no duplicate work to avoid, so the PCH can only add overhead.
We don't need to totally abandon PCHs for these targets, though, as CMake lets us reuse the PCH from components.
If you've only got a few TUs in a target, it's *probably* faster to get components' PCH for free and eat the cost of it not being perfect than it is to make a perfect PCH from scratch.
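The mechanism is CMake's REUSE_FROM mode (the consuming target's name here is illustrative):

    # Share components' PCH instead of building a dedicated one.
    # This requires the two targets' compile settings to be compatible.
    target_precompile_headers(some_small_tool REUSE_FROM components)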
Note that I don't know whether there are drawbacks from components having a couple of private precompiled headers that wouldn't otherwise have propagated, or from these targets having different build flags.
I can't test this locally right now, as my linker has started deadlocking again like it did the other day.
If it turns out there are problems, then for the single-TU targets, simply avoiding using PCHs for them at all will still be an improvement over the status quo.
For the two-or-three TU targets, we'll have to actually measure things.
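If we do end up needing that fallback, CMake has a per-target switch for it (target name is illustrative):

    # Skip PCHs entirely for a single-TU target rather than
    # generating a dedicated one just for it.
    set_target_properties(some_single_tu_tool PROPERTIES
        DISABLE_PRECOMPILE_HEADERS ON)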
This might help diagnose some build problems in the future.
In fact, I need one for macOS right now, and I need to run a job on the upstream CI with this change to get one, so hijacking my existing CI MR seems like a great solution.
The prune command fails due to the error mentioned in the previous commit message.
Using PowerShell's Remove-Item is slow because it needs to create a .NET object representing each file before processing it.
Using [System.IO.Directory]::Delete throws an exception and gives up if any file can't be deleted.
Even though Docker seems to have been thoroughly killed, we don't have permission to delete some of its files most of the time, which might be related to the original error that blocks the prune command.
CMD's rd should be as fast as anything else (except for the smallish overhead from creating a subprocess), and at least for Aussiemon, it seems to work.
Incremental builds don't work at all without it, which means every TU needs recompiling for every project.
This will break cleaning unless ccache data is within %APPDATA% or %TEMP%.
This *should* make no difference, as we already do things that mean ccache only gets told about one TU at a time (e.g. using Ninja, or enabling UseMultiToolTask), but at a minimum, it's misleading to have this enabled when we know we're not using it.
A successful run for the Ninja jobs showed that 1G and 2G were fine for groups one and two.
I'm leaving some leeway for the MSBuild jobs as they've not succeeded yet and there might be some kind of madness that means they need more.
We won't be able to see until at least one build gets far enough.
On a machine with Windows Subsystem for Linux installed, the first bash on the PATH will typically be the WSL launcher, which gives you Linux bash.
We must therefore ensure that our recursive bash calls go to the MSYS2 shell we're already using.