Is that obvious? I can't say for sure they were originally drawn at a higher resolution and then downscaled. Likewise, I can't say for sure that you're wrong.

"1) Obviously some assets were created at a low resolution from scratch."
True, but you also stated that this was due to time limitations, not technical limitations. I do concede there is a bit of a catch-22 in that adding more detail sometimes reveals a lack of detail.

"2) Now onto the pre-rendered sprites, which are the most promising avenue for neural upscaling. Even if you scale them up with the simplest waifu2x method, you'll notice one thing: the models are rather underdetailed. The hands have no separate fingers, there are no toes if barefoot, the faces are very basic if visible at all, etc."
Regardless, I think we're getting a bit into the weeds here. Why don't we make the project mission statement something a little less rigid:
What to upscale?
Whatever interests you! I'd prioritize graphics that haven't already been enhanced by others (looking for gaps in DREAM for example) and also prioritize graphics that can be batched. Try to get the most bang for your buck when it comes to your time and effort. For example, I'd like to upscale a lot of the textures, not because they haven't already been redone by DREAM, but because it interests me to enhance them closer to the vanilla style and I can bang them out fairly quickly.
Upscale to what extent?
Until it looks good to your eye! As long as it looks better than the original graphic and you're proud of the result, then it's good enough. In this sense, it may just be best to decentralize the decision and let the individual contributor be the judge of what looks best. If someone doesn't like it, they can roll up their sleeves and try to do better themselves.
Bottom line: Let's just have fun and apply our skills as best we can. This is a hobby project after all, so spend your time and effort wisely. And don't worry: the results couldn't possibly be any more disjointed than the originals.
On the topic of batching, I've got my GIMP script working to the point where it correctly blends the ESRGAN, SFTGAN, and Gigapixel results in predetermined proportions using GIMP 2.10's new "Merge" layer mode. For example, the Giant results looked best with 60% Gigapixel, 30% ESRGAN, and 10% SFTGAN merged together. The script then applies Phredreeke's mask layer to restore the alpha channel and saves the result in both GIMP's native format and PNG. While it does all these steps autonomously, I still have to manually advance to the next image after each upscale... so it's not really batching yet.
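For what it's worth, a three-way merge like 60/30/10 is numerically just a weighted average, and the per-layer opacities needed to reproduce it with normally composited layers can be derived from the target weights. A minimal pure-Python sketch (the function names are illustrative, not the actual GIMP script):

```python
def blend(values, weights):
    """Weighted average of per-source pixel values.
    `weights` should sum to 1 (e.g. 0.6 Gigapixel, 0.3 ESRGAN, 0.1 SFTGAN)."""
    return sum(v * w for v, w in zip(values, weights))

def opacities_from_weights(weights):
    """Convert bottom-to-top blend weights into layer opacities for a
    normal-mode layer stack, where each layer composites as
    result = (1 - o) * below + o * layer.
    The bottom layer stays at 100%, so its opacity is omitted."""
    opacities = []
    above = 0.0  # total weight already claimed by layers higher in the stack
    for w in reversed(weights[1:]):  # walk top layer first, skip the base
        opacities.append(w / (1.0 - above))
        above += w
    return list(reversed(opacities))  # returned bottom-to-top, excluding base
```

With weights `[0.6, 0.3, 0.1]` (Gigapixel base, ESRGAN, SFTGAN on top), this gives an ESRGAN layer opacity of about 33.3% and an SFTGAN layer opacity of 10%, which composites to the same result as the direct weighted average.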
It turns out that, in order to batch properly, I have to rewrite my script from Python into Scheme (GIMP's Script-Fu) so I can take advantage of filename globbing. It shouldn't be hard; I just have to carve out the time to do it.
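For comparison, the batching loop itself can be sketched in plain Python with the standard library's `glob` module; the `process` callback and the pattern here are placeholders, not the actual upscaling script:

```python
import glob

def batch_process(pattern, process):
    """Run `process` (a hypothetical per-image function) on every file
    matching the glob `pattern`, in a stable sorted order."""
    paths = sorted(glob.glob(pattern))
    for path in paths:
        process(path)
    return paths  # handy for reporting how many images were handled
```

Something like `batch_process("TEXTURE.*.png", upscale_one)` would then walk the whole set; on the Script-Fu side, I believe the rough equivalent is GIMP's `file-glob` procedure.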