Freak2121 wrote: ↑Sat Dec 29, 2018 12:31 pm
Well, I did all the object textures.
Did you do that with Topaz AI? (I kind of missed your previous post on that while skimming the thread for the first time.) The results look really amazing; the photohallucination method seems to provide uncannily good results.
MasonFace wrote: ↑Sat Dec 29, 2018 2:57 pm
If you get time, please do upscale the sprites. Someone else may be able to clean up the edges. Mr. Flibble may have an inventive method.
I had a very simple idea which I used when I figured out how to trick waifu2x into properly handling sprite edges:

1. Use the default black background (so that any dark pixels that were not cleaned up from the original sprites' edges will blend with the background), then convert the result back to the original 8-bit palette. (I use mtPaint for very good results in this; you just need to set the colour space to RGB from the default sRGB.)
2. Open the image in any editor of your choice and use a simple bucket fill to replace the black background with some high-contrast colour, like bright green. Just be sure not to have the bucket fill set to smooth edges/smart fill or whatever it's called.
3. You will then probably have a dark outline around each sprite, but this can be eliminated with the select-by-colour tool in GIMP: select all the bright green areas (effectively the background), grow the selection by one pixel, and delete the selected area.

It's crude, but it should work.
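For what it's worth, the bucket-fill and select-grow-delete steps can also be scripted. Here's a rough sketch using Pillow and NumPy instead of GIMP's tools; the tiny 8x8 "sprite" in it is a made-up stand-in for a real upscaled sprite sheet:

```python
# Sketch of the clean-up steps with Pillow + NumPy instead of GIMP.
from PIL import Image, ImageDraw
import numpy as np

# Stand-in for an upscaled sprite: black background, white sprite square.
img = Image.new("RGB", (8, 8), (0, 0, 0))
for y in range(2, 6):
    for x in range(2, 6):
        img.putpixel((x, y), (255, 255, 255))

# Step 2: bucket-fill the connected black background with bright green.
# thresh=0 matches exact colours only, so nothing gets smoothed.
ImageDraw.floodfill(img, xy=(0, 0), value=(0, 255, 0), thresh=0)

# Step 3: "select by colour" on the green background...
arr = np.array(img)
bg = np.all(arr == (0, 255, 0), axis=-1)

# ...grow the selection by one pixel (plain array shifts, no SciPy)...
grown = bg.copy()
grown[1:, :] |= bg[:-1, :]
grown[:-1, :] |= bg[1:, :]
grown[:, 1:] |= bg[:, :-1]
grown[:, :-1] |= bg[:, 1:]

# ...and delete it, i.e. make those pixels transparent.
rgba = np.dstack([arr, np.where(grown, 0, 255).astype(np.uint8)])
out = Image.fromarray(rgba, "RGBA")
```

Growing the selection by one pixel is what eats the dark outline: every sprite pixel that touches the background gets deleted along with it.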
Another method is to process each image twice, on a black and on a white background, then blend the results with G'MIC (Layers > Blend [median]) so the background becomes grey. However, I have not tested this with Daggerfall sprites, and it may not work very well because of the aforementioned dark pixels that were not cleaned up when the sprites were created (this only applies to pre-rendered sprites).
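The blend itself is simple enough to reproduce in NumPy if you'd rather not go through G'MIC. A minimal sketch, using two tiny made-up "runs" of the same sprite pixel on black and on white backgrounds:

```python
# Sketch of the median-blend trick (G'MIC's Layers > Blend [median]) in NumPy.
import numpy as np
from PIL import Image

on_black = np.zeros((4, 4, 3), np.uint8)        # run 1: black background
on_white = np.full((4, 4, 3), 255, np.uint8)    # run 2: white background
on_black[1, 1] = on_white[1, 1] = (90, 60, 30)  # sprite pixel, identical in both

# With only two layers the per-pixel median is just the average, so pixels
# the two runs agree on survive and the mismatched backgrounds go grey.
stack = np.stack([on_black, on_white])
blended = np.median(stack, axis=0).astype(np.uint8)
result = Image.fromarray(blended)
```

With exactly two layers this degenerates into a plain average, which is why the black/white backgrounds land on mid-grey; with three or more runs a true median would also reject one-off artifacts.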
Also, Freak2121, have you tried "softening" the sprites as I described in my post above before processing them with Topaz AI? I wonder if this might improve the edges and/or the overall look of the images.
MasonFace wrote: ↑Sat Dec 29, 2018 4:32 pm
We only need to focus on getting samples that roughly match our desired outcome- we can simply downscale these samples to make the training inputs. The training inputs would work best if we downscale to close to the vanilla resolution and color depth.
At least for textures, I think it is quite possible to generate data sets using free textures from sources like OpenGameArt. They come in relatively large resolutions and can be scaled down to any desired size. Here's a 128x128 and 256x256 source and target pair I made from this ground texture:
I scaled the source image down to both resolutions using Sinc3 interpolation in GIMP, then converted the 128x128 one to 256 colours using mtPaint (PNN quantisation, no dithering, RGB colour space). After this I used the resulting palette to create an indexed version of the larger image.
The downside is that this one might be too noisy because it is based on a photograph. It is my understanding that back in the 90s, photo-based textures (such as those found in Doom, for example) were all touched up manually to get a proper look in-game. Maybe hand-painted textures are available as well.