superice 2 days ago
That still really isn't a reason. I think I can extrapolate where they are going with this, which will be something like 'I have a ton of 720x400 hardware CRTs sitting around that I need to use/support/deal with'. But that is never explicitly stated; you could just as easily read it as 'oh, it's neat that 80x25 matches up with the number of image lines on old CRT displays, and here is the math to show it'.
db48x 2 days ago | parent
Perhaps you don’t know that on a CRT the number of lines that can be displayed is variable. Any CRT can display 400 lines or 399 lines or 1000 lines or however many lines you need or want. On an LCD there is always a fixed number of pixels, no more and no less. You can leave some of those pixels blank if you don’t need them, but that’s about it.

720 pixels by 400 pixels is a 9:5 aspect ratio, but the monitor is a 4:3 aspect ratio display. On a CRT the result was an image made up of pixels that were taller than they were wide. 35% taller, to be specific.

To reproduce this on an LCD you need to scale the image up to 720×540 pixels, which results in every line being drawn as either one or two lines of LCD pixels. Some lines are literally double the height of others. This is super ugly! Of course you could scale it up to 1440×1080, but now you’re just scaling the lines up by a factor of 2.7 instead of 1.35. Some lines are 2 pixels tall and others are 3, which still makes some lines 50% taller than the rest. On a 4K monitor you could scale it up by a factor of 5.4 to 2880×2160, making some lines 5 pixels tall and others 6. This is certainly better, but you’ll still be able to tell the difference and it’s still ugly.

When you scale an image taken from the real world, such as one from a television program or a movie, nobody will notice the artifacts. But when you scale pixel graphics, and especially text, the artifacts spoil the whole thing.

There are two other routes you could take. You could scale the text display instead: an 80×33 text display using the 9×16 character cell. This gives you 720×528 pixels, which is close enough to the right ratio that you can scale it up by a nice integer factor and just ignore the few wasted pixels at the top and bottom of the screen. But now you’ve squashed the aspect ratio of the characters!

Ok, so you could stretch the character cell to 9×22 pixels, redrawing all of the characters by hand to approximate the original shapes. You’ll only have room for 80×24 characters in 720×528 pixels, but that’s much less disappointing than mucking about with the original font. People _grew up_ with that font. They _like_ it.

Of course neither of these options can take advantage of real VGA hardware. One of the advantages of VGA was that the CPU only had to manage a roughly 4-kilobyte text buffer (80×25 cells at 2 bytes each) while the VGA hardware composited the character data from the font and created the video signal that went to the display. It could do this in real time, meaning latency was actually lower than a whole video frame. If you emulate this on a CPU it’ll be much, much slower than that. If you farm it out to a GPU instead then it’ll be far more expensive. A modern GPU needs tens of billions of transistors to run the shader that emulates what probably took a few thousand transistors on original VGA hardware.

A completely modern take on a console would lean into the new ratios and might have a 120×30 text display and a 16×36 character cell, creating a 1920×1080 pixel display that doesn’t need any scaling on most LCD panels. Instead of trying to support the original VGA character set (CP437, as it is sometimes called) and disappointing its fans, it would support Unicode, text shaping, combining characters, BiDi text, emoji, etc. And the compositing would be done in hardware, but not in a shader on a $500 GPU. Or even a $100 GPU.
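
To make the uneven-line-height arithmetic concrete, here's a quick Python sketch (nearest-neighbour row mapping assumed; the 400-line source and the panel heights come from the numbers above, everything else is illustrative):

    from collections import Counter

    SRC_LINES = 400  # a 720x400 text-mode frame

    def line_heights(dst_lines):
        """How many panel rows each of the 400 source lines ends up occupying."""
        # Nearest-neighbour scaling: destination row r samples source line
        # floor(r * SRC_LINES / dst_lines).
        rows_per_line = Counter(r * SRC_LINES // dst_lines for r in range(dst_lines))
        return dict(sorted(Counter(rows_per_line.values()).items()))

    for dst in (540, 1080, 2160):
        print(dst, line_heights(dst))
    # 540  -> {1: 260, 2: 140}  1.35x: some lines double the height of others
    # 1080 -> {2: 120, 3: 280}  2.7x:  2-row lines next to 3-row lines
    # 2160 -> {5: 240, 6: 160}  5.4x:  better, but still visibly uneven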
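
Same for the grid geometry: columns × rows of cells of a given size, the resulting pixel dimensions, and the width:height ratio (plain arithmetic, just written out):

    from fractions import Fraction

    def grid(cols, rows, cell_w, cell_h):
        w, h = cols * cell_w, rows * cell_h
        return w, h, Fraction(w, h)

    print(grid(80, 25, 9, 16))    # (720, 400, Fraction(9, 5))    needs a 1.35x vertical stretch on 4:3
    print(grid(80, 33, 9, 16))    # (720, 528, Fraction(15, 11))  near 4:3, but squashed glyphs
    print(grid(80, 24, 9, 22))    # (720, 528, Fraction(15, 11))  redrawn 9x22 glyphs, one row short
    print(grid(120, 30, 16, 36))  # (1920, 1080, Fraction(16, 9)) native 1080p, no scaling at all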
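
And to make the compositing point concrete, here is a toy CPU-side version of what the VGA text pipeline does per frame: walk the char/attribute byte pairs in the text buffer, look each character up in the font bitmap, and emit pixels. The font bytes and buffer layout below are illustrative, not a register-level model of real VGA (no 9th column, cursor, or blink handling):

    COLS, ROWS = 80, 25
    CELL_W, CELL_H = 8, 16            # ignoring the 9th column the VGA adds per cell

    # FONT[ch] is 16 bytes, one bit per pixel, top scanline first (made-up glyph data).
    FONT = {ord('A'): bytes((0x00, 0x10, 0x38, 0x6C, 0xC6, 0xC6, 0xFE, 0xC6,
                             0xC6, 0xC6, 0xC6, 0x00, 0x00, 0x00, 0x00, 0x00))}

    def render(text_buffer, framebuffer):
        """text_buffer: ROWS*COLS*2 bytes of char/attr pairs (about 4 KB);
        framebuffer: ROWS*CELL_H rows of COLS*CELL_W pixels."""
        for row in range(ROWS):
            for col in range(COLS):
                cell = (row * COLS + col) * 2
                ch, attr = text_buffer[cell], text_buffer[cell + 1]  # attr = colours, unused here
                glyph = FONT.get(ch, bytes(CELL_H))
                for y in range(CELL_H):
                    for x in range(CELL_W):
                        on = (glyph[y] >> (7 - x)) & 1
                        framebuffer[row * CELL_H + y][col * CELL_W + x] = on

    buf = bytearray(ROWS * COLS * 2)
    buf[0] = ord('A')                                   # an 'A' in the top-left cell
    fb = [[0] * (COLS * CELL_W) for _ in range(ROWS * CELL_H)]
    render(buf, fb)

The point of the comparison above is that real VGA does this per scanline, on the fly, with a trivial amount of logic; running the same loop on a CPU or in a GPU shader works, it's just wildly out of proportion to the job.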