The 6502 microprocessor only needs to use its memory bus for half of each cycle. Many computers of the late 1970s and early 1980s took advantage of this by constructing their memory system so it would connect the 6502 to the memory during one half of the cycle, and connect the video circuitry during the other half. It's interesting to note that the Apple II, VIC-20, and Commodore 64 each took a different approach to video timing.

The Apple II fetched one byte of video memory per CPU cycle, which determined which character should be displayed. Characters were eight scan lines high, so each byte of character data would get fetched eight times per frame, once on each of eight consecutive lines. Six bits of the fetched character byte would then be used, along with the bottom 3 bits of a scan-line counter, as an address into a 512x5-bit ROM, while the top two bits of the fetched character byte selected normal, inverse, or blinking mode.

The VIC-20 used wider characters, and fetched a character every other CPU cycle. On the cycle after each character fetch, it would take the 8 bits of character data along with 3 bits of scan-line count, add that value to a value programmed in a register, fetch a byte from that address, and display it as eight consecutive pixels. This approach meant that character shapes could be stored in RAM rather than ROM, and reprogrammed as convenient (the character-memory address could also be set to ROM if desired, to save 2K of RAM).

The Commodore 64 needed to fetch two bytes per character like the VIC-20, but needed to output characters to the screen twice as fast (like the Apple II). To balance these needs, while it's showing the first scan line of each row of text, it alternates between fetching characters and fetching their shapes, during cycles in which the CPU isn't allowed to do anything. As it is fetching the character data, however, the video chip also copies it to a 40-byte buffer, so on the remaining scan lines of the row only the shape fetches are needed.
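The VIC-20-style shape lookup described above can be sketched in a few lines of Python. This is only an illustration of the address arithmetic, not hardware-accurate emulation; the function names and the example memory contents are my own assumptions.

```python
# Sketch of a VIC-20-style character shape fetch (illustrative only).
# The chip forms an address from the 8-bit character code, the bottom
# 3 bits of the scan-line counter, and a programmed base register,
# then fetches one byte of shape data, which becomes eight pixels.

def shape_address(char_base: int, char_code: int, scan_line: int) -> int:
    """char_base: value programmed in the register (start of character memory).
    char_code: 8-bit character fetched from screen memory.
    scan_line: bottom 3 bits of the scan-line counter (0-7)."""
    return char_base + char_code * 8 + (scan_line & 0x07)

def shape_pixels(memory: bytes, char_base: int, char_code: int, scan_line: int) -> list[int]:
    """Return the eight pixels (most significant bit first) for one
    scan line of one character cell."""
    byte = memory[shape_address(char_base, char_code, scan_line)]
    return [(byte >> bit) & 1 for bit in range(7, -1, -1)]
```

Because the base address is just an added register value, pointing it at RAM instead of ROM is what makes redefinable character sets possible.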
Yes, that is basically how video output was done in those days. However, there are some differences from a modern video card. An area of memory between 1 and 8 kB was reserved as the video buffer; the CPU calculated the output and saved it in RAM. For text mode, only the ASCII code of each character was stored, byte for byte, in memory; that way a screen of 40x25 characters could be saved in just 1 kB. The video circuit had a small ROM containing the 5x7 dot matrix of each character.

Since RAM was shared, the video circuit used DMA or some other method to interrupt the CPU for RAM access. The video circuit almost always had higher priority, since there were no pipelines to buffer the data and the pixel output of the video signal was time-critical.

There were special chips like the Intel 8275 that integrated everything (access to the buffer memory, generation of the pixel clock and sync for NTSC monitors, access to the character dot-matrix ROM, and so on), but sometimes the whole video logic was built from TTL logic ICs (example: Apple II). Usually composite video with NTSC or PAL color encoding was used, so home computers could be connected to a television; there were no high-resolution interfaces like HDMI or VGA.
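The text-mode scheme described above can be sketched as follows. This is a simplified model under assumed sizes (an 8x8 cell holding the 5x7 glyph plus padding); real character ROMs and timing differed per machine, and the function and variable names are illustrative.

```python
# Sketch of a 40x25 text-mode screen: the video buffer holds one
# character code per cell, and a small "character ROM" supplies the
# dot matrix that the video circuit serializes into pixels.

COLS, ROWS = 40, 25
CHAR_W, CHAR_H = 8, 8  # assumed cell size (5x7 glyph plus padding)

def render_scan_line(screen: bytes, char_rom: bytes, raster: int) -> list[int]:
    """Produce one 320-pixel scan line the way a video circuit would:
    fetch the code for each cell from the buffer, then fetch that
    character's shape byte for the current line within the row."""
    row, line = divmod(raster, CHAR_H)
    pixels = []
    for col in range(COLS):
        code = screen[row * COLS + col]          # fetch from video buffer
        shape = char_rom[code * CHAR_H + line]   # fetch from character ROM
        pixels.extend((shape >> bit) & 1 for bit in range(7, -1, -1))
    return pixels
```

Note that the whole screen fits in COLS * ROWS = 1000 bytes, which is the "just 1 kB" figure above: the pixels themselves are never stored, only regenerated from codes and ROM on every frame.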