A regular PAL tv set can be thought of as a low-resolution, fixed-frequency monitor. The PAL video characteristics are usually as follows:
|Number of lines per picture (frame)||625|
|Field frequency, nominal value (fields/s)||50|
|Line frequency (Hz)||15625|
|Nominal line period (µs)||64|
|Line-blanking interval (µs)||12|
|"Active" scanlines per field||287.5|
So, basically there are 625 scanlines per video frame, divided into two 312.5-line interlaced fields. Each scanline has a duration of 64 µs.
A new 312.5-scanline field is drawn every 1/50 of a second. Since drawing each scanline takes 64 µs (that is, 0.000064 seconds), drawing two fields (a full video frame) takes 2 × 312.5 × 64 µs, which equals 0.04 seconds (= 1/25 of a second). On the other hand, there are 625 × 25 scanlines per second, which equals 15625 (the line frequency).
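As a sanity check, the arithmetic above can be re-derived with a few lines of Python. The two constants come straight from the table; this is just a sketch of the derivation, not a video standard reference:

```python
# PAL timing figures, re-derived from the two basic constants above.
LINES_PER_FRAME = 625        # scanlines per full (interlaced) frame
LINE_PERIOD_US = 64          # nominal duration of one scanline, in µs

frame_time_us = LINES_PER_FRAME * LINE_PERIOD_US   # 40000 µs = 0.04 s
frame_rate_hz = 1e6 / frame_time_us                # 25 frames per second
field_rate_hz = 2 * frame_rate_hz                  # 50 fields per second
line_freq_hz = LINES_PER_FRAME * frame_rate_hz     # 15625 lines per second

print(frame_rate_hz, field_rate_hz, line_freq_hz)  # 25.0 50.0 15625.0
```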
Now we have derived and double-checked almost all important numbers in the above table, except for line-blanking interval and active scanlines. These two values define the part of the signal that is considered "active", i.e. that carries the actual image information. But what do these numbers mean?
Even though there are 625 scanlines in a full video frame, only part of them carry the actual image: namely, lines 23 through line 310 in the first field, and lines 336 through 623 in the second field.
But wait, it is a bit more complicated than that: the analog tv standards dictate that only the second half of line 23 is used for active image, and, likewise, only the first half of line 623 is used for active image.
In total, this gives us (nominally) 287.5 + 287.5 = 575 scanlines' worth of "active" image data, even though the image (as a whole) spans 576 scanlines (since it begins in the middle of line 23 and ends abruptly in the middle of line 623.)
As it would be quite awkward to handle these half lines on a computer, computer people usually process the image as if the first and last lines were used in full. Hence, for computer image processing purposes it can be thought that there are 288 active scanlines in the first field and 288 active scanlines in the second field, giving us a total of 576 active lines (a familiar number for anyone who has ever captured "full frame" PAL video signals.) (Once you output your 576-line computer-generated video images, the video encoder chip should automatically snip off the second half of the first line, as well as the first half of the last line.)
OK then, why do we have only 287.5 scanlines worth of "active" image data per field if each field has a total of 312.5 scanlines? Where do the missing 25 scanlines go?
These 25 scanlines (25 × 64 µs = 0.0016 seconds) are reserved for the vertical retrace. It takes approximately 0.0016 seconds for a CRT-based tv to move the electron beam from the bottom of the screen back to the top of the screen.
The electron beam is shut off (or blanked, as the term goes) while it moves back up, so nothing gets drawn on the screen during this time. You can think of these 25 scanlines merely as 64 µs timing units which are used to create a necessary delay in the signal while waiting for the electron beam to move back to the top of the screen, rather than as actual scanlines that get drawn on the screen.
Now we know that we have 312.5-line fields, of which 287.5 lines (or in computer terms, 288 lines) are used for image. But is each of these "active" lines completely stuffed up with "active" image data? Or, to put it in another way, do we have 64 µs worth of active signal on each active scanline?
No, that is not the case.
Again, we need to consider the physical restrictions of the electron beam that draws the image on the CRT screen. The beam sweeps the scanlines on the screen from the left edge to the right edge. After having drawn one scanline, it needs some slack time to move back to the left edge (while blanked), so that it can start drawing another line. This slack time is called the (horizontal) line-blanking interval, or just horizontal blanking.
The table at the top of this page tells us that the duration of the line-blanking interval is 12 µs. Now we can easily calculate that we have 64 µs - 12 µs = 52 µs time for displaying the active signal on each active scanline.
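The per-field and per-line budgets can be double-checked the same way. Again just a sketch; the numbers are the ones quoted in the table above:

```python
# How each 312.5-line field and each 64 µs scanline is budgeted.
LINES_PER_FIELD = 312.5
ACTIVE_LINES_PER_FIELD = 287.5
LINE_PERIOD_US = 64
LINE_BLANKING_US = 12

retrace_lines = LINES_PER_FIELD - ACTIVE_LINES_PER_FIELD  # 25 lines reserved
retrace_time_us = retrace_lines * LINE_PERIOD_US          # 1600 µs = 0.0016 s
active_line_us = LINE_PERIOD_US - LINE_BLANKING_US        # 52 µs of image per line

print(retrace_lines, retrace_time_us, active_line_us)  # 25.0 1600.0 52
```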
The vertical resolution of video images quite naturally lends itself to a pixel-based representation because there is a discrete number of "active" scanlines. We can just map these discrete, active scanlines directly to horizontal pixel rows.
There are 287 (and a half) scanlines worth of active picture in each field. Since the adjacent fields will get interlaced together when drawn on the screen, we have a total of 574 complete scanlines and two half scanlines in each video "frame".
As mentioned earlier, computer people do not want to think in terms of half scanlines. In practical situations, we will capture full scanlines instead (even if the other half does not contain legal active video signal), this way getting 288 of them from each field, and 576 in total from a complete video frame (two fields.)
That was easy. Now we know we will always get 576 horizontal pixel rows when digitizing a full-frame PAL video image. But what about the horizontal resolution? How many pixels are there in each row?
We already know that there is 52 µs time for the active image on each scanline, but how do we divide that 52 µs time slice into discrete pixels?
The harsh reality of analog video is that there is no fixed, defined, horizontal resolution. There is just a squiggly analog waveform which lasts 52 µs. If we want to turn it into pixels, we need to determine a sampling rate, and sample the signal at that rate – over the whole 52 µs active length.
Now, there is no single "correct" sampling rate. On the contrary, we could decide to use just about any sampling rate we want to. For example, we could decide that we want to collapse all the information on a single scanline into only ten pixels. Since there is 52 µs worth of squiggly analog waveform for us to sample, dividing that time into ten segments will give us 5.2 µs per sample, which in turn corresponds to a 1 / 5.2 µs ≈ 192.308 kHz sampling rate.
Of course, sampling only 10 pixels from each scanline is not very useful – that would be quite a crude horizontal resolution for video images. We could just as well sample 200 pixels (which would give us a 3.85 MHz sampling rate), or 223 pixels (4.29 MHz sampling rate), or 555 pixels (10.67 MHz sampling rate), or 1000 pixels (19.23 MHz sampling rate), or whatever.
However, there are two important standard sampling rates which get used a lot:
13.5 MHz sampling will give you 702 "active" pixels for each scanline (52 µs × 13.5 MHz = 702). In real-life applications, video capture devices often sample 704 or 720 pixels from the signal instead – in other words, a bit wider area than 52 µs, extending a bit into the horizontal blanking range. (This is where the common DVD/DVB/DV resolutions 720×576 and 704×576 come from.)
14.75 MHz sampling will give you 767 "active" pixels for each scanline (52 µs × 14.75 MHz = 767). In real-life applications, one pixel more is usually sampled in order to get nice, round numbers (768×576). Therefore, industry-standard "square" pixels are not exactly square, but reasonably close.
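All of these figures come from the same simple conversion between the 52 µs active time and a sample count. A small Python sketch (the function names are mine, chosen for illustration):

```python
ACTIVE_LINE_US = 52  # duration of the active part of one scanline, in µs

def pixels_per_line(rate_hz):
    """Number of samples ("pixels") taken during the 52 µs active part."""
    return rate_hz * ACTIVE_LINE_US / 1e6

def sampling_rate_hz(pixels):
    """Sampling rate needed to fit the given pixel count into 52 µs."""
    return pixels * 1e6 / ACTIVE_LINE_US

print(sampling_rate_hz(10))       # ≈ 192307.7 Hz, i.e. ≈ 192.308 kHz
print(pixels_per_line(13.5e6))    # 702.0 pixels at 13.5 MHz
print(pixels_per_line(14.75e6))   # 767.0 pixels at 14.75 MHz
```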
If you are using a commercial video capture device, you do not usually have much say on how it samples the signal. Most video capture devices will use one of the "standard" sampling rates mentioned above, and some of them allow using both, but the sampling rate is not usually freely adjustable.
However, if you are generating a video signal on a computer, you are not restricted to these sampling rates (pixel clocks) but can use whatever rate (pixel clock) you wish or fits your design.
For example, the designers of the Commodore Amiga line of computers chose to use a pixel clock of 70 ns for some of the graphics modes. (70 ns = 70 × 10⁻⁹ s; 1 / (70 × 10⁻⁹ s) ≈ 14.286 MHz sampling rate.)
The PAL version of the Commodore 64 uses a pixel clock of 7.88 MHz.
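Pixel clocks are sometimes quoted as a period (in nanoseconds) and sometimes as a rate (in MHz); converting between the two is just taking a reciprocal. A quick sketch (the 7.88 MHz figure for the PAL C64 is the one quoted above; exact values vary slightly between sources):

```python
def period_ns_to_rate_mhz(period_ns):
    """Pixel clock period in nanoseconds -> sampling rate in MHz."""
    return 1000.0 / period_ns

def rate_mhz_to_period_ns(rate_mhz):
    """Sampling rate in MHz -> pixel clock period in nanoseconds."""
    return 1000.0 / rate_mhz

print(period_ns_to_rate_mhz(70))    # ≈ 14.286 MHz (Amiga, 70 ns)
print(rate_mhz_to_period_ns(7.88))  # ≈ 126.9 ns (PAL C64, 7.88 MHz)
```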
Both the Amiga and the Commodore 64 are restricted in their pixel clocks: there are some even multiples you can use, but the pixel clock is not freely adjustable. However, modern VGA cards and "tv out" style video encoders usually offer more fine-grained control over the output pixel clock, so there is a wider range of possible modes you can generate.
Of course, if the pixel clock is anything other than approximately 14.75 MHz, you will get pixels that are not square on the tv screen (but squashed or elongated in form.) Even though modern PCs are usually adjusted to use square-pixel modes, many video devices, game consoles and older computers prefer non-square ones – by design. (This just means you will have to take the shape of the pixels into account when designing graphics for non-square-pixel devices.)
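If we take the approximation above that 14.75 MHz sampling yields (near-)square pixels, the pixel aspect ratio of any other pixel clock follows directly: a slower clock makes each pixel occupy more time on the scanline, hence a wider pixel. A rough sketch (the helper name is mine, and this ignores the finer points of real pixel-aspect standards):

```python
SQUARE_PIXEL_RATE_HZ = 14.75e6   # "square" pixel sampling rate, as above

def pixel_aspect_ratio(pixel_clock_hz):
    """Approximate width:height of one displayed pixel; > 1 means wide."""
    return SQUARE_PIXEL_RATE_HZ / pixel_clock_hz

print(round(pixel_aspect_ratio(13.5e6), 3))   # ≈ 1.093 (slightly wide)
print(pixel_aspect_ratio(14.75e6))            # 1.0 (square)
```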
I am glad that you asked!
The active part of the video signal is not usually fully shown on a TV screen. Instead, the very edges of the image get cut off on all sides. (There are both technical and historical reasons for this behavior.)
In any case, when generating graphics (or new video modes) that will be shown on a tv screen, one way or another, you will need to take the overscan issue into account.
There are two ways to deal with the problem:
Remember the Amiga computers I mentioned in the last section? They used tv sets as their display devices, and the pixel clock was approximately 14.286 MHz (or to put it in a more exact way, 70 ns.) Why this number? Because it allows displaying 640 pixels across a tv screen with minimal overscan safety borders.
640 pixels at a 70 ns pixel clock equals 44.8 µs. In other words, the Amiga draws 640 pixels on the scanline using 44.8 µs of the available "active" area (time). The total length (time) of the "active" area is 52 µs, but since the Amiga only uses the middle 44.8 µs of this (leaving 3.6 µs borders on each side), the picture remains visible even if the tv set overscans and cuts off the very edges of the active image area.
Similarly, the Amiga only uses 512 scanlines (from the middle) of the 576 possible, leaving 32 scanlines blank at top and bottom to compensate for the vertical overscan.
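The same border arithmetic can be written out for any mode. A sketch: `overscan_borders` is a made-up helper, and the 52 µs and 576-line "active" figures are the ones derived earlier:

```python
ACTIVE_LINE_US = 52.0   # active time per scanline, in µs
ACTIVE_LINES = 576      # active scanlines per frame (computer-style count)

def overscan_borders(pixels, clock_period_ns, lines):
    """Return (time used per line in µs, side border in µs, top/bottom border in lines)."""
    used_us = pixels * clock_period_ns / 1000
    side_border_us = (ACTIVE_LINE_US - used_us) / 2
    tb_border_lines = (ACTIVE_LINES - lines) // 2
    return used_us, side_border_us, tb_border_lines

used, side, top_bottom = overscan_borders(640, 70, 512)  # the Amiga mode
print(used, top_bottom)   # 44.8 µs used per line, 32 blank lines at top and bottom
```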
The Swedish Broadcasting Company (Sveriges Television, SVT) recommends the following as 4:3 "action safe area":
|Image width||Image height|
|46.815 µs (632 pixels @ 13.5 MHz)||the centremost 516 scanlines of 576|
This is not too far away from the above-mentioned Amiga video mode:
|Image width||Image height|
|44.800 µs (640 pixels @ 70 ns pixel clock)||the centremost 512 scanlines of 576|
The usable "always visible" safe screen area bobs about somewhere in this neighborhood, but there is no exact way to define it since some tv sets overscan more than others.
The official PAL-B/G standard only defines interlaced signals. However, with a small modification to the timings, it is possible to derive a video signal that is very close to standard PAL, but has non-interlaced fields. Even though this kind of non-interlaced mode does not follow the official tv standards to the letter, it can be displayed on a regular tv set.
What does this mean?
In an interlaced signal, every second field is drawn at a half-scanline offset. This creates an illusion of more vertical resolution than there actually is: the adjacent interlaced fields are seen stitched together, as if they formed a full-resolution frame.
In a non-interlaced signal, there is no half-line offset. The corresponding scanlines in adjacent fields get drawn in exactly the same places. Or, more pedantically: what were formerly known as "fields" have now become (progressive) frames.
The advantage of a non-interlaced mode is that it is flicker-free. Since all fields (and their scanlines) get drawn on the same location, there is no interlace flicker. The image is very stable and clear.
The disadvantage of a non-interlaced mode is that the perceived vertical resolution is only a half of what it seemed to be in the respective interlaced mode.
Why would anyone want to use a non-interlaced mode on a tv screen if it (seemingly) halves the vertical resolution? Isn't the resolution of a regular tv set bad enough to begin with?
8-bit and 16-bit home computers and video game consoles of the 80's regularly used non-interlaced graphics modes on domestic tv sets. This was in part because they had relatively little graphics memory and a relatively incapable graphics chip generating the signal, but also in big part because (unfiltered) computer graphics flicker like hell on an interlaced display. In order to display a stable-looking (non-flickering) interlaced picture, the image needs some heavy low-pass filtering in the vertical direction, and even then it is impossible to get rid of the flicker completely. In a non-interlaced mode, however, the image is rock-steady and clear, without any filtering or processing.
"Well then", I hear you ask, "what can you use these tiny graphics modes for? I mean, in a modern setting?"
The Amiga line of computers is a classic example of effective use of non-interlaced, tv-compatible video modes. One of the standard modes on Amiga is non-interlaced 640×256 with 70 ns pixel clock. (The pixels are of course vertically elongated so that the image looks more like 640×512 if recreated on a square-pixel screen.)
640×256 may not sound like much, but take a look at these examples:
(Example screen captures omitted here. The originals are all in 640×256 format, scaled and processed to better visualize how they actually show up on a tv screen.)
With 640×256 @ 70 ns you can have an overscan-compensated graphics mode that does not flicker at all, and which you can use to fit a clearly readable 80-by-32 character text console on the tv screen using an 8×8 pixel font.
This kind of "native" video mode (i.e. a mode where one pixel row in the framebuffer memory maps to exactly one scanline) is a much better match to the tv's capabilities than a fuzzy, interlaced, non-native 640×480 or 800×600 VGA mode that has been heavily low-pass filtered and scaled down (resampled), using the usual "tv out" methods.
This page is maintained by Jukka Aho. Last updated 11-Feb-2004.