# Digital Myths



## lcaillo (May 2, 2006)

I was looking to see what was new at the NAB (National Association of Broadcasters) convention and ran across this article from Sept '08 that I thought was good info. For those of you who don't know much about the broadcast side of the industry, NAB is where many of the newest technologies show up long before we see them in the consumer world. I find it interesting to keep up with what is going on there because it affects what we see. For instance, the proliferation of lower-cost HD cameras will greatly impact local and independent production in the next few years as many convert to HD, older NTSC gear ages out, and the demand for HD increases. Schubin, the author of this article, is always an interesting read when trying to get a perspective on what is going on. I suggest looking up his article on NAB when it comes out soon.

I think the take-home points with this article are that the lines between analog and digital are not as well defined as many would suggest, and that all technologies have their caveats, uses, and advantages. Like I always say, the correct answer is often, "it depends."

This is from Videography Magazine

http://www.videography.com/article/60594

*Ten Myths About Digital*

*By Mark Schubin, September 16, 2008*

We live in the digital age. Or do we? What does "digital" mean?

The original Oxford English Dictionary (OED) doesn't seem to help. It notes the word's origin is the Latin digitalis ("of or belonging to the finger"), but, of six definitions, all but one relate to fingers or toes, and that one refers to a key on a musical instrument.

A much later OED supplement finally offered a definition that might be recognizably related to the technology of videography: "Of, pertaining to or using digits: specifically applied to a computer which operates on data in the form of digits or similar discrete elements." The earliest citation of its use is in a 1938 U.S. patent.

For decades, that operation on data in the form of digits or similar discrete elements seemed to be the primary definition of "digital." But is it today? The word seems to have taken on characteristics in modern usage that such operation lacks. Consider these ten myths about digital.

*1. On Feb. 18 of next year, broadcasters will have to start transmitting HDTV.*

There's a massive consumer education program related to that date, but neither mandatory HDTV nor the start of digital broadcasting is involved. U.S. full-power television broadcasters were required to start transmitting digitally long ago. The first digital terrestrial television broadcast (dTTb or just DTT) stations conforming to the U.S. transmission standard went on the air in mid-1996. Other stations had to join them by 1999, 2002, or early 2003, depending on market, network affiliation, and commercial status.

Not all U.S. full-power TV stations made the deadlines, but they are long past, and low-power stations have none. What is likely to happen on Feb. 18 of next year, therefore, is not the start of any DTT broadcasts but the cessation of analog full-power television broadcasts in the United States.

The reason to say it "is likely to" rather than "will" happen is that, although Feb. 18 is in a law, it's not the first date so designated. Analog television broadcasting in the U.S. was originally supposed to cease at the end of 2006. Congress changed its mind, and that's an option for next year's date, too.

It's better to describe the full-power television broadcasting that will remain on the air as DTT rather than DTV because digital television is an ambiguous term. It could easily include digital cable TV, digital satellite transmissions (DirecTV and DISH were both digital from Day One), digital programming delivered by phone company, DVDs, Internet video, or even shooting with a so-called digital camcorder (more on that later).

Finally, there's that misconception that DTT must be HDTV. It's true that, when the Federal Communications Commission (FCC) began its inquiry into advanced television broadcasting in 1987, the goal was the transmission of HDTV. Ten years later, however, the goal was shifted to digital transmission, not HDTV.

The FCC could hardly have been more explicit. The third paragraph of the second page of FCC report MM 97-8, issued on April 3, 1997, says, "The Commission will not require broadcasters to air 'high definition programming.'" In case there was still any doubt, the FCC order put this language on picture definition into Title 47 of the U.S. Code of Federal Regulations, Part 73.624, Digital Television Broadcast Stations, paragraph (a): "The DTV program service provided pursuant to this paragraph must be at least comparable in resolution to the analog television station programming transmitted to viewers on the analog channel."

*2. Digital is another word for binary.*

It's certainly true that most digital circuitry operates on binary data (ones and zeroes, ons and offs, highs and lows, etc.), and it's also true that all binary data are digital. But digital data streams are not necessarily binary.

The U.S. DTT system uses a form of modulation referred to as 8-VSB. "VSB" stands for vestigial sideband, a version of the way analog television has been broadcast throughout the world. More significant is the "8." It indicates that there are eight possible levels for each transmitted data symbol.

Binary transmission is considered tremendously robust. A receiver can more easily tell a "high" from a "low" than a "somewhat high" from a "somewhat higher." But a single stream of digital television programming can require so many bits per second that fitting it into a single broadcast or cable channel is difficult using binary transmission. So, more complex symbols than simply "high" and "low" are used.

Digital cable television typically uses forms of quadrature amplitude modulation (QAM). The version called 256-QAM offers a "constellation" of 256 different combinations of level and "phase," and that's not the limit; 1024-QAM systems have been developed.

Cable can get away with such complex symbols because it has a well-controlled environment. A shielded cable delivers the data stream directly to a receiver. For the noisy broadcast environment, 8-VSB was chosen instead. And error correction is used to assist reception.
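The bits-per-symbol tradeoff above can be sketched in a few lines. This is a minimal illustration, not real 8-VSB: actual broadcast modulation adds trellis coding, pilot signals, and sideband filtering, none of which appears here. The point is simply that an 8-level symbol carries three bits where a binary symbol carries one, so the same symbol rate moves three times the data.

```python
# Sketch: packing a bit stream into 8-level (3 bits/symbol) symbols,
# loosely modeled on 8-VSB's eight amplitude levels. Illustrative only;
# real 8-VSB layers coding and filtering on top of this idea.

def bits_to_symbols(bits, bits_per_symbol=3):
    """Group a bit list into integers 0 .. 2**bits_per_symbol - 1."""
    symbols = []
    for i in range(0, len(bits), bits_per_symbol):
        value = 0
        for b in bits[i:i + bits_per_symbol]:
            value = (value << 1) | b
        symbols.append(value)
    return symbols

bits = [1, 0, 1, 0, 1, 1, 0, 0, 1]
print(bits_to_symbols(bits))  # [5, 3, 1] -- nine bits in three symbols
```

With `bits_per_symbol=8` the same routine models a 256-QAM-style constellation index: more data per symbol, but with the levels packed closer together and therefore harder to tell apart in noise.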

*3. Digital is always perfectly reproducible.*

The benefits of binary transmission and recording are sometimes illustrated using the old children's game of "telephone," wherein a message is whispered into one player's ear. That player whispers it into another player's ear, and the process continues until the last player reports aloud what is likely to be a very garbled version of the original. Analog transmission works the same way. It picks up noise, reflections, and other degradations en route from transmitter to receiver.

Just as the words "high" and "low" are likely to survive a game of "telephone," so, too, are actual binary signals likely to be decipherable by receivers--likely, but not guaranteed. Digital data streams are actually analog transmissions of symbols.
If the symbols are sufficiently degraded, errors will occur. Some can be corrected; others can't. The result, depending on the errors and the system, can be loss of resolution, visible blocks or edge bumps, frozen pictures, or nothing whatsoever. In many cases, an analog system might still deliver a viewable picture where a digital system can't.
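The "some can be corrected; others can't" distinction can be shown with the simplest possible error-correcting code, a 3x repetition code. This is a toy for illustration; real DTT systems use far stronger codes (Reed-Solomon, trellis coding), but the principle is the same: enough redundancy corrects a few symbol errors, and too many errors overwhelm it.

```python
# Sketch: 3x repetition coding. One flipped bit per group is corrected
# by majority vote; two flips in the same group decode incorrectly.

def encode(bits):
    """Repeat each bit three times."""
    return [b for b in bits for _ in range(3)]

def decode(coded):
    """Majority vote over each group of three received bits."""
    out = []
    for i in range(0, len(coded), 3):
        group = coded[i:i + 3]
        out.append(1 if sum(group) >= 2 else 0)
    return out

sent = encode([1, 0])            # [1, 1, 1, 0, 0, 0]
one_error  = [1, 0, 1, 0, 0, 0]  # one flip in the first group
two_errors = [0, 0, 1, 0, 0, 0]  # two flips in the first group
print(decode(one_error))   # [1, 0] -- corrected
print(decode(two_errors))  # [0, 0] -- uncorrectable, decodes wrong
```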

*4. Digital is better than analog.*

Even ignoring the transmission or recording errors noted above, there are limitations to digital quality based on sampling rate, bit depth (number of bits per sample), and bit-rate-reduction ("compression") algorithm. Frequencies (video fineness of detail or audio pitch or timbre) slightly lower than half the sampling rate can be accurately recovered from a digital receiver. Bit depth helps determine the signal-to-noise ratio. And compression algorithms determine what data may be safely discarded, but tend to fail in high-detail scenes with major changes between frames.
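The half-the-sampling-rate limit can be demonstrated numerically. The sketch below uses an audio-style example (a 48 kHz sample rate is an assumption for illustration; the same math governs spatial detail in video): a tone above half the sample rate produces exactly the same samples as a lower-frequency "alias," so the original frequency is unrecoverable once sampled.

```python
# Sketch: the Nyquist limit. At fs = 48 kHz, anything above fs/2 = 24 kHz
# aliases. A 30 kHz tone yields the same samples as an 18 kHz tone
# (48 - 30 = 18), phase-inverted -- after sampling, they are identical.
import math

fs = 48_000  # sample rate in Hz (illustrative choice)

def sample(freq, n=8):
    """First n samples of a unit sine tone at the given frequency."""
    return [math.sin(2 * math.pi * freq * i / fs) for i in range(n)]

above = sample(30_000)  # above fs/2
alias = sample(18_000)  # its alias, below fs/2
print(all(abs(a + b) < 1e-9 for a, b in zip(above, alias)))  # True
```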

Analog systems, on the other hand, can be "tweaked" to improve their quality. The earliest beyond-TV-quality electronic recorders for cinematography were simply improved videotape recorders. And the earliest HDTV recorders were analog, too.

*5. Digital is noise free.*

The first step in digitizing an analog signal is sampling it at moments in time; the second is quantizing, assigning each sample a numerical level. Unfortunately, analog samples are unlikely to fall exactly on a particular digital quantum. Instead of being 100 or 101, they might be 100.2 or 100.65 or 100.37916.

The difference between the actual level and the assigned quantum is an error. All digitization of analog signals introduces error. The more bits, the smaller the error, but the error remains.

When the digital signal is converted back to analog for the audience, the error changes. If the error is correlated to the signal, it is perceived as distortion, an aural change of tonal character or false contour lines in an image. If it's not correlated, it's simply noise.

Audiences are more likely to object to distortion than to noise. And the best way to ensure that errors are uncorrelated is to have noise in the signal, ideally at a level of one half of the level between quanta, and to make sure that the noise passes all the way through the digital processing.
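Quantization error and dither can both be seen in a few lines. This is a minimal sketch assuming a coarse 3-bit (8-level) quantizer so the staircase is obvious; the `STEP` size and the uniform dither amplitude are illustrative choices, not a production dithering scheme.

```python
# Sketch: quantization error and dither. A slow ramp quantized to a
# coarse grid collapses onto a few discrete levels (contouring); adding
# noise before quantizing trades that signal-correlated error for noise.
import random

random.seed(1)
STEP = 1 / 8  # quantum size for a 3-bit (8-level) quantizer

def quantize(x):
    """Snap a value to the nearest multiple of STEP."""
    return round(x / STEP) * STEP

signal = [i / 100 for i in range(100)]  # slow 0..0.99 ramp
plain = [quantize(s) for s in signal]
dithered = [quantize(s + random.uniform(-STEP / 2, STEP / 2))
            for s in signal]

# The undithered ramp's error is locked to the signal -- the 100 input
# values collapse onto just a handful of output levels (the staircase).
# The dithered version spreads that error out as uncorrelated noise.
print(len(set(plain)))  # 9 -- only the grid levels survive
```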

Recently, Thomson has shown that audiences prefer the look of pictures with film-like graininess, rather than free of it. They've come up with a system to add grain to digital signals.

*6. Digital is new.*

If we're living in a digital age of videography, it's not a brave new world. Never mind the 1938 patent. Digital character generators have been used in videography for more than 40 years. Digital timebase correctors are almost as old, and so are digital frame synchronizers and digital video effects, such as picture shrinking and repositioning (keying is an even older digital effect).

Imagine a nonlinear video editing system so advanced that it uses not only a computer and removable hard-disk packs but even a no-ink stylus for marking edit decisions. You've just imagined the 1971 CMX 600.

*7. Digital is more flexible.*

Given that a computer can assign any color to any picture element in any frame of a digital-video sequence, ultimately digital can do anything. But that's an awful lot of work.

When video engineers wanted to create the look of a dream sequence in the old days of tube-based cameras, they just injected an audio tone into the scanning circuitry to make the pictures go wavy. And those old commercials showing people getting bloated when they had indigestion were created using analog computer graphics.

In simple form, a video camera shot a tube-based image, the scanning of which could be easily controlled. Increase the scanning levels, and the picture expands; decrease them, and it shrinks.

Tube-based cameras could also provide a drunk's-eye view of the world through intentional misregistration of the individual color images. They could also provide psychedelic effects by adjusting the electron-beam strength in the imaging tubes so it wouldn't quite "erase" what it had just scanned. And then there are so-called recorder "stunt" modes.

Analog videotape recorders introduced the option of variable-speed playback. Heads simply read different portions of the picture from different tracks. Solid-state memories, unfortunately, have no equivalent functionality.

*8. Bits are bits.*

The nice thing about data streams is that they can be recorded on or transmitted by anything that can handle their rate. But that doesn't mean all bits are equal.

An error in the least-significant bit of a digital audio signal might be imperceptible; an error in the most-significant bit can introduce a very loud pop. An error in the highest-frequency coefficient of a bit-rate-reduction block might fly by unnoticed; an error in a header of a sequence could make it unwatchable.
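The audio half of that asymmetry is easy to quantify. A minimal sketch, assuming 16-bit unsigned sample values for simplicity: the same single-bit fault produces an error of 1 part in 65,536 at one end of the word and half of full scale at the other.

```python
# Sketch: not all bits are equal. Flipping the least- vs the
# most-significant bit of a 16-bit audio sample -- each a single-bit
# fault -- produces errors five orders of magnitude apart.

def flip_bit(sample, bit):
    """Flip one bit of a 16-bit unsigned sample value."""
    return sample ^ (1 << bit)

sample = 1000                # a quiet 16-bit sample
lsb = flip_bit(sample, 0)    # LSB error: inaudible
msb = flip_bit(sample, 15)   # MSB error: a very loud pop

print(abs(lsb - sample))  # 1
print(abs(msb - sample))  # 32768
```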

*9. Old media are analog.*

Writing about the new meaning of "avatar" in the language column of The New York Times Magazine on Aug. 10, Aaron Britt described print media as "woefully analog." Perhaps a new meaning of "analog" is "dated" or "obsolete." But print media effectively meet the OED definition of digital, each character on a page being a "discrete element."

Similarly, photographic film might be considered digital, because the photosensitive grains in the emulsion can be only exposed or not, a binary condition. Conversely, so-called digital cameras and camcorders use analog image sensors.

*10. Digital will change everything.*

Nothing changes everything. We don't yet commute by jet pack or even Segway, and, in this long-lasting digital age, in addition to those analog image sensors, we still use lenses, tripods, and many other items that would be recognizable to videographers of yore.

The digital revolution has been, and should continue to be, exciting, but videography is still about the craft of videographers, and their eyes, ears, and, of course, digits.


----------



## Sonnie (Apr 11, 2006)

Good article... :T


----------



## thirsty ear (Mar 24, 2009)

Good read

I have had digital TV in my area for a little while now and I like the picture quality. However, living out of town, the signal is harder to bring in. With analog I could always get some picture and sound; not great, but it was there. With the digital channels and my digital tuner it is either on or off. This could be a problem for many viewers.


----------

