A compression encoder works by identifying the useful part of a signal, known as the entropy, and sending this to the decoder. The remainder of the signal is called the redundancy because it can be worked out at the decoder from what is sent.

Video compression relies on two basic assumptions. The first is that human sensitivity to noise in the picture is highly dependent on the frequency of the noise. The second is that, even in moving pictures, there is a great deal of commonality between one picture and the next. Data can therefore be conserved both by raising the noise level where it is less visible and by sending only the difference between one picture and the next.

In a typical picture, large objects result in low spatial frequencies whereas small objects result in high spatial frequencies. Human vision detects noise at low spatial frequencies much more readily than at high frequencies; the phenomenon of large-area flicker is an example of this. Spatial frequency analysis also reveals that in many areas of the picture only a few frequencies dominate and the remainder are largely absent. For example, if the picture contains a large, plain object, high frequencies will be present only at the edges. In the body of a plain object, high spatial frequencies are absent and need not be transmitted at all.

In MPEG, two-dimensional spatial frequency analysis is performed using the Discrete Cosine Transform (DCT). An array of pixels, typically 8 × 8, is converted into an array of coefficients. The magnitude of each coefficient represents the amount of a particular spatial frequency present in the pixel block. Fig. 1 shows that in the resulting coefficient block, the coefficient in the top left corner represents the DC component, i.e. the average brightness of the pixel block. Moving to the right, the coefficients represent increasing horizontal spatial frequency; moving down, they represent increasing vertical spatial frequency.
The coefficient in the bottom right-hand corner represents the highest diagonal frequency.
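The effect of the DCT on a plain image area can be shown numerically. The sketch below is a naive, orthonormal 2-D DCT-II written directly from its definition (a real encoder, or a library routine such as scipy's dctn, would use a fast algorithm); it is an illustration of the principle, not MPEG's actual implementation. Applied to a constant-brightness 8 × 8 block, every coefficient vanishes except the DC term in the top left corner, which is why the body of a plain object needs almost no data.

```python
import numpy as np

def dct_2d(block):
    """Naive orthonormal 2-D DCT-II of an N x N pixel block."""
    n = block.shape[0]
    # Cosine basis: row u, column x -> cos((2x+1) * u * pi / (2N))
    c = np.array([[np.cos((2 * x + 1) * u * np.pi / (2 * n))
                   for x in range(n)] for u in range(n)])
    # Orthonormal scaling: sqrt(1/N) for the DC row, sqrt(2/N) otherwise
    scale = np.full(n, np.sqrt(2.0 / n))
    scale[0] = np.sqrt(1.0 / n)
    basis = c * scale[:, None]
    # Separable transform: rows first, then columns
    return basis @ block @ basis.T

# A plain (constant-brightness) 8 x 8 block of value 128
flat = np.full((8, 8), 128.0)
coeffs = dct_2d(flat)
# Only the DC coefficient survives: coeffs[0, 0] = 8 * 128 = 1024,
# and every other coefficient is (numerically) zero.
```

Feeding the same routine a block containing an edge would instead light up a row or column of higher-frequency coefficients, matching the observation that high frequencies appear only at object boundaries.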
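The second assumption above, inter-picture commonality, can be sketched just as simply. In this hypothetical example (the pictures and the one-pixel object shift are invented for illustration), the encoder sends only the difference between two successive pictures; most of that difference is zero, and the decoder recovers the new picture exactly by adding the difference to the picture it already has.

```python
import numpy as np

# Hypothetical 4 x 4 "pictures": a small bright object on a plain background.
picture_a = np.array([[10, 10, 10, 10],
                      [10, 50, 50, 10],
                      [10, 50, 50, 10],
                      [10, 10, 10, 10]])

# In the next picture the object has moved one pixel to the right.
picture_b = picture_a.copy()
picture_b[1:3, 1:3] = 10
picture_b[1:3, 2:4] = 50

residual = picture_b - picture_a        # the encoder sends only this
reconstructed = picture_a + residual    # the decoder rebuilds picture_b
```

Because the background is unchanged, the residual is mostly zeros, which compress far better than the full picture; this is the essence of sending only the difference between one picture and the next.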