What video codecs are for
Codecs are used during the filming, editing, delivery, and playback of video content. Without this technology, content delivery to subscribers and video conferences would be impossible. The codec's purpose is to reduce the file size of videos. So let's dive right in and see how it works and why compression technology is so important.
Why video has to be compressed
Videos take up much more space than images or music, as they may also include an audio track (sometimes multiple), a video track, and subtitles. In addition to all that, their metadata holds service information and audio/video sync data.
A video track consists of frames, each frame is made up of pixels, and each pixel consists of three subpixels: red, green, and blue. Subpixel color data is 8 bits (1 byte). It takes 3 bytes to encode a pixel in one of the 16 million colors. A regular FullHD frame is built of over 2 million pixels, so an entire film would take up hundreds of gigabytes.
A 90-minute FullHD 24-fps video would require 750 GB of space without any audio whatsoever.
Source: Us.Infomir.store
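The arithmetic is easy to check. Here is a minimal Python sketch (assuming 1920×1080 frames, 3 bytes per pixel, and no audio track) that recomputes the figure above:

```python
# Rough size of an uncompressed 90-minute FullHD film (video track only).
WIDTH, HEIGHT = 1920, 1080   # FullHD frame
BYTES_PER_PIXEL = 3          # 1 byte each for the red, green, and blue subpixels
FPS = 24
DURATION_S = 90 * 60         # 90 minutes

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
total_bytes = frame_bytes * FPS * DURATION_S

print(f"One frame:  {frame_bytes / 2**20:.1f} MiB")   # ~5.9 MiB
print(f"Whole film: {total_bytes / 2**30:.0f} GiB")   # about 750 GiB
```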
Even more space is required to store uncompressed high-resolution or high-framerate video, such as 4K (4096×2160) or 8K (7680×4320). In its raw form, content like this is unfit for streaming.
Video compression is used to optimize storage and streaming over ordinary networks. A variety of methods is used for this: mathematical transforms, prediction, trimming of redundant data, rounding of values, and per-channel color processing.
Compression is sometimes referred to as encoding. The reverse process is called video decoding or decompression.
What video codecs are
Codecs are hardware and software tools for video encoding and decoding; the term is a contraction of COder/DECoder.
Compression can be lossless or lossy. After lossless encoding, the original data can be restored in full, but this comes at the cost of a lower compression ratio. Lossless compression is used during filming and post-production.
Lossy compression is applied when video content is delivered to client devices: TVs, media players, computers, and smartphones. The stronger the compression, the smaller the file and the lower the video quality.
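As a rough illustration of the trade-off (not a real codec pipeline), the sketch below compresses the same bytes losslessly with Python's zlib, then applies a crude "lossy" step by quantizing the values before compressing; the second result is typically smaller, but the discarded detail can no longer be restored.

```python
import zlib

# A toy "signal": 1,000 byte values with a repeating pattern.
data = bytes((i * 7) % 256 for i in range(1000))

# Lossless: after decompression the original is restored bit for bit.
packed = zlib.compress(data)
assert zlib.decompress(packed) == data

# Crude "lossy" step: round every value down to a multiple of 16 before
# compressing. The compressed result is smaller, but the discarded detail
# can never be recovered.
quantized = bytes((b // 16) * 16 for b in data)
packed_lossy = zlib.compress(quantized)

print(len(data), len(packed), len(packed_lossy))
```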
Codecs trim redundant data on two levels: within a single frame (intraframe) and across a sequence of frames (interframe).
Intraframe compression
During intraframe compression, codecs process every frame separately, much as JPEG images are compressed. The algorithm separates the frame into luminance and chrominance components, reduces the level of detail (mostly in the color information, to which the eye is less sensitive), and marks similar areas. The result is a file many times smaller with minimal loss of quality.
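To make the luminance/chrominance step concrete, here is a minimal sketch (using NumPy and BT.601 conversion coefficients, purely for illustration) that converts an RGB frame to Y'CbCr and keeps one chroma sample per 2×2 block, much like 4:2:0 subsampling discards color detail the eye barely notices:

```python
import numpy as np

def to_ycbcr(rgb):
    """Convert an (H, W, 3) uint8 RGB frame into Y', Cb, Cr planes (BT.601)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.331 * g + 0.500 * b + 128
    cr =  0.500 * r - 0.419 * g - 0.081 * b + 128
    return y, cb, cr

def subsample_420(plane):
    """Keep one chroma value per 2x2 block by averaging (4:2:0-style)."""
    h, w = plane.shape
    return plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

frame = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)  # tiny stand-in frame
y, cb, cr = to_ycbcr(frame)
cb_small, cr_small = subsample_420(cb), subsample_420(cr)

# Brightness keeps full resolution; each color plane keeps a quarter of it.
print(y.shape, cb_small.shape, cr_small.shape)  # (4, 4) (2, 2) (2, 2)
```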
Codecs pack repetitive data much like common factors are pulled out in maths: instead of writing out 20 zeroes, it's enough to record how many there are.
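That "twenty zeroes" analogy is essentially run-length encoding; a toy version in Python looks like this:

```python
from itertools import groupby

def rle_encode(values):
    """Store each run of identical values as a (value, count) pair."""
    return [(v, len(list(group))) for v, group in groupby(values)]

def rle_decode(pairs):
    """Expand the (value, count) pairs back into the original sequence."""
    return [v for v, count in pairs for _ in range(count)]

row = [0] * 20 + [255] * 3 + [0] * 9
packed = rle_encode(row)
print(packed)                     # [(0, 20), (255, 3), (0, 9)]
assert rle_decode(packed) == row  # lossless round trip
```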
Interframe compression
Often, successive frames look almost identical, so they don't have to be preserved in their entirety. Codecs remove all the repeating information from the image, leaving only the areas that differ. Motion compensation algorithms operate in a similar way.
The interframe difference method compares consecutive frames, and the resulting file contains only the differences between them. The motion compensation technique is based on prediction: only reference frames are stored in full, and the frames between them are predicted.
To see what interframe compression looks like, just pause any video during an action-packed scene. If you don't pause on a reference frame, the areas with moving objects will look blurry; at regular playback speed, the human eye simply can't make these artifacts out.
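As a toy model of the interframe-difference idea (real codecs work on blocks and motion vectors, not individual pixels), the sketch below stores only the pixels that changed between two frames and then rebuilds the second frame from the first plus that delta:

```python
import numpy as np

def frame_delta(prev, curr):
    """Return the positions and new values of pixels that changed."""
    changed = np.nonzero(prev != curr)
    return changed, curr[changed]

def apply_delta(prev, delta):
    """Rebuild the next frame from the previous one plus the stored changes."""
    changed, values = delta
    rebuilt = prev.copy()
    rebuilt[changed] = values
    return rebuilt

prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1, 2] = 200  # a single "moving object" pixel

delta = frame_delta(prev, curr)
assert np.array_equal(apply_delta(prev, delta), curr)
print(f"Changed pixels stored: {len(delta[1])} of {curr.size}")
```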
The history of video codecs
The history of digital video compression began in 1988 with the release of H.261. The codec used motion compensation, referencing of the previous frame, color compression, and sampling in 8×8 blocks.
In 1993, MPEG1 took center stage. The technology relied on future and past reference frames for prediction and could, in principle, handle HD video: the standard was designed for 352×240 video but supported resolutions up to 4095×4095 pixels. Since MPEG1 supported only progressive scanning, it was quickly replaced by newer codecs.
Three years later, one of the most popular video codecs ever, MPEG2, came out. It was used in digital TV and on DVDs. The technology opened new possibilities for audio encoding: the codec supported compression of files with up to six audio tracks. MPEG2 maintained high video quality but offered little in the way of compression because it was designed for low-performance devices. It is still used today in over-the-air broadcasting, as well as in cable and satellite TV.
In 1998, MPEG4 saw the light of day. With its help, a 90-minute film could fit on a regular CD. The codec handled 2D and 3D objects in the frame and supported DRM, audio tracks, and subtitles. Still, MPEG4 was not fit for FullHD video streaming.
In 2003, the H.264 era began. The technology compresses video twice as efficiently as MPEG4, enabling FullHD streaming over 5 Mbps network channels. Although it is still among the most widely used codecs today, H.264 falls short when it comes to compressing 4K video for streaming, especially on mobile networks.
In 2020, data transfer speeds average 33.7 Mbps on mobile networks and 76.94 Mbps on fixed broadband; mobile connections in particular fall short of what 4K H.264 playback requires.
Source: Speedtest Global Index
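Putting the article's numbers together shows how hard the codec has to work: an uncompressed FullHD 24-fps stream runs at roughly 1.2 Gbps, so fitting it into the 5 Mbps channel mentioned above implies a compression ratio of around 240:1. A back-of-the-envelope check:

```python
# Uncompressed FullHD bitrate vs. the 5 Mbps H.264 stream mentioned above.
WIDTH, HEIGHT, FPS = 1920, 1080, 24
BITS_PER_PIXEL = 24                  # 8 bits for each RGB subpixel

raw_bps = WIDTH * HEIGHT * BITS_PER_PIXEL * FPS
stream_bps = 5_000_000               # a 5 Mbps channel

print(f"Uncompressed bitrate: {raw_bps / 1e6:.0f} Mbps")      # ~1194 Mbps
print(f"Compression ratio:    {raw_bps / stream_bps:.0f}:1")  # ~239:1
```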
The codec of the future
In 2012, the Joint Collaborative Team on Video Coding developed the HEVC (H.265) codec. The technology was built on H.264 but offered roughly twice the compression efficiency while maintaining the same video quality.
While relying on H.264 techniques, HEVC also brings something new to the table, such as parallel processing, which allows different areas of a frame to be processed simultaneously.
The only technical drawback of H.265 is its resource-intensiveness: encoding and decoding video requires 3–5 times as much processing power as H.264. H.265 is not yet used as widely as H.264, but it is already supported by many set-top boxes, smart TVs, smartphones, and other devices.
HEVC is speeding up the adoption of 4K, and its successor, FVC (Future Video Codec), may become the vehicle for 8K video streaming. Its developers promise that the new codec will compress video 50% more efficiently. A draft international standard for H.266 was released in October 2019, and the first hardware codecs are expected by June 2021.
Without codecs, video storage and delivery would be impossible. New compression technologies enable both large and small operators to deliver high-quality content without having to endlessly upgrade their network infrastructure. With codecs, high-resolution video can be streamed even over comparatively low-speed connections: a 15 Mbps channel is enough to watch 4K films on Netflix.