A Brief Look at the International Video Coding Standards (MPEG) and Key AVS Video Coding Technologies

International Video Coding Standards (MPEG)

Since the 1990s, ITU-T and ISO have developed a series of audio and video coding (source coding) standards and recommendations that have greatly promoted the practical application and industrialization of multimedia technology. In terms of technical progress, the first generation of source coding standards, MPEG-1 and MPEG-2, completed in the first half of the 1990s, achieve compression ratios of roughly 50-75:1. Since the beginning of the new century, second-generation source coding standards have been introduced, with compression ratios reaching 100-150:1. The second-generation standards are expected to reshuffle the newly formed international digital TV and digital audio/video industries.
There are two major series of international audio and video codec standards: the MPEG series developed by ISO/IEC JTC1, mainly for digital TV, and the H.26x series of video coding standards and G.7xx series of audio coding standards developed by ITU-T for multimedia communication.
CCITT (the International Telegraph and Telephone Consultative Committee, now part of the International Telecommunication Union, ITU) has proposed a series of audio and video coding algorithms and international standards since 1984. In 1984, CCITT Study Group 15 established an expert group to study videophone coding. After more than five years of research and effort, Recommendation H.261 was completed and approved in December 1990. On the basis of H.261, ITU-T completed the H.263 coding standard in 1996; at the cost of higher algorithmic complexity, H.263 provides better image quality at lower bit rates and is one of the most widely used coding methods in IP video communication. H.263+, introduced by ITU-T in 1998, is the second version of the H.263 recommendation; it provides 12 new negotiable modes and other features that further improve compression performance.
MPEG is the abbreviation of the Moving Picture Experts Group, established in 1988 under the joint technical committee of the International Organization for Standardization and the International Electrotechnical Commission (ISO/IEC JTC1). Formally it is Working Group 11 of Subcommittee 29 (ISO/IEC JTC1/SC29/WG11), responsible for developing international technical standards for the compression, decompression, processing and representation of digital video, audio and other media. Since 1988 the MPEG expert group has held about four international meetings each year, mainly to develop and revise the MPEG series of multimedia standards. Its standards include the video and audio coding standards MPEG-1 (1992) and MPEG-2 (1994), the object-based audiovisual coding standard MPEG-4 (1999), the multimedia content description standard MPEG-7 (2001), and the multimedia framework standard MPEG-21. The MPEG series has become the most influential family of multimedia technology standards, with a profound impact on key products of the digital industry, audio-visual consumer electronics, multimedia communications, and other information industries.
The CCITT H.261 standard, begun in 1984 and essentially completed in 1989, is a forerunner of MPEG. MPEG-1 and H.261 share common data structures, coding tools, and syntax elements; although the two are not fully backward compatible, MPEG-1 can be regarded as a superset of H.261. Work on MPEG-1 began in 1988 and was essentially completed in 1992. MPEG-2 can be seen as an extension of MPEG-1; it began in 1990 and was essentially completed in 1994. H.263 began in 1992 and its first edition was completed in 1995. MPEG-4, which builds on MPEG-2 and H.263, began in 1993, and its first edition was essentially completed in 1998.
The standards that the MPEG expert group has developed or is developing include:
(1) MPEG-1 standard: In November 1992 it officially became an international standard under the title "Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s". The typical video parameters supported by MPEG-1 are 352 x 240 pixels at 30 frames/s, or equivalent.
(2) MPEG-2 standard: It became an international standard (ISO/IEC 13818) in November 1994 and is a highly adaptable coding scheme for moving pictures and associated audio. The original goal was to compress video and its audio to about 10 Mb/s; in practice it can be applied over a coding range of roughly 1.5-60 Mb/s, or even higher. MPEG-2 can be used for compression in digital communication, storage, broadcasting, high-definition television, and similar applications; DVD and digital TV broadcasting both use the MPEG-2 standard. Since 1994 the MPEG-2 standard has continued to be extended and revised.

Video encoding and decoding in the MPEG standards is mainly based on three coding tools: adaptive block transform coding to eliminate spatial redundancy, and motion-compensated DPCM (differential pulse code modulation) to eliminate temporal redundancy, the two being combined into hybrid coding; and entropy coding to eliminate the statistical redundancy left by the hybrid encoder. There are also auxiliary tools that complement the main tools, removing residual redundancy in special parts of the encoded data or adapting the coding to specific applications, and some coding tools support formatting the data into a specific bit stream to facilitate storage and transmission.
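The structure just described can be illustrated with a minimal sketch of how one inter-coded block passes through a hybrid encoder. This is not the exact procedure of any MPEG standard: the 8x8 block size, the search range, the uniform quantization step, and the function names are illustrative assumptions, and the transform is taken from scipy for brevity.

```python
import numpy as np
from scipy.fft import dctn, idctn

def encode_block(cur_block, ref_frame, y, x, search=4, qstep=16):
    """Hybrid coding of one 8x8 block (float arrays): motion-compensated
    DPCM followed by a block transform and uniform quantization."""
    h, w = ref_frame.shape
    best_mv, best_sad = (0, 0), np.inf
    # Motion estimation: full search in a small window (temporal redundancy).
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            if 0 <= ry and ry + 8 <= h and 0 <= rx and rx + 8 <= w:
                cand = ref_frame[ry:ry + 8, rx:rx + 8]
                sad = np.abs(cur_block - cand).sum()
                if sad < best_sad:
                    best_sad, best_mv = sad, (dy, dx)
    ry, rx = y + best_mv[0], x + best_mv[1]
    pred = ref_frame[ry:ry + 8, rx:rx + 8]
    residual = cur_block - pred                     # motion-compensated DPCM
    coeffs = dctn(residual, norm='ortho')           # block transform (spatial redundancy)
    levels = np.round(coeffs / qstep).astype(int)   # quantization
    # In a real encoder, the motion vector and quantized levels would now
    # be entropy coded to remove their remaining statistical redundancy.
    return best_mv, levels

def decode_block(mv, levels, ref_frame, y, x, qstep=16):
    """Reconstruct the block: motion-compensated prediction plus the
    inverse-transformed, dequantized residual."""
    pred = ref_frame[y + mv[0]:y + mv[0] + 8, x + mv[1]:x + mv[1] + 8]
    residual = idctn(levels.astype(float) * qstep, norm='ortho')
    return pred + residual
```

The decoder mirrors the encoder's prediction loop, which is why hybrid coders keep a reconstructed (not original) reference frame on both sides.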
Modern entropy coding originated in the late 1940s, was first applied to video coding in the late 1960s, and has been continuously improved since. In the mid-1980s, two-dimensional variable-length coding (2D VLC) and arithmetic coding were introduced.
DPCM was invented in 1952 and was first applied to video coding in the same year. It was originally developed as a spatial coding technique; by the mid-1970s it began to be used for temporal coding, and it remained in use as a complete video coding scheme until the early 1980s. From the early 1970s, the key elements of DPCM and transform coding were merged, gradually forming hybrid coding, which by the early 1980s had developed into the prototype of MPEG.
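The distinction between spatial and temporal DPCM can be shown with a simplified sketch (not the exact predictor of any standard): the same "code only the difference from a prediction" idea is applied either along a scan line or between frames.

```python
import numpy as np

def spatial_dpcm(line):
    """Spatial DPCM along one scan line: predict each sample from the
    previous sample and transmit only the difference."""
    pred = np.concatenate(([0], line[:-1]))
    return line - pred                 # these differences are quantized and coded

def temporal_dpcm(cur_frame, prev_frame):
    """Temporal DPCM: predict each pixel from the co-located pixel in the
    previous frame; motion compensation later refines this predictor."""
    return cur_frame - prev_frame
```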
Transform coding was first applied to video in the late 1960s and developed substantially in the first half of the 1970s, when it was considered to give the best performance among spatial coding techniques. In hybrid coding, transform coding is used to eliminate spatial redundancy, while DPCM is used to eliminate temporal redundancy. Motion-compensated prediction, introduced in 1969, greatly improved the performance of temporal DPCM and had developed into the basic form used in MPEG by the early 1980s. In the early 1980s, interpolative coding was extended so that intermediate frames were predicted by multi-frame interpolation using scaled motion vectors; by the end of the 1980s bidirectional prediction had appeared, and the technique evolved to its final form. In recent years (H.264), the quality of prediction has improved so much that the correlation remaining in the residual signal has decreased; the transform therefore matters less, and H.264 uses a simplified 4 x 4 transform.
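The simplified 4 x 4 transform mentioned above can be sketched as follows. This shows only the integer core of the H.264 forward transform; the per-coefficient scaling that completes the DCT approximation is folded into quantization in the standard and is omitted here.

```python
import numpy as np

# Integer core of the H.264 4x4 forward transform. Because the entries are
# small integers, an implementation needs only additions and shifts; the
# normalizing scale factors are absorbed into the quantizer.
CF = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]])

def forward_4x4(block):
    """Apply the 4x4 integer core transform to a residual block."""
    return CF @ block @ CF.T

residual = np.arange(16).reshape(4, 4)   # toy residual block
print(forward_4x4(residual))
```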

The time correspondence between the AVS standard and the relevant international standards, together with the work already carried out by the AVS Working Group, is shown below.

Basic Principles of Video Compression

The fundamental reason that video can be compressed is that video data contains a high degree of redundancy. Compression is the elimination of this redundancy, and it relies on two kinds of techniques: statistical and psychovisual.
The basis for eliminating statistical redundancy is that digitization samples the video regularly in time and space. A picture is digitized into a regular array of pixels dense enough to represent the highest spatial frequency at every point, yet most frames contain little or no detail at this highest frequency. Similarly, the frame rate is chosen to capture the fastest motion in the scene, while an ideal compression system needs to describe only the motion actually present. In short, an ideal compression system adapts dynamically to the temporal and spatial variation of the video, and the amount of data it requires is far lower than the raw data produced by digital sampling.
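A back-of-envelope calculation makes this concrete. Using the MPEG-1 picture format mentioned earlier (352 x 240 at 30 frames/s), and assuming 4:2:0 chroma subsampling with 8 bits per sample (illustrative assumptions, not figures stated in this text), the raw sampled data rate works out roughly as follows.

```python
# Rough raw data rate of a digitized MPEG-1 (SIF) picture sequence,
# assuming 4:2:0 chroma subsampling and 8 bits per sample.
width, height, fps = 352, 240, 30
samples_per_pixel = 1.5     # one luma plane plus two quarter-resolution chroma planes
bits_per_sample = 8

raw_bps = width * height * fps * samples_per_pixel * bits_per_sample
print(f"raw sampled rate: {raw_bps / 1e6:.1f} Mbit/s")     # about 30.4 Mbit/s

# Fitting this into MPEG-1's budget of about 1.5 Mbit/s (video plus audio)
# implies a compression factor on the order of 20:1.
print(f"vs 1.5 Mbit/s   : about {raw_bps / 1.5e6:.0f}x")
```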
Psychovisual techniques exploit the limits of the human visual system. Human vision is limited in contrast bandwidth, spatial bandwidth (especially for color), temporal bandwidth, and so on. Moreover, these limits are not independent of one another: there is an upper bound on the visual system as a whole. For example, the human eye cannot simultaneously perceive high temporal and high spatial resolution. Clearly there is no need to represent information that cannot be perceived, and a certain amount of compression loss will not be noticed by the human visual system.
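One concrete way encoders exploit these limits is to quantize high spatial frequencies more coarsely than low ones, since the eye is less sensitive to errors in fine detail. The weighting matrix below is a made-up illustration of this idea, not the default quantization matrix of any standard.

```python
import numpy as np

# Hypothetical frequency-weighting matrix for an 8x8 block of transform
# coefficients: the quantizer step grows with horizontal + vertical
# frequency, so detail the eye barely resolves is coded more coarsely.
u, v = np.meshgrid(np.arange(8), np.arange(8))
weights = 8 + 4 * (u + v)          # illustrative values only

def quantize(coeffs, qscale=1.0):
    return np.round(coeffs / (weights * qscale)).astype(int)

def dequantize(levels, qscale=1.0):
    return levels * weights * qscale
```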
A video coding standard is not a single algorithm but a complete set of coding tools that work together to achieve compression. The history of video compression can be traced back to the early 1950s; over the following thirty years the main compression techniques and tools developed gradually, and by the early 1980s video coding technology had essentially taken shape. Initially each major tool was proposed as a complete video coding solution in its own right; the main lines of technology developed in parallel, and eventually the best performers were combined into a complete scheme. The main drivers of this integration were the standardization organizations: experts from many countries and organizations carried out the integration, and in some cases the coding schemes originated within the standards committees themselves. In addition, some techniques proposed many years earlier were not applied in practice at the time because they were too costly to implement; only in recent years has the development of semiconductor technology met the requirements of real-time video processing.


Figure 2 Development of coding tools and standards (Cliff, 2002)
(3) MPEG-4 standard: Noting the needs of low-bandwidth applications, and the rapid development of interactive graphics applications (synthetic content such as games) and of interactive multimedia (content distribution and access technologies such as the WWW), the MPEG expert group set up the MPEG-4 working group to promote convergence of these three areas. In early 1999, the first version of MPEG-4, which defined the framework of the standard, became an international standard (ISO/IEC 14496-1). The second version, which offers a richer set of algorithms and tools, became an international standard at the end of 1999 (ISO/IEC 14496-2), and follow-up work has continued in the third, fourth and fifth versions.
(4) MPEG-7 and MPEG-21 standards: MPEG-7 is a content-description standard for searching, filtering, managing and processing multimedia information; it became an international standard in July 2001. MPEG-21, still under development, focuses on a multimedia framework that provides a foundation for all existing and future standards related to the delivery of multimedia content.
