Multimedia is content that uses a combination of different content forms such as text, images, animations and interactive content. Multimedia contrasts with media that use only rudimentary computer displays, such as text-only displays, or with traditional forms of printed or hand-produced material. Multimedia can be recorded and played, interacted with, or accessed by information content processing devices, such as computerized and electronic devices, but can also be part of a live performance. Multimedia devices are electronic media devices used to experience multimedia content. Multimedia is distinguished from mixed media in fine art. In the early years of multimedia, the term "rich media" was synonymous with interactive multimedia, and "hypermedia" was an application of multimedia. The term "multimedia" was coined by singer and artist Bob Goldstein to promote the July 1966 opening of his "LightWorks at L'Oursin" show at Southampton, Long Island. Goldstein was aware of an American artist named Dick Higgins, who two years earlier had discussed a new approach to art-making he called "intermedia".
On August 10, 1966, Richard Albarino of Variety borrowed the terminology, reporting: "Brainchild of songscribe-comic Bob Goldstein, the 'Lightworks' is the latest multi-media music-cum-visuals to debut as discothèque fare." Two years later, in 1968, the term "multimedia" was re-appropriated to describe the work of a political consultant, David Sawyer, the husband of Iris Sawyer—one of Goldstein's producers at L'Oursin. In the intervening forty years, the word has taken on different meanings. In the late 1970s, the term referred to presentations consisting of multi-projector slide shows timed to an audio track. By the 1990s, however, "multimedia" took on its current meaning. In the 1993 first edition of Multimedia: Making It Work, Tay Vaughan declared: "Multimedia is any combination of text, graphic art, sound and video, delivered by computer. When you allow the user – the viewer of the project – to control what and when these elements are delivered, it is interactive multimedia. When you provide a structure of linked elements through which the user can navigate, interactive multimedia becomes hypermedia." The German language society Gesellschaft für deutsche Sprache recognized the word's significance and ubiquity in the 1990s by awarding it the title of German "Word of the Year" in 1995.
The institute summed up its rationale by stating that the word "has become a central word in the wonderful new media world". In common usage, multimedia refers to an electronically delivered combination of media, including video, still images and text, presented in such a way that it can be accessed interactively. Much of the content on the web today falls within this definition. Some computers marketed in the 1990s were called "multimedia" computers because they incorporated a CD-ROM drive, which allowed for the delivery of several hundred megabytes of video and audio data. That era saw a boost in the production of educational multimedia CD-ROMs. The term "video", if not used exclusively to describe motion photography, is ambiguous in multimedia terminology: "video" is often used to describe the file format, delivery format, or presentation format, as opposed to "footage", which is used to distinguish motion photography from "animation" of rendered motion imagery. Multiple forms of information content are often not considered modern forms of presentation such as audio or video.
Likewise, single forms of information content with single methods of information processing are sometimes called multimedia, perhaps to distinguish static media from active media. In the fine arts, for example, Leda Luss Luyken's ModulArt brings two key elements of musical composition and film into the world of painting: variation of a theme and movement of and within a picture, making ModulArt an interactive multimedia form of art. Performing arts may also be considered multimedia, considering that performers and props are multiple forms of both content and media. Multimedia presentations may be viewed in person on stage, transmitted, or played locally with a media player. A broadcast may be a live or recorded multimedia presentation. Broadcasts and recordings can use either analog or digital electronic media technology. Digital online multimedia may be downloaded or streamed, and streaming multimedia may be live or on-demand. Multimedia games and simulations may be used in a physical environment with special effects, with multiple users in an online network, or locally with an offline computer, game system, or simulator.
The various formats of technological or digital multimedia may be intended to enhance the users' experience, for example to make it easier and faster to convey information, or, in entertainment or art, to transcend everyday experience. Enhanced levels of interactivity are made possible by combining multiple forms of media content. Online multimedia is increasingly becoming object-oriented and data-driven, enabling applications with collaborative end-user innovation and personalization on multiple forms of content over time. Examples range from multiple forms of content on websites, such as photo galleries whose images and titles can be updated by users, to simulations whose coefficients, illustrations, animations or videos are modifiable, allowing the multimedia "experience" to be altered without reprogramming. In addition to seeing and hearing, haptic technology enables virtual objects to be felt. Emerging technology involving illusions of taste and smell may further enhance the multimedia experience. Multimedia may be broadly divided into linear and non-linear categories.
MPEG-1 is a standard for lossy compression of video and audio. It is designed to compress VHS-quality raw digital video and CD audio down to about 1.5 Mbit/s without excessive quality loss, making video CDs, digital cable/satellite TV and digital audio broadcasting possible. Today, MPEG-1 has become the most widely compatible lossy audio/video format in the world and is used in a large number of products and technologies. The best-known part of the MPEG-1 standard is the MP3 audio format it introduced. The MPEG-1 standard is published as ISO/IEC 11172 – Information technology—Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s. The standard consists of five parts: systems, video, audio, conformance testing and reference software. Modeled on the successful collaborative approach and the compression technologies developed by the Joint Photographic Experts Group and CCITT's Experts Group on Telephony, the Moving Picture Experts Group working group was established in January 1988.
MPEG was formed to address the need for standard video and audio formats and to build on H.261 to achieve better quality through the use of more complex encoding methods. It was established in 1988 on the initiative of Leonardo Chiariglione. Development of the MPEG-1 standard began in May 1988. Fourteen video and fourteen audio codec proposals were submitted by individual companies and institutions for evaluation. The codecs were extensively tested for computational complexity and subjective quality at data rates of about 1.5 Mbit/s. This specific bitrate was chosen to suit transmission over T-1/E-1 lines and to approximate the data rate of audio CDs. The codecs that excelled in this testing were utilized as the basis for the standard and refined further, with additional features and other improvements being incorporated in the process. After 20 meetings of the full group in various cities around the world and 4½ years of development and testing, the final standard was approved in early November 1992 and published a few months later.
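The choice of roughly 1.5 Mbit/s can be sanity-checked with simple arithmetic: uncompressed CD audio runs at about 1.4 Mbit/s, just under the 1.544 Mbit/s of a T-1 line. A minimal illustrative calculation (the rates below are the publicly known figures, not values taken from the standard text itself):

```python
# Uncompressed CD audio data rate: 44.1 kHz sampling, 16 bits/sample, 2 channels
cd_audio_bps = 44_100 * 16 * 2
print(cd_audio_bps)            # 1411200 bit/s, i.e. about 1.41 Mbit/s

# Carrier line rates the MPEG-1 target was matched against
t1_bps = 1_544_000             # T-1 (North America)
e1_bps = 2_048_000             # E-1 (Europe)

# MPEG-1 target: VHS-quality video plus audio at about 1.5 Mbit/s
mpeg1_target = 1_500_000
print(mpeg1_target <= t1_bps)  # True: the target fits within a T-1 line
```

So the 1.5 Mbit/s target sits just above the raw CD audio rate and just below T-1 capacity, which is exactly the positioning the text describes.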
The reported completion date of the MPEG-1 standard varies greatly: a largely complete draft standard was produced in September 1990, and from that point on only minor changes were introduced. The draft standard was publicly available for purchase, and the standard was finished with the 6 November 1992 meeting. The Berkeley Plateau Multimedia Research Group developed an MPEG-1 decoder in November 1992. In July 1990, before the first draft of the MPEG-1 standard had even been written, work began on a second standard, MPEG-2, intended to extend MPEG-1 technology to provide full broadcast-quality video at high bitrates and support for interlaced video. Due in part to the similarity between the two codecs, the MPEG-2 standard includes full backwards compatibility with MPEG-1 video, so any MPEG-2 decoder can play MPEG-1 videos. Notably, the MPEG-1 standard strictly defines the bitstream and decoder function, but does not define how MPEG-1 encoding is to be performed, although a reference implementation is provided in ISO/IEC 11172-5.
This means that MPEG-1 coding efficiency can vary drastically depending on the encoder used, and that newer encoders generally perform significantly better than their predecessors. The first three parts of ISO/IEC 11172 were published in August 1993. All known patent searches suggest that, due to its age, MPEG-1 video and Layer I/II audio are no longer covered by any patents and can thus be used without obtaining a licence or paying any fees. The ISO patent database lists one patent for ISO 11172, US 4,472,747, which expired in 2003. The near-complete draft of the MPEG-1 standard was publicly available as ISO CD 11172 by December 6, 1991. Neither the July 2008 Kuro5hin article "Patent Status of MPEG-1, H.261 and MPEG-2" nor an August 2008 thread on the gstreamer-devel mailing list was able to identify a single unexpired MPEG-1 video or Layer I/II audio patent. A May 2009 discussion on the whatwg mailing list mentioned the US 5,214,678 patent as possibly covering MPEG-1 Audio Layer II. Filed in 1990 and published in 1993, this patent has since expired.
A full MPEG-1 decoder and encoder, with "Layer 3 audio", could not be implemented royalty-free, since several companies required patent fees for implementations of MPEG-1 Layer 3 audio, as discussed in the MP3 article. All MP3-related patents worldwide expired on 30 December 2017, which makes the format free to use. Even before that, on 23 April 2017, Fraunhofer IIS stopped charging for Technicolor's mp3 licensing program for certain mp3-related patents and software. Most popular software for video playback includes MPEG-1 decoding, in addition to any other supported formats. The popularity of MP3 audio has established a massive installed base of hardware that can play back MPEG-1 Audio; "virtually all digital audio devices" can play back MPEG-1 Audio, and many millions have been sold to date. Before MPEG-2 became widespread, many digital satellite/cable TV services used MPEG-1 exclusively. The widespread popularity of MPEG-2 with broadcasters means MPEG-1 remains playable by most digital cable and satellite set-top boxes and digital disc and tape players, due to backwards compatibility.
MPEG-1 was used for full-screen video on Green Book CD-i and on Video CD.
University of Aberdeen
The University of Aberdeen is a public research university in Aberdeen, Scotland. It is an ancient university, founded in 1495 when William Elphinstone, Bishop of Aberdeen and Chancellor of Scotland, petitioned Pope Alexander VI on behalf of James IV, King of Scots, to establish King's College, making it Scotland's third-oldest university and the fifth-oldest in the English-speaking world. Today, Aberdeen is ranked among the top 200 universities in the world and within the top 30 universities in the United Kingdom. In the 2019 Times Higher Education University Impact Rankings, Aberdeen was ranked 31st in the world for impact on society, and it was named the 2019 Scottish University of the Year by The Times and Sunday Times Good University Guide. The university as it is constituted today was formed in 1860 by a merger between King's College and Marischal College, a second university founded in 1593 as a Protestant alternative to the former. The university's iconic buildings act as symbols of the wider city of Aberdeen: Marischal College in the city centre and the crown steeple of King's College in Old Aberdeen.
There are two campuses. The King's College campus in Old Aberdeen is the original site of the university's foundation, although most academic buildings apart from the King's College Chapel and Quadrangle were constructed in the 20th century during a period of significant expansion. The university's Foresterhill campus is next to Aberdeen Royal Infirmary and houses the School of Medicine and Dentistry as well as the School of Medical Sciences; together these buildings comprise one of Europe's largest health campuses. The annual income of the institution for 2017–18 was £219.5 million, of which £56.1 million was from research grants and contracts, with an expenditure of £226.8 million. Aberdeen has 13,500 students from undergraduate to doctoral level, including many international students. A wide range of disciplines is taught at the university, with 650 undergraduate degree programmes offered in the 2012–13 academic year. Many important figures in the field of theology were educated at the university in its earlier history, giving rise to the Aberdeen doctors in the 17th century and the prolific Enlightenment philosopher Thomas Reid in the 18th.
Five Nobel laureates have since been associated with Aberdeen. The first university in Aberdeen, King's College, formally The University and King's College of Aberdeen, was founded in February 1495 by William Elphinstone, Bishop of Aberdeen, Chancellor of Scotland and a graduate of the University of Glasgow, who drafted a request on behalf of King James IV to Pope Alexander VI, resulting in the issue of a papal bull. The university, modelled on the University of Paris and intended principally as a law school, soon became the most famous and popular of the Scots seats of learning due to the prestige of Elphinstone and of his friend Hector Boece, the first principal. Despite this founding date, teaching did not start for another ten years, and the University of Aberdeen accordingly celebrated 500 years of teaching and learning in 2005. Following the Scottish Reformation in 1560, King's College was purged of its Roman Catholic staff but in other respects was resistant to change. George Keith, the fifth Earl Marischal, was a moderniser within the college and supportive of the reforming ideas of Peter Ramus.
In April 1593 he founded a second university in the city, Marischal College. It is possible that the founding of another college in nearby Fraserburgh by Sir Alexander Fraser, a business rival of Keith, was instrumental in its creation. Aberdeen was unusual at this time for having two universities in one city: as 20th-century university prospectuses observed, Aberdeen had the same number as existed in the whole of England at the time. Marischal College offered the Principal of King's College a role in selecting its academics, but this was refused – the first blow in a developing rivalry. Marischal College, in the commercial heart of the city, was quite different in outlook. For example, it was more integrated into the life of the city, allowing students to live outwith the College. The two rival colleges clashed, sometimes in court, sometimes in brawls between students on the streets of Aberdeen. As the institutions put aside their differences, a process of attempted mergers began in the 17th century. During this time, both colleges made notable intellectual contributions to the Scottish Enlightenment.
Both colleges supported the Jacobite rebellion and, following the defeat of the 1715 rising, were purged by the authorities of their academics and officials. The nearest the two colleges had come to full union was as the "Caroline University of Aberdeen", a merger initiated by Charles I of Scotland in 1641. Following the civil conflicts of the Wars of the Three Kingdoms, a more complete unification was attempted after ratification by Parliament under Oliver Cromwell during the Interregnum in 1654. This united university survived until the Restoration, when all laws made during the period were rescinded by Charles II and the two colleges reverted to independent status. Charles I is still recognised as one of the university's founders, due to his part in creating the Caroline University and his benevolence towards King's College. Further unsuccessful suggestions for union were brought forward throughout the 18th and early 19th centuries. The two universities in Aberdeen finally merged on 15 September 1860.
H.264, or MPEG-4 Part 10, Advanced Video Coding (AVC), is a block-oriented, motion-compensation-based video compression standard. As of 2014, it is one of the most commonly used formats for the recording and distribution of video content, and it supports resolutions up to 8192×4320, including 8K UHD. The intent of the H.264/AVC project was to create a standard capable of providing good video quality at lower bit rates than previous standards, without increasing the complexity of design so much that it would be impractical or excessively expensive to implement. An additional goal was to provide enough flexibility to allow the standard to be applied to a wide variety of applications on a wide variety of networks and systems, including low and high bit rates, high resolution video, broadcast, DVD storage, RTP/IP packet networks, and ITU-T multimedia telephony systems. The H.264 standard can be viewed as a "family of standards" composed of a number of different profiles. A specific decoder decodes at least one, but not necessarily all, profiles.
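Being block-oriented, H.264 codes pictures in 16×16 macroblocks, and frame dimensions that are not multiples of 16 are padded up to the next macroblock boundary. A small sketch of that bookkeeping (the arithmetic follows from the 16×16 macroblock size named in the text; the function name is ours):

```python
def macroblocks(width: int, height: int, mb: int = 16) -> int:
    """Number of 16x16 macroblocks needed to cover a frame, padding each
    dimension up to the next multiple of the macroblock size."""
    return -(-width // mb) * -(-height // mb)  # ceiling division in each axis

# 1080p: 1080 is not a multiple of 16, so the frame is padded to 1088 lines
print(macroblocks(1920, 1080))  # 120 * 68 = 8160 macroblocks
# 8K UHD, the maximum resolution mentioned above
print(macroblocks(8192, 4320))  # 512 * 270 = 138240 macroblocks
```

This padding is why 1080p H.264 streams internally carry 1088 coded lines, with cropping information signalling that the bottom eight lines are to be discarded on display.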
The decoder specification describes which profiles can be decoded. H.264 is typically used for lossy compression, although it is also possible to create lossless-coded regions within lossy-coded pictures or to support rare use cases for which the entire encoding is lossless. H.264 was developed by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC JTC1 Moving Picture Experts Group (MPEG). The project partnership effort is known as the Joint Video Team (JVT). The ITU-T H.264 standard and the ISO/IEC MPEG-4 AVC standard are jointly maintained so that they have identical technical content. The final drafting work on the first version of the standard was completed in May 2003, and various extensions of its capabilities have been added in subsequent editions. High Efficiency Video Coding, a.k.a. H.265 and MPEG-H Part 2, is a successor to H.264/MPEG-4 AVC developed by the same organizations, though earlier standards remain in common use. H.264 is perhaps best known as one of the video encoding standards for Blu-ray Discs. It is also widely used by streaming Internet sources, such as videos from Vimeo, YouTube and the iTunes Store, by web software such as the Adobe Flash Player and Microsoft Silverlight, and by various HDTV broadcasts over terrestrial and satellite links.
H.264 is protected by patents owned by various parties. A license covering most patents essential to H.264 is administered by the patent pool MPEG LA. Commercial use of patented H.264 technologies requires the payment of royalties to MPEG LA and other patent owners. MPEG LA has allowed the free use of H.264 technologies for streaming Internet video that is free to end users, and Cisco Systems pays royalties to MPEG LA on behalf of the users of binaries of its open source H.264 encoder. The H.264 name follows the ITU-T naming convention, where the standard is a member of the H.26x line of VCEG video coding standards. The standard was developed jointly in a partnership of VCEG and MPEG, after earlier development work in the ITU-T as a VCEG project called H.26L. It is thus common to refer to the standard with names such as H.264/AVC, AVC/H.264, H.264/MPEG-4 AVC, or MPEG-4/H.264 AVC, to emphasize the common heritage. It is also sometimes referred to as "the JVT codec", in reference to the Joint Video Team organization that developed it.
Some software programs internally identify this standard as AVC1. In early 1998, the Video Coding Experts Group issued a call for proposals on a project called H.26L, with the target of doubling the coding efficiency in comparison to any existing video coding standard for a broad variety of applications. VCEG was chaired by Gary Sullivan. The first draft design for the new standard was adopted in August 1999. In 2000, Thomas Wiegand became VCEG co-chair. In December 2001, VCEG and the Moving Picture Experts Group formed a Joint Video Team, with the charter to finalize the video coding standard. Formal approval of the specification came in March 2003. The JVT was chaired by Gary Sullivan, Thomas Wiegand and Ajay Luthra. In June 2004, the Fidelity Range Extensions project was finalized. From January 2005 to November 2007, the JVT worked on an extension of H.264/AVC towards scalability, in an annex called Scalable Video Coding; the JVT management team was extended by Jens-Rainer Ohm. From July 2006 to November 2009, the JVT worked on Multiview Video Coding, an extension of H.264/AVC towards free-viewpoint television and 3D television.
That work included the development of two new profiles of the standard: the Multiview High Profile and the Stereo High Profile. The standardization of the first version of H.264/AVC was completed in May 2003. In the first project to extend the original standard, the JVT developed what was called the Fidelity Range Extensions.
Cardiff University is a public research university in Cardiff, Wales. Founded in 1883 as the University College of South Wales and Monmouthshire, it became one of the founding colleges of the University of Wales in 1893. It merged with the University of Wales Institute of Science and Technology in 1988 and received its own degree-awarding powers in 1997. The college adopted the public name of Cardiff University in 1999; in 2005 this became its legal name, when it became an independent university awarding its own degrees. The third-oldest university institution in Wales, it is composed of three colleges: Arts, Humanities and Social Sciences; Biomedical and Life Sciences; and Physical Sciences and Engineering. Cardiff is the only Welsh member of the Russell Group of research-intensive British universities. It is recognised as providing high-quality, research-based university education, placed between 100th and 200th in the world by the four major international rankings and in the top 60 in all three UK achievement tables. It ranked 5th in the UK among multi-faculty institutions for the quality of its research and 17th for its Research Power in the 2014 Research Excellence Framework.
For 2017–18, Cardiff had a turnover of £516.1 million, including £106.0 million from research grants and contracts. The university has an undergraduate enrolment of 23,085 and a total enrolment of 31,595, making it one of the ten largest universities in the UK. The Cardiff University Students' Union works to promote the interests of the student body within the University and further afield, and the university's sports teams compete in the British Universities and Colleges Sport leagues. Discussions on the founding of a university college in South Wales began in 1879, when a group of Welsh and English MPs urged the government to consider the poor provision of higher and intermediate education in Wales and "the best means of assisting any local effort which may be made for supplying such deficiency." In October 1881, William Gladstone's government appointed a departmental committee to conduct "an enquiry into the nature and extent of intermediate and higher education in Wales", chaired by Lord Aberdare and consisting of Viscount Emlyn, Reverend Prebendary H. G. Robinson, Henry Richard, John Rhys and Lewis Morris.
The Aberdare Report, as it came to be known, took evidence from a wide range of sources and over 250 witnesses, and recommended a college each for North Wales and South Wales, the latter to be located in Glamorgan and the former to be the established University College of Wales in Aberystwyth. The committee cited the unique Welsh national identity and noted that many students in Wales could not afford to travel to university in England or Scotland. It advocated a national degree-awarding university for Wales, composed of regional colleges, which should be non-sectarian in nature and exclude the teaching of theology. After the recommendation was published, Cardiff Corporation sought to secure the location of the college in Cardiff, and on 12 December 1881 formed a University College Committee to aid the matter. There was competition between Cardiff and Swansea to be the site of the college. On 12 March 1883, after arbitration, a decision was made in Cardiff's favour. The decision was strengthened by the need to consider the interests of Monmouthshire, at that time not incorporated into Wales, and by the greater sum received by Cardiff in support of the college, through a public appeal that raised £37,000 and a number of private donations, notably from Lord Bute and Lord Windsor.
In April 1883 Lord Aberdare was appointed as the College's first president. The possible locations considered included Cardiff Arms Park, Cathedral Road and Moira Terrace, before the site of the old Royal Infirmary buildings on Newport Road was chosen. The University College of South Wales and Monmouthshire opened on 24 October 1883, with courses in Biology, English, German, History, Latin, Astronomy, Welsh, Philosophy and Physics. It was incorporated by Royal Charter the following year, this charter being the first in Wales to allow the enrolment of women and to forbid religious tests for entry. John Viriamu Jones was appointed as the University's first Principal, at the age of 27. As Cardiff was not an independent university and could not award its own degrees, it prepared its students for the examinations of the University of London or for further study at Oxford or Cambridge. In 1888 the University College at Cardiff and that of North Wales joined the University College of Wales at Aberystwyth in proposing joint action to gain a university charter for Wales, modelled on that of Victoria University, a confederation of new universities in Northern England.
Such a charter was granted to the new University of Wales in 1893, allowing the colleges to award degrees as its members. The Chancellorship was held ex officio by the Prince of Wales, and the position of operational head rotated among the heads of the colleges. In 1885, Aberdare Hall opened as the first hall of residence, allowing women access to the university; it remains a single-sex hall. In 1904 came the appointment of the first female associate professor in the UK, Millicent Mackenzie, who in 1910 became the first female full professor at a chartered UK university. In 1901 Principal Jones persuaded Cardiff Corporation to give the college a five-acre site in Cathays Park, and soon after, in 1905, work on a new building commenced under the architect W. D. Caröe. Money ran short for the project, however. Although the side wings were completed in the 1960s, the planned Great Hall has never been built.
High Efficiency Image File Format
High Efficiency Image File Format (HEIF) is a file format for individual images and image sequences. It was developed by the Moving Picture Experts Group and is defined by MPEG-H Part 12. The MPEG group claims that twice as much information can be stored in a HEIF image as in a JPEG image of the same size, resulting in a better-quality image. The HEIF specification defines the means of storing High Efficiency Video Coding (HEVC)-encoded intra images and HEVC-encoded image sequences in which inter prediction is applied in a constrained manner. HEIF files are compatible with the ISO Base Media File Format and can also include other media streams, such as timed text and audio. HEIF image files are stored with the filename extensions .heif or .heic. The requirements and main use cases of HEIF were defined in 2013. The technical development of the specification took about one and a half years and was finalized in the summer of 2015. HEIF files can store the following types of data. Image items: storage of individual images, image properties and thumbnails.
Image derivations: derived images enable non-destructive image editing and are created on the fly by the rendering software using editing instructions stored separately in the HEIF file. These instructions and the input images are stored separately within the file, and the instructions describe specific transformations to be applied to the input images, so the storage overhead of derived images is small. Image sequences: storage of multiple time-related and/or temporally predicted images, their properties and thumbnails. Different prediction options can be used in order to exploit the temporal and spatial similarities between the images; hence, file sizes can be drastically reduced when tens of images are stored in the same HEIF file. Auxiliary image items: storage of image data which complements another image item; an alpha plane or a depth map are examples of such images. These data can be used in various ways to complement another image item. Image metadata: storage of EXIF, XMP and similar metadata which accompany the images stored in the HEIF file.
HEIF image players are required to support rectangular cropping and rotation by 90, 180 and 270 degrees. The primary use case for the mandatory support for rotation by 90 degrees is for images where the camera orientation is incorrectly detected or inferred; the rotation requirement makes it possible to manually adjust the orientation of an image or image sequence without needing to re-encode it. Cropping likewise enables the image to be re-framed without re-encoding. Samples in image sequence tracks must be either intra-coded images or inter-picture predicted images with reference only to intra-coded images; these constraints on inter-picture prediction reduce the decoding latency for accessing any particular image within an HEVC image sequence track. As HEIF is a container format, it can contain still images and image sequences that are coded in different formats; these include HEVC and H.264/MPEG-4 AVC, though other coding formats may be added in the future. The two main filename extensions are .heif and .heic, along with a less common .avci, used for H.264/AVC-encoded files.
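Because HEIF is an ISO Base Media File Format container, the actual coding format is signalled by the brands in the `ftyp` box at the start of the file rather than by the filename extension. A minimal sketch of reading those brands (box layout per ISO BMFF: 4-byte big-endian size, 4-byte type, then major brand, minor version and compatible brands; the sample header below is hand-built for illustration, not taken from a real file):

```python
import struct

def heif_brands(data: bytes):
    """Parse the leading ISO BMFF 'ftyp' box and return the major brand
    plus the list of compatible brands."""
    size, box_type = struct.unpack(">I4s", data[:8])
    if box_type != b"ftyp":
        raise ValueError("file does not start with an ftyp box")
    major = data[8:12].decode("ascii")
    # bytes 12..16 hold the minor version; compatible brands fill the rest
    compatible = [data[i:i + 4].decode("ascii") for i in range(16, size, 4)]
    return major, compatible

# A hand-built 24-byte header as it might appear in an HEVC-coded HEIF file
sample = struct.pack(">I4s4sI4s4s", 24, b"ftyp", b"heic", 0, b"mif1", b"heic")
print(heif_brands(sample))  # ('heic', ['mif1', 'heic'])
```

A `heic`/`hevc` brand indicates HEVC-coded content, while `avci` indicates H.264/AVC, matching the extension conventions described above.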
In Apple's implementation, for single images Apple has chosen the .heic filename extension as the only one it will produce for photos, indicating that the image went through HEVC encoding. However, Apple's devices also support playback of H.264/AVC-encoded .avci files, as well as .heif files created on other devices and encoded with any codec, provided that codec is supported. In macOS Mojave, Apple has used HEIF to implement the Dynamic Desktop feature. HEIF is supported by the following, among others. Operating systems: Microsoft Windows 10, macOS High Sierra, iOS 11 and Android Pie. Image editing software: Adobe Lightroom, GIMP, ImageMagick, Krita, Zoner Photo Studio X, Pixelmator and GraphicConverter. Other: Nokia provides an open source Java HEIF decoder for web browsers; the open source library "libheif" supports reading and writing HEIF files; a free image codec called CopyTrans HEIC, available for Windows 7/8.1, supports opening HEIF files in Windows Photo Viewer; the free software PIE Picture Information Extractor can read HEIF image metadata; iMazing HEIC Converter is a free application for converting HEIC files to JPEG on Windows and macOS; and Messages, the Android SMS/RCS app. HEIF itself is a container; when it contains images and image sequences encoded in a particular format, its use becomes subject to the licensing of patents on that coding format.
Advanced Video Coding – an older encoding format for video and images, first standardized in 2003
High Efficiency Video Coding – an encoding format for video and images, first standardized in 2013
ISO base media file format – a file format standard that covers HEIF and other formatted multimedia files, first standardized in 2001
MPEG-H – a suite of standards that includes HEIF and HEVC
Better Portable Graphics – another image file format using HEVC encoding, published by an individual author in 2014
AV1 Image File Format – a rival container format under development by the Alliance for Open Media, based on the AV1 video codec
WebP – another image format
MPEG Surround, also known as Spatial Audio Coding (SAC), is a lossy compression format for surround sound that provides a method for extending mono or stereo audio services to multi-channel audio in a backwards-compatible fashion. The total bit rates used for the core and the MPEG Surround data are typically only slightly higher than the bit rates used for coding of the core alone. MPEG Surround adds a side-information stream to the core bit stream: legacy stereo playback systems will ignore this side-information, while players supporting MPEG Surround decoding will output the reconstructed multi-channel audio. The Moving Picture Experts Group issued a call for proposals on MPEG Spatial Audio Coding in March 2004. The group decided that the technology serving as the starting point in the standardization process would be a combination of the submissions from two proponents: Fraunhofer IIS / Agere Systems and Coding Technologies / Philips. The MPEG Surround standard was developed by the Moving Picture Experts Group and published as ISO/IEC 23003-1 in 2007.
It was the first standard in the MPEG-D group of standards, formally known as ISO/IEC 23003 - MPEG audio technologies. MPEG Surround was defined as one of the MPEG-4 Audio Object Types in 2007, and a Low Delay MPEG Surround object type followed in 2010. Spatial Audio Object Coding was published as MPEG-D Part 2 - ISO/IEC 23003-2 in 2010; it extends the MPEG Surround standard by re-using its spatial rendering capabilities while retaining full compatibility with existing receivers. The MPEG SAOC system allows users on the decoding side to interactively control the rendering of each individual audio object. Unified Speech and Audio Coding is defined in MPEG-D Part 3 - ISO/IEC 23003-3 and ISO/IEC 14496-3:2009/Amd 3. MPEG-D MPEG Surround parametric coding tools are integrated into the USAC codec. The core can be coded with any audio codec; low bit rates are possible when using HE-AAC v2 as the core codec. MPEG Surround coding exploits the human capacity to perceive sound in three-dimensional space and captures that perception in a compact set of parameters.
Spatial perception is attributed to three parameters, or cues, describing how humans localize sound in the horizontal plane: interaural level difference (ILD), interaural time difference (ITD) and interaural coherence (IC). Direct, or first-arrival, waveforms from a source to one side hit the nearer ear first, while the direct sound received by the farther ear is diffracted around the head, arriving with an associated time delay and level attenuation; these two effects give rise to the ITD and ILD associated with the main source. Finally, in a reverberant environment, reflected sound from the source, sound from a diffuse source, or otherwise uncorrelated sound can reach both ears; all of these are captured by the IC. MPEG Surround uses interchannel differences in level, timing and coherence, equivalent to the ILD, ITD and IC parameters, to capture the spatial image of a multichannel audio signal relative to a transmitted downmix signal. These parameters are encoded in a compact form, so that the decoder can combine them with the transmitted signal to synthesize a high-quality multichannel representation.
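The three cues can be estimated from a pair of signals with elementary signal processing. The sketch below is illustrative only: it works on a single time-domain frame in plain NumPy, whereas an actual MPEG Surround encoder performs a frame-based analysis in a filterbank domain, and the function name is an assumption, not part of the standard. It estimates ILD from the channel energy ratio, ITD from the lag of the cross-correlation peak, and IC from the normalized cross-correlation maximum:

```python
import numpy as np

def interaural_cues(left, right, sr=48000):
    """Estimate ILD, ITD and IC from one stereo frame (illustrative sketch).

    left, right: 1-D arrays of equal length. Returns (ILD in dB,
    ITD in seconds, IC in [0, 1]). Positive ITD means the sound
    arrives at the left ear first.
    """
    eps = 1e-12
    # Interaural level difference: ratio of channel energies in dB.
    ild_db = 10.0 * np.log10((np.sum(left**2) + eps) / (np.sum(right**2) + eps))

    # Interaural time difference: lag of the cross-correlation peak.
    xcorr = np.correlate(right, left, mode="full")
    lag = int(np.argmax(np.abs(xcorr))) - (len(left) - 1)
    itd_s = lag / sr

    # Interaural coherence: normalized cross-correlation maximum.
    norm = np.sqrt(np.sum(left**2) * np.sum(right**2)) + eps
    ic = float(np.abs(xcorr).max() / norm)

    return ild_db, itd_s, ic
```

Feeding the function a signal and a delayed, attenuated copy of itself recovers the delay as the ITD, the attenuation as the ILD, and an IC close to 1, matching the "main source" case described above.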
The MPEG Surround encoder receives a multichannel audio signal, x1 to xN, where N is the number of input channels. The most important aspect of the encoding process is that a stereo downmix signal, xt1 and xt2, is derived from the multichannel input signal, and it is this downmix signal, rather than the multichannel signal, that is compressed for transmission over the channel. The encoder may exploit the downmix process to its advantage: it can create not only a faithful mono or stereo equivalent of the multichannel signal, but also the best possible multichannel decoding based on the downmix and the encoded spatial cues. Alternatively, the downmix can be supplied externally. The MPEG Surround encoding process is agnostic to the compression algorithm used for the transmitted channels; it could be any high-performance compression algorithm such as MPEG-1 Layer III, MPEG-4 AAC or MPEG-4 High Efficiency AAC, or it could even be PCM. The MPEG Surround technique allows compatibility with existing and future stereo MPEG decoders by having the transmitted downmix appear to stereo MPEG decoders to be an ordinary stereo version of the multichannel signal.
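As an illustration of the downmix step, the sketch below applies a static, ITU-R BS.775-style stereo downmix to a 5.1 signal. This is an assumption for illustration only: a real MPEG Surround encoder typically derives the downmix adaptively from the signal, and the function name and coefficients here are not taken from the standard.

```python
import numpy as np

# ~ -3 dB gain conventionally applied to centre and surround channels
# in a static ITU-R BS.775-style stereo downmix.
G = 1.0 / np.sqrt(2.0)

def downmix_51_to_stereo(L, R, C, LFE, Ls, Rs):
    """Static stereo downmix of a 5.1 signal; each argument is a 1-D array.

    The LFE channel is commonly omitted from a stereo downmix, so it is
    accepted but unused here. Returns the downmix pair (xt1, xt2).
    """
    xt1 = L + G * C + G * Ls   # left downmix channel
    xt2 = R + G * C + G * Rs   # right downmix channel
    return xt1, xt2
```

The resulting pair plays as ordinary stereo on legacy decoders, while the spatial parameters extracted during encoding let an MPEG Surround decoder approximately reconstruct the original 5.1 channels from it.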
Compatibility with stereo decoders is desirable, since stereo presentation remains pervasive due to the number of applications in which listening is via headphones, such as portable music players. MPEG Surround also supports a mode in which the downmix is compatible with popular matrix surround decoders such as Dolby Pro-Logic. Due to small channel bandwidth, the large cost of transmission equipment and transmission licenses, and the desire to maximize user choice by providing many programs, the majority of existing or planned digital broadcasting systems cannot provide multichannel sound to their users. DRM+ was designed to be capable of transmitting MPEG Surround, and such broadcasting was successfully demonstrated. MPEG Surround's backward compatibility and low overhead provide one way to add multichannel sound to DAB without reducing audio quality or impacting other services; the majority of digital TV broadcasts use stereo audio.