DV is a format for storing digital video. It was launched in 1995 through the joint efforts of leading producers of video camera recorders. The original DV specification, known as the Blue Book, was standardized within the IEC 61834 family of standards. These standards define common features such as physical videocassettes and the recording modulation method; part 2 describes the specifics of the 525-60 and 625-50 systems. The IEC standards are available as publications sold by IEC and ANSI. In 2003, DV was joined by a successor format, HDV, which used the same tape format with a different video codec; some cameras at the time could switch between DV and HDV recording modes. DV uses lossy compression for video, while audio is stored uncompressed. An intraframe compression scheme is used to compress video on a frame-by-frame basis with the discrete cosine transform. Closely following the ITU-R Rec. 601 standard, DV video employs interlaced scanning with a sampling frequency of 13.5 MHz. This results in 480 scanlines per complete frame for the 60 Hz system and 576 scanlines for the 50 Hz system. In both systems the active area contains 720 pixels per scanline, with 704 pixels used for content and 16 pixels on the sides left for digital blanking.
The same frame size is used for 4:3 and 16:9 frame aspect ratios. Prior to the DCT compression stage, chroma subsampling is applied to the source video in order to reduce the amount of data to be compressed. Baseline DV uses 4:1:1 subsampling in its 60 Hz variant and 4:2:0 subsampling in the 50 Hz variant. Audio can be stored in either of two forms: 16-bit linear PCM stereo at a 48 kHz sampling rate, or four nonlinear 12-bit PCM channels at a 32 kHz sampling rate. In addition, the DV specification supports 16-bit audio at 44.1 kHz, but in practice the 48 kHz stereo mode is used almost exclusively. The audio, video and metadata are packaged into 80-byte Digital Interface Format (DIF) blocks, which are multiplexed into 150-block sequences. DIF blocks are the basic units of DV streams and can be stored as computer files in raw form or wrapped in file formats such as Audio Video Interleave and QuickTime. One video frame is formed from either 10 or 12 such sequences, depending on the scanning rate, which results in a data rate of about 25 Mbit/s for video.
When written to tape, each sequence corresponds to one complete track. As a consequence, the sound may be up to ±⅓ frame out of sync with the video; this is the maximum drift of the audio/video synchronization, and it is not compounded throughout the recording. Sony and Panasonic created their own versions of DV, which use the same compression scheme.
Bink Video is a proprietary video file format developed by RAD Game Tools, used primarily for full-motion video sequences in video games. It has been used in over 10,000 games for Windows, Mac OS, Xbox 360, GameCube, PlayStation 3, PC, PlayStation 2, Nintendo DS, and Sony PSP. The format includes its own video and audio codecs, supporting resolutions from 320×240 all the way up to high-definition video, and is bundled as part of the RAD Video Tools along with RAD Game Tools' previous video codec, Smacker video. It is a hybrid block-transform and wavelet codec capable of using 16 different encoding techniques, allowing it to adapt to virtually any type of video. The codec places emphasis on lower decoding requirements than other video codecs, with specific optimizations for the different game consoles it supports. The format was reverse-engineered by the FFmpeg project, and Bink decoding is supported by the open-source libavcodec library. Bink 2, a new version of the format, was released in 2013; it is available for Windows, Mac OS, Sony PS4, PS3 and PS Vita, Xbox One, Xbox 360, Nintendo Wii and Wii U, Android, and iOS.
Multimedia is content that uses a combination of different content forms such as text, images, animations and interactive content. Multimedia contrasts with media that use only rudimentary computer displays such as text-only, or traditional forms of printed or hand-produced material. Multimedia devices are electronic media devices used to store and experience multimedia content. Multimedia is distinguished from mixed media in fine art, and the term rich media is synonymous with interactive multimedia. The term multimedia was coined by singer and artist Bob Goldstein to promote the July 1966 opening of his LightWorks at L'Oursin show at Southampton, Long Island; Goldstein was perhaps aware of an American artist named Dick Higgins. Two years later, in 1968, the term multimedia was re-appropriated to describe the work of a political consultant, David Sawyer, the husband of Iris Sawyer, one of Goldstein's producers at L'Oursin. In the intervening forty years, the word has taken on different meanings; in the late 1970s, the term referred to presentations consisting of multi-projector slide shows timed to an audio track.
However, by the 1990s multimedia took on its current meaning. In the 1993 first edition of Multimedia: Making It Work, Tay Vaughan declared: "Multimedia is any combination of text, graphic art, sound and video that is delivered by computer. When you allow the user – the viewer of the project – to control what and when these elements are delivered, it is interactive multimedia. When you provide a structure of linked elements through which the user can navigate, interactive multimedia becomes hypermedia." The German language society Gesellschaft für deutsche Sprache recognized the word's significance; the society summed up its rationale by stating that multimedia "has become a central word in the wonderful new media world". In common usage, multimedia refers to an electronically delivered combination of media including video and still images, and much of the content on the web today falls within this definition as understood by millions. That era saw a boost in the production of educational multimedia CD-ROMs. The term video, if not used exclusively to describe motion photography, is ambiguous in multimedia terminology.
Video is often used to describe the file format or delivery format rather than the footage itself. Multiple forms of content presented together are sometimes not considered multimedia when they use familiar forms of presentation such as audio or video; likewise, single forms of content with single methods of information processing are sometimes called multimedia. Performing arts may be considered multimedia, in that performers and props constitute multiple forms of content and media. Multimedia presentations may be viewed by a person on stage, or transmitted; a broadcast may be a live or recorded multimedia presentation. Broadcasts and recordings can use either analog or digital electronic media technology. Digital online multimedia may be downloaded or streamed, and streaming multimedia may be live or on-demand.
Society of Motion Picture and Television Engineers
SMPTE membership is open to any individual or organization with an interest in the subject matter. SMPTE standards documents are copyrighted and may be purchased from the SMPTE website by the general public. Significant standards promulgated by SMPTE cover all film and television formats and media. The society sponsors many awards, the oldest of which are the SMPTE Progress Medal and the Samuel Warner Memorial Medal. SMPTE has a number of student chapters and sponsors scholarships for college students in the motion-imaging disciplines. A group within the standards committees has begun to work on the definition of the SMPTE 3D Home Master, and in 1999 SMPTE instituted a committee for the foundations of Digital Cinema. The SMPTE presents awards to individuals for outstanding contributions in the fields of the society; recipients include James Cameron, Edwin Catmull, Birney Dayton, Clyde D. Smith, Roderick Snell, Dr. Kees Immink, Stanley N. Baron, William C. Miller, Bernard J. Lechner, Ray Dolby, Harold E. Edgerton, Vladimir K. Zworykin, John G. Frayne, Walt Disney, Chuck Pagano, James M. DeFilippis, William F. Schreiber, Adrian Ettlinger, Joseph A. Flaherty, Goldmark, W. R. G. Baker, Albert Rose, Charles Ginsburg, Robert E. Shelby, Arthur V. Loughren and Otto H. Recent recipients are Andrew Laszlo, James MacKay and Dr. Roderick T. Swartz.
MPEG-1 is a standard for lossy compression of video and audio. Today, MPEG-1 has become the most widely compatible lossy audio/video format in the world; perhaps the best-known part of the MPEG-1 standard is the MP3 audio format it introduced. The MPEG-1 standard is published as ISO/IEC 11172 – Information technology – Coding of moving pictures and associated audio. MPEG was formed to address the need for standard video and audio formats, and to build on H.261 to get better quality through the use of more complex encoding methods. Development of the MPEG-1 standard began in May 1988; fourteen video and fourteen audio codec proposals were submitted by individual companies and institutions for evaluation. The codecs were extensively tested for computational complexity and subjective quality at a bit rate of about 1.5 Mbit/s. This specific bitrate was chosen for transmission over T-1/E-1 lines and as the approximate data rate of audio CDs. The codecs that excelled in testing were utilized as the basis for the standard and refined further, with additional features.
The reported completion date of the MPEG-1 standard varies greatly: a complete draft standard was produced in September 1990 and was publicly available for purchase, while the standard was finished with the 6 November 1992 meeting. The Berkeley Plateau Multimedia Research Group developed an MPEG-1 decoder in November 1992. Due in part to the similarity between the two codecs, the MPEG-2 standard includes full backwards compatibility with MPEG-1 video, so any MPEG-2 decoder can play MPEG-1 videos. The standard defines only the bitstream format and the decoder, leaving encoder design open; this means that MPEG-1 coding efficiency can vary depending on the encoder used. The first three parts of ISO/IEC 11172 were published in August 1993. The ISO patent database lists one patent for ISO 11172, US 4,472,747, which expired in 2003. The near-complete draft of the MPEG-1 standard was publicly available as ISO CD 11172 by 6 December 1991. A May 2009 discussion on the whatwg mailing list mentioned the US 5,214,678 patent as possibly covering MPEG Audio Layer II; filed in 1990 and published in 1993, this patent is now expired. Most popular software for video playback includes MPEG-1 decoding, in addition to any other supported formats.
The popularity of MP3 audio has established a massive installed base of hardware that can play back MPEG-1 Audio; virtually all digital audio devices can play back MPEG-1 Audio, and many millions have been sold to date. Before MPEG-2 became widespread, many digital satellite/cable TV services used MPEG-1 exclusively. The widespread popularity of MPEG-2 with broadcasters means MPEG-1 is playable by most digital cable and satellite set-top boxes. MPEG-1 was used for full-screen video on Green Book CD-i and on Video CD. The Super Video CD standard, based on VCD, uses MPEG-1 audio exclusively. The DVD-Video format uses MPEG-2 video primarily, but MPEG-1 support is explicitly defined in the standard. The DVD-Video standard originally required MPEG-1 Layer II audio for PAL countries; MPEG-1 Layer II audio is still allowed on DVDs, although newer extensions to the format, like MPEG Multichannel, are rarely supported.
A computing platform is, in the most general sense, whatever environment a piece of software is executed in. It may be the hardware or the operating system, or even a web browser or other application. The term computing platform can refer to different abstraction levels, including a hardware architecture or an operating system; in total, it can be said to be the stage on which programs can run. For example, an OS may be a platform that abstracts the underlying differences in hardware. Platforms may include: hardware alone, in the case of small embedded systems, which can access hardware directly, without an OS (referred to as running on bare metal); a browser, in the case of web-based software (the browser itself runs on a platform, but this is not relevant to software running within the browser); an application, such as a spreadsheet or word processor, which hosts software written in a scripting language (this can be extended to writing fully fledged applications with the Microsoft Office suite as a platform); or software frameworks that provide ready-made functionality.
Platforms also include cloud computing and Platform as a Service (the social networking sites Twitter and Facebook are likewise considered development platforms); a virtual machine such as the Java virtual machine, where applications are compiled into a format similar to machine code, known as bytecode, which is then executed by the VM; and a virtualized version of a complete system, including virtualized hardware, OS and software. These allow, for instance, a typical Windows program to run on what is physically a Mac. Some architectures have multiple layers, with each layer acting as a platform for the one above it. In general, a component only has to be adapted to the layer immediately beneath it; the JVM, the layer beneath the application, does have to be built separately for each OS.
C++ is a general-purpose programming language. It has imperative, object-oriented and generic programming features, while also providing facilities for low-level memory manipulation. It was designed with a bias toward system programming and embedded, resource-constrained and large systems, with performance and flexibility of use as its design highlights. C++ is a compiled language, with implementations available on many platforms and provided by various organizations, including the Free Software Foundation, LLVM and Intel. C++ is standardized by the International Organization for Standardization, with the latest standard version ratified and published by ISO in December 2014 as ISO/IEC 14882:2014. The C++ programming language was initially standardized in 1998 as ISO/IEC 14882:1998. The current C++14 standard supersedes this and C++11, adding new features; the C++17 standard is due in 2017, with its draft largely implemented by some compilers already, and C++20 is the next planned standard thereafter. Many other programming languages have been influenced by C++, including C#, D and Java.
In 1979, Bjarne Stroustrup, a Danish computer scientist, began work on C with Classes; the motivation for creating a new language originated from Stroustrup's experience in programming for his Ph.D. thesis. When Stroustrup started working at AT&T Bell Labs, he had the problem of analyzing the UNIX kernel with respect to distributed computing. Remembering his Ph.D. experience, Stroustrup set out to enhance the C language with Simula-like features; C was chosen because it was general-purpose and portable. As well as the influences of C and Simula, other languages shaped C++, including ALGOL 68, Ada, CLU and ML. Initially, Stroustrup's C with Classes added features to the C compiler, including classes, derived classes and strong typing; furthermore, the effort included the development of a standalone compiler for C++, Cfront. In 1985, the first edition of The C++ Programming Language was released, and the first commercial implementation of C++ followed in October of the same year. In 1989, C++ 2.0 was released, followed by the second edition of The C++ Programming Language in 1991.
New features in 2.0 included multiple inheritance, abstract classes, static member functions and const member functions. In 1990, The Annotated C++ Reference Manual was published; this work became the basis for the future standard. Later feature additions included templates, namespaces and new-style casts. After a minor C++14 update released in December 2014, various new additions are planned for 2017 and 2020. According to Stroustrup, the name signifies the evolutionary nature of the changes from C. The name is credited to Rick Mascitti and was first used in December 1983; when Mascitti was questioned informally in 1992 about the naming, he indicated that it was given in a tongue-in-cheek spirit.
Huffyuv is a lossless video codec created by Ben Rudiak-Gould, meant to replace uncompressed YCbCr as a video capture format. The codec can also compress in the RGB color space. Lossless means that the output from the decompressor is bit-for-bit identical with the original input to the compressor; this only holds when the compression color space matches the input and output color space. When the color spaces do not match, a low-loss compression is performed instead. Huffyuv's algorithm is similar to that of lossless JPEG, in that it predicts each sample and Huffman-encodes the error. The original implementation was written for Windows by Ben Rudiak-Gould and published under the terms of the GPL; Huffyuv 1.1 was released in 2000. The implementation is considered very fast, giving a throughput of up to 38 megabytes per second on a 416 MHz Celeron. The official Huffyuv has not had a new release since 2002, although Huffyuv 2.1.1 with CCESP patch 0.2.5 was released to address problems, particularly compatibility with Cinema Craft Encoder.
Huffyuv 2.2 is available on some sites but is reported to have problems on some computer systems. Huffyuv MT is a multithreaded version that uses a different FourCC. There is also an actively developed fork of the code named Lagarith, which offers better compression at the cost of reduced speed on uniprocessor systems.
International Electrotechnical Commission
The IEC manages three global conformity assessment systems that certify whether equipment, systems or components conform to its International Standards. The first International Electrical Congress took place in 1881 at the International Exposition of Electricity, at which time the International System of Electrical and Magnetic Units was agreed to. The IEC was instrumental in developing and distributing standards for units of measurement, particularly the gauss and the hertz. It first proposed a system of standards, the Giorgi System, which ultimately became the SI, or Système International d'unités. In 1938, it published a multilingual international vocabulary to unify terminology relating to electrical and electronic technologies; this effort continues, and the International Electrotechnical Vocabulary remains an important work in the electrical and electronic industries. CISPR – in English, the International Special Committee on Radio Interference – is one of the groups founded by the IEC. Originally located in London, the commission moved to its current headquarters in Geneva in 1948.
It has regional centres in Asia-Pacific, Latin America and North America. The IEC is the world's leading international organization in its field, and its standards are adopted as national standards by its members. The work is done by some 10,000 electrical and electronics experts from industry, academia and test labs. IEC standards have numbers in the range 60000–79999, and their titles take a form such as IEC 60417: Graphical symbols for use on equipment. Following the Dresden Agreement with CENELEC, the numbers of older IEC standards were converted in 1997 by adding 60000; for example, IEC 27 became IEC 60027. Standards of the 60000 series are preceded by EN to indicate that the IEC standard is also adopted by CENELEC as a European standard. The IEC cooperates closely with the International Organization for Standardization and the International Telecommunication Union. Standards developed jointly with ISO, such as ISO/IEC 26300, ISO/IEC 27001 and the CASCO ISO/IEC 17000 series, carry the acronym of both organizations.
The use of the ISO/IEC prefix covers publications from ISO/IEC Joint Technical Committee 1 – Information Technology, as well as conformity assessment standards developed by ISO CASCO. Other standards developed in cooperation between IEC and ISO are assigned numbers in the 80000 series, such as IEC 82045-1. IEC standards are also adopted by other certifying bodies such as BSI, CSA, UL & ANSI/INCITS, SABS, SAI and SPC/GB; IEC standards adopted by other certifying bodies may have some noted differences from the original IEC standard. The IEC is made up of members, called national committees. National committees are constituted in different ways: some NCs are public sector only, while some are a combination of public and private sector. About 90% of those who prepare IEC standards work in industry.