1.
Xiph.Org Foundation
–
The Xiph.Org Foundation is a non-profit organization that produces free multimedia formats and software tools. It focuses on the Ogg family of formats, of which the most successful has been Vorbis. As of 2013, current development work focused on Daala, an open and patent-free video format and codec designed to compete with the patented High Efficiency Video Coding and VP9. Other Xiph.Org projects include Speex, a codec designed for speech, and FLAC, a lossless audio format. The Xiph.Org Foundation has criticized Microsoft and the RIAA for their lack of openness, and it has also condemned the RIAA for its support of projects such as the Secure Digital Music Initiative. In 2008, the Free Software Foundation listed the Xiph.Org projects among its High Priority Free Software Projects. Chris Montgomery, creator of the Ogg container format, founded the Xiphophorus company and later the Xiph.Org Foundation. The first work on what became the Ogg media projects started in 1994. The name Xiph abbreviates the original organizational name, Xiphophorus, which was named after the common swordtail fish, Xiphophorus hellerii. The name Xiphophorus company was used until 2002, when the organization was renamed the Xiph.Org Foundation. In 2002, the Xiph.Org Foundation defined itself on its website as a non-profit corporation dedicated to protecting the foundations of Internet multimedia from control by private interests. In March 2003, the Xiph.Org Foundation was recognized by the IRS as a 501(c)(3) non-profit organization. Its best-known project is Ogg: a multimedia container format, a reference implementation, and the native file and stream format for the Xiph.Org multimedia codecs.
2.
Software release life cycle
–
Usage of the alpha/beta test terminology originated at IBM. As long ago as the 1950s, IBM used similar terminology for its hardware development: the A test was the verification of a new product before public announcement, the B test was the verification before releasing the product to be manufactured, and the C test was the final test before general availability of the product. Martin Belsky, a manager on some of IBM's earlier software projects, claimed to have invented the terminology. IBM dropped the alpha/beta terminology during the 1960s, but by then it had received fairly wide notice. The usage of beta test to refer to testing done by customers was not done in IBM; rather, IBM used the term field test. Pre-alpha refers to all activities performed during the project before formal testing. These activities can include requirements analysis, software design, and software development. In typical open source development, there are several types of pre-alpha versions; milestone versions include specific sets of functions and are released as soon as the functionality is complete. The alpha phase of the release life cycle is the first phase to begin software testing. In this phase, developers generally test the software using white-box techniques; additional validation is then performed using black-box or gray-box techniques by another testing team. Moving to black-box testing inside the organization is known as the alpha release. Alpha software can be unstable and could cause crashes or data loss, and it may not contain all of the features that are planned for the final version. In general, external availability of alpha software is uncommon in proprietary software, while open source software often has publicly available alpha versions. The alpha phase usually ends with a feature freeze, indicating that no more features will be added to the software; at this time, the software is said to be feature complete. Beta, named after the second letter of the Greek alphabet, is the software development phase following alpha. Software in the beta stage is also known as betaware. The beta phase generally begins when the software is feature complete but likely to contain a number of known or unknown bugs. Software in the beta phase will generally have many more bugs in it than completed software, as well as speed and performance issues. The focus of beta testing is reducing impacts to users, often incorporating usability testing. The process of delivering a beta version to the users is called beta release, and this is typically the first time that the software is available outside of the organization that developed it. Beta version software is often useful for demonstrations and previews within an organization.
3.
Ogg
–
Ogg is a free, open container format maintained by the Xiph.Org Foundation. The creators of the Ogg format state that it is unrestricted by software patents and is designed to provide for efficient streaming and manipulation of high-quality digital multimedia. Its name is derived from ogging, jargon from the computer game Netrek. The Ogg container format can multiplex a number of independent streams for audio, video, and text. In the Ogg multimedia framework, Theora provides a lossy video layer, while the audio layer is most commonly provided by the music-oriented Vorbis format or its successor Opus; lossless audio compression formats include FLAC and OggPCM. Before 2007, the .ogg filename extension was used for all files whose content used the Ogg container format; since 2007, the Xiph.Org Foundation recommends that .ogg only be used for Ogg Vorbis audio files. As of August 4, 2011, the current version of the Xiph.Org Foundation's reference implementation is libogg 1.3.0. Another version, libogg2, has been in development, but has been awaiting a rewrite as of 2008. Both software libraries are free software, released under the New BSD License. The Ogg reference implementation was separated from Vorbis on September 2, 2000. It is sometimes assumed that the name Ogg comes from the character of Nanny Ogg in Terry Pratchett's Discworld novels, but Ogg is in fact derived from ogging, jargon from the computer game Netrek, which came to mean doing something forcefully, possibly without consideration of the drain on future resources. At its inception, the Ogg project was thought to be somewhat ambitious given the power of the PC hardware of the time; still, to quote the same reference, Vorbis, on the other hand, is named after the Terry Pratchett character from the book Small Gods. The Ogg Vorbis project started in 1993. It was originally named Squish, but that name was already trademarked, so the project underwent a name change. The new name, OggSquish, was used until 2001, when it was changed again to Ogg. Ogg has since come to refer to the container format, which is now part of the larger Xiph.org multimedia project, while Vorbis refers to the audio coding format typically used with the Ogg container format. The Ogg bitstream format was spearheaded by the Xiph.Org Foundation. The format consists of chunks of data, each called an Ogg page. Each page begins with the characters OggS, to identify the file as Ogg format. A serial number and page number in the page header identify each page as part of a series of pages making up a bitstream. Multiple bitstreams may be multiplexed in the file, where pages from each bitstream are ordered by the time of the contained data. Bitstreams may also be appended to existing files, a process known as chaining. A BSD-licensed library, called libvorbis, is available to encode and decode data from Vorbis streams. Independent Ogg implementations are used in several projects such as RealPlayer and a set of DirectShow filters. Mogg, the Multi-Track-Single-Logical-Stream Ogg-Vorbis format, is the multi-channel or multi-track Ogg file format. The first field in the layout of an Ogg page header is the capture pattern (32 bits): the capture pattern or sync code is a magic number, the four characters OggS, used to ensure synchronization when parsing Ogg files.
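The page layout described above can be illustrated with a short, self-contained sketch. This is not the Xiph.Org reference code, just a minimal C program, assuming the standard 27-byte page header layout (serial number at byte offset 14, page sequence number at offset 18), that checks the OggS capture pattern and prints those two fields for the first page of a file.

```c
/* Minimal sketch: verify the Ogg capture pattern and read the page header
 * fields mentioned above (bitstream serial number, page sequence number).
 * Offsets follow the standard 27-byte Ogg page header layout. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

static uint32_t le32(const unsigned char *p) {
    return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
           ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s file.ogg\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    unsigned char hdr[27];
    if (fread(hdr, 1, sizeof hdr, f) != sizeof hdr ||
        memcmp(hdr, "OggS", 4) != 0) {                /* 32-bit capture pattern */
        fprintf(stderr, "not an Ogg page\n");
        fclose(f);
        return 1;
    }
    printf("serial number: %u\n", (unsigned)le32(hdr + 14)); /* bitstream serial  */
    printf("page number:   %u\n", (unsigned)le32(hdr + 18)); /* page sequence no. */
    fclose(f);
    return 0;
}
```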
4.
International standard
–
International standards are standards developed by international standards organizations and are available for consideration and use worldwide. The most prominent such organization is the International Organization for Standardization. International standards may be used either by direct application or by a process of modifying an international standard to suit local conditions. Technical barriers arise when different groups, each with a large user base, settle on mutually incompatible ways of doing things; establishing international standards is one way of preventing or overcoming this problem. The implementation of standards in industry and commerce became highly important with the onset of the Industrial Revolution and the need for high-precision machine tools and interchangeable parts. Henry Maudslay developed the first industrially practical screw-cutting lathe in 1800. Maudslay's work, as well as the contributions of other engineers, accomplished a modest amount of industry standardization; some companies' in-house standards spread a bit within their industries. Joseph Whitworth's screw thread measurements were adopted as the first national standard by companies around the country in 1841. It came to be known as the British Standard Whitworth and was widely adopted in other countries. By the end of the 19th century, differences in standards between companies were making trade increasingly difficult and strained. The Engineering Standards Committee was established in London in 1901 as the world's first national standards body. After the First World War, similar national bodies were established in other countries. By the mid to late 19th century, efforts were being made to standardize electrical measurement. An important figure was R. E. B. Crompton, who became concerned by the large range of different standards and systems used by electrical engineering companies and scientists in the early 20th century. Many companies had entered the market in the 1890s and all chose their own settings for voltage, frequency, and current; adjacent buildings would have totally incompatible electrical systems simply because they had been fitted out by different companies. Crompton could see the lack of efficiency in this system and began to consider proposals for a standard for electric engineering. In 1904, Crompton represented Britain at the Louisiana Purchase Exposition in Saint Louis as part of a delegation by the Institute of Electrical Engineers. He presented a paper on standardisation, which was so well received that he was asked to look into the formation of a commission to oversee the process. By 1906 his work was complete and he drew up a permanent constitution for the first international standards organization. The body held its first meeting that year in London, with representatives from 14 countries. In honour of his contribution to electrical standardisation, Lord Kelvin was elected as the body's first President. The International Federation of the National Standardizing Associations (ISA) was founded in 1926 with a broader remit to enhance international cooperation for all technical standards and specifications. The body was suspended in 1942 during World War II. After the war, ISA was approached by the recently formed United Nations Standards Coordinating Committee with a proposal to form a new global standards body.
5.
Software developer
–
A software developer is a person concerned with facets of the software development process, including the research, design, programming, and testing of computer software. Other job titles used with similar meanings are programmer and software analyst. According to developer Eric Sink, the differences between system design, software development, and programming are more apparent than the job titles suggest; at the far end of this spectrum, developers become systems architects, those who design the multi-leveled architecture or component interactions of a large software system. In a large company, there may be employees whose sole responsibility consists of only one of the phases above; in smaller development environments, a few people or even a single individual might handle the complete process. The word software was coined as a prank as early as 1953. Before this time, computers were programmed either by customers or by the few commercial computer vendors of the time, such as UNIVAC and IBM. The first company founded to provide software products and services was Computer Usage Company in 1955. The software industry expanded in the early 1960s, almost immediately after computers were first sold in mass-produced quantities, as universities, government, and business customers created a demand for software. Many of these programs were written in-house by full-time staff programmers, and some were distributed freely between users of a particular machine for no charge. Others were developed on a commercial basis, and firms such as Computer Sciences Corporation started to grow. The computer and hardware makers started bundling operating systems, systems software, and programming environments with their machines. New software was built for microcomputers, and other manufacturers, including IBM, quickly followed DEC's example, resulting in the IBM AS/400 amongst others. The industry expanded greatly with the rise of the personal computer in the mid-1970s, which in the following years created a growing market for games and applications. DOS, Microsoft's first operating system product, was the dominant operating system at the time. By 2014 the role of cloud developer had been defined, and in this context one definition of a developer in general was published: developers make software for the world to use, and the job of a developer is to crank out code -- fresh code for new products, code fixes for maintenance, and code for business logic.
6.
C (programming language)
–
C was originally developed by Dennis Ritchie between 1969 and 1973 at Bell Labs, and was used to re-implement the Unix operating system. C has been standardized by the American National Standards Institute (ANSI) since 1989. C is an imperative procedural language; its design provides constructs that map efficiently to typical machine instructions, and it was therefore useful for applications that had formerly been coded in assembly language. Despite its low-level capabilities, the language was designed to encourage cross-platform programming: a standards-compliant and portably written C program can be compiled for a very wide variety of computer platforms and operating systems with few changes to its source code, and the language has become available on a wide range of platforms. In C, all executable code is contained within subroutines, which are called functions. Function parameters are passed by value; pass-by-reference is simulated in C by explicitly passing pointer values. C program source text is free-format, using the semicolon as a statement terminator and curly braces for grouping blocks of statements. The C language also exhibits the following characteristics. There is a small, fixed number of keywords, including a full set of flow-of-control primitives: for, if/else, while, switch. User-defined names are not distinguished from keywords by any kind of sigil. There are a large number of arithmetical and logical operators, such as +, +=, ++, &, and ~. More than one assignment may be performed in a single statement, and function return values can be ignored when not needed. Typing is static, but weakly enforced: all data has a type, but implicit conversions are possible. C has no define keyword; instead, a statement beginning with the name of a type is taken as a declaration. There is no function keyword; instead, a function is indicated by the parentheses of an argument list. User-defined and compound types are possible. Heterogeneous aggregate data types (structs) allow related data elements to be accessed and assigned as a unit. Array indexing is a secondary notation, defined in terms of pointer arithmetic. Unlike structs, arrays are not first-class objects: they cannot be assigned or compared using single built-in operators. There is no array keyword, in use or definition; instead, square brackets indicate arrays syntactically, for example month[11]. Enumerated types are possible with the enum keyword; they are not tagged, and are freely interconvertible with integers. Strings are not a distinct data type, but are conventionally implemented as null-terminated arrays of characters. Low-level access to memory is possible by converting machine addresses to typed pointers.
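Several of the characteristics listed above can be seen together in a short, hypothetical example (not drawn from any particular codebase): pass-by-value parameters, pointer-based simulation of pass-by-reference, a struct accessed as a unit, bracket-indexed arrays, an enum interconvertible with integers, and a null-terminated string.

```c
#include <stdio.h>

enum color { RED, GREEN, BLUE };         /* enum values interconvert with int */

struct point { int x; int y; };          /* heterogeneous aggregate type      */

/* Pass-by-value: modifying n here does not affect the caller's argument. */
static void bump_copy(int n) { n += 1; }

/* Pass-by-reference is simulated by explicitly passing a pointer. */
static void bump_in_place(int *n) { *n += 1; }

int main(void) {
    int counter = 0;
    bump_copy(counter);                  /* counter is still 0                */
    bump_in_place(&counter);             /* counter becomes 1                 */

    struct point p = { 2, 3 };           /* accessed and assigned as a unit   */
    int month[12] = { 0 };               /* square brackets declare an array  */
    month[11] = p.x + p.y;

    enum color c = BLUE;
    const char *name = "Ritchie";        /* null-terminated array of chars    */

    printf("counter=%d month[11]=%d color=%d name=%s\n",
           counter, month[11], (int)c, name);
    return 0;
}
```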
7.
Operating system
–
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. All computer programs, excluding firmware, require an operating system to function. Operating systems are found on many devices that contain a computer, from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows with a market share of around 83.3%; macOS by Apple Inc. is in second place, and the varieties of Linux are in third position. Linux distributions are dominant in the server and supercomputing sectors. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can run only one program at a time, while a multi-tasking operating system allows more than one program to run concurrently. Multi-tasking may be characterized in preemptive and co-operative types. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs; Unix-like operating systems, e.g. Solaris and Linux, use preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking; 32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked and made to communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system. Such techniques are used both in virtualization and cloud computing management, and are common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines like PDAs with less autonomy, and they are able to operate with a limited number of resources. They are very compact and extremely efficient by design; Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could run different programs in succession to speed up processing.
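The difference between the two multitasking styles can be sketched with a deliberately simplified toy: in the cooperative model below, each task runs briefly and voluntarily returns control to a round-robin loop, so a task that never yields would starve all the others; a preemptive system would instead interrupt tasks on a timer slice. This is an illustration only, not how any real kernel is implemented.

```c
/* Toy model of cooperative multitasking: tasks are plain functions that do a
 * little work and then return (yield) to a round-robin scheduler loop. */
#include <stdio.h>

#define NUM_TASKS 2

static void task_a(void) { puts("task A does a little work, then yields"); }
static void task_b(void) { puts("task B does a little work, then yields"); }

int main(void) {
    void (*tasks[NUM_TASKS])(void) = { task_a, task_b };

    /* Round-robin: give each task a turn; each call is expected to return quickly. */
    for (int round = 0; round < 3; round++)
        for (int i = 0; i < NUM_TASKS; i++)
            tasks[i]();   /* a task that never returned would hang everything */
    return 0;
}
```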
8.
Unix-like
–
A Unix-like operating system is one that behaves in a manner similar to a Unix system, while not necessarily conforming to or being certified to any version of the Single UNIX Specification. A Unix-like application is one that behaves like the corresponding Unix command or shell. There is no standard for defining the term, and some difference of opinion is possible as to the degree to which a given operating system or application is Unix-like. The Open Group owns the UNIX trademark and administers the Single UNIX Specification; it does not approve of the construction Unix-like, and considers it a misuse of its trademark. Other parties frequently treat Unix as a genericized trademark. In 2007, Wayne R. Gray sued to dispute the status of UNIX as a trademark, but lost his case, and lost again on appeal, with the court upholding the trademark and its ownership. Unix-like systems started to appear in the late 1970s and early 1980s. Many proprietary versions, such as Idris, UNOS, Coherent, and UniFlex, aimed to provide businesses with the functionality available to academic users of UNIX. Commercial UNIX derivatives licensed from AT&T later largely displaced these proprietary clones. Growing incompatibility among these systems led to the creation of interoperability standards, including POSIX and the Single UNIX Specification. Various free, low-cost, and unrestricted substitutes for UNIX emerged in the 1980s and 1990s, including 4.4BSD and Linux; some of these have in turn been the basis for commercial Unix-like systems, such as BSD/OS and OS X. The various BSD variants are notable in that they are in fact descendants of UNIX; however, the BSD code base has evolved since then, replacing all of the AT&T code. Since the BSD variants are not certified as compliant with the Single UNIX Specification, they are referred to as Unix-like. Dennis Ritchie, one of the original creators of Unix, expressed his opinion that Unix-like systems such as Linux are de facto Unix systems. Eric S. Raymond and Rob Landley have suggested there are three kinds of Unix-like systems. Genetic UNIX: those systems with a historical connection to the AT&T codebase; most commercial UNIX systems fall into this category, as do the BSD systems, which are descendants of work done at the University of California, Berkeley in the late 1970s and early 1980s, and some of these systems have no original AT&T code but can still trace their ancestry to AT&T designs. Trademark or branded UNIX: these systems, largely commercial in nature, have been determined by the Open Group to meet the Single UNIX Specification and are allowed to carry the UNIX name; many ancient UNIX systems no longer meet this definition. Around 2001, Linux was given the opportunity to obtain a certification, including free help from the POSIX chair Andrew Josey, for the price of one dollar. Some non-Unix-like operating systems provide a Unix-like compatibility layer, with varying degrees of Unix-like functionality. IBM z/OS's UNIX System Services is sufficiently complete to be certified as trademark UNIX. Cygwin and MSYS both provide a GNU environment on top of the Microsoft Windows user API, sufficient for most common open source software to be compiled and run. The Subsystem for Unix-based Applications provides Unix-like functionality as a Windows NT subsystem, and the Windows Subsystem for Linux provides a Linux-compatible kernel interface developed by Microsoft and containing no Linux code, with Ubuntu user-mode binaries running on top of it.
9.
Linux
–
Linux is a Unix-like computer operating system assembled under the model of free and open-source software development and distribution. The defining component of Linux is the Linux kernel, an operating system kernel first released on September 17, 1991 by Linus Torvalds. The Free Software Foundation uses the name GNU/Linux to describe the operating system, which has led to some controversy. Linux was originally developed for computers based on the Intel x86 architecture, but has since been ported to more platforms than any other operating system. Because of the dominance of Android on smartphones, Linux has the largest installed base of all general-purpose operating systems. Linux is also the leading operating system on servers and other big iron systems such as mainframe computers. It is used by around 2.3% of desktop computers; the Chromebook, which runs on Chrome OS, dominates the US K–12 education market and represents nearly 20% of sub-$300 notebook sales in the US. Linux also runs on embedded systems, devices whose operating system is built into the firmware and is highly tailored to the system; this includes TiVo and similar DVR devices, network routers, facility automation controls, and televisions. Many smartphones and tablet computers run Android and other Linux derivatives. The development of Linux is one of the most prominent examples of free and open-source software collaboration. The underlying source code may be used, modified and distributed, commercially or non-commercially, by anyone under the terms of its respective licenses, such as the GNU General Public License. Typically, Linux is packaged in a form known as a Linux distribution for both desktop and server use. Distributions intended to run on servers may omit all graphical environments from the standard install, and because Linux is freely redistributable, anyone may create a distribution for any intended use. The Unix operating system was conceived and implemented in 1969 at AT&T's Bell Laboratories in the United States by Ken Thompson, Dennis Ritchie, Douglas McIlroy, and others. First released in 1971, Unix was written entirely in assembly language, as was common practice at the time. Later, in a key pioneering approach in 1973, it was rewritten in the C programming language by Dennis Ritchie. The availability of a high-level language implementation of Unix made its porting to different computer platforms easier. Due to an earlier antitrust case forbidding it from entering the computer business, AT&T licensed the operating system's source code to anyone who asked; as a result, Unix grew quickly and became widely adopted by academic institutions and businesses. In 1984, AT&T divested itself of Bell Labs; freed of the legal obligation requiring free licensing, Bell Labs began selling Unix as a proprietary product. The GNU Project, started in 1983 by Richard Stallman, has the goal of creating a complete Unix-compatible software system composed entirely of free software. Later, in 1985, Stallman started the Free Software Foundation. By the early 1990s, many of the programs required in an operating system were completed, although low-level elements such as device drivers, daemons, and the kernel were stalled and incomplete. Linus Torvalds has stated that if the GNU kernel had been available at the time, he probably would not have decided to write his own. Although not released until 1992 due to legal complications, development of 386BSD, from which NetBSD, OpenBSD and FreeBSD descended, predated that of Linux. Torvalds has also stated that if 386BSD had been available at the time, he probably would not have created Linux. Although the complete source code of MINIX was freely available, the licensing terms prevented it from being free software until the licensing changed in April 2000.
10.
MacOS
–
Within the market of desktop, laptop and home computers, and by web usage, macOS is the second most widely used desktop OS, after Microsoft Windows. Launched in 2001 as Mac OS X, the series is the latest in the family of Macintosh operating systems. Mac OS X succeeded the classic Mac OS, which was introduced in 1984 and whose final release was Mac OS 9 in 1999. An initial, early version of the system, Mac OS X Server 1.0, was released in 1999; the first desktop version, Mac OS X 10.0, followed in March 2001. In 2012, Apple rebranded Mac OS X to OS X. Releases were code-named after big cats from the original release up until OS X 10.8 Mountain Lion; beginning in 2013 with OS X 10.9 Mavericks, releases have been named after landmarks in California. In 2016, Apple rebranded OS X to macOS, adopting the nomenclature that it uses for its other operating systems, iOS, watchOS, and tvOS. The latest version of macOS is macOS 10.12 Sierra. macOS is based on technologies developed at NeXT between 1985 and 1997, when Apple acquired the company. The X in Mac OS X and OS X is pronounced ten. macOS shares its Unix-based core, named Darwin, and many of its frameworks with iOS, tvOS and watchOS. A heavily modified version of Mac OS X 10.4 Tiger was used for the first-generation Apple TV. Apple also used to have a separate line of releases of Mac OS X designed for servers; beginning with Mac OS X 10.7 Lion, the server functions were made available as a separate package on the Mac App Store. Releases of Mac OS X from 1999 to 2005 can run only on the PowerPC-based Macs of the time period. Mac OS X 10.5 Leopard was released as a Universal binary, meaning the installer disc supported both Intel and PowerPC processors. In 2009, Apple released Mac OS X 10.6 Snow Leopard; in 2011, Apple released Mac OS X 10.7 Lion, which no longer supported 32-bit Intel processors and also did not include Rosetta. All versions of the system released since then run exclusively on 64-bit Intel CPUs. The heritage of what would become macOS originated at NeXT, a company founded by Steve Jobs following his departure from Apple in 1985. There, the Unix-like NeXTSTEP operating system was developed and then launched in 1989; its graphical user interface was built on top of an object-oriented GUI toolkit using the Objective-C programming language. This led Apple to purchase NeXT in 1996, allowing NeXTSTEP, then called OPENSTEP, to serve as the basis for Apple's next operating system. Previous Macintosh operating systems were named using Arabic numerals, e.g. Mac OS 8 and Mac OS 9. The letter X in Mac OS X's name refers to the number 10, and it is therefore correctly pronounced ten /ˈtɛn/ in this context; however, a common mispronunciation is X /ˈɛks/. Consumer releases of Mac OS X included more backward compatibility: Mac OS applications could be rewritten to run natively via the Carbon API. The consumer version of Mac OS X was launched in 2001 with Mac OS X 10.0. Reviews were variable, with praise for its sophisticated, glossy Aqua interface.
11.
Microsoft Windows
–
Microsoft Windows is a metafamily of graphical operating systems developed, marketed, and sold by Microsoft. It consists of several families of operating systems, each of which caters to a certain sector of the computing industry, with the OS typically associated with IBM PC compatible architecture. Active Windows families include Windows NT, Windows Embedded and Windows Phone; defunct Windows families include Windows 9x and Windows Mobile. Windows 10 Mobile is an active product, unrelated to the defunct family Windows Mobile. Microsoft introduced an operating environment named Windows on November 20, 1985. Microsoft Windows came to dominate the world's personal computer market with over 90% market share, overtaking Mac OS, which had been introduced in 1984. Apple came to see Windows as an encroachment on its innovation in GUI development as implemented on products such as the Lisa. On PCs, Windows is still the most popular operating system; however, in 2014, Microsoft admitted losing the majority of the overall operating system market to Android because of the massive growth in sales of Android smartphones. In 2014, the number of Windows devices sold was less than 25% that of Android devices sold. This comparison, however, may not be fully relevant, as the two operating systems traditionally target different platforms. As of September 2016, the most recent version of Windows for PCs, tablets and smartphones is Windows 10; the most recent version for server computers is Windows Server 2016. A specialized version of Windows runs on the Xbox One game console. Microsoft, the developer of Windows, has registered several trademarks, each of which denotes a family of Windows operating systems that target a specific sector of the computing industry. It now consists of three operating system subfamilies that are released almost at the same time and share the same kernel. Windows: the operating system for personal computers and tablets; the latest version is Windows 10, and the main competitors of this family are macOS by Apple Inc. for personal computers and Android for mobile devices. Windows Server: the operating system for server computers; the latest version is Windows Server 2016, and unlike its client sibling, it has adopted a strong naming scheme; the main competitor of this family is Linux. Windows PE: a lightweight version of its Windows sibling, meant to operate as a live operating system used for installing Windows on bare-metal computers; the latest version is Windows PE 10.0.10586.0. Windows Embedded: initially, Microsoft developed Windows CE as a general-purpose operating system for every device that was too resource-limited to be called a full-fledged computer. The following Windows families are no longer being developed: Windows 9x (Microsoft now caters to the consumer market with Windows NT) and Windows Mobile (the predecessor to Windows Phone, a mobile operating system).
12.
Software license
–
A software license is a legal instrument governing the use or redistribution of software. Under United States copyright law, all software is copyright protected, in source code as well as object code form; the only exception is software in the public domain. Most distributed software can be categorized according to its license type. Two common categories for software under copyright law, and therefore with licenses which grant the licensee specific rights, are proprietary software and free and open-source software. Unlicensed software outside of copyright protection is either public domain software or software which is non-distributed, non-licensed and handled as an internal business trade secret. Contrary to popular belief, distributed unlicensed software is copyright protected; examples of this are unauthorized software leaks or software projects which are placed on public software repositories like GitHub without a specified license. As voluntarily handing software into the public domain is problematic in some international law domains, there are also licenses granting PD-like rights. Under United States law, the owner of a copy of software is legally entitled to use that copy of software; hence, if the end-user of software is the owner of the respective copy, the end-user may legally use it, even though many proprietary licenses only enumerate rights that the user already has under 17 U.S.C. §117 and yet proclaim to take rights away from the user. Proprietary software licenses often proclaim to give software publishers more control over the way their software is used by keeping ownership of each copy of software with the software publisher. Whether the form of the relationship is a lease or a purchase has been disputed, for example in UMG v. Augusto and Vernor v. Autodesk. The ownership of digital goods, like software applications and video games, is challenged by licensing models in which such goods are licensed rather than sold; the Swiss-based company UsedSoft innovated the resale of business software. This feature of proprietary software licenses means that certain rights regarding the software are reserved by the software publisher. Therefore, it is typical of EULAs to include terms which define the permitted uses of the software. The most significant effect of this form of licensing is that, if ownership of the software remains with the software publisher, then the end-user must accept the software license; in other words, without acceptance of the license, the end-user may not use the software at all. One example of such a proprietary software license is the license for Microsoft Windows. The most common licensing models are per single user or per user in the appropriate volume discount level. Licensing per concurrent/floating user also occurs, where all users in a network have access to the program, but only a specific number at the same time. Another license model is licensing per dongle, which allows the owner of the dongle to use the program on any computer. Licensing per server, CPU or points, regardless of the number of users, is common practice, as are site or company licenses.
13.
Lossy compression
–
In information technology, lossy compression or irreversible compression is the class of data encoding methods that uses inexact approximations and partial data discarding to represent the content. These techniques are used to reduce data size for storage, handling, and transmission of content. Different versions of a photograph compressed at increasing degrees of approximation show how coarser images result as more details are removed. This is opposed to lossless data compression, which does not degrade the data. The amount of data reduction possible using lossy compression is often much higher than through lossless techniques. Well-designed lossy compression technology often reduces file sizes significantly before degradation is noticed by the end-user, and even when noticeable by the user, further data reduction may be desirable. Lossy compression is most commonly used to compress multimedia data, especially in applications such as streaming media. By contrast, lossless compression is typically required for text and data files, such as bank records. A picture, for example, is converted to a digital file by considering it to be an array of dots and specifying the color of each dot. If the picture contains an area of the same color, it can be compressed without loss by saying "200 red dots" instead of listing "red dot" 200 times. The original data contains a certain amount of information, and there is a lower limit to the size of file that can carry all the information. Basic information theory says there is an absolute limit in reducing the size of this data: when data is compressed, its entropy increases, and it cannot increase indefinitely. As an intuitive example, most people know that a compressed ZIP file is smaller than the original file, but repeatedly compressing the same file will not reduce the size to nothing; most compression algorithms can recognize when further compression would be pointless. In many cases, files or data streams contain more information than is needed for a particular purpose. Developing lossy compression techniques as closely matched to human perception as possible is a complex task. The terms irreversible and reversible are preferred over lossy and lossless respectively for some applications, such as medical image compression, to circumvent the negative implications of loss. The type and amount of loss can affect the utility of the images: artifacts or undesirable effects of compression may be clearly discernible, yet the result still useful for the intended purpose; or lossy compressed images may be visually lossless, as in the case of medical images deemed acceptable for their intended use. In audio, uncompressed formats can only reduce file size by lowering bit rate or bit depth, whereas lossy compression can reduce size while maintaining bit rate and depth; the compression instead becomes a loss of the least significant data. From this point of view, perceptual encoding is not essentially about discarding data, but about allocating the available bits in proportion to human sensitivity to each component of the signal.
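The "200 red dots" idea above, and the way a lossy step can help, can be sketched in a few lines of C. The example below is a toy, not any standard codec: it run-length encodes a row of pixel values losslessly, then quantizes each value to the nearest multiple of 16 (an arbitrary, assumed step size), which merges near-equal neighbours into longer runs and therefore smaller output, at the cost of exactness.

```c
/* Toy example: run-length encode a row of 8-bit pixel values.
 * Lossless: encode the values as-is.
 * Lossy:    quantize each value to the nearest multiple of 16 first,
 *           which merges near-equal neighbours into longer runs. */
#include <stdio.h>
#include <stddef.h>

static void rle_print(const unsigned char *px, size_t n) {
    for (size_t i = 0; i < n; ) {
        size_t run = 1;
        while (i + run < n && px[i + run] == px[i]) run++;
        printf("%zux%u ", run, (unsigned)px[i]);  /* e.g. 4x208 = run of 4 pixels of value 208 */
        i += run;
    }
    putchar('\n');
}

int main(void) {
    unsigned char row[] = { 203, 205, 201, 200, 17, 16, 15, 16, 16, 18 };
    size_t n = sizeof row / sizeof row[0];

    rle_print(row, n);                    /* lossless: many short runs     */

    for (size_t i = 0; i < n; i++)        /* lossy quantization step       */
        row[i] = (unsigned char)(((row[i] + 8) / 16) * 16);

    rle_print(row, n);                    /* fewer, longer runs after loss */
    return 0;
}
```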
14.
Vorbis
–
Vorbis is a free and open-source software project headed by the Xiph.Org Foundation. The project produces an audio coding format and a software reference encoder/decoder for lossy audio compression. Vorbis is most commonly used in conjunction with the Ogg container format, and it is therefore often referred to as Ogg Vorbis. Vorbis is a continuation of audio compression development started in 1993 by Chris Montgomery. Intensive development began following a September 1998 letter from the Fraunhofer Society announcing plans to charge licensing fees for the MP3 audio format. The Vorbis project started as part of the Xiphophorus company's Ogg project; Chris Montgomery began work on the project and was assisted by a growing number of other developers. They continued refining the source code until the Vorbis file format was frozen for 1.0 in May 2000. Originally licensed as LGPL, in 2001 the Vorbis license was changed to the BSD license to encourage adoption, with the endorsement of Richard Stallman. A stable version of the reference software was released on July 19, 2002. The Xiph.Org Foundation maintains a reference implementation, libvorbis. There are also some fine-tuned forks, most notably aoTuV, that offer better audio quality; these improvements are periodically merged back into the reference codebase. Vorbis is named after a Discworld character, Exquisitor Vorbis in Small Gods by Terry Pratchett. The Ogg format, however, is not named after Nanny Ogg, another Discworld character; the name is in fact derived from ogging, jargon that arose in the computer game Netrek. The Vorbis format has proven popular among supporters of free software, who argue that its higher fidelity and completely free nature, unencumbered by patents, make it a well-suited replacement for patented and restricted formats like MP3. Vorbis is also used in a range of consumer products. Many video game titles store in-game audio as Vorbis, including Amnesia: The Dark Descent, Grand Theft Auto: San Andreas, Halo: Combat Evolved, Minecraft, and World of Warcraft, among others. Popular software players support Vorbis playback either natively or through an external plugin. A number of websites, including Wikipedia, use it; others include Jamendo and Mindawn, as well as national radio stations like JazzRadio, Absolute Radio, NPR, and Radio New Zealand. The Spotify audio streaming service uses Vorbis for its audio streams, and the French music site Qobuz offers its customers the possibility to download their purchased songs in Vorbis format, as does the American music site Bandcamp. Listening tests have attempted to find the best-quality lossy audio codecs at certain bitrates; however, by 2014, not many further significant tests had been made. At mid to low bitrates, private tests in 2005 at 80 kbit/s and 96 kbit/s showed that aoTuV Vorbis had better quality than other lossy audio formats. At high bitrates, most people do not hear significant differences; however, trained listeners can often hear significant differences between codecs at identical bitrates, and aoTuV Vorbis performed better than LC-AAC and MP3. Due to the ever-evolving nature of audio codecs, the results of many of these tests have become outdated. Listening tests are carried out as ABX tests, i.e. the listener has to identify an unknown sample X as being A or B.
15.
Public domain
–
The term public domain has two senses of meaning. Anything published is out in the public domain in the sense that it is available to the public; once published, news and information in books is in the public domain in this sense. In the sense of intellectual property, works in the public domain are those whose exclusive intellectual property rights have expired, have been forfeited, or are inapplicable. Examples of works not covered by copyright, and which are therefore in the public domain, are the formulae of Newtonian physics and cooking recipes; examples of works actively dedicated to the public domain by their authors are reference implementations of algorithms and NIH's ImageJ. The term is not normally applied to situations where the creator of a work retains residual rights. As rights are country-based and vary, a work may be subject to rights in one country and be in the public domain in another. Some rights depend on registrations on a country-by-country basis, and the absence of registration in a particular country, if required, gives rise to public domain status for a work in that country. Although the term public domain did not come into use until the mid-18th century, the Romans had a large proprietary rights system in which they defined many things that cannot be privately owned as res nullius, res communes, res publicae and res universitatis. The term res nullius was defined as things not yet appropriated; the term res communes was defined as things that could be enjoyed by mankind, such as air and sunlight; and the term res publicae referred to things that were shared by all citizens. When the first early copyright law was established in Britain with the Statute of Anne in 1710, the term public domain did not appear. However, similar concepts were developed by British and French jurists in the eighteenth century; instead of public domain they used terms such as publici juris or propriété publique to describe works that were not covered by copyright law. The phrase "fall into the public domain" can be traced to mid-nineteenth-century France, describing the end of the copyright term. In this historical context Paul Torremans describes copyright as a coral reef of private right jutting up from the ocean of the public domain. Because copyright law is different from country to country, Pamela Samuelson has described the public domain as being different sizes at different times in different countries. According to James Boyle this definition underlines common usage of the public domain and equates the public domain to public property; however, the usage of the term public domain can be more granular. Such a definition regards work in copyright as private property subject to fair use rights; on this view, the materials that compose our cultural heritage must be free for all living to use, no less than matter necessary for biological survival.
16.
On2 Technologies
–
On2 Technologies, formerly known as The Duck Corporation, was a small publicly traded company, founded in 1992 and headquartered in Clifton Park, New York, that designed video codec technology. It created a series of video codecs called TrueMotion. In February 2010, On2 Technologies was acquired by Google for an estimated $124.6 million, and On2's VP8 technology became the core of Google's WebM video file format. While known as The Duck Corporation, the company developed TrueMotion S, a codec that was used by some games for FMV sequences during the 1990s. The original office of the Duck Corporation was founded in New York City by Daniel B. Miller, Victor Yurkovsky, and Stan Marder. In 1994, Duck opened its first satellite engineering office in Colonie, New York. Miller became CEO of the newly renamed On2 Technologies until Doug McIntyre was hired in late 2000, when Miller resumed his role as CTO. CEOs after McIntyre included Bill Joll and Matt Frost. After Miller's departure in 2003, newly promoted CTO Eric Ameres moved the primary engineering office to upstate New York's capital region. Ameres later departed in 2007 to pursue other research as part of the opening of the Experimental Media and Performing Arts Center. After Ameres' departure, Paul Wilkins served as co-CTO with Jim Bankoski. Wilkins was the founder of Metavisual, which was acquired by On2 in 1999 to bring the VP3 codec to market; the VP3 codec became the basis of On2's future codecs as well as the basis of the open source Theora video codec. In 1995, The Duck Corporation raised $1.5 million in funding from Edelson Technology Partners. In 1997, it raised an additional $5.5 million in a venture round primarily financed by Citigroup Ventures. In 1999, The Duck Corporation merged with Applied Capital Funding, Inc., a public company on the American Stock Exchange. The merged entity was first renamed On2.com and then On2 Technologies. The company's share price peaked at a little over $40 per share, briefly giving the company a market cap in excess of $1 billion. The Quickband acquisition was effected pursuant to an Asset Purchase Agreement dated as of March 9, 2000 by and among the Company and Quickband. On November 3, 2000, On2 acquired the game engine development company Eight Cylinders Studios. In May 2007, On2 announced an agreement to acquire Finnish Hantro Products; the acquisition was finalized on November 1, 2007. In November 2008, On2 announced that it would partner with Zencoder to create Flix Cloud, which launched in April 2009. On 5 August 2009, Google offered to acquire On2 Technologies for $106.5 million in Google stock; on 7 January 2010, Google increased its takeover offer to $133.9 million. On February 17, 2010, stockholders of On2 Technologies voted to accept Google's increased offer, and on 19 February 2010 the transaction was completed, valued at approximately $124.6 million. According to the company itself, development of its codecs started in the early 1990s; the first versions were mainly targeted at and used for full motion video scenes in computer games. One of the competitive advantages in this field was that, unlike MPEG, the codec did not require a separate decoder.
17.
Windows Media Video
–
Windows Media Video (WMV) is the name of a series of video codecs and their corresponding video coding formats developed by Microsoft. It is part of the Windows Media framework. WMV consists of three distinct codecs: the original video compression technology, known as WMV, was originally designed for Internet streaming applications as a competitor to RealVideo, while the other compression technologies, WMV Screen and WMV Image, cater for specialized content. After standardization by the Society of Motion Picture and Television Engineers (SMPTE), WMV version 9 was adopted for physical-delivery formats such as HD DVD and Blu-ray Disc and became known as VC-1. Microsoft also developed a digital container format called Advanced Systems Format to store video encoded by Windows Media Video. In 2003, Microsoft drafted a video compression specification based on its WMV9 format. The standard was officially approved in March 2006 as SMPTE 421M, better known as VC-1, thus making the WMV9 format an open standard. VC-1 became one of the three formats for the Blu-ray video disc, along with H.262/MPEG-2 Part 2 and H.264/MPEG-4 AVC. A WMV file uses the Advanced Systems Format (ASF) container format to encapsulate the encoded multimedia content. While ASF can encapsulate multimedia in encodings other than those the WMV file standard specifies, such ASF files should use the .asf file extension and not the .wmv file extension. Although WMV is generally packed into the ASF container format, it can also be put into the Matroska or AVI container format; the resulting files have the .mkv and .avi file extensions, respectively. One common way to store WMV in an AVI file is to use the WMV9 Video Compression Manager codec implementation. Windows Media Video is the most recognized video compression format within the WMV family, and usage of the term WMV often refers to the Microsoft Windows Media Video format only. Its main competitors are MPEG-4 AVC, AVS, RealVideo, and MPEG-4 ASP. The first version of the format, WMV7, was introduced in 1999. Continued proprietary development led to newer versions of the format, but the bit stream syntax was not frozen until WMV9. WMV9 also introduced a new profile titled Windows Media Video 9 Professional, which is activated automatically whenever the video resolution exceeds 300,000 pixels. It is targeted towards high-definition video content, at resolutions such as 720p and 1080p. The Simple and Main profile levels in WMV9 are compliant with the same levels in the VC-1 specification. The Advanced Profile in VC-1 is implemented in a new WMV format called Windows Media Video 9 Advanced Profile. It improves compression efficiency for interlaced content and is made transport-independent, making it able to be encapsulated in an MPEG transport stream or RTP packet format; the format is not compatible with previous WMV9 formats, however. WMV is a mandatory video format for PlaysForSure-certified online stores and devices, as well as Portable Media Center devices. The Microsoft Zune, the Xbox 360, Windows Mobile-powered devices with Windows Media Player, and many uncertified devices support the format. WMV HD mandates the use of WMV9 for its certification program, at quality levels specified by Microsoft. WMV used to be the only supported video format for the Microsoft Silverlight platform. The Windows Media Video Screen formats are video formats that specialise in screencast content.
They can capture live screen content or convert video from third-party screen-capture programs into WMV9 Screen files, and they work best when the source material is mainly static and contains a small color palette.
18.
BBC
–
The British Broadcasting Corporation (BBC) is a British public service broadcaster headquartered at Broadcasting House in London. The BBC is the world's oldest national broadcasting organisation and the largest broadcaster in the world by number of employees. It employs over 20,950 staff in total, 16,672 of whom are in public sector broadcasting; the total number of staff is 35,402 when part-time, flexible, and fixed contract staff are included. The BBC is established under a Royal Charter and operates under its Agreement with the Secretary of State for Culture, Media and Sport. Its work is funded principally by an annual television licence fee, which is set by the British Government, agreed by Parliament, and used to fund the BBC's radio, TV, and online services. Britain's first live public broadcast, from the Marconi factory in Chelmsford, took place in June 1920. It was sponsored by the Daily Mail's Lord Northcliffe and featured the famous Australian soprano Dame Nellie Melba. The Melba broadcast caught the people's imagination and marked a turning point in the British public's attitude to radio. However, this public enthusiasm was not shared in official circles, where such broadcasts were held to interfere with important military and civil communications. By late 1920, pressure from these quarters and uneasiness among the staff of the licensing authority, the General Post Office (GPO), was sufficient to lead to a ban on further Chelmsford broadcasts. But by 1922, the GPO had received nearly 100 broadcast licence requests. John Reith, a Scottish Calvinist, was appointed General Manager of the new British Broadcasting Company in December 1922, a few weeks after the company made its first official broadcast. The company was to be financed by a royalty on the sale of BBC wireless receiving sets from approved manufacturers. To this day, the BBC aims to follow the Reithian directive to inform, educate and entertain. The financial arrangements soon proved inadequate; set sales were disappointing as amateurs made their own receivers and listeners bought rival unlicensed sets. By mid-1923, discussions between the GPO and the BBC had become deadlocked, and the Postmaster-General commissioned a review of broadcasting by the Sykes Committee. This was to be followed by a simple ten-shilling licence fee with no royalty once the wireless manufacturers' protection expired. The BBC's broadcasting monopoly was made explicit for the duration of its current broadcast licence. The BBC was also banned from presenting news bulletins before 19:00 and was required to source all news from external wire services. Mid-1925 found the future of broadcasting under further consideration, this time by the Crawford committee. By now the BBC, under Reith's leadership, had forged a consensus favouring a continuation of the unified broadcasting service, but more money was still required to finance rapid expansion. Wireless manufacturers were anxious to exit the loss-making consortium, with Reith keen that the BBC be seen as a public service rather than a commercial enterprise. The recommendations of the Crawford Committee were published in March the following year and were still under consideration by the GPO when the 1926 general strike broke out in May. The strike temporarily interrupted newspaper production, and with restrictions on news bulletins waived, the BBC suddenly became the primary source of news for the duration of the crisis. The crisis placed the BBC in a delicate position; the Government was divided on how to handle the BBC but ended up trusting Reith, whose opposition to the strike mirrored the PM's own.
19.
Max Headroom (TV series)
–
The series was based on the Channel 4 British TV pilot produced by Chrysalis, Max Headroom: 20 Minutes into the Future. The series is often mistaken for an American-produced show due to its setting. Cinemax aired the UK pilot, followed by a six-week run of highlights from The Max Headroom Show, a music video show in which Headroom appears between music videos. ABC took an interest in the pilot and asked Chrysalis/Lakeside to produce the series for American audiences. The show went into production in late 1986 and ran for six episodes in the first season and eight in season two. In 1987, the story told in Max Headroom: 20 Minutes into the Future was reworked: the film was re-shot as a pilot program for a new series broadcast by the U.S. network ABC. The pilot featured plot changes and some minor visual touches. The only original cast retained for the series were Matt Frewer and Amanda Pays; a third original cast member, W. Morgan Sheppard, joined the series as Blank Reg in later episodes. Among the non-original cast, Jeffrey Tambor co-starred as Murray, Edison Carter's neurotic producer. The series is set in a futuristic dystopia ruled by an oligarchy of television networks. Television technology has advanced to the point that viewers' physical movements and thoughts can be monitored through their television sets, and almost all non-television technology has been discontinued or destroyed. Max Headroom was canceled part-way into its second season. Comico Comics had plans to publish a novel based on the story, and a few posters were produced for comic shops, with a picture of Max Headroom saying "Comics will never be the same again". Edison Carter was a hard-hitting reporter for Network 23, who sometimes uncovered things that his superiors in the network would have preferred kept private. Eventually, one of these instances required him to flee his workspace. Edison cares about his co-workers, especially Theora Jones and Bryce Lynch, and he has a deep respect for his producer, Murray. According to a personal statistics file displayed on a screen in the series, Edison is 6 feet 2 inches tall. Theora Jones was played by Amanda Pays and first appeared in the British-made television pilot film for the series. Along with Matt Frewer and W. Morgan Sheppard, Pays was one of only three cast members to also appear in the American-made series that followed. Theora was Network 23's star controller, working with the star reporter Edison Carter. She was also the pseudo-love-interest of Edison Carter, but that subplot was not explored fully on the show before it was cancelled. Network 23's personnel files list her father as unknown and her mother as deceased. Bryce Lynch, a child prodigy and computer hacker, is Network 23's one-man technology research department. His birthdate is shown on-screen to be October 7, 1988. In the stereotypical hacker ethos, Bryce has few principles and fewer loyalties. He seems to accept any task, even morally questionable ones, and this, in turn, makes him a greater asset to the technological needs and demands of the network, and to the whims of its executives and stars.
20.
WebM
–
WebM is a video file format primarily intended to offer a royalty-free alternative for use in the HTML5 video tag; it has a sister project, WebP, for images. Development of the format is sponsored by Google, and the corresponding software is distributed under a BSD license. The WebM container is based on a profile of Matroska. WebM initially supported VP8 video and Vorbis audio streams; in 2013 it was updated to accommodate VP9 video and Opus audio. Native WebM support by Mozilla Firefox, Opera, and Google Chrome was announced at the 2010 Google I/O conference. Internet Explorer 9 requires third-party WebM software. Safari for Mac OS X relies on QuickTime to play web media, which as of 1 April 2011 does not support WebM unless a third-party plug-in is installed. In January 2011, Google announced that the WebM Project Team would release plugins for Internet Explorer; as of 9 June 2012, a public preview version of this plug-in is available for Internet Explorer 9. VLC media player, MPlayer, and K-Multimedia Player have native support for playing WebM files. FFmpeg can encode and decode VP8 video when built with support for libvpx, the VP8/VP9 codec library of the WebM project, as well as mux/demux WebM-compliant files. On 23 July 2010, Fiona Glaser and Ronald Bultje announced ffvp8, a native VP8 decoder for FFmpeg; through testing they determined that ffvp8 was faster than Google's own libvpx decoder. MKVToolNix, the popular Matroska creation tool set, has implemented support for multiplexing/demultiplexing WebM-compliant files out of the box, and Haali Media Splitter has also announced support for muxing/demuxing of WebM. As of version 1.4.9, the LiVES video editor has support for real-time decoding. MPC-HC as of SVN2071 and higher builds supports WebM playback with an internal VP8 decoder based on FFmpeg's code; full decoding support for WebM has been available in MPC-HC since version 1.4.2499.0. Android has been WebM-enabled since version 2.3 (Gingerbread), which was first made available via the Nexus S mobile phone, and WebM has been streamable since Android 4.0. In September 2015, Microsoft announced that the Edge browser in Windows 10 would add support for WebM; iOS does not natively play WebM. The WebM Project licenses VP8 hardware accelerators to semiconductor companies for 1080p encoding and decoding at zero cost. AMD, ARM, and Broadcom have announced support for hardware acceleration of the WebM format, and Intel is also considering hardware-based acceleration for WebM in its Atom-based TV chips if the format gains popularity. Qualcomm and Texas Instruments have announced support, with support coming to the TI OMAP processor. Chips&Media have announced a hardware decoder for VP8 that can decode full HD resolution VP8 streams at 60 frames per second. Nvidia supports VP8 and provides both hardware decoding and encoding in the Tegra 4 and Tegra 4i SoCs, and Nvidia has announced 3D video support for WebM through HTML5 and its Nvidia 3D Vision technology. On 7 January 2011, Rockchip released the world's first chip to host a full implementation of 1080p VP8 decoding.
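As a rough sketch of how a WebM file of the kind described above might be produced, the following Python snippet shells out to the ffmpeg command-line tool, assuming a build with libvpx (for VP9) and libopus available on the system; the input name, bitrate, and output path are placeholder values chosen for illustration, not anything specified by the WebM project.

import subprocess

def transcode_to_webm(src: str, dst: str, video_bitrate: str = "1M") -> None:
    """Re-encode an input file as WebM (VP9 video plus Opus audio).

    Assumes an ffmpeg binary on PATH built with libvpx-vp9 and libopus;
    the flags below are one plausible invocation, not the only one.
    """
    cmd = [
        "ffmpeg",
        "-i", src,              # input container (e.g. an MP4 or Matroska file)
        "-c:v", "libvpx-vp9",   # VP9 video, as added to WebM in 2013
        "-b:v", video_bitrate,  # target video bitrate (placeholder value)
        "-c:a", "libopus",      # Opus audio
        dst,                    # a .webm extension selects the WebM muxer
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    transcode_to_webm("input.mp4", "output.webm")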
21.
Discrete cosine transform
–
A discrete cosine transform (DCT) expresses a finite sequence of data points in terms of a sum of cosine functions oscillating at different frequencies. DCTs are important to numerous applications in science and engineering, including lossy compression of audio and images. In particular, a DCT is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using only real numbers; DCTs are equivalent to DFTs of roughly twice the length, operating on data with even symmetry. There are eight standard DCT variants, of which four are common. The most common variant is the type-II DCT; its inverse, the type-III DCT, is correspondingly often called simply the inverse DCT or the IDCT. Multidimensional DCTs have been developed to extend the concept to multidimensional (M-D) signals, and there are several algorithms to compute an M-D DCT. A variety of fast algorithms have been developed to reduce the computational complexity of implementing the DCT. For strongly correlated Markov processes, the DCT can approach the efficiency of the Karhunen–Loève transform; as explained below, this stems from the boundary conditions implicit in the cosine functions. A related transform, the modified discrete cosine transform (MDCT), is used in AAC, Vorbis, WMA, and MP3 audio compression. The DCT is used in JPEG image compression and in MJPEG, MPEG, DV, and Daala video compression. There, the two-dimensional DCT-II of N × N blocks is computed and the results are quantized and entropy coded; in this case, N is typically 8 and the DCT-II formula is applied to each row and column of the block. Owing to improvements in hardware and software and the introduction of several fast algorithms, the use of M-D DCTs is growing rapidly. The DCT-IV has gained popularity for its applications in fast implementations of real-valued polyphase filter banks and the lapped orthogonal transform. Like any Fourier-related transform, discrete cosine transforms express a function or a signal in terms of a sum of sinusoids with different frequencies and amplitudes. Like the discrete Fourier transform, a DCT operates on a function at a finite number of discrete data points. The obvious distinction between a DCT and a DFT is that the former uses only cosine functions, while the latter uses both cosines and sines. However, this difference is merely a consequence of a deeper distinction: a DCT implies different boundary conditions from those of the DFT or other related transforms. That is, once you write a function f as a sum of sinusoids, you can evaluate that sum at any x, even for x where the original f was not specified. The DFT, like the Fourier series, implies a periodic extension of the original function; a DCT, like a cosine transform, implies an even extension of the original function. However, because DCTs operate on finite, discrete sequences, two issues arise that do not apply to the continuous cosine transform. First, one has to specify whether the function is even or odd at both the left and right boundaries of the domain; second, one has to specify around what point the function is even or odd.
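To make the type-II/type-III relationship concrete, here is a small sketch using NumPy and SciPy (assumed dependencies, not anything required by the transform itself). It applies a two-dimensional DCT-II to a hypothetical 8 × 8 block of arbitrary sample values, much as a JPEG-style codec would before quantization, and checks that the orthonormal type-III transform recovers the block exactly.

import numpy as np
from scipy.fft import dctn

# A hypothetical 8x8 block of sample values standing in for one image
# block; the numbers are arbitrary test data, not from any real picture.
rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)

# Two-dimensional type-II DCT: the DCT-II formula is applied along each
# row and then along each column of the block.
coeffs = dctn(block, type=2, norm="ortho")

# With orthonormal scaling, the type-III DCT is exactly the inverse of
# the type-II DCT, which is why it is commonly called the IDCT.
reconstructed = dctn(coeffs, type=3, norm="ortho")

assert np.allclose(block, reconstructed)
print("DC coefficient:", coeffs[0, 0])  # proportional to the block's mean value

A real codec would follow the forward transform with quantization and entropy coding of the coefficients, which is where the lossy saving actually comes from.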
22.
Chroma subsampling
–
Chroma subsampling is the practice of encoding images with less resolution for chroma (color) information than for luma (brightness) information. It is used in many video encoding schemes, both analog and digital, and also in JPEG encoding. Digital signals are compressed to save transmission time and reduce file size. In compressed images, for example, the 4:2:2 Y′CbCr scheme requires two-thirds the bandwidth of RGB, and this reduction results in almost no visual difference as perceived by the viewer. Because the human visual system is less sensitive to the position and motion of color than to luminance, bandwidth can be saved by storing more luminance detail than color detail; at normal viewing distances, there is no perceptible loss incurred by sampling the color detail at a lower rate. In video systems, this is achieved through the use of color-difference components: the signal is divided into a luma component and two color-difference (chroma) components. In human vision there are three channels for color detection, and for many color systems three channels are sufficient for representing most colors, for example red, green, and blue, or magenta, yellow, and cyan; but there are other ways to represent color. In many video systems, the three channels are luminance and two chroma channels. In video, the luma and chroma components are formed as a weighted sum of gamma-corrected RGB components instead of linear RGB components, and as a result luma must be distinguished from luminance. Indeed, similar bleeding can occur even with gamma = 1, in which case reversing the order of operations between gamma correction and forming the weighted sum makes no difference; the chroma can influence the luma specifically at the pixels where the subsampling put no chroma. The subsampling scheme is commonly expressed as a three-part ratio J:a:b. The parts are: J, the horizontal sampling reference (usually 4); a, the number of chrominance samples in the first row of J pixels; and b, the number of changes of chrominance samples between the first and second rows of J pixels. A fourth part, for the alpha channel, may be omitted if the alpha component is not present, and is equal to J when present. This notation is not valid for all combinations and has exceptions; the mapping examples usually given are only theoretical and for illustration, and they do not indicate any chroma filtering. To calculate the required bandwidth factor relative to 4:4:4, one needs to sum all the factors and divide the result by 12. In 4:4:4, each of the three Y′CbCr components has the same sample rate, so there is no chroma subsampling; this scheme is used in high-end film scanners and cinematic post-production. Note that 4:4:4 may instead refer to RGB color space; formats such as HDCAM SR can record 4:4:4 RGB over dual-link HD-SDI.
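The bandwidth rule quoted above (sum the parts of the J:a:b ratio and divide by twelve, for the usual J = 4) is simple enough to capture in a few lines. The sketch below is only an illustration of that arithmetic; the function name and the idea of passing the scheme as a plain string are conveniences invented here, not part of any standard API.

def bandwidth_factor(scheme: str) -> float:
    """Bandwidth of a J:a:b chroma-subsampling scheme relative to 4:4:4.

    Implements the simplified rule stated above for the common case J = 4:
    sum the three parts and divide by 12.
    """
    j, a, b = (int(part) for part in scheme.split(":"))
    if j != 4:
        raise ValueError("this simplified rule assumes the usual J = 4")
    return (j + a + b) / 12.0

# 4:2:2 keeps two-thirds of the 4:4:4 bandwidth; 4:2:0 and 4:1:1 keep half.
for s in ("4:4:4", "4:2:2", "4:2:0", "4:1:1"):
    print(s, bandwidth_factor(s))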
23.
B-frame
–
In the field of video compression, a video frame is compressed using different algorithms with different advantages and disadvantages, centered mainly on the amount of data compression. These different algorithms for video frames are called picture types or frame types. The three major picture types used in the different video algorithms are I, P, and B. They differ in the following characteristics: I-frames are the least compressible but do not require other video frames to decode; P-frames can use data from previous frames to decompress and are more compressible than I-frames; B-frames can use both previous and forward frames for data reference to get the highest amount of data compression. An I-frame is an intra-coded picture, in effect a fully specified picture, like a conventional static image file. P-frames and B-frames hold only part of the image information, so they need less space to store than an I-frame and thus improve video compression rates. A P-frame holds only the changes in the image from the previous frame. For example, in a scene where a car moves across a stationary background, only the car's movements need to be encoded; the encoder does not need to store the unchanging background pixels in the P-frame. P-frames are also known as delta-frames. A B-frame saves even more space by using differences between the current frame and both the preceding and following frames to specify its content. While the terms frame and picture are often used interchangeably, strictly speaking a frame is a complete image captured during a known time interval, and a field is the set of odd-numbered or even-numbered scanning lines composing a partial image. For example, in 1080 full HD mode there are 1080 lines of pixels: an odd field consists of pixel information for lines 1, 3, 5, and so on, while an even field has pixel information for lines 2, 4, 6, and so on. Frames that are used as a reference for predicting other frames are referred to as reference frames. In the H.264/MPEG-4 AVC standard, a slice is a spatially distinct region of a frame that is encoded separately from any other region in the same frame; in that standard, instead of I-frames, P-frames, and B-frames, there are I-slices, P-slices, and B-slices. Also found in H.264 are several additional types of frames/slices. SI-frames/slices facilitate switching between coded streams and contain SI-macroblocks; together with SP-frames/slices they allow for increases in error resistance, and when such frames are used along with a smart decoder, it is possible to recover the broadcast streams of damaged DVDs. I-frames are coded without reference to any frame except themselves. They may be generated by an encoder to create a random access point, and may also be generated when differentiating image details prohibit generation of effective P- or B-frames; they typically require more bits to encode than other frame types. Often, I-frames are used for random access and are used as references for the decoding of other pictures.
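The P-frame idea of storing only the change from a reference frame can be illustrated with a toy example. The NumPy sketch below simply stores a pixel-wise difference and adds it back; it conveys the delta-frame intuition from the car example above, but deliberately ignores the motion compensation, quantization, and entropy coding that a real encoder such as H.264 would apply.

import numpy as np

def encode_delta(previous: np.ndarray, current: np.ndarray) -> np.ndarray:
    """Toy 'P-frame': the pixel-wise residual between current and previous frame."""
    return current.astype(np.int16) - previous.astype(np.int16)

def decode_delta(previous: np.ndarray, residual: np.ndarray) -> np.ndarray:
    """Reconstruct the current frame from the reference frame plus the residual."""
    return (previous.astype(np.int16) + residual).astype(np.uint8)

# A stationary background with a small bright "car" that moves two pixels right.
frame0 = np.zeros((8, 8), dtype=np.uint8)
frame0[3:5, 1:3] = 255
frame1 = np.zeros((8, 8), dtype=np.uint8)
frame1[3:5, 3:5] = 255

residual = encode_delta(frame0, frame1)
assert np.array_equal(decode_delta(frame0, residual), frame1)

# Only the pixels the car vacated or newly covers are non-zero in the
# residual; the unchanged background contributes nothing, which is why
# delta coding compresses so well for mostly static scenes.
print(np.count_nonzero(residual), "of", residual.size, "pixels changed")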
24.
Interlaced video
–
Interlaced video is a technique for doubling the perceived frame rate of a video display without consuming extra bandwidth. The interlaced signal contains two fields of a video frame captured at two different times. This enhances motion perception for the viewer and reduces flicker by taking advantage of the phi phenomenon, and it effectively doubles the time resolution as compared with non-interlaced footage. Interlaced signals require a display that is capable of showing the individual fields in a sequential order; CRT displays and ALiS plasma displays are made for displaying interlaced signals. Interlaced scan refers to one of two common methods for painting a video image on an electronic display screen (the other being progressive scan) by scanning or displaying each line or row of pixels. This technique uses two fields to create a frame: one field contains all odd-numbered lines in the image, the other contains all even-numbered lines. A Phase Alternating Line (PAL)-based television set display, for example, scans 50 fields every second (25 odd and 25 even); the two sets of 25 fields work together to create a full frame every 1/25 of a second, but with interlacing create a new half frame every 1/50 of a second. To display interlaced video on progressive displays, playback applies deinterlacing to the video signal. The European Broadcasting Union has argued against interlaced video in production: it recommends 720p 50 as the current production format and is working with the industry to introduce 1080p 50 as a future-proof production standard, since 1080p 50 offers higher vertical resolution and better quality at lower bitrates. Despite the arguments against it, television standards organizations continue to support interlacing, and it is still included in digital video formats such as DV and DVB. Progressive scan captures, transmits, and displays an image in a path similar to text on a page: line by line, from top to bottom. The interlaced scan pattern in a CRT display also completes such a scan, but in two passes. The first pass displays the first and all odd-numbered lines, from the top left corner to the bottom right corner; the second pass displays the second and all even-numbered lines. This scan of alternate lines is called interlacing. A field is an image that contains only half of the lines needed to make a complete picture; persistence of vision makes the eye perceive the two fields as a continuous image, and in the days of CRT displays the afterglow of the display's phosphor aided this effect. Interlacing provides full vertical detail with the same bandwidth that would be required for a full progressive scan, but with twice the perceived frame rate and refresh rate. To prevent flicker, all analog broadcast television systems used interlacing. Format identifiers like 576i 50 and 720p 50 specify the frame rate for progressive-scan formats, but for interlaced formats they typically specify the field rate, which is twice the frame rate. This can lead to confusion, because industry-standard SMPTE timecode formats always deal with frame rate, not field rate.
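As a minimal sketch of the odd/even-line structure described above, the functions below split a progressive frame (represented as a NumPy array, an assumption made here purely for illustration) into its two fields and weave them back together. Real deinterlacing of fields captured at different instants is considerably more involved than this simple weave.

import numpy as np

def split_fields(frame: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Return (odd_field, even_field), counting display lines from 1,
    so the odd field holds lines 1, 3, 5, ... (array rows 0, 2, 4, ...)."""
    return frame[0::2], frame[1::2]

def weave_fields(odd_field: np.ndarray, even_field: np.ndarray) -> np.ndarray:
    """Interleave two fields back into a full progressive frame."""
    height = odd_field.shape[0] + even_field.shape[0]
    frame = np.empty((height,) + odd_field.shape[1:], dtype=odd_field.dtype)
    frame[0::2] = odd_field
    frame[1::2] = even_field
    return frame

# A tiny 6-line "frame" whose rows carry their own line numbers, so the
# split into fields is easy to see.
frame = np.arange(1, 7).reshape(6, 1) * np.ones((1, 4), dtype=int)
odd, even = split_fields(frame)
assert np.array_equal(weave_fields(odd, even), frame)
print("odd-field lines:", odd[:, 0].tolist())    # [1, 3, 5]
print("even-field lines:", even[:, 0].tolist())  # [2, 4, 6]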
25.
FLAC
–
FLAC (Free Lossless Audio Codec) is an audio coding format for lossless compression of digital audio, and is also the name of the reference codec implementation. Digital audio compressed by FLAC's algorithm can typically be reduced to 50–60% of its original size. FLAC is an open format with royalty-free licensing and a reference implementation which is free software, and it has support for metadata tagging, album art, and fast seeking. Development was started in 2000 by Josh Coalson. The bit-stream format was frozen when FLAC entered beta stage with the release of version 0.5 of the reference implementation on 15 January 2001, and version 1.0 was released on 20 July 2001. On 29 January 2003, the Xiph.Org Foundation and the FLAC project announced the incorporation of FLAC under the Xiph.Org banner; Xiph.Org is behind other free compression formats such as Vorbis, Theora, Speex, and Opus. Version 1.3.0 was released on 26 May 2013, at which point development was moved to the Xiph.Org git repository. The reference implementation is free software: the source code for libFLAC and libFLAC++ is available under the BSD license, and the sources for the command-line tools flac and metaflac are available under the GPL. In its stated goals, the FLAC project encourages its developers not to implement copy-prevention features of any kind. FLAC supports only fixed-point samples, not floating-point. It can handle any PCM bit resolution from 4 to 32 bits per sample, any sampling rate from 1 Hz to 655,350 Hz in 1 Hz increments, and any number of channels from 1 to 8; channels can be grouped in some cases, such as stereo and 5.1-channel surround, to take advantage of inter-channel correlation. FLAC uses CRC checksums for identifying corrupted frames when used in a streaming protocol, and it allows for a Rice parameter between 0 and 16. FLAC uses linear prediction to model the audio samples; there are two steps, the predictor and the error coding. The predictor can be one of four types, and the difference between the predictor's output and the actual sample data is calculated and is known as the residual. The residual is stored efficiently using Golomb–Rice coding, and FLAC also uses run-length encoding for blocks of identical samples, such as silent passages. For tagging, FLAC uses the same system as Vorbis comments. The libFLAC API is organized into streams, seekable streams, and files, at increasing levels of abstraction; most FLAC applications will generally restrict themselves to encoding/decoding using libFLAC at the file-level interface. libFLAC uses a compression-level parameter that varies from 0 to 8, and the compressed files are always perfect, lossless representations of the original data. Although the compression process involves a tradeoff between speed and size, the decoding process is always quite fast and not very dependent on the level of compression.
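To give a feel for the predictor-plus-residual stage described above, here is a toy sketch of a second-order fixed predictor of the kind FLAC's fixed prediction mode uses, written with NumPy for brevity. It only illustrates why residuals are small for smooth signals; it performs none of FLAC's actual Golomb–Rice or run-length coding, and it is not built on libFLAC.

import numpy as np

def fixed_order2_residual(samples: np.ndarray) -> np.ndarray:
    """Residual of a second-order fixed linear predictor.

    Prediction: p[n] = 2*x[n-1] - x[n-2] (linear extrapolation), so the
    residual is x[n] - p[n]. The first two samples are kept verbatim as
    warm-up values so the signal can be reconstructed exactly.
    """
    residual = samples.astype(np.int64)
    residual[2:] = samples[2:] - (2 * samples[1:-1] - samples[:-2])
    return residual

def reconstruct(residual: np.ndarray) -> np.ndarray:
    """Invert the predictor: lossless reconstruction of the original samples."""
    samples = residual.astype(np.int64)
    for n in range(2, len(samples)):
        samples[n] = residual[n] + 2 * samples[n - 1] - samples[n - 2]
    return samples

# A smooth, slowly varying signal: the residual is much smaller than the
# samples themselves, which is what makes the entropy-coding stage effective.
t = np.arange(64)
signal = (1000 * np.sin(t / 10.0)).astype(np.int64)
res = fixed_order2_residual(signal)
assert np.array_equal(reconstruct(res), signal)
print("max |sample|:", int(np.abs(signal).max()), " max |residual|:", int(np.abs(res).max()))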
26.
Source code
–
In computing, source code is any collection of computer instructions, possibly with comments, written using a human-readable programming language, usually as ordinary text. The source code of a program is designed to facilitate the work of computer programmers. The source code is often transformed by an assembler or compiler into binary machine code understood by the computer; the machine code might then be stored for execution at a later time. Alternatively, source code may be interpreted and thus immediately executed. Most application software is distributed in a form that includes only executable files; if the source code were included, it would be useful to a user, programmer, or system administrator. The Linux Information Project defines source code as "the version of software as it is originally written by a human in plain text". The notion of source code may also be taken more broadly, to include machine code and notations in graphical languages; it is therefore sometimes construed to include very high-level languages and graphical representations of systems as well. Often there are several steps of program translation or minification between the source code typed by a human and an executable program. The earliest programs for stored-program computers were entered in binary through the front panel switches of the computer; this first-generation programming language had no distinction between source code and machine code. When IBM first offered software to work with its machines, the source code was provided at no additional charge. At that time, the cost of developing and supporting software was included in the price of the hardware, and for decades IBM distributed source code with its software product licenses, until 1983. Most early computer magazines published source code as type-in programs. Source code can also be stored in a database or elsewhere. The source code for a piece of software may be contained in a single file or in many files. Though the practice is uncommon, a program's source code can be written in different programming languages; for example, a program written primarily in the C programming language might have portions written in assembly language for optimization purposes, and in some languages, such as Java, separately compiled components can be linked together at run time. The code base of a programming project is the larger collection of all the source code of all the computer programs which make up the project. It has become common practice to maintain code bases in version control systems. Moderately complex software customarily requires the compilation or assembly of several, sometimes dozens or even hundreds, of different source code files; in these cases, instructions for compilation, such as a Makefile, are included with the source code.
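As a small concrete illustration of source code being translated before execution, the sketch below uses Python's built-in compile() function and the dis module to turn a short piece of human-readable source text into bytecode, run it, and list the resulting instructions. This models the interpreted/bytecode case discussed above rather than a compiler producing native machine code; the function in the source string is just an arbitrary example.

import dis

# Human-readable source code, stored as ordinary text.
source = "def area(width, height):\n    return width * height\n"

# The interpreter first translates the text into a code object (bytecode)...
module_code = compile(source, "<example>", "exec")

# ...which can then be executed, or inspected instruction by instruction.
namespace = {}
exec(module_code, namespace)
print(namespace["area"](3, 4))   # prints 12

dis.dis(namespace["area"])       # lists the bytecode instructions for area()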