Parental controls are features that may be included in digital television services, video games, mobile devices, and software, allowing parents to restrict the content their children can access. These controls were created to help parents restrict material they deem inappropriate for their children's age or maturity level, or that they feel is aimed at an adult audience. Parental controls generally fall into four categories: content filters, which limit access to age-inappropriate content; usage controls, which constrain the use of devices, for example by placing time limits on usage; computer usage management tools, which enforce the use of certain software; and monitoring tools, which can track a child's location and activity. Content filters were the first popular type of parental control to limit access to Internet content, and television stations began to introduce V-chip technology to limit access to television content. Modern usage controls can restrict a range of explicit content such as explicit songs and movies, turn devices off during specific times of the day, and limit the volume output of devices; with GPS technology becoming affordable, it is now also possible to locate devices such as mobile phones.
Demand for parental control methods that restrict content has increased over the decades as the Internet has become more widely available. An ICM survey found that a quarter of young people under the age of 12 had been exposed to online pornography. Restricting content helps in cases where children are exposed to inappropriate material by accident. Monitoring may be effective in lessening acts of cyberbullying, though it is unclear whether parental controls affect online harassment of children, as little is known about the role the family plays in protecting children from undesirable experiences online. Psychologically, cyberbullying can be more harmful to the victim than traditional bullying. Past studies have suggested that a lack of parental controls in the household can enable children to take part in cyberbullying or to become victims of it. Parents have access to free online platforms that let them restrict the websites their child visits or control the content the child can view.
Behavioral control consists of limiting the amount of time a child spends online or how much content the child can view, while psychological control involves parents trying to influence their children's behavior. Several techniques exist for implementing parental controls that block websites. Add-on parental control software may monitor APIs in order to observe applications such as a web browser or Internet chat application and intervene according to certain criteria, such as a match in a database of banned words. Parental control software typically includes a password or other form of authentication to prevent unauthorized users from disabling it. Techniques involving a proxy server are also used: a web browser is configured to send requests for web content to the proxy server rather than directly to the intended web server; the proxy server fetches the web page on the browser's behalf and passes the content on to the browser. Proxy servers can inspect the data being sent and received and intervene depending on various criteria relating to the content of the page or the URL being requested, for example using a database of banned words or banned URLs.
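As a sketch of the banned-word technique described above, a filtering proxy might check each fetched page against word and URL blocklists before passing it to the browser. The lists and helper function below are hypothetical, not taken from any particular product:

```python
# Minimal sketch of proxy-style content filtering (illustrative only).

BANNED_WORDS = {"gambling", "violence"}   # stand-in for a banned-word database
BANNED_URLS = {"example-blocked.com"}     # stand-in for a banned-URL database

def should_block(url: str, page_text: str) -> bool:
    """Return True if the proxy should withhold this page from the browser."""
    host = url.split("//")[-1].split("/")[0].lower()
    if host in BANNED_URLS:
        return True                       # URL-based block
    words = (w.strip(".,!?") for w in page_text.lower().split())
    return any(w in BANNED_WORDS for w in words)  # content-based block

print(should_block("http://example-blocked.com/home", "hello"))   # True
print(should_block("http://safe.example.org/", "family recipes")) # False
```

A real proxy would additionally terminate the HTTP connection and serve a block page; this sketch only shows the decision step.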
The proxy method's major disadvantage is that it requires the client application to be configured to use the proxy; if the user can reconfigure applications to access the Internet directly rather than through the proxy, this control is bypassed. Proxy servers themselves may also be used to circumvent parental controls, and other bypass techniques exist as well. The computer usage management method, unlike content filters, focuses on empowering parents to balance the computing environment for children by regulating gaming. The main idea of these applications is to let parents introduce a learning component into children's computing time: children must earn gaming time by working through educational content. Network-based parental control devices have also emerged; working as firewall routers, these devices use packet filtering, DNS Response Policy Zones, and deep packet inspection to block inappropriate web content. Such methods have also been used in governmental communication networks.
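The DNS Response Policy Zone approach mentioned above can be sketched as a resolver that answers queries for listed domains (and their subdomains) with a harmless "blocked" address instead of the real one. The domain names and addresses below are illustrative assumptions:

```python
# Sketch of DNS-level blocking in the spirit of a Response Policy Zone.
# Domains and addresses are made up for illustration.

BLOCKED_ZONES = {"adult-site.example", "tracker.example"}

def resolve(qname: str) -> str:
    labels = qname.lower().rstrip(".").split(".")
    # Check the query name and every parent domain against the policy zone,
    # so subdomains of a blocked zone are blocked too.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKED_ZONES:
            return "0.0.0.0"          # policy answer: block
    return "198.51.100.7"             # stand-in for the real upstream answer

print(resolve("www.adult-site.example"))  # 0.0.0.0
print(resolve("news.example.org"))        # 198.51.100.7
```

Because the block happens at name resolution, it covers every application on the network without per-client configuration, which is why router-based devices favor it.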
Another form of these devices, made for home networks, has also been developed. These devices plug into the home router and create a new wireless network designed for kids to connect to. The increased use of mobile devices that include full-featured Internet browsers and downloadable applications has created demand for parental controls on these devices as well; examples of mobile devices with parental controls include cell phones and e-readers. In November 2007, Verizon was the first carrier to offer age-appropriate content filters, as well as the first to offer generic content filters, recognizing that mobile devices were being used to access all manner of content, from movies and music to short-code programs and websites. In June 2009, with iPhone OS 3.0, Apple became the first company to provide a built-in mechanism on mobile devices for creating age brackets for users that would block unwanted applications from being downloaded to the device. In the following years, the developers of all major operating systems introduced built-in parental control tools, including Linux, Android, and the more business-oriented BlackBerry platform.
A Linux distribution is an operating system made from a software collection based upon the Linux kernel and a package management system. Linux users obtain their operating system by downloading one of the Linux distributions, which are available for a wide variety of systems ranging from embedded devices and personal computers to powerful supercomputers. A typical Linux distribution comprises a Linux kernel, GNU tools and libraries, additional software, documentation, a window system, a window manager, and a desktop environment. Most of the included software is free and open-source software, made available both as compiled binaries and in source code form, allowing modifications to the original software. Linux distributions may optionally include some proprietary software that is not available in source code form, such as binary blobs required for some device drivers. A Linux distribution may be described as a particular assortment of application and utility software, packaged together with the Linux kernel in such a way that its capabilities meet the needs of many users.
The software is adapted to the distribution and packaged into software packages by the distribution's maintainers. The software packages are available online in so-called repositories, which are storage locations distributed around the world. Beside glue components such as the distribution installers or the package management systems, only a few packages are written from the ground up by the maintainers of a Linux distribution. Around six hundred Linux distributions exist, with close to five hundred of those in active development. Because of the wide availability of software, distributions have taken many forms, including those suitable for use on desktops, laptops, mobile phones, and tablets, as well as minimal environments for use in embedded systems. There are commercially backed distributions, such as Fedora, openSUSE, and Ubuntu, and community-driven distributions, such as Debian, Slackware, and Arch Linux. Most distributions come ready to use and pre-compiled for a specific instruction set, while some distributions are distributed in source code form and compiled locally during installation.
Linus Torvalds developed the Linux kernel and distributed its first version, 0.01, in 1991. Linux was initially distributed as source code only, and later as a pair of downloadable floppy disk images: one bootable and containing the Linux kernel itself, the other holding a set of GNU utilities and tools for setting up a file system. Since the installation procedure was complicated, especially in the face of growing amounts of available software, distributions sprang up to simplify it. Early distributions included:
H. J. Lu's "Boot-root", the aforementioned disk image pair with the kernel and the absolute minimal tools needed to get started, in late 1991;
MCC Interim Linux, made available to the public for download in February 1992;
Softlanding Linux System (SLS), released in 1992, which for a short time was the most comprehensive distribution, including the X Window System;
Yggdrasil Linux/GNU/X, a commercial distribution first released in December 1992.
The two oldest distribution projects still active today were started in 1993. The SLS distribution was not well maintained, so in July 1993 a new distribution based on SLS, called Slackware, was released by Patrick Volkerding.
Dissatisfied with SLS, Ian Murdock set out to create a free distribution by founding Debian, which had its first release in December 1993. Users were attracted to Linux distributions as alternatives to the DOS and Microsoft Windows operating systems on IBM PC compatible computers, Mac OS on the Apple Macintosh, and proprietary versions of Unix. Most early adopters were familiar with Unix from school or work; they embraced Linux distributions for their low cost and for the availability of the source code for most or all of the included software. Originally the distributions were simply a convenience, offering a free alternative to proprietary versions of Unix, but they later became the usual choice even for Unix and Linux experts. To date, Linux has become more popular in the server and embedded device markets than in the desktop market; for example, Linux is used on over 50% of web servers, whereas its desktop market share is about 3.7%. Many Linux distributions provide an installation system akin to those provided with other modern operating systems. On the other hand, some distributions, including Gentoo Linux, provide only the binaries of a basic kernel, compilation tools, and an installer; the rest of the system is then compiled locally.
Distributions are segmented into packages, each containing a specific application or service. Examples of packages are a library for handling the PNG image format, a collection of fonts, or a web browser. The package is provided as compiled code, with installation and removal of packages handled by a package management system (PMS) rather than a simple file archiver. Each package intended for such a PMS contains meta-information such as a package description and its "dependencies"; the package management system can evaluate this meta-information to allow package searches, to perform automatic upgrades to newer versions, to check that all dependencies of a package are fulfilled, and/or to fulfill them automatically.
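The dependency evaluation described above can be sketched as a depth-first walk over per-package meta-information, so that each package's dependencies are installed before the package itself. The package names and metadata layout here are invented for illustration and do not match any real package manager's format:

```python
# Sketch of dependency resolution from package meta-information.
# Package names and the metadata structure are hypothetical.

METADATA = {
    "webbrowser": {"depends": ["libpng", "fonts"]},
    "libpng":     {"depends": ["zlib"]},
    "fonts":      {"depends": []},
    "zlib":       {"depends": []},
}

def install_order(pkg, seen=None, order=None):
    """Depth-first resolution: dependencies come before their dependents."""
    seen = set() if seen is None else seen
    order = [] if order is None else order
    if pkg in seen:
        return order                      # already scheduled, skip
    seen.add(pkg)
    for dep in METADATA[pkg]["depends"]:
        install_order(dep, seen, order)   # resolve dependencies first
    order.append(pkg)
    return order

print(install_order("webbrowser"))  # ['zlib', 'libpng', 'fonts', 'webbrowser']
```

Real package managers add version constraints, conflict handling, and cycle detection on top of this basic ordering step.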
Computer data storage
Computer data storage, often called storage or memory, is a technology consisting of computer components and recording media used to retain digital data. It is a core function and fundamental component of computers. The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but larger and cheaper options farther away. Generally, the fast volatile technologies are referred to as "memory", while slower persistent technologies are referred to as "storage". In the von Neumann architecture, the CPU consists of two main parts: the control unit and the arithmetic logic unit. The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data. Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result; it would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices.
Von Neumann machines differ in having a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program but can simply be reprogrammed with new in-memory instructions. Most modern computers are von Neumann machines. A modern digital computer represents data using the binary numeral system: text, pictures, and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 1 or 0. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate its binary representation. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes with one byte per character. Data are encoded by assigning a bit pattern to each character, digit, or multimedia object.
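The Shakespeare figure can be checked with a back-of-the-envelope calculation, assuming roughly 4,000 characters per printed page (an assumption for illustration, not a figure from the text):

```python
# Sanity check of "complete works of Shakespeare in about five megabytes",
# assuming one byte per character and ~4,000 characters per printed page.

pages = 1250
chars_per_page = 4000              # rough estimate for a dense printed page
total_bytes = pages * chars_per_page

print(total_bytes)                 # 5,000,000 bytes
print(total_bytes / 1_000_000)     # 5.0 megabytes
```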
Many standards exist for encoding. By adding extra bits to each encoded unit, redundancy allows the computer both to detect errors in coded data and to correct them based on mathematical algorithms. Errors generally occur with low probability, due to random bit-value flipping, to "physical bit fatigue" (the loss of a physical bit's ability to maintain a distinguishable value), or to errors in inter- or intra-computer communication. A random bit flip is typically corrected upon detection. A bit, or a group of malfunctioning physical bits, may be automatically fenced out, taken out of use by the device, and replaced with another functioning equivalent group in the device, with the corrected bit values restored. The cyclic redundancy check (CRC) method is typically used in communications and storage for error detection; a detected read or transfer is then retried. Data compression methods allow, in many cases, a string of bits to be represented by a shorter bit string and the original string to be reconstructed when needed; this uses less storage for many types of data at the cost of more computation.
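Checksum-based error detection can be illustrated in a few lines, using Python's zlib CRC-32 as a stand-in for the CRC variants that storage and communication hardware actually use:

```python
import zlib

# A CRC stored alongside the data lets a reader detect that a bit flipped.

data = bytearray(b"some recorded payload")
stored_crc = zlib.crc32(data)      # checksum computed when the data is written

data[3] ^= 0x01                    # flip one bit, simulating "bit fatigue"

corrupted = zlib.crc32(data) != stored_crc
print(corrupted)  # True: the error is detected, so the read can be retried
```

Detection alone cannot say which bit flipped; correcting errors requires stronger redundancy schemes such as the error-correcting codes mentioned above.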
An analysis of the trade-off between the storage cost saving and the costs of the related computations and possible delays in data availability is done before deciding whether to keep certain data compressed or not. For security reasons, certain types of data may be kept encrypted in storage to prevent unauthorized reconstruction of information from chunks of storage snapshots. Generally, the lower a storage is in the hierarchy, the lesser its bandwidth and the greater its access latency from the CPU. This traditional division of storage into primary, secondary, and off-line storage is also guided by cost per bit. In contemporary usage, "memory" is usually fast but temporary semiconductor read-write random-access memory, typically DRAM or other similar forms of fast storage. "Storage" consists of storage devices and their media not directly accessible by the CPU, such as hard disk drives and optical disc drives, which are slower than RAM but non-volatile. Historically, memory has been called core memory, main memory, real storage, or internal memory, while non-volatile storage devices have been referred to as secondary storage, external memory, or auxiliary/peripheral storage.
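The storage-versus-computation trade-off of compression can be demonstrated with a highly redundant byte string:

```python
import zlib

# Redundant data compresses well, at the cost of the compute time spent
# compressing it on write and decompressing it on read.

original = b"AB" * 10_000              # 20,000 bytes of very redundant data
compressed = zlib.compress(original)

print(len(original), len(compressed))  # the compressed form is far smaller
assert zlib.decompress(compressed) == original  # lossless: fully restored
```

Data with little redundancy (already-compressed media, encrypted blobs) gains almost nothing, which is exactly what the cost-benefit analysis above weighs.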
Primary storage, often referred to simply as memory, is the only storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner. Early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic core memory. Core memory remained dominant until the 1970s, when advances in integrated circuit technology allowed semiconductor memory to become economically competitive; this led to modern random-access memory (RAM).
The FM Towns system is a Japanese PC variant built by Fujitsu from February 1989 to the summer of 1997. It started as a proprietary PC variant intended for multimedia applications and PC games but later became more compatible with regular PCs; in 1993, the FM Towns Marty was released. The "FM" part of the name means "Fujitsu Micro", like their earlier products, while the "Towns" part derives from the code name the system was assigned during development, "Townes". This refers to Charles Townes, one of the winners of the 1964 Nobel Prize in Physics, following a Fujitsu custom of the time of code-naming PC products after Nobel Prize winners. The "e" in "Townes" was dropped when the system went into production, to make it clearer that the name was to be pronounced like the word "towns" rather than "tow-nes". Fujitsu decided to release a new home computer after the FM-7 was technologically surpassed by NEC's PC-8801. During the life of the FM-7, Fujitsu had learned that software sales drove hardware sales, so, in order to have usable software available, the new computer was based on Fujitsu's FMR50 system architecture.
The FMR50 system, released in 1986, was another x86/DOS-based computer similar to NEC's popular PC-9801. FMR50 computers sold with moderate success in Japanese corporate and government offices, and there were hundreds of software packages available for the FMR line, including Lotus 1-2-3, WordStar, and dBASE III. With this basis of compatibility, the more multimedia-friendly FM Towns was created. NEC's PC-9801 computers were widespread and dominant in the 1980s, at one point reaching 70% of the 16/32-bit computer market. However, they were limited in their graphics and sound capabilities. Just as Commodore saw an opening for the Amiga in some global markets against the IBM PC, a computer with improved graphics and sound was seen as a way to overcome the PC-9801 in the home-use field in Japan. With many multimedia innovations for its time, the FM Towns was that system, though for a number of reasons it never broke far beyond its niche market status. The FM Towns lost much of its uniqueness after gaining a DOS/V compatibility mode, and eventually Fujitsu discontinued FM Towns specific hardware and software and moved its focus to the IBM PC clones that many Japanese manufacturers, previously not players in the PC market, were building by the mid to late 1990s.
To this day, Fujitsu is known globally for its laptop PCs, while FM Towns users have been relegated to a small community of aficionados. Several variants were built; the standard package included a mouse and a microphone. The earlier, more distinctive models, featuring a vertical CD-ROM tray on the front of the case, were referred to as the "Gray" Towns and were the ones most directly associated with the "FM Towns" brand. Most featured 3 memory expansion slots and used 72-pin non-parity SIMMs with a required timing of 100 ns or less and a recommended timing of 60 ns. Hard drives were not standard equipment and are not required for most uses, as the OS is loaded from CD-ROM by default. A SCSI Centronics 50/SCSI-1/full-pitch port is provided for connecting external SCSI disk drives and is the most common way to connect a hard drive to an FM Towns PC. Although internal drives are rare, there is a hidden compartment with a SCSI 50-pin connector where a hard drive may be connected; however, the power supply module does not provide the Molex connector required to power the drive.
The video output is 15 kHz RGB, using the same DB15 connector and pinout as the PC-9801. The operating systems used are Windows 3.0/3.1/95 and a graphical OS called Towns OS, based on MS-DOS and the Phar Lap DOS extender. Most games for the system were written in protected-mode assembly and C using the Phar Lap DOS extender; these games use the Towns OS API for handling several graphics modes, sound, a mouse, and CD audio. The FM Towns was capable of booting its graphical Towns OS straight from CD in 1989: two years before the Amiga CDTV booted its GUI-based AmigaOS 1.3 from an internal CD drive and before the CD-bootable System 7 was released for the Macintosh in 1991, and five years before the El Torito specification standardized bootable CDs on IBM PC compatibles in 1994. To boot from CD-ROM, the FM Towns has a "hidden C:" ROM drive in which a minimal MS-DOS system, a CD-ROM driver, and MSCDEX.EXE are installed; this minimal DOS system runs first, then reads and executes the Towns OS IPL stored on the CD-ROM.
The Towns OS CD-ROM contains an IPL, an MS-DOS system, the DOS extender, and the Towns API, while the minimal DOS system that allows the CD-ROM drive to be accessed is contained in a system ROM. Various Linux and BSD distributions have been ported to the FM Towns system, including Debian and Gentoo, and a version of GNU software called GNU for FM Towns was released in 1990. The FM Towns features several video modes.
Yggdrasil Linux/GNU/X, or LGX, is a discontinued early Linux distribution developed by Yggdrasil Computing, Incorporated, a company founded by Adam J. Richter in Berkeley, California. Yggdrasil was the first company to create a live CD Linux distribution. Yggdrasil Linux described itself as a "Plug-and-Play" Linux distribution, automatically configuring itself for the hardware; the last release of Yggdrasil was in 1995. Yggdrasil is the World Tree of Norse mythology; the name was chosen because Yggdrasil took disparate pieces of software and assembled them into a complete product. Yggdrasil's company motto was "Free Software For The Rest of Us", and the distribution was compliant with the Unix Filesystem Hierarchy Standard. Yggdrasil announced its "bootable Linux/GNU/X-based UNIX clone for PC compatibles" on 24 November 1992 and made the first release on 8 December 1992. This alpha release contained the 0.98.1 version of the Linux kernel and the X11R5 version of the X Window System, supporting up to 1024x768 with 256 colours, along with various GNU utilities such as the C/C++ compiler, the GNU debugger, bison and make, TeX, Ghostscript, the elvis and Emacs editors, and various other software.
Yggdrasil's alpha release required a 386 computer with a 100 MB hard disk, and it was missing the source code of some packages, such as elvis. A beta release followed on 18 February 1993, priced at US$60. LGX's 1993 beta release contained the 0.99.5 version of the Linux kernel, along with other software from GNU and X. By 22 August 1993, the Yggdrasil company had sold over 3100 copies of the LGX beta distribution. The production release carried a price tag of US$99; however, Yggdrasil was offered free to any developer whose software was included with the CD distribution, and according to an email from the company's founder, the marginal cost of each subscription was $35.70. Early Yggdrasil releases were available from stores selling CD-ROM software. Adam J. Richter started the Yggdrasil company together with Bill Selmeier. Richter spoke to Michael Tiemann about setting up a business but was not interested in joining forces with Cygnus. Richter was also a member of the League for Programming Freedom.
Richter was using only a 200 MB hard disk when building the alpha release of LGX, which prevented him from including the source code of some of the packages contained on the CD-ROM. Yggdrasil Incorporated published some of the early Linux compilation books, such as The Linux Bible: The GNU Testament, and contributed to the file system and X Window System functionality of Linux in the early days of its operation. The company moved to San Jose, California in 1996, the same year it released the Winter 1996 edition of the Linux Internet Archives. The company remained active until at least the year 2000, when it released the Linux Open Source DVD, but its website was taken offline afterwards and the company has not released anything since. The company's last corporate filing was in January 2004; the California Secretary of State lists the company as suspended. The company once made an offer to donate 60% of Yggdrasil CD-ROM sales revenues to the Computer Systems Research Group, but founder Adam J. Richter indicated that the company would lose too much money and changed the offer accordingly, while still maintaining donations to CSRG.
The company had volume discount plans.
See also: Arena, a web browser once developed by Yggdrasil Computing; MCC Interim Linux.
External links: Yggdrasil Linux/GNU/X operating system distribution from 1995; ibiblio's mirror of the 1996 release of the Yggdrasil Linux/GNU/X operating system distribution; DistroWatch on Yggdrasil.
Sound recording and reproduction
Sound recording and reproduction is the electrical, electronic, or digital inscription and re-creation of sound waves, such as spoken voice, instrumental music, or sound effects. The two main classes of sound recording technology are analog recording and digital recording. Acoustic analog recording is achieved by a microphone diaphragm that senses changes in atmospheric pressure caused by acoustic sound waves and records them as a mechanical representation of the sound waves on a medium such as a phonograph record. In magnetic tape recording, the sound waves vibrate the microphone diaphragm and are converted into a varying electric current, which is then converted to a varying magnetic field by an electromagnet, making a representation of the sound as magnetized areas on a plastic tape with a magnetic coating. Analog sound reproduction is the reverse process, with the loudspeaker diaphragm causing changes in atmospheric pressure to form acoustic sound waves. Digital recording and reproduction converts the analog sound signal picked up by the microphone to a digital form by the process of sampling.
This lets the audio data be transmitted by a wider variety of media. Digital recording stores audio as a series of binary numbers representing samples of the amplitude of the audio signal at equal time intervals, at a sample rate high enough to convey all sounds capable of being heard. The digital audio signal must be reconverted to analog form during playback before it is amplified and sent to a loudspeaker to produce sound. Prior to the development of sound recording, there were mechanical systems, such as wind-up music boxes and player pianos, for encoding and reproducing instrumental music. Long before sound was first recorded, music was recorded, first by written music notation and later also by mechanical devices. Automatic music reproduction traces back as far as the 9th century, when the Banū Mūsā brothers invented the earliest known mechanical musical instrument, a hydropowered organ that played interchangeable cylinders. According to Charles B. Fowler, this "...cylinder with raised pins on the surface remained the basic device to produce and reproduce music mechanically until the second half of the nineteenth century."
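Sampling can be sketched by measuring an idealized analog signal (here a 440 Hz sine tone) at equal time intervals and quantizing each sample to a 16-bit integer, as on an audio CD. The tone and the code are purely illustrative:

```python
import math

# Sketch of sampling and 16-bit quantization at the CD rate.

SAMPLE_RATE = 44_100          # samples per second (CD audio rate)
FREQ = 440.0                  # the "analog" tone being recorded

def sample(n: int) -> int:
    """Measure the signal at sample index n and quantize to 16-bit signed."""
    t = n / SAMPLE_RATE                      # time of the n-th measurement
    x = math.sin(2 * math.pi * FREQ * t)     # analog amplitude in [-1, 1]
    return int(round(x * 32767))             # map to the 16-bit integer range

samples = [sample(n) for n in range(5)]
print(samples)  # the first few 16-bit samples of the tone
```

Playback reverses the process: the integer stream is converted back to a smoothly varying voltage by a digital-to-analog converter before amplification.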
The Banū Mūsā brothers invented an automatic flute player, which appears to have been the first programmable machine. Carvings in the Rosslyn Chapel from the 1560s may represent an early attempt to record the Chladni patterns produced by sound in stone representations, although this theory has not been conclusively proved. In the 14th century, a mechanical bell-ringer controlled by a rotating cylinder was introduced in Flanders. Similar designs appeared in barrel organs, musical clocks, barrel pianos, music boxes. A music box is an automatic musical instrument that produces sounds by the use of a set of pins placed on a revolving cylinder or disc so as to pluck the tuned teeth of a steel comb; the fairground organ, developed in 1892, used a system of accordion-folded punched cardboard books. The player piano, first demonstrated in 1876, used a punched paper scroll that could store a long piece of music; the most sophisticated of the piano rolls were hand-played, meaning that the roll represented the actual performance of an individual, not just a transcription of the sheet music.
This technology to record a live performance onto a piano roll was not developed until 1904. Piano rolls were in continuous mass production from 1896 to 2008. A 1908 U.S. Supreme Court copyright case noted that, in 1902 alone, between 70,000 and 75,000 player pianos were manufactured and between 1,000,000 and 1,500,000 piano rolls were produced. The first device that could record actual sounds as they passed through the air was the phonautograph, patented in 1857 by Parisian inventor Édouard-Léon Scott de Martinville. The earliest known recordings of the human voice are phonautograph recordings, called phonautograms, made in 1857. They consist of sheets of paper with sound-wave-modulated white lines created by a vibrating stylus that cut through a coating of soot as the paper was passed under it. An 1860 phonautogram of Au Clair de la Lune, a French folk song, was played back as sound for the first time in 2008 by scanning it and using software to convert the undulating line, which graphically encoded the sound, into a corresponding digital audio file.
On April 30, 1877, French poet, humorous writer, and inventor Charles Cros submitted a sealed envelope containing a letter to the Academy of Sciences in Paris explaining his proposed method of recording and reproducing sound, which he called the paleophone. Though no trace of a working paleophone was ever found, Cros is remembered as the earliest inventor of a sound recording and reproduction machine. The first practical sound recording and reproduction device was the mechanical phonograph cylinder, invented by Thomas Edison in 1877 and patented in 1878. The invention soon spread across the globe, and over the next two decades the commercial recording and sale of sound recordings became a growing new international industry, with the most popular titles selling millions of units by the early 1900s. The development of mass-production techniques enabled cylinder recordings to become a major new consumer item in industrial countries, and the cylinder was the main consumer format from the late 1880s until around 1910. The next major technical development was the invention of the gramophone record, generally credited to Emile Berliner and patented in 1887, though others had demonstrated similar disc apparatus earlier.
The compact disc (CD) is a digital optical disc data storage format, co-developed by Philips and Sony and released in 1982. The format was developed to store and play only sound recordings but was later adapted for the storage of data. Several other formats were further derived from it, including write-once audio and data storage (CD-R), rewritable media (CD-RW), Video CD, Super Video CD, Photo CD, PictureCD, CD-i, and Enhanced Music CD. The first commercially available audio CD player, the Sony CDP-101, was released in October 1982 in Japan. Standard CDs have a diameter of 120 millimetres and can hold up to about 80 minutes of uncompressed audio or about 700 MiB of data; the Mini CD has various diameters ranging from 60 to 80 millimetres. At the time of the technology's introduction in 1982, a CD could store much more data than a personal computer hard drive, which would typically hold about 10 MB. By 2010, hard drives offered as much storage space as a thousand CDs, while their prices had plummeted to commodity level. In 2004, worldwide sales of audio CDs, CD-ROMs, and CD-Rs reached about 30 billion discs.
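The stated capacities can be sanity-checked: 80 minutes of Red Book audio at 44,100 samples per second, 16 bits (2 bytes) per sample, two channels, versus roughly 700 MiB of data:

```python
# Rough check of the CD capacity figures quoted above.

seconds = 80 * 60
audio_bytes = seconds * 44_100 * 2 * 2   # samples/s * 2 bytes * 2 channels
print(audio_bytes)                        # 846,720,000 bytes of raw audio

data_bytes = 700 * 1024 * 1024            # ~700 MiB data capacity
print(data_bytes)                         # 734,003,200 bytes
```

The raw audio total exceeds the data capacity because audio and data sectors are laid out differently: audio sectors carry more user bytes per sector (2352) than data sectors (2048), which reserve extra bytes for error correction.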
By 2007, 200 billion CDs had been sold worldwide. From the early 2000s, CDs were increasingly replaced by other forms of digital storage and distribution, with the result that by 2010 the number of audio CDs sold in the U.S. had dropped about 50% from their peak. In 2014, revenues from digital music services matched those from physical format sales for the first time. American inventor James T. Russell has been credited with inventing the first system to record digital information on an optical transparent foil, lit from behind by a high-power halogen lamp. Russell's patent application was first filed in 1966, and he was granted a patent in 1970. Following litigation, Sony and Philips licensed Russell's patents in the 1980s. The compact disc is an evolution of LaserDisc technology, where a focused laser beam enables the high information density required for high-quality digital audio signals. Prototypes were developed by Philips and Sony independently in the late 1970s. Although dismissed by Philips Research management as a trivial pursuit, the CD became the primary focus for Philips as the LaserDisc format struggled.
In 1979, Sony and Philips set up a joint task force of engineers to design a new digital audio disc. After a year of experimentation and discussion, the Red Book CD-DA standard was published in 1980. After their commercial release in 1982, compact discs and their players were very popular: despite players costing up to $1,000, over 400,000 CD players were sold in the United States between 1983 and 1984. By 1988, CD sales in the United States surpassed those of vinyl LPs, and by 1992 CD sales surpassed those of prerecorded music cassette tapes. The success of the compact disc has been credited to the cooperation between Philips and Sony, which together agreed upon and developed compatible hardware; the unified design of the compact disc allowed consumers to purchase any disc or player from any company and allowed the CD to dominate the at-home music market unchallenged. In 1974, Lou Ottens, director of the audio division of Philips, started a small group with the aim of developing an analog optical audio disc with a diameter of 20 cm and a sound quality superior to that of the vinyl record.
However, due to the unsatisfactory performance of the analog format, two Philips research engineers recommended a digital format in March 1974. In 1977, Philips established a laboratory with the mission of creating a digital audio disc; the diameter of Philips's prototype compact disc was set at 11.5 cm, the diagonal of an audio cassette. Heitaro Nakajima, who developed an early digital audio recorder within Japan's national public broadcasting organization NHK in 1970, became general manager of Sony's audio department in 1971; his team developed a digital PCM adaptor audio tape recorder using a Betamax video recorder in 1973. After this, in 1974, the leap to storing digital audio on an optical disc was made. Sony first publicly demonstrated an optical digital audio disc in September 1976. A year later, in September 1977, Sony showed the press a 30 cm disc that could play 60 minutes of digital audio using MFM modulation. In September 1978, the company demonstrated an optical digital audio disc with a 150-minute playing time, a 44,056 Hz sampling rate, 16-bit linear resolution, and cross-interleaved error correction code, specifications similar to those later settled upon for the standard compact disc format in 1980.
Technical details of Sony's digital audio disc were presented during the 62nd AES Convention, held on 13–16 March 1979 in Brussels; Sony's AES technical paper was published on 1 March 1979. A week later, on 8 March, Philips publicly demonstrated a prototype of an optical digital audio disc at a press conference called "Philips Introduce Compact Disc" in Eindhoven, Netherlands. Sony executive Norio Ohga, later CEO and chairman of Sony, and Heitaro Nakajima were convinced of the format's commercial potential and pushed further development despite widespread skepticism. As a result, in 1979, Sony and Philips set up a joint task force of engineers to design the new digital audio disc. Led by engineers Kees Schouhamer Immink and Toshitada Doi, the research pushed forward laser and optical disc technology. After a year of experimentation and discussion, the task force produced the Red Book CD-DA standard. First published in 1980, the standard was later formally adopted by the IEC as an international standard in 1987.