Creative Technology Ltd. is a global technology company headquartered in Jurong East, Singapore, with additional offices in Silicon Valley, Dublin and Shanghai. The principal activities of the company and its subsidiaries consist of the design and distribution of digitized sound and video boards and related multimedia and personal digital entertainment products; it partners with mainboard manufacturers and laptop brands to embed its Sound Blaster technology in their products. Creative Technology was founded in 1981 by childhood friends and Ngee Ann Polytechnic schoolmates Sim Wong Hoo and Ng Kai Wa. Originally a computer repair shop in Pearl's Centre in Chinatown, the company developed an add-on memory board for the Apple II computer. Creative then spent $500,000 developing the Cubic CT, an IBM-compatible PC adapted for the Chinese language and offering multimedia features such as enhanced color graphics and a built-in audio board capable of producing speech and melodies. With little demand for multilingual computers and few multimedia software applications available, the Cubic was a commercial failure.
Shifting focus from language to music, Creative developed the Creative Music System, a PC add-on card. Sim established Creative Labs, Inc. in Silicon Valley in the United States and convinced software developers to support the sound card, which was renamed Game Blaster and marketed by RadioShack's Tandy division. The success of this audio interface led to the development of the standalone Sound Blaster sound card, introduced at the 1989 COMDEX show just as the multimedia PC market, fueled by Intel's 386 processor and Windows 3.0, took off. The success of Sound Blaster helped grow Creative's revenue from US$5.4 million in 1989 to US$658 million in 1994. In 1993, the year after Creative's 1992 initial public offering, former Ashton-Tate CEO Ed Esber joined Creative Labs as CEO to assemble a management team to support the company's rapid growth. Esber brought in a team of US executives, including Rich Buchanan, Gail Pomerantz and Rich Sorkin; this group played key roles in reversing a brutal market share decline caused by intense competition from Mediavision at the high end and Aztech at the low end.
Sorkin, in particular, strengthened the company's brand position through crisp licensing and an aggressive defense of Creative's intellectual property positions while working to shorten product development cycles. At the same time, Esber and the original founders of the company had differences of opinion on the strategy and positioning of the company. Esber exited in 1995, followed by Buchanan and Pomerantz. Following Esber's departure, Sorkin was promoted to General Manager of Audio and Communications Products and Executive Vice-President of Business Development and Corporate Investments, before leaving Creative in 1996 to run Elon Musk's first startup, the internet pioneer Zip2. By 1996, Creative's revenues had peaked at US$1.6 billion. With pioneering investments in VOIP and media streaming, Creative was well positioned to take advantage of the internet era, but it ventured into the CD-ROM market and was forced to write off nearly US$100 million in inventory when that market collapsed under a flood of cheaper alternatives.
The firm had maintained a strong foothold in the ISA PC audio market until 14 July 1997, when Aureal Semiconductor entered the soundcard market with its competitive PCI AU8820 Vortex 3D sound technology. Creative was at the time developing its own in-house PCI audio cards but was finding little success adapting to the PCI standard. In January 1998, to obtain a working PCI audio technology, the firm acquired Ensoniq for US$77 million. On March 5, 1998, the firm sued Aureal for patent infringement over a MIDI caching technology held by E-mu Systems. Aureal filed a counterclaim stating that the firm was intentionally interfering with its business prospects, had defamed and commercially disparaged it, had engaged in unfair competition with intent to slow down Aureal's sales, and had acted fraudulently; the suit came only days after Aureal had gained a foothold in the market with the AU8820 Vortex1. In August 1998, the firm released the Sound Blaster Live!, its first sound card developed for the PCI bus, to compete with the upcoming Aureal AU8830 Vortex2 sound chip.
Aureal at this time was distributing fliers comparing its new AU8830 chips to the now-shipping Sound Blaster Live!. The specifications within these fliers, which compared the AU8830 to the Sound Blaster Live!'s EMU10K1 chip, sparked another flurry of lawsuits against Aureal, this time claiming Aureal had falsely advertised the Sound Blaster Live!'s capabilities. In December 1999, after numerous lawsuits, Aureal won a favourable ruling but went bankrupt as a result of legal costs and the withdrawal of its investors; its assets were acquired by Creative through the bankruptcy court in September 2000 for US$32 million. The firm had in effect removed its only major direct competitor in the 3D gaming audio market, aside from Sensaura, which it also acquired. In April 1999, the firm launched the NOMAD line of digital audio players, which would lead to the MuVo and ZEN series of portable media players. In November 2004, the firm announced a $100 million marketing campaign to promote its digital audio products, including the ZEN range of MP3 players.
The firm applied for U.S. Patent 6,928,433 on 5 January 2001 and was awarded the patent on 9 August 2005; this "ZEN Patent" covered the firm's invention of a user interface for portable media players. This opened the way for potential legal action against competing players, and the firm took legal action against Apple in May 2006. In August, 2
MIDI is a technical standard that describes a communications protocol, a digital interface, and electrical connectors that connect a wide variety of electronic musical instruments and related audio devices for playing and recording music. A single MIDI link through a MIDI cable can carry up to sixteen channels of information, each of which can be routed to a separate device or instrument; these could be sixteen different digital instruments, for example. MIDI carries event messages, data that specify the instructions for music, including a note's notation and velocity, panning to the right or left of the stereo field, and clock signals; when a musician plays a MIDI instrument, all of the key presses, button presses, knob turns and slider changes are converted into MIDI data. One common MIDI application is to play a MIDI keyboard or other controller and use it to trigger a digital sound module to generate sounds, which the audience hears produced by a keyboard amplifier. MIDI data can be recorded to a sequencer to be edited or played back.
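As an illustrative sketch (not from the original text), the event messages described above are small, fixed-format byte sequences. The helper names below are our own; the byte layout follows the standard MIDI convention of a status byte (message type plus channel) followed by two data bytes:

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI Note On message.

    The status byte is 0x90 ORed with the channel number (0-15),
    followed by the note number and velocity (0-127 each).
    """
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

def note_off(channel, note):
    """Build a 3-byte MIDI Note Off message (status 0x80)."""
    assert 0 <= channel <= 15 and 0 <= note <= 127
    return bytes([0x80 | channel, note, 0])

# Middle C (note number 60) at moderate velocity on channel 0:
msg = note_on(0, 60, 64)
```

Because every note event fits in three bytes, a single MIDI cable can comfortably carry performance data for all sixteen channels in real time.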
A file format that stores and exchanges this data, the Standard MIDI File, is also defined. Advantages of MIDI include small file size, ease of modification and manipulation, and a wide choice of electronic instruments and synthesized or digitally sampled sounds. A MIDI recording of a performance on a keyboard could sound like a piano or any other keyboard instrument; a MIDI recording is not an audio signal, as a sound recording made with a microphone is. Prior to the development of MIDI, electronic musical instruments from different manufacturers could not communicate with each other; this meant that a musician could not, for example, plug a Roland keyboard into a Yamaha synthesizer module. With MIDI, any MIDI-compatible keyboard can be connected to any other MIDI-compatible sequencer, sound module, drum machine, synthesizer, or computer, even if they are made by different manufacturers. MIDI technology was standardized in 1983 by a panel of music industry representatives and is maintained by the MIDI Manufacturers Association (MMA). All official MIDI standards are jointly developed and published by the MMA in Los Angeles and the MIDI Committee of the Association of Musical Electronics Industry in Tokyo.
In 2016, the MMA established the MIDI Association to support a global community of people who work, play, or create with MIDI. In the early 1980s, there was no standardized means of synchronizing electronic musical instruments manufactured by different companies. Manufacturers had their own proprietary standards for synchronizing instruments, such as CV/gate and Digital Control Bus. Roland founder Ikutaro Kakehashi felt the lack of standardization was limiting the growth of the electronic music industry. In June 1981, he proposed developing a standard to Oberheim Electronics founder Tom Oberheim, who had developed his own proprietary interface, the Oberheim System. Kakehashi felt the system was too cumbersome and spoke to Sequential Circuits president Dave Smith about creating a simpler, cheaper alternative. While Smith discussed the concept with American companies, Kakehashi discussed it with the Japanese companies Yamaha and Kawai. Representatives from all the companies met to discuss the idea in October.
Using Roland's DCB as a basis, Smith and Sequential Circuits engineer Chet Wood devised a universal synthesizer interface to allow communication between equipment from different manufacturers. Smith proposed this standard at the Audio Engineering Society show in November 1981; the standard was discussed and modified by representatives of Roland, Korg and Sequential Circuits. Kakehashi favored the name Universal Musical Interface, pronounced you-me, but Smith felt this was "a little corny". However, Smith liked the use of "instrument" instead of "synthesizer" and proposed the name Musical Instrument Digital Interface. Moog Music founder Robert Moog announced MIDI in the October 1982 issue of Keyboard. At the 1983 Winter NAMM Show, Smith demonstrated a MIDI connection between a Prophet 600 and a Roland JP-6 synthesizer; the MIDI specification was published in August 1983. The MIDI standard was unveiled by Kakehashi and Smith, who received Technical Grammy Awards in 2013 for their work; the first MIDI synthesizers were the Roland Jupiter-6 and the Prophet 600, both released in 1982.
1983 saw the release of the first MIDI drum machine, the Roland TR-909, and the first MIDI sequencer, the Roland MSQ-700. The first computers to support MIDI were the NEC PC-88 and PC-98 in 1982 and the MSX, released in 1983. MIDI's appeal was initially limited to professional musicians and record producers who wanted to use electronic instruments in the production of popular music; the standard allowed different instruments to communicate with each other and with computers, and this spurred a rapid expansion of the sales and production of electronic instruments and music software. This interoperability allowed one device to be controlled from another, which reduced the amount of hardware musicians needed. MIDI's introduction coincided with the dawn of the personal computer era and the introduction of samplers and digital synthesizers; the creative possibilities brought about by MIDI technology are credited with helping revive the music industry in the 1980s. MIDI introduced new capabilities. MIDI sequencing makes it possible for
A synthesizer or synthesiser is an electronic musical instrument that generates audio signals that may be converted to sound. Synthesizers may imitate traditional musical instruments such as the piano, vocals, or natural sounds such as ocean waves. They are typically played with a musical keyboard, but they can be controlled via a variety of other devices, including music sequencers, instrument controllers, guitar synthesizers, wind controllers and electronic drums. Synthesizers without built-in controllers are called sound modules and are controlled via USB, MIDI or CV/gate using a controller device such as a MIDI keyboard. Synthesizers use various methods to generate electronic signals. Among the most popular waveform synthesis techniques are subtractive synthesis, additive synthesis, wavetable synthesis, frequency modulation synthesis, phase distortion synthesis, physical modeling synthesis and sample-based synthesis. Synthesizers were first used in pop music in the 1960s. In the late 1970s, synths were used in progressive rock and disco.
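As a minimal sketch of one of these techniques, additive synthesis builds a timbre by summing sine-wave harmonics. The following illustrative Python (function names and parameter choices are our own, not from any particular synthesizer) approximates a sawtooth-like tone by summing harmonics with amplitude 1/n, a classic additive recipe:

```python
import math

def additive_saw(freq, n_harmonics, sample_rate, n_samples):
    """Approximate a sawtooth wave by summing sine harmonics.

    Each harmonic n contributes sin(2*pi*n*freq*t) at amplitude 1/n;
    more harmonics give a brighter, buzzier timbre.
    """
    samples = []
    for i in range(n_samples):
        t = i / sample_rate
        s = sum(math.sin(2 * math.pi * n * freq * t) / n
                for n in range(1, n_harmonics + 1))
        samples.append(s)
    return samples

# 0.1 second of an A440 tone built from 8 harmonics at CD sample rate:
tone = additive_saw(440.0, 8, 44100, 44100 // 10)
```

Subtractive synthesis works in the opposite direction: it starts from a harmonically rich waveform like this one and removes energy with filters.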
In the 1980s, the invention of the inexpensive Yamaha DX7 made digital synthesizers widely available. 1980s pop and dance music made heavy use of synthesizers. In the 2010s, synthesizers are used in many genres, such as pop, hip hop, metal and dance. Contemporary classical music composers from the 20th and 21st centuries write compositions for synthesizer. The beginnings of the synthesizer are difficult to trace, as it is difficult to draw a distinction between synthesizers and some early electric or electronic musical instruments. One of the earliest electric musical instruments, the Musical Telegraph, was invented in 1876 by American electrical engineer Elisha Gray, who accidentally discovered sound generation from a self-vibrating electromechanical circuit and invented a basic single-note oscillator. This instrument used steel reeds whose oscillations, created by electromagnets, were transmitted over a telegraph line. Gray also built a simple loudspeaker device into later models, consisting of a vibrating diaphragm in a magnetic field, to make the oscillator audible.
This instrument was a remote electromechanical musical instrument that used telegraphy and electric buzzers to generate fixed-timbre sound. Though it lacked an arbitrary sound-synthesis function, some have erroneously called it the first synthesizer. In 1897, Thaddeus Cahill was granted his first patent for an electronic musical instrument, which by 1901 he had developed into the Telharmonium, capable of additive synthesis. Cahill's business was unsuccessful for various reasons, but similar and more compact instruments were subsequently developed, such as electronic and tonewheel organs, including the Hammond organ, invented in 1935. In 1906, American engineer Lee de Forest invented the first amplifying vacuum tube, the Audion, whose amplification of weak audio signals contributed to advances in sound recording and film and to the invention of early electronic musical instruments including the theremin, the ondes martenot and the trautonium. Most of these early instruments used heterodyne circuits to produce audio frequencies and were limited in their synthesis capabilities.
The ondes martenot and trautonium were continuously developed over several decades, acquiring qualities similar to those of synthesizers. In the 1920s, Arseny Avraamov developed various systems of graphic sonic art, and similar graphical sound and tonewheel systems were developed around the world. In 1938, Soviet engineer Yevgeny Murzin designed a compositional tool called the ANS, one of the earliest real-time additive synthesizers, using optoelectronics. Although his idea of reconstructing a sound from its visible image was simple, the instrument was not realized until 20 years later, in 1958, as Murzin was "an engineer who worked in areas unrelated to music". In the 1930s and 1940s, the basic elements required for modern analog subtractive synthesizers — electronic oscillators, audio filters, envelope controllers and various effects units — had appeared and were utilized in several electronic instruments. The earliest polyphonic synthesizers were developed in Germany and the United States. The Warbo Formant Orgel, developed by Harald Bode in Germany in 1937, was a four-voice key-assignment keyboard with two formant filters and a dynamic envelope controller.
The Hammond Novachord, released in 1939, was an electronic keyboard that used twelve sets of top-octave oscillators with octave dividers to generate sound, with vibrato, a resonator filter bank and a dynamic envelope controller. During the three years that Hammond manufactured this model, 1,069 units were shipped, but production was discontinued at the start of World War II. Both instruments were forerunners of electronic organs and polyphonic synthesizers. In the 1940s and 1950s, before the popularization of electronic organs and the introduction of combo organs, manufacturers developed various portable monophonic electronic instruments with small keyboards; these small instruments consisted of an electronic oscillator, a vibrato effect and passive filters. Most were designed for conventional ensembles rather than as experimental instruments for electronic music studios, but they contributed to the evolution of modern synthesizers; these instruments include the Solovox, Multimonica and Clavioline.
In the late 1940s, Canadian inventor and composer Hugh Le Caine invented the Electronic Sackbut, a voltage-controlled electronic musical instrument that provided the earliest real-time control of three aspects of sound, corresponding to today's touch-sensitive keyboard and modulation controllers. The controllers were impl
E-mu Systems was a manufacturer of software synthesizers, audio interfaces, MIDI interfaces and MIDI keyboards. Founded in 1971 as a synthesizer maker, E-mu was a pioneer in samplers, sample-based drum machines and low-cost digital sampling music workstations. After its acquisition in 1993, E-mu Systems was a wholly owned subsidiary of Creative Technology, Ltd. In 1998, E-mu was combined with Ensoniq, another synthesizer and sampler manufacturer acquired by Creative Technology. E-mu was last based in California, on the outskirts of Silicon Valley. E-mu Systems was founded in Santa Cruz, CA by Dave Rossum, a UCSC student, and two of his friends from Caltech, Steve Gabriel and Jim Ketcham, with the goal of building their own modular synthesizers. Scott Wedge, who would later become president, joined that summer. In 1972, E-mu was incorporated and patented a digitally scanned polyphonic keyboard, which was licensed for use by Oberheim Electronics in the 4-Voice and 8-Voice synthesizers and by Dave Smith in the Sequential Circuits Prophet-5.
E-mu, along with Solid State Micro Technology, developed several synthesizer module IC chips that were used both by E-mu and by many other synthesizer companies. With the financial benefit of the royalties that came from working with these other synthesizer manufacturers, E-mu designed the Audity, its first non-modular synthesizer, showing it at the 1980 AES Convention. With a price of $69,200, only one machine was produced. At that same convention, Wedge and Rossum saw the Fairlight CMI and the Linn LM-1. Recognizing the trend toward digital samplers, they realized that E-mu had the technology to bring a lower-priced sampler to market; the Emulator debuted in 1981 at a list price of $7,900, far less than the $30,000 Fairlight. Following the Emulator, E-mu released the first programmable drum machine with built-in samples priced below $1,000, the E-mu Drumulator; the Drumulator's success was followed by the Emulator II and III, the SP-12 drum sampler and the Emax series of samplers. In 1989, E-mu introduced the Proteus, a rackmount sound module containing pre-recorded samples in ROM.
At its introduction, the Proteus offered a large library of high-quality samples priced much lower than the competition. The success of the Proteus spurred the development of several additional versions, including the Proteus XR, an orchestral version and a world music version. In 1987, E-mu's SP-1200 drum sampler offered an "all-in-one" box for sequencing not only drum sounds but also looped samples; it became the instrument of choice for hip hop producers. In 1993, E-mu began working on PC soundcard synthesis; the Creative Wave Blaster II and Sound Blaster AWE32 used the EMU8000 effects processor. Throughout the 1990s, E-mu made many different sound modules along the lines of the Proteus series. E-mu also made an unsuccessful attempt at breaking into the digital multitrack recorder market with the Darwin hard-disk recording system. In 1998, E-mu was combined with Ensoniq, another synthesizer and sampler manufacturer acquired by Creative Technology. In 2001, E-mu's sound modules were repackaged in the form of a line of tabletop units, the XL7 and MP7 Command Stations, each featuring 128-voice polyphony, advanced synthesis features and a versatile multitrack sequencer.
A complementary line of keyboard synthesizers was released using the same technology. Subsequent products from E-mu were in software form. In 2004, E-mu released the Emulator X, a PC-based version of its hardware samplers with extended synthesis capabilities. While a PCI card is used for audio input and output, the algorithms no longer run on dedicated hardware but in software on the PC. Proteus X, a software-based sample player, was released in 2005. During 2003-2007, E-mu designed and published a series of high-fidelity "Digital Audio Systems" intended for professional, semi-professional and computer audio enthusiast use; they were released under the name E-MU. The card names are number-coded for the number of physical inputs and outputs: 0404, 1212m, 1616, 1616m, 1820 and 1820m, where the 1616 is a CardBus version and the rest are PCI versions, while "m" denotes extra-high-quality analogue inputs and outputs; the 1820m was touted as the series' flagship product until the 1616m was released. All of the cards had drivers for Microsoft Windows 2000 and the Windows versions that were current at the time of the respective products' release.
Only a beta version driver was released for Windows 7. Apple Macintosh support appeared to be pending, but may have been affected by Apple's migration to Intel processors. While the core DSP chip of the cards is the same one designed by E-MU and used in Creative's Sound Blaster Audigy2 cards, official press releases for the E-MU sound cards emphasized Creative's lack of input on the design and the in-house development of the cards and drivers; that is, they wanted to distinguish their "own" series from Creative's signature Sound Blasters. Notably, the cards and drivers omit internal "wavetable" sample-based MIDI synthesis, Creative's proprietary EAX sound routines and anything else associated with the parent company. Although the cards were rushed to market and came bundled with raw drivers, they met with rather favourable reviews.

1973 - E-mu Modular System
1980 - Audity
1981 - Emulato
IBM PC compatible
IBM PC compatible computers are computers similar to the original IBM PC, XT and AT, able to use the same software and expansion cards. Such computers used to be referred to as PC clones, or IBM clones; they duplicate exactly all the significant features of the PC architecture, facilitated by IBM's choice of commodity hardware components and by various manufacturers' ability to reverse engineer the BIOS firmware using a "clean room design" technique. Columbia Data Products built the first clone of the IBM personal computer by a clean room implementation of its BIOS. Early IBM PC compatibles used the same computer bus as the PC and AT models; the IBM AT-compatible bus was named the Industry Standard Architecture (ISA) bus by manufacturers of compatible computers. The term "IBM PC compatible" is now a historical description only, since IBM has ended its personal computer sales. Descendants of the IBM PC compatibles comprise the majority of personal computers on the market today, with Microsoft Windows the dominant operating system, although interoperability with the bus structure and peripherals of the original PC architecture may be limited or non-existent.
Some computers ran MS-DOS but had enough hardware differences that IBM-compatible software could not be used. Only the Macintosh kept significant market share without compatibility with the IBM PC. IBM decided in 1980 to market a low-cost single-user computer as quickly as possible in response to Apple Computer's success in the burgeoning microcomputer market. On 12 August 1981, the first IBM PC went on sale. There were three operating systems available for it; the least expensive and most popular was PC DOS, made by Microsoft. In a crucial concession, IBM's agreement allowed Microsoft to sell its own version, MS-DOS, for non-IBM computers. The only component of the original PC architecture exclusive to IBM was the BIOS. IBM at first asked developers to avoid writing software that addressed the computer's hardware directly and to instead make standard calls to BIOS functions that carried out hardware-dependent operations; such software would run on any machine using MS-DOS or PC DOS. Software that directly addressed the hardware instead of making standard calls was, however, faster.
Software addressing IBM PC hardware in this way would not run on MS-DOS machines with different hardware. The IBM PC was sold in high enough volumes to justify writing software for it, and this encouraged other manufacturers to produce machines which could use the same programs, expansion cards and peripherals as the PC; the 808x computer marketplace came to exclude all machines which were not hardware- and software-compatible with the PC. The 640 KB barrier on "conventional" system memory available to MS-DOS is a legacy of that period. Rumors of "lookalike" compatible computers, created without IBM's approval, began immediately after the IBM PC's release. InfoWorld wrote on the first anniversary of the IBM PC that "The dark side of an open system is its imitators. If the specs are clear enough for you to design peripherals, they are clear enough for you to design imitations. Apple... has patents on two important components of its systems... IBM, which has no special patents on the PC, is more vulnerable. Numerous PC-compatible machines—the grapevine says 60 or more—have begun to appear in the marketplace."
By June 1983, PC Magazine defined "PC 'clone'" as "a computer that can accommodate the user who takes a disk home from an IBM PC, walks across the room, plugs it into the 'foreign' machine". Because of a shortage of IBM PCs that year, many customers purchased clones instead. Columbia Data Products produced the first computer more or less compatible with the IBM PC standard in June 1982, soon followed by Eagle Computer. Compaq announced its first IBM PC compatible, the Compaq Portable; the Compaq was the first sewing-machine-sized portable computer that was 100% PC-compatible. The company could not copy the BIOS directly as a result of the court decision in Apple v. Franklin, but it could reverse engineer the IBM BIOS and then write its own BIOS using clean room design. At the same time, many manufacturers such as Tandy/RadioShack, Hewlett-Packard, Digital Equipment Corporation, Texas Instruments, Tulip and Olivetti introduced personal computers that supported MS-DOS but were not software- or hardware-compatible with the IBM PC.
Tandy described the Tandy 2000, for example, as having a "'next generation' true 16-bit CPU", with "More speed. More disk storage. More expansion" than the IBM PC or "other MS-DOS computers". While admitting in 1984 that many MS-DOS programs did not support the computer, the company stated that "the most popular, sophisticated software on the market" was available either immediately or "over the next six months". Like IBM, Microsoft intended that application writers would write to the application programming interfaces in MS-DOS or the firmware BIOS, and that this would form what would now be termed a hardware abstraction layer; each computer would have its own Original Equipment Manufacturer (OEM) version of MS-DOS, customized to its hardware. Any software written for MS-DOS would then operate on any MS-DOS computer, despite variations in hardware design; this expectation seemed reasonable in the computer marketplace of the time. Until then, Microsoft's business was based on computer languages such as BASIC; the established small-system operating software was CP/M from Digital Research, in use both at the hobbyist level and by the more professional of t
Sound recording and reproduction
Sound recording and reproduction is the electrical, electronic, or digital inscription and re-creation of sound waves, such as spoken voice, instrumental music, or sound effects. The two main classes of sound recording technology are analog recording and digital recording. Acoustic analog recording is achieved by a microphone diaphragm that senses changes in atmospheric pressure caused by acoustic sound waves and records them as a mechanical representation of the sound waves on a medium such as a phonograph record. In magnetic tape recording, the sound waves vibrate the microphone diaphragm and are converted into a varying electric current, which is then converted to a varying magnetic field by an electromagnet, making a representation of the sound as magnetized areas on a plastic tape with a magnetic coating. Analog sound reproduction is the reverse process, with a larger loudspeaker diaphragm causing changes in atmospheric pressure to form acoustic sound waves. Digital recording and reproduction converts the analog sound signal picked up by the microphone to a digital form by the process of sampling.
This lets the audio data be transmitted by a wider variety of media. Digital recording stores audio as a series of binary numbers representing samples of the amplitude of the audio signal at equal time intervals, at a sample rate high enough to convey all sounds capable of being heard. A digital audio signal must be reconverted to analog form during playback before it is amplified and sent to a loudspeaker to produce sound. Prior to the development of sound recording, there were mechanical systems, such as wind-up music boxes and player pianos, for encoding and reproducing instrumental music. Long before sound was first recorded, music was recorded — first by written music notation, then also by mechanical devices. Automatic music reproduction traces back as far as the 9th century, when the Banū Mūsā brothers invented the earliest known mechanical musical instrument, in this case a hydropowered organ that played interchangeable cylinders. According to Charles B. Fowler, this "...cylinder with raised pins on the surface remained the basic device to produce and reproduce music mechanically until the second half of the nineteenth century."
The Banū Mūsā brothers also invented an automatic flute player, which appears to have been the first programmable machine. Carvings in the Rosslyn Chapel from the 1560s may represent an early attempt to record in stone the Chladni patterns produced by sound, although this theory has not been conclusively proved. In the 14th century, a mechanical bell-ringer controlled by a rotating cylinder was introduced in Flanders. Similar designs appeared in barrel organs, musical clocks, barrel pianos and music boxes. A music box is an automatic musical instrument that produces sounds by the use of a set of pins placed on a revolving cylinder or disc so as to pluck the tuned teeth of a steel comb. The fairground organ, developed in 1892, used a system of accordion-folded punched cardboard books. The player piano, first demonstrated in 1876, used a punched paper scroll that could store a long piece of music; the most sophisticated of the piano rolls were hand-played, meaning that the roll represented the actual performance of an individual, not just a transcription of the sheet music.
The technology to record a live performance onto a piano roll was not developed until 1904. Piano rolls were in continuous mass production from 1896 to 2008. A 1908 U.S. Supreme Court copyright case noted that, in 1902 alone, there were between 70,000 and 75,000 player pianos manufactured and between 1,000,000 and 1,500,000 piano rolls produced. The first device that could record actual sounds as they passed through the air was the phonautograph, patented in 1857 by Parisian inventor Édouard-Léon Scott de Martinville. The earliest known recordings of the human voice are phonautograph recordings, called phonautograms, made in 1857; they consist of sheets of paper with sound-wave-modulated white lines created by a vibrating stylus that cut through a coating of soot as the paper was passed under it. An 1860 phonautogram of Au Clair de la Lune, a French folk song, was played back as sound for the first time in 2008 by scanning it and using software to convert the undulating line, which graphically encoded the sound, into a corresponding digital audio file.
On April 30, 1877, French poet, humorous writer and inventor Charles Cros submitted a sealed envelope containing a letter to the Academy of Sciences in Paris explaining his proposed method, called the paleophone. Though no trace of a working paleophone was ever found, Cros is remembered as the earliest inventor of a sound recording and reproduction machine. The first practical sound recording and reproduction device was the mechanical phonograph cylinder, invented by Thomas Edison in 1877 and patented in 1878. The invention soon spread across the globe, and over the next two decades the commercial recording and sale of sound recordings became a growing new international industry, with the most popular titles selling millions of units by the early 1900s. The development of mass-production techniques enabled cylinder recordings to become a major new consumer item in industrial countries, and the cylinder was the main consumer format from the late 1880s until around 1910. The next major technical development was the invention of the gramophone record, credited to Emile Berliner and patented in 1887, though others had demonstrated similar devices earlier.
A sound card is an internal expansion card that provides input and output of audio signals to and from a computer under the control of computer programs. The term sound card is also applied to external audio interfaces used for professional audio applications. Sound functionality can instead be integrated onto the motherboard, using components similar to those found on plug-in cards; such an integrated sound system is often still referred to as a sound card. Sound processing hardware is also present on modern video cards with HDMI, allowing them to output sound along with video over that connector. Typical uses of sound cards or sound card functionality include providing the audio component for multimedia applications such as music composition, video or audio editing, presentation, entertainment, and video projection. Sound cards are also used for computer-based communication such as voice over IP and teleconferencing. Sound cards use a digital-to-analog converter (DAC), which converts recorded or generated digital signal data into an analog format.
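As an illustration of the kind of digital signal data a sound card's DAC converts, the following minimal Python sketch generates one second of 16-bit PCM samples for a 440 Hz sine tone. The sample rate, frequency, and bit depth are illustrative assumptions, not details from the text:

```python
import math
import struct

SAMPLE_RATE = 44100  # samples per second (CD-quality; an illustrative choice)
FREQ = 440.0         # tone frequency in Hz (concert A)
AMPLITUDE = 32767    # full scale for signed 16-bit PCM

# One second of 16-bit signed samples for a sine wave.
samples = [
    int(AMPLITUDE * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE))
    for n in range(SAMPLE_RATE)
]

# Pack into little-endian 16-bit PCM bytes: the raw stream a DAC consumes.
pcm_bytes = struct.pack("<%dh" % len(samples), *samples)
print(len(pcm_bytes))  # 88200 bytes: 44100 samples * 2 bytes each
```

Writing this byte stream to an audio device (or a WAV container) is exactly the playback path the DAC sits at the end of.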
The output signal is connected to an amplifier, headphones, or an external device using standard interconnects such as a TRS phone connector. If the number and size of the connectors is too large for the space on the backplate, the connectors are moved off-board using a breakout box, an auxiliary backplate, or a panel mounted at the front. Some cards include a sound chip to support production of synthesized sounds for real-time generation of music and sound effects using minimal data and CPU time. A common external connector is the microphone connector, for signals from a microphone or other low-level input device. Input through a microphone jack can be used, for example, by speech recognition or voice over IP applications. Most sound cards have a line-in connector for analog input from a cassette tape or other sound source that has higher voltage levels than a microphone. In either case, the sound card uses an analog-to-digital converter (ADC) to digitize the signal. The card may use direct memory access to transfer the samples to main memory, from where recording software can write them to the hard disk for storage, editing, or further processing.
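The digitization step performed by the ADC can be sketched in a few lines. This hypothetical Python snippet quantizes normalized analog sample values into the signed 16-bit integers an ADC might deliver; the sample values and bit depth are illustrative assumptions:

```python
def quantize_16bit(x):
    """Quantize an analog sample in [-1.0, 1.0] to signed 16-bit PCM,
    clipping out-of-range input as a real ADC would at full scale."""
    x = max(-1.0, min(1.0, x))  # clip to the converter's input range
    return int(round(x * 32767))

# Made-up normalized analog voltages; the last two overdrive the input.
analog = [0.0, 0.5, -0.5, 1.2, -1.2]
digital = [quantize_16bit(v) for v in analog]
print(digital)  # [0, 16384, -16384, 32767, -32767]
```

The resulting integers are what DMA would then copy into main memory for recording software to process.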
An important sound card characteristic is polyphony, which refers to its ability to process and output multiple independent voices or sounds simultaneously. Distinct channels, by contrast, refer to the number of audio outputs, which may correspond to a speaker configuration such as 2.0 (stereo), 2.1 (stereo plus subwoofer), 5.1 (surround), or another configuration. Sometimes the terms voice and channel are used interchangeably to indicate the degree of polyphony, not the output speaker configuration. For example, many older sound chips could accommodate three voices but only one audio channel for output, requiring all voices to be mixed together. Later cards, such as the AdLib sound card, had 9-voice polyphony combined into one mono output channel. For some years most PC sound cards provided multiple FM-synthesis voices that were used for MIDI music, and the full capabilities of more advanced cards often went unused. Modern low-cost integrated sound cards, such as audio codecs meeting the AC'97 standard, and some lower-cost expansion sound cards still work this way.
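The voices-versus-channels distinction can be made concrete: a three-voice chip with one output channel must sum its voices into a single stream. Here is a hypothetical Python sketch of that mix-down (the voice buffers and 16-bit clipping range are illustrative assumptions):

```python
def mix_voices(voices):
    """Mix several per-voice sample buffers into one mono channel by
    summing sample-by-sample and clipping to the signed 16-bit range."""
    mixed = []
    for frame in zip(*voices):
        s = sum(frame)
        s = max(-32768, min(32767, s))  # clip instead of wrapping around
        mixed.append(s)
    return mixed

# Three independently rendered voices, three samples each (made-up values).
voice_a = [1000, 2000, 3000]
voice_b = [500, -500, 500]
voice_c = [0, 30000, 30000]
print(mix_voices([voice_a, voice_b, voice_c]))  # [1500, 31500, 32767]
```

Whether this summing happens in dedicated hardware or on the CPU is exactly what separates hardware polyphony from the software approach described next.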
These devices may provide more than two sound output channels, but they have no actual hardware polyphony for either sound effects or MIDI reproduction; these tasks are performed in software. This is similar to the way inexpensive softmodems perform modem tasks in software rather than in hardware. In the early days of 'wavetable' sample-based synthesis, some sound card manufacturers advertised polyphony based on their MIDI capabilities alone. In this case, the card's number of output channels is irrelevant; instead, the polyphony measurement applies to the number of MIDI instruments the sound card is capable of producing at one given time. Today, a sound card providing actual hardware polyphony, regardless of the number of output channels, is typically referred to as a "hardware audio accelerator", although actual voice polyphony is not the sole prerequisite: other aspects, such as hardware acceleration of 3D sound, positional audio, and real-time DSP effects, are more important. Since digital sound playback became available and provided better performance than synthesis, modern sound cards with hardware polyphony do not use DACs with as many channels as voices.
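A synthesizer with a fixed polyphony limit, whether in hardware or software, must decide what to do when more notes arrive than it has voices. One common approach (an assumption for illustration, not something the text specifies) is voice stealing, where the oldest note is dropped to make room:

```python
from collections import deque

class VoiceAllocator:
    """Track active notes under a fixed polyphony limit, stealing the
    oldest voice when a new note arrives and all slots are in use."""

    def __init__(self, max_voices):
        self.max_voices = max_voices
        self.active = deque()  # oldest note at the left

    def note_on(self, note):
        if len(self.active) >= self.max_voices:
            self.active.popleft()  # steal the oldest voice
        self.active.append(note)

alloc = VoiceAllocator(max_voices=3)
for note in [60, 64, 67, 72]:  # four MIDI note numbers on a 3-voice device
    alloc.note_on(note)
print(list(alloc.active))  # [64, 67, 72]: note 60 was stolen
```

A software synthesizer runs this bookkeeping (and the voice rendering itself) on the CPU, which is precisely why such devices report polyphony without any hardware voices.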
Instead, the final playback stage is performed by an external DAC with fewer channels than voices. The Tandy 1000 and the PCjr used the same sound chip, but the Tandy 1000 utilised the chip's Audio IN pin, whereas the PCjr did not; this allowed the Tandy to produce PC speaker sound at the same time as the SN76489. Connectors on sound cards are color-coded as per the PC System Design Guide. They also carry symbols with arrows and sound waves associated with each jack position; the meaning of each is given below.