Sound recording and reproduction
Sound recording and reproduction is the electrical, electronic, or digital inscription and re-creation of sound waves, such as spoken voice, instrumental music, or sound effects. The two main classes of sound recording technology are analog recording and digital recording. Acoustic analog recording is achieved by a microphone diaphragm that senses changes in atmospheric pressure caused by acoustic sound waves and records them as a mechanical representation of the sound waves on a medium such as a phonograph record. In magnetic tape recording, the sound waves vibrate the microphone diaphragm and are converted into a varying electric current, which is in turn converted to a varying magnetic field by an electromagnet; this makes a representation of the sound as magnetized areas on a plastic tape with a magnetic coating. Analog sound reproduction is the reverse process, with a larger loudspeaker diaphragm causing changes in atmospheric pressure to form acoustic sound waves. Digital recording and reproduction converts the analog sound signal picked up by the microphone to a digital form by the process of sampling.
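The sampling process just described measures the microphone's continuous signal at equal time intervals and quantizes each measurement to an integer. A minimal sketch in Python (the function and variable names here are illustrative, not part of any standard library):

```python
import math

def sample_signal(signal, sample_rate_hz, duration_s, bit_depth=16):
    """Sample a continuous signal (a function of time in seconds) at
    regular intervals and quantize each sample to a signed integer."""
    max_code = 2 ** (bit_depth - 1) - 1          # 32767 for 16-bit audio
    n_samples = int(sample_rate_hz * duration_s)
    samples = []
    for n in range(n_samples):
        t = n / sample_rate_hz                   # equal time intervals
        value = signal(t)                        # amplitude in [-1.0, 1.0]
        samples.append(round(value * max_code))  # quantize to an integer
    return samples

# A 1 kHz sine tone sampled at the CD rate of 44,100 Hz
tone = lambda t: math.sin(2 * math.pi * 1000 * t)
pcm = sample_signal(tone, 44_100, 0.01)
print(len(pcm))   # 441 samples for 10 ms of audio
```

Playback reverses the process: the integer samples drive a digital-to-analog converter, which reconstructs the smooth waveform.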
This lets the audio data be transmitted by a wider variety of media. Digital recording stores audio as a series of binary numbers representing samples of the amplitude of the audio signal at equal time intervals, at a sample rate high enough to convey all sounds capable of being heard. A digital audio signal must be reconverted to analog form during playback before it is amplified and connected to a loudspeaker to produce sound. Prior to the development of sound recording, there were mechanical systems, such as wind-up music boxes and player pianos, for encoding and reproducing instrumental music. Long before sound was first recorded, music was recorded, first by written music notation and later also by mechanical devices. Automatic music reproduction traces back as far as the 9th century, when the Banū Mūsā brothers invented the earliest known mechanical musical instrument, in this case a hydropowered organ that played interchangeable cylinders. According to Charles B. Fowler, this "...cylinder with raised pins on the surface remained the basic device to produce and reproduce music mechanically until the second half of the nineteenth century."
The Banū Mūsā brothers also invented an automatic flute player, which appears to have been the first programmable machine. Carvings in the Rosslyn Chapel from the 1560s may represent an early attempt to record the Chladni patterns produced by sound in stone representations, although this theory has not been conclusively proved. In the 14th century, a mechanical bell-ringer controlled by a rotating cylinder was introduced in Flanders. Similar designs appeared in barrel organs, musical clocks, barrel pianos, and music boxes. A music box is an automatic musical instrument that produces sounds by the use of a set of pins placed on a revolving cylinder or disc so as to pluck the tuned teeth of a steel comb. The fairground organ, developed in 1892, used a system of accordion-folded punched cardboard books. The player piano, first demonstrated in 1876, used a punched paper scroll that could store a long piece of music; the most sophisticated of the piano rolls were hand-played, meaning that the roll represented the actual performance of an individual, not just a transcription of the sheet music.
The technology to record a live performance onto a piano roll was not developed until 1904. Piano rolls were in continuous mass production from 1896 to 2008. A 1908 U.S. Supreme Court copyright case noted that, in 1902 alone, there were between 70,000 and 75,000 player pianos manufactured and between 1,000,000 and 1,500,000 piano rolls produced. The first device that could record actual sounds as they passed through the air was the phonautograph, patented in 1857 by Parisian inventor Édouard-Léon Scott de Martinville. The earliest known recordings of the human voice are phonautograph recordings, called phonautograms, made in 1857. They consist of sheets of paper with sound-wave-modulated white lines created by a vibrating stylus that cut through a coating of soot as the paper was passed under it. An 1860 phonautogram of Au Clair de la Lune, a French folk song, was played back as sound for the first time in 2008 by scanning it and using software to convert the undulating line, which graphically encoded the sound, into a corresponding digital audio file.
On April 30, 1877, French poet, humorist, and inventor Charles Cros submitted a sealed envelope containing a letter to the Academy of Sciences in Paris explaining his proposed method, called the paleophone. Though no trace of a working paleophone was found, Cros is remembered as the earliest inventor of a sound recording and reproduction machine. The first practical sound recording and reproduction device was the mechanical phonograph cylinder, invented by Thomas Edison in 1877 and patented in 1878. The invention soon spread across the globe, and over the next two decades the commercial recording and sale of sound recordings became a growing new international industry, with the most popular titles selling millions of units by the early 1900s. The development of mass-production techniques enabled cylinder recordings to become a major new consumer item in industrial countries, and the cylinder was the main consumer format from the late 1880s until around 1910. The next major technical development was the invention of the gramophone record, credited to Emile Berliner and patented in 1887, though others had demonstrated similar devices earlier.
Audio magazine was a periodical published from 1947 to 2000 and was America's longest-running audio magazine. Audio published reviews of audio products and audio technology as well as informational articles on topics such as acoustics and the art of listening. Audio claimed to be the successor of Radio magazine, established in 1917. Audio began life in Mineola, New York in 1947 as Audio Engineering, for the purpose of publishing new developments in audio engineering. In 1948, the Audio Engineering Society was established, and in 1953 it began publishing its definitive, scholarly periodical, the Journal of the Audio Engineering Society. Audio Engineering magazine dropped the word "engineering" in 1954 and shifted to a more consumer- and hobbyist-oriented focus while retaining a serious scientific viewpoint. In 1966, Audio's headquarters were moved to Philadelphia and the periodical was printed by North American Publishing Company. In 1979, CBS acquired the magazine and moved operations to New York. CBS bought a group of magazines from Ziff-Davis, including sometime competitor Stereo Review, which soon found itself sharing office space with Audio.
In October 1987, Peter Diamandis led a management buyout of the CBS magazine division, with its 19 magazines, with $650 million of financing from Prudential Insurance. Diamandis Communications Inc. soon sold seven magazines for $243 million, and in April 1988 sold Audio and the rest of the magazines to Hachette Filipacchi Médias for $712 million. Peter Diamandis remained in control of the magazine group and in 1989 bought competing audio magazine High Fidelity, merging its subscription and advertiser lists with those of Stereo Review, firing High Fidelity's staff, and shutting down its printing. Audio's final appearance was the combined February/March issue in 2000. Hachette Filipacchi Media U.S. group publisher Tony Catalano told reporters that trouble in the high-performance audio sector led to the cancellation of the magazine. Sound & Vision, the successor to Stereo Review, would become the publishing group's sole magazine containing reviews of home audio equipment. Eugene "Gene" Pitts III served for more than 22 years as Audio's editor before being replaced in 1995 by Michael Riggs, executive editor of Stereo Review and former editor of High Fidelity. Riggs was joined in 1999 by Corey Greenberg in an eleventh-hour attempt to revive sagging advertising revenues.
Pitts went on to buy The Audiophile Voice in 1995 from The Audiophile Society, a club in the tri-state area around New York City. Audio magazine was known for its equipment reviews, which were unusual for their concentration on objective measurement and specifications rather than subjective opinion. Audio's contributors included respected audio engineers, many active in the AES. Harry F. Olson, Howard A. Chinn, John K. Hilliard, Harvey Fletcher, and Hermon Hosmer Scott, all AES Gold Medal awardees, were among the pioneering audio experts who took their discoveries to Audio's pages. Richard Heyser, inventor of time delay spectrometry, wrote articles for Audio in the 1980s, including his column Audio's Rosetta Stone; he also reviewed loudspeakers during his short tenure. Don Keele followed Heyser. Don Davis, founder of Syn-Aud-Con, wrote occasional letters to the editor. Ken Pohlmann, digital audio author and educator, and David Clark, founder of the David Clark company, expert in unbiased double-blind test procedures, and originator of the ABX test, wrote articles for Audio.
In 1972, Robert W. "Bob" Carver wrote an article about his 700-watt amplifier design, the Phase Linear PL-700. Thereafter, Carver products were reviewed in the magazine. Bob Carver also wrote an article about his development of sonic holography, an experiment in psychoacoustics as applied to loudspeaker physics. In 1984, a column called Auricles appeared, providing purely subjective equipment reviews that did not include performance measurements or emphasize specifications. New contributors who were not engineers were invited to review audio products. After a decade of Auricles, at least one observer characterized the change in editorial content as an indulgence in "fantasy".
"Audio" is a song by pop music group LSD. The song was released on 10 May 2018, marks the group's second release, following "Genius"; the song's music video was directed by Ernest Desumbila. It opens with Diplo finding a fresh LSD tape in the glovebox; as he hits play on “Audio”, the clip jumps to a young girl walking home from school, who encounters an animated Sia balloon floating through the air. The girl takes the balloon with her to a sprawling parking lot, where a dance routine breaks out once the "Audio" chorus drops. More psychedelic animations fill the scenery as Diplo speeds through the parched Los Angeles riverbed and Labrinth wanders the otherwise empty city. Credits adapted from Tidal. Diplo – production, programming Labrinth – production, programming King Henry – production, programming Jr Blender – programming, co-production Gustave Rudman – production Manny Marroquin – mix engineering Chris Galland – mix engineering Randy Merrill – master engineering Bart Schoudel – engineering Robin Florent – engineering assistance Scott Desmarais – engineering assistance Diplo discography Labrinth discography Sia discography
Audio is the debut album by Blue Man Group, released on December 7, 1999, through Virgin Records. The album was nominated for the Grammy Award for Best Pop Instrumental Album. It was released in two versions: a DVD with 5.1-channel versions of the music in both DVD-Video and DVD-Audio formats, and a CD with a 2-channel stereo mix of each track. A behind-the-scenes video of the album is viewable on a promotional 2000 VHS known as Audio Video; this video is included as a bonus on the Audio 5.1 Surround Sound DVD. Heather Phares of Allmusic.com rated Audio three out of five stars. She explained that it "reflects over a decade's worth of musical and theatrical innovation." Although she stated that "the spectacle of the group playing its sculptural, surreal-looking instruments is absent from the album," she concluded her review by calling it "an album that proves the Blue Man Group is as innovative in the studio as it is onstage."
Personnel:
Producer – Todd Perlmutter
Engineer – Andrew Schneider
Mastered by – Bob Ludwig
Mixed by – Mike Fraser
Phil Stanton – Performer, Drums, Timpani
Matt Goldman – Performer, Cimbalom, Gong, Drums
Chris Wink – Performer, Cimbalom, Drums, Cuica
Larry Heinemann – Chapman Stick, Guitar, Cuica
Ian Pai – Drums, Percussion
Christian Dyas – Zither, Guitar, Electronics
Todd Perlmutter – Percussion, Drums
Jamie Edwards – Performer
Chris Bowen – Drums
Clem Waldman – Drums
Cräg Rodriguez – Drums, Percussion
Jeff Quay – Drums
Byron Estep – Guitar
John Kimbrough – Guitar
Bradford Reed – Zither
David Corter – Zither
Elvis Lederer – Zither
Jens Fischer – Zither
Radio broadcasting is transmission by radio waves intended to reach a wide audience. Stations can be linked in radio networks to broadcast a common radio format, either in broadcast syndication or simulcast or both; the signal types can be either analog or digital audio. The earliest radio stations did not carry audio. For audio broadcasts to be possible, electronic detection and amplification devices had to be incorporated. The thermionic valve was invented in 1904 by the English physicist John Ambrose Fleming. He developed a device he called an "oscillation valve"; its heated filament, or cathode, was capable of thermionic emission of electrons that would flow to the plate when it was at a higher voltage. Electrons could not pass in the reverse direction because the plate was not heated and thus not capable of thermionic emission of electrons. Known as the Fleming valve, it could be used as a rectifier of alternating current and as a radio wave detector. This improved on the crystal set, which rectified the radio signal using an early solid-state diode based on a crystal and a so-called cat's whisker.
However, what was still required was an amplifier. The triode was patented on March 4, 1906, by the Austrian Robert von Lieben; independently, on October 25, 1906, Lee De Forest patented his three-element Audion. It was not put to practical use until 1912, when its amplifying ability became recognized by researchers. By about 1920, valve technology had matured to the point where radio broadcasting was becoming viable. However, an early audio transmission that could be termed a broadcast may have occurred on Christmas Eve in 1906 by Reginald Fessenden, although this is disputed. While many early experimenters attempted to create systems similar to radiotelephone devices by which only two parties were meant to communicate, there were others who intended to transmit to larger audiences. Charles Herrold started broadcasting in California in 1909 and was carrying audio by the next year. In The Hague, the Netherlands, PCGG started broadcasting on November 6, 1919, making it, arguably, the first commercial broadcasting station.
In 1916, Frank Conrad, an electrical engineer employed at the Westinghouse Electric Corporation, began broadcasting from his Wilkinsburg, Pennsylvania garage with the call letters 8XK. The station was later moved to the top of the Westinghouse factory building in East Pittsburgh, Pennsylvania. Westinghouse relaunched the station as KDKA on November 2, 1920, as the first commercially licensed radio station in America; the commercial broadcasting designation came from the type of broadcast license. The first licensed broadcast in the United States came from KDKA itself: the results of the Harding/Cox presidential election. The Montreal station that became CFCF began broadcast programming on May 20, 1920, and the Detroit station that became WWJ began program broadcasts on August 20, 1920, although neither held a license at the time. In 1920, wireless broadcasts for entertainment began in the UK from the Marconi Research Centre 2MT at Writtle near Chelmsford, England. A famous broadcast from Marconi's New Street Works factory in Chelmsford was made by the famous soprano Dame Nellie Melba on 15 June 1920, in which she sang two arias and her famous trill.
She was the first artist of international renown to participate in direct radio broadcasts. The 2MT station began to broadcast regular entertainment in 1922. The BBC was formed in 1922 and received a Royal Charter in 1926, making it the first national broadcaster in the world, followed by Czech Radio and other European broadcasters in 1923. Radio Argentina began scheduled transmissions from the Teatro Coliseo in Buenos Aires on August 27, 1920, making its own priority claim; the station got its license on November 19, 1923. The delay was due to the lack of official Argentine licensing procedures before that date. This station continued regular broadcasting of entertainment and cultural fare for several decades. Radio in education soon followed, and colleges across the U.S. began adding radio broadcasting courses to their curricula. Curry College in Milton, Massachusetts introduced one of the first broadcasting majors in 1932 when the college teamed up with WLOE in Boston to have students broadcast programs.
Broadcasting service is, according to Article 1.38 of the International Telecommunication Union's Radio Regulations, defined as «A radiocommunication service in which the transmissions are intended for direct reception by the general public. This service may include sound transmissions, television transmissions or other types of transmission.» A radio broadcasting station is associated with wireless transmission, though in practice broadcast transmissions take place using both wires and radio waves; the point of this is that anyone with the appropriate receiving technology can receive the broadcast. In line with the ITU Radio Regulations, each broadcasting station shall be classified by the service in which it operates permanently or temporarily. Broadcasting by radio takes several forms; these include AM and FM stations. There are several subtypes, namely commercial broadcasting, non-commercial educational public broadcasting, and non-profit varieties, as well as community radio and student-run campus radio stations.
Digital audio is sound that has been recorded in, or converted into, digital form. In digital audio, the sound wave of the audio signal is encoded as numerical samples in a continuous sequence. For example, in CD audio, samples are taken 44,100 times per second, each with 16-bit sample depth. Digital audio is also the name for the entire technology of sound recording and reproduction using audio signals that have been encoded in digital form. Following significant advances in digital audio technology during the 1970s, it replaced analog audio technology in many areas of audio engineering and telecommunications in the 1990s and 2000s. In a digital audio system, an analog electrical signal representing the sound is converted with an analog-to-digital converter into a digital signal using pulse-code modulation; this digital signal can be recorded, edited, and copied using computers, audio playback machines, and other digital tools. When the sound engineer wishes to listen to the recording on headphones or loudspeakers, a digital-to-analog converter performs the reverse process, converting the digital signal back into an analog signal, which is then sent through an audio power amplifier to a loudspeaker.
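The CD figures above fix both the number of representable amplitude levels and the raw data rate, which follow directly from the sample rate and bit depth. A small sketch of the arithmetic (Python is used here purely for illustration):

```python
import math

bit_depth = 16                # bits per sample on a CD
sample_rate = 44_100          # samples per second, per channel
channels = 2                  # stereo

levels = 2 ** bit_depth       # 65,536 distinct amplitude values per sample
dynamic_range_db = 20 * math.log10(levels)   # about 96 dB for 16 bits

# Raw (uncompressed) PCM data rate for CD audio
bits_per_second = sample_rate * bit_depth * channels
print(levels, round(dynamic_range_db, 1), bits_per_second)
# 65536 96.3 1411200
```

The roughly 96 dB figure is why 16-bit CD audio comfortably exceeds the dynamic range of earlier analog consumer formats.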
Digital audio systems may include compression, storage, and transmission components. Conversion to a digital format allows convenient manipulation, storage, and retrieval of an audio signal. Unlike analog audio, in which making copies of a recording results in generation loss and degradation of signal quality, digital audio allows an infinite number of copies to be made without any degradation of signal quality. Digital audio technologies are used in the recording, mass production, and distribution of sound, including recordings of songs, instrumental pieces, sound effects, and other sounds. Modern online music distribution depends on digital recording and data compression; the availability of music as data files, rather than as physical objects, has reduced the costs of distribution. Before digital audio, the music industry distributed and sold music by selling physical copies in the form of records and cassette tapes. With digital audio and online distribution systems such as iTunes, companies sell digital sound files to consumers, which the consumer receives over the Internet.
An analog audio system converts physical waveforms of sound into electrical representations of those waveforms by use of a transducer, such as a microphone. The sounds are then stored on an analog medium such as magnetic tape, or transmitted through an analog medium such as a telephone line or radio. The process is reversed for reproduction: the electrical audio signal is amplified and converted back into physical waveforms via a loudspeaker. Analog audio retains its fundamental wave-like characteristics throughout its storage, transformation, and amplification. Analog audio signals are susceptible to noise and distortion, due to the innate characteristics of electronic circuits and associated devices. Disturbances in a digital system, by contrast, do not result in error unless the disturbance is so large as to result in a symbol being misinterpreted as another symbol or to disturb the sequence of symbols. It is therefore possible to have an error-free digital audio system in which no noise or distortion is introduced between conversion to digital format and conversion back to analog.
A digital audio signal may optionally be encoded for correction of any errors that might occur in the storage or transmission of the signal. This technique, known as channel coding, is essential for broadcast or recorded digital systems to maintain bit accuracy. Eight-to-fourteen modulation is a channel code used in the audio compact disc. A digital audio system starts with an ADC, which converts at a known bit resolution. CD audio, for example, has a sampling rate of 44.1 kHz and 16-bit resolution for each stereo channel. Analog signals that have not been bandlimited must be passed through an anti-aliasing filter before conversion, to prevent the aliasing distortion caused by audio signals with frequencies higher than the Nyquist frequency, which is half the sampling rate. A digital audio signal may be transmitted, or stored on a CD, a digital audio player, a hard drive, a USB flash drive, or any other digital data storage device. The digital signal may be altered through digital signal processing, where it may be filtered or have effects applied.
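To illustrate why the anti-aliasing filter matters: any frequency component above the Nyquist frequency (half the sample rate) is "folded" back into the representable band when sampled. A minimal sketch of that folding (the helper function here is illustrative, not a standard API):

```python
sample_rate = 44_100
nyquist = sample_rate / 2        # 22,050 Hz: highest representable frequency

def alias_of(freq_hz, sample_rate):
    """Frequency that freq_hz appears as after sampling, if it is not
    removed by an anti-aliasing filter beforehand."""
    f = freq_hz % sample_rate
    return f if f <= sample_rate / 2 else sample_rate - f

print(alias_of(30_000, sample_rate))  # a 30 kHz tone folds down to 14,100 Hz
print(alias_of(10_000, sample_rate))  # 10 kHz is below Nyquist: unchanged
```

This is why the filter must run before conversion: once the out-of-band energy has folded into the audible range, no later processing can distinguish it from a genuine in-band signal.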
Sample-rate conversion including upsampling and downsampling may be used to conform signals that have been encoded with a different sampling rate to a common sampling rate prior to processing. Audio data compression techniques, such as MP3, Advanced Audio Coding, Ogg Vorbis, or FLAC, are employed to reduce the file size. Digital audio can be carried over digital audio interfaces such as AES3 or MADI. Digital audio can be carried over a network using audio over Ethernet, audio over IP or other streaming media standards and systems. For playback, digital audio must be converted back to an analog signal with a DAC which may use oversampling. Pulse-code modulation was invented by British scientist Alec Reeves in 1937 and was used in telecommunications applications long before its first use in commercial broadcast and recording. Commercial digital recording was pioneered in Japan by NHK and Nippon Columbia and their Denon brand, in the 1960s; the first commercial digital recordings were released in 1971.
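The sample-rate conversion mentioned above can be sketched in its simplest integer-factor form. A real converter low-pass filters before decimating and uses far better interpolation than the linear scheme shown; the helper functions here are illustrative only:

```python
def downsample(samples, factor):
    """Integer-factor rate reduction by simple decimation.
    A real converter would low-pass filter first to avoid aliasing."""
    return samples[::factor]

def upsample(samples, factor):
    """Integer-factor rate increase by linear interpolation between samples."""
    out = []
    for a, b in zip(samples, samples[1:]):
        for k in range(factor):
            out.append(a + (b - a) * k / factor)
    out.append(samples[-1])
    return out

x = [0.0, 1.0, 0.0, -1.0, 0.0]      # a few samples of a waveform
print(downsample(x, 2))             # [0.0, 0.0, 0.0]
print(upsample(x, 2))               # roughly twice as many points, interpolated
```

Conforming several differently-encoded signals to one common rate like this is what allows them to be mixed and processed together.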
The BBC began to experiment with digital audio in the 1960s. By the early 1970s, it had developed a 2-channel recorder.
In physics, sound is a vibration that propagates as an audible wave of pressure through a transmission medium such as a gas, liquid, or solid. In human physiology and psychology, sound is the reception of such waves and their perception by the brain. Humans can only hear sound waves as distinct pitches when the frequency lies between about 20 Hz and 20 kHz. Sound waves above 20 kHz are known as ultrasound and are not perceptible by humans. Sound waves below 20 Hz are known as infrasound. Different animal species have varying hearing ranges. Acoustics is the interdisciplinary science that deals with the study of mechanical waves in gases, liquids, and solids, including vibration, sound, ultrasound, and infrasound. A scientist who works in the field of acoustics is an acoustician, while someone working in the field of acoustical engineering may be called an acoustical engineer. An audio engineer, on the other hand, is concerned with the recording, manipulation, and reproduction of sound. Applications of acoustics are found in all aspects of modern society; subdisciplines include aeroacoustics, audio signal processing, architectural acoustics, electroacoustics, environmental noise, musical acoustics, noise control, speech, underwater acoustics, and vibration.
Sound is defined as "(a) Oscillation in pressure, particle displacement, particle velocity, etc., propagated in a medium with internal forces, or the superposition of such propagated oscillation. (b) Auditory sensation evoked by the oscillation described in (a)." Sound can be viewed as a wave motion in air or other elastic media; in this case, sound is a stimulus. Sound can also be viewed as an excitation of the hearing mechanism that results in the perception of sound; in this case, sound is a sensation. Sound can propagate through a medium such as air, water, or solids as longitudinal waves, and as a transverse wave in solids. The sound waves are generated by a sound source, such as the vibrating diaphragm of a stereo speaker. The sound source creates vibrations in the surrounding medium; as the source continues to vibrate the medium, the vibrations propagate away from the source at the speed of sound, thus forming the sound wave. At a fixed distance from the source, the pressure and displacement of the medium vary in time.
At an instant in time, the pressure and displacement vary in space. Note that the particles of the medium do not travel with the sound wave; this is intuitively obvious for a solid, and the same is true for liquids and gases. During propagation, waves can be reflected, refracted, or attenuated by the medium. The behavior of sound propagation is generally affected by three things. First, a complex relationship between the density and pressure of the medium: this relationship, affected by temperature, determines the speed of sound within the medium. Second, motion of the medium itself: if the medium is moving, this movement may increase or decrease the absolute speed of the sound wave depending on the direction of the movement. For example, sound moving through wind will have its speed of propagation increased by the speed of the wind if the sound and wind are moving in the same direction; if the sound and wind are moving in opposite directions, the speed of the sound wave will be decreased by the speed of the wind. Third, the viscosity of the medium.
Medium viscosity determines the rate at which sound is attenuated. For many media, such as air or water, attenuation due to viscosity is negligible. When sound is moving through a medium that does not have constant physical properties, it may be refracted. The mechanical vibrations that can be interpreted as sound can travel through all forms of matter: gases, liquids, solids, and plasmas; the matter that supports the sound is called the medium. Sound cannot travel through a vacuum. Sound is transmitted through gases and liquids as longitudinal waves, also called compression waves; it requires a medium to propagate. Through solids, however, it can also be transmitted as transverse waves. Longitudinal sound waves are waves of alternating pressure deviations from the equilibrium pressure, causing local regions of compression and rarefaction, while transverse waves are waves of alternating shear stress at right angles to the direction of propagation. Sound waves may be "viewed" using parabolic mirrors and objects that produce sound. The energy carried by an oscillating sound wave converts back and forth between the potential energy of the extra compression or lateral displacement strain of the matter and the kinetic energy of the displacement velocity of particles of the medium.
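The first two factors above, temperature and motion of the medium, can be put into numbers. A common approximation for the speed of sound in dry air, plus the simple additive effect of wind, sketched in Python (the formula is a standard approximation; the function names are illustrative):

```python
import math

def speed_of_sound_air(temp_celsius):
    """Approximate speed of sound in dry air (m/s) as a function of
    temperature, using c = 331.3 * sqrt(1 + T/273.15)."""
    return 331.3 * math.sqrt(1 + temp_celsius / 273.15)

def effective_speed(c, wind_speed, same_direction):
    """Motion of the medium adds to or subtracts from the propagation speed."""
    return c + wind_speed if same_direction else c - wind_speed

c = speed_of_sound_air(20)                      # about 343 m/s at 20 degrees C
print(round(effective_speed(c, 10, True), 1))   # downwind: about 353 m/s
print(round(effective_speed(c, 10, False), 1))  # upwind: about 333 m/s
```

The same additive reasoning explains why thunder or distant traffic carries noticeably farther downwind than upwind.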
Although there are many complexities relating to the transmission of sounds, at the point of reception, sound is readily dividable into two simple elements: pressure and time. These fundamental elements form the basis of all sound waves, and they can be used to describe, in combination, every sound we hear. In order to understand a sound more fully, a complex wave is separated into its component parts, which are a combination of various sound wave frequencies. Sound waves are often simplified to a description in terms of sinusoidal plane waves, which are characterized by these generic properties: frequency, or its inverse, wavelength; amplitude, sound pressure, or intensity; speed of sound; and direction. Sound perceptible by humans has frequencies from about 20 Hz to 20,000 Hz.
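The frequency and wavelength listed among these properties are tied together by the speed of sound, lambda = c / f. A quick illustration in Python (the speed value is the usual room-temperature approximation):

```python
speed_of_sound = 343.0          # m/s in air at about 20 degrees C

def wavelength(frequency_hz):
    """Wavelength of a sinusoidal sound wave: lambda = c / f."""
    return speed_of_sound / frequency_hz

# The limits of human hearing span about three orders of magnitude in wavelength
print(round(wavelength(20), 2))      # 17.15 m at 20 Hz
print(round(wavelength(20_000), 3))  # 0.017 m (about 1.7 cm) at 20 kHz
```

This spread in wavelength, from room-sized to fingertip-sized, is why low frequencies bend around obstacles while high frequencies are easily blocked.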