iTunes is a media player, media library, online radio broadcaster, and mobile device management application developed by Apple Inc. It is used to play and organize digital downloads of music and video on personal computers running macOS, while the iTunes Store is also available on the iPhone, iPad, and iPod Touch. Application software for the iPhone, iPad, and iPod Touch can be downloaded from the App Store. iTunes 12.5 is the most recent major version, available for OS X 10.9.5 or later and Windows 7 or later; it was released on September 13, 2016. iTunes 12.2 added Apple Music to the application, along with the Beats 1 radio station. SoundJam MP, developed by Bill Kincaid and released by Casady & Greene in 1998, was renamed iTunes when Apple purchased it in 2000. Jeff Robbin and Dave Heller moved to Apple as part of the acquisition; they simplified SoundJam's user interface, added the ability to burn CDs, and removed its recording feature and skin support. On January 9, 2001, iTunes 1.0 was released at Macworld San Francisco. Originally a Mac OS 9-only application, iTunes began to support Mac OS X when version 2.0 was released nine months later, which added support for the original iPod.
Version 3 dropped Mac OS 9 support but added smart playlists. In April 2003, version 4.0 introduced the iTunes Store; in October, version 4.1 added support for Microsoft Windows 2000 and Windows XP. Version 4.7 was introduced at Macworld 2005 along with the new iPod Shuffle. Version 7.0 introduced gapless playback and Cover Flow in September 2006. In March 2007, iTunes 7.1 added support for Windows Vista; iTunes lacked support for 64-bit versions of Windows until the 7.6 update on January 16, 2008. iTunes is supported under any 64-bit version of Windows Vista, although the iTunes executable is still 32-bit. The 64-bit versions of Windows XP and Windows Server 2003 are not supported by Apple, but a workaround has been devised for both operating systems. Version 8.0 added Genius playlists and grid view. iTunes 9 added Home Sharing, enabling automatic updating of purchased items across other computers on the same subnet, and offered a new iTunes Store UI. Genius Mixes were added, as well as improved app synchronization abilities, and iTunes LPs came to the store, providing additional media with an album.
The groove usually starts near the periphery and ends near the center of the disc. The phonograph disc record was the primary medium used for music reproduction until late in the 20th century, having co-existed with the cylinder from the late 1880s. Records retained the largest market share even when new formats such as the compact cassette were mass-marketed; by the late 1980s, digital media, in the form of the compact disc, had gained a larger market share, and the vinyl record left the mainstream in 1991. The phonograph record has made a resurgence in the early 21st century: 9.2 million records were sold in the U.S. in 2014, and in the UK sales increased five-fold from 2009 to 2014. As of 2017, 48 record pressing facilities remain worldwide, 18 in the United States and 30 in other countries. The increased popularity of vinyl has led to investment in new pressing equipment, although only two producers of the lacquers used in mastering remain: Apollo Masters in California, USA, and MDC in Japan. Vinyl records may be scratched or warped if stored incorrectly, but if they are not exposed to heat or rough handling they can last for decades.
The large covers are valued by collectors and artists for the space given for visual expression. In the 2000s, the tracings made by Édouard-Léon Scott de Martinville's phonautograph were first scanned by audio engineers and digitally converted into audible sound. Phonautograms of singing and speech made by Scott in 1860 were played back as sound for the first time in 2008; along with a tuning fork tone and unintelligible snippets recorded as early as 1857, these are the earliest known recordings of sound. In 1877, Thomas Edison invented the phonograph; unlike the phonautograph, it was capable of both recording and reproducing sound. Despite the similarity of name, there is no evidence that Edison's phonograph was based on Scott's phonautograph. Edison first tried recording sound on a paper tape, with the idea of creating a telephone repeater analogous to the telegraph repeater he had been working on, before settling on tinfoil as the recording medium. The tinfoil was wrapped around a metal cylinder and a sound-vibrated stylus indented the tinfoil while the cylinder was rotated. The recording could be played back immediately. Edison also invented variations of the phonograph that used tape and disc formats.
A decade later, Edison developed a greatly improved phonograph that used a wax cylinder instead of a foil sheet. This proved to be both a better-sounding and a far more useful and durable device; the wax phonograph cylinder created the recorded sound market at the end of the 1880s and dominated it through the early years of the 20th century. Emile Berliner's earliest discs, first marketed in 1889, but only in Europe, were 12.5 cm in diameter; both the records and the machine were adequate only for use as a toy or curiosity, due to the limited sound quality.
An audio engineer works on the recording, manipulating the record using equalization and electronic effects, mixing, and reinforcement of sound. Audio engineers work on the technical aspects of recording: the placing of microphones, the setting of pre-amp knobs, and the physical recording of any project. Many audio engineers creatively use technologies to produce sound for film, television, electronic products, and computer games. Audio engineers set up, sound check, and do live sound mixing using an audio console, while research and development audio engineers invent new technologies and techniques to enhance the process and art of audio engineering. They might be referred to as acoustic engineers. Audio engineers in research and development usually possess a bachelor's degree, master's degree, or higher qualification in acoustics, computer science, or another engineering discipline. They might work in consultancy, specializing in architectural acoustics; alternatively, they might work in audio companies or other industries that need audio expertise.
Some positions, such as faculty posts, require a Doctor of Philosophy. In Germany a Toningenieur is an audio engineer who designs and repairs audio systems. The listed subdisciplines are based on PACS coding used by the Acoustical Society of America, with some revision. Audio engineers develop algorithms that allow the electronic manipulation of audio signals. These algorithms sit at the heart of audio production, driving effects such as reverberation; alternatively, they might carry out echo cancellation in applications such as Skype. Architectural acoustics is the science and engineering of achieving a good sound within a room. For audio engineers, architectural acoustics can be about achieving good speech intelligibility in a stadium or enhancing the quality of music in a theatre; architectural acoustic design is usually done by acoustic consultants. Electroacoustics is concerned with the design of headphones, loudspeakers, and sound reproduction systems; examples of electroacoustic design include portable electronic devices, sound systems in architectural acoustics, surround sound in movie theaters, and vehicle audio.
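A minimal sketch of one such audio algorithm is a feedback comb filter, a basic building block of artificial reverberation: each output sample adds a decayed copy of an earlier output sample, producing a train of echoes. The delay length and feedback gain below are illustrative assumptions, not values from any particular product.

```python
def comb_filter(samples, delay, feedback):
    """Feedback comb filter: each output sample adds a decayed copy
    of the output from `delay` samples earlier."""
    out = []
    for n, x in enumerate(samples):
        y = x + (feedback * out[n - delay] if n >= delay else 0.0)
        out.append(y)
    return out

# Feeding in a single impulse produces echoes decaying by the feedback gain.
echoes = comb_filter([1.0] + [0.0] * 9, delay=3, feedback=0.5)
```

Real reverberators combine several comb and all-pass filters with different delays so the echoes blend into a diffuse tail rather than a single repeating slap.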
Musical acoustics is concerned with researching and describing the science of music; in audio engineering, this includes the design of electronic instruments such as synthesizers, the human voice, computer analysis of audio, music therapy, and the perception and cognition of music. Psychoacoustics is the study of how humans respond to what they hear; at the heart of audio engineering are listeners, who are the final arbiters of whether a design is successful. The production, computer processing, and perception of speech is an important part of audio engineering: ensuring speech is transmitted intelligibly and with high quality, in rooms, through public address systems, and through mobile telephone systems are important areas of study. Producer and mixer Phil Ek has described audio engineering as the technical aspect of recording: the placing of microphones and the turning of pre-amp knobs.
In electronics, an analog-to-digital converter (ADC) is a system that converts an analog signal, such as a sound picked up by a microphone or light entering a digital camera, into a digital signal. Typically the digital output is a two's complement binary number that is proportional to the input. Due to the complexity and the need for precisely matched components, all but the most specialized ADCs are implemented as integrated circuits. A digital-to-analog converter performs the reverse function: it converts a digital signal into an analog signal. The conversion involves quantization of the input, so it necessarily introduces a small amount of error. Furthermore, instead of continuously performing the conversion, an ADC does the conversion periodically, sampling the input. The result is a sequence of values that have been converted from a continuous-time, continuous-amplitude analog signal to a discrete-time, discrete-amplitude digital signal. An ADC is defined by its bandwidth and its signal-to-noise ratio; the bandwidth of an ADC is characterized primarily by its sampling rate.
The dynamic range of an ADC is influenced by many factors, including the resolution and accuracy, aliasing, and jitter, and is often summarized in terms of its effective number of bits (ENOB). An ideal ADC has an ENOB equal to its resolution. ADCs are chosen to match the bandwidth and required signal-to-noise ratio of the signal to be quantized. If an ADC operates at a sampling rate greater than twice the bandwidth of the signal, then perfect reconstruction is possible given an ideal ADC. The presence of quantization error limits the dynamic range of even an ideal ADC; however, if the dynamic range of the ADC exceeds that of the input signal, the quantization error may be negligible in practice. The resolution of the converter indicates the number of discrete values it can produce over the range of analog values; it determines the magnitude of the quantization error and therefore the maximum possible average signal-to-noise ratio for an ideal ADC without the use of oversampling. The values are stored electronically in binary form, so the resolution is usually expressed in bits.
In consequence, the number of discrete values available, or levels, is assumed to be a power of two. For example, an ADC with a resolution of 8 bits can encode an analog input to one of 256 different levels, since 2^8 = 256. The values can represent the ranges from 0 to 255 or from −128 to 127. Resolution can also be defined electrically and expressed in volts: the minimum change in input voltage required to guarantee a change in the output code level is called the least significant bit (LSB) voltage.
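The relationships above can be checked numerically. The sketch below assumes a hypothetical ideal 8-bit converter with a 0 to 5 V input range; it computes the number of levels and the LSB voltage, and quantizes a sample voltage to its output code.

```python
def adc_properties(bits, full_scale_volts):
    """Return (number of levels, LSB voltage) for an ideal ADC."""
    levels = 2 ** bits               # number of discrete output codes
    lsb = full_scale_volts / levels  # smallest guaranteed step, in volts
    return levels, lsb

def quantize(voltage, bits, full_scale_volts):
    """Map an analog voltage to the nearest lower of 2**bits output codes."""
    levels = 2 ** bits
    code = int(voltage / full_scale_volts * levels)
    return min(code, levels - 1)     # clamp a full-scale input to the top code

levels, lsb = adc_properties(8, 5.0)  # 256 levels, about 19.5 mV per step
mid_code = quantize(2.5, 8, 5.0)      # mid-scale input lands on code 128
```

With the same 8 bits, an unsigned interpretation of the codes spans 0 to 255, while a two's complement interpretation spans −128 to 127, exactly as described above.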
A loudspeaker is an electroacoustic transducer, which converts an electrical audio signal into a corresponding sound. The most widely used type of speaker in the 2010s is the dynamic speaker, invented in 1925 by Edward W. Kellogg and Chester W. Rice. The dynamic speaker operates on the same basic principle as a dynamic microphone, but in reverse. Besides this most common method, there are several other technologies that can be used to convert an electrical signal into sound. The sound source must be amplified or strengthened with a power amplifier before the signal is sent to the speaker. Speakers are typically housed in an enclosure or speaker cabinet, which is often a rectangular or square box made of wood or sometimes plastic. The enclosure's materials and design play an important role in the quality of the sound. Where high-fidelity reproduction of sound is required, multiple loudspeaker transducers are often mounted in the same enclosure, each reproducing a part of the audible frequency range. In this case the individual speakers are referred to as drivers: drivers made for reproducing high audio frequencies are called tweeters, those for middle frequencies are called mid-range drivers, and those for low frequencies are called woofers.
Smaller loudspeakers are found in devices such as radios, portable audio players, and computers, while larger loudspeaker systems are used for music and for sound reinforcement in theatres and concerts. The term loudspeaker may refer to individual transducers or to complete speaker systems consisting of an enclosure containing one or more drivers. To adequately reproduce a range of frequencies with even coverage, most loudspeaker systems employ more than one driver, with individual drivers used to reproduce different frequency ranges. The drivers are named for the range they cover (subwoofers, woofers, mid-range speakers, tweeters), and the terms for different speaker drivers differ depending on the application. In two-way systems there is no mid-range driver, so the task of reproducing the mid-range sounds falls upon the woofer and tweeter. Home stereos use the designation tweeter for the high-frequency driver, while professional concert systems may designate them as HF or highs. When multiple drivers are used in a system, a filter network, called a crossover, separates the incoming signal into the frequency bands handled by each driver. Loudspeaker drivers of this kind are termed dynamic to distinguish them from earlier drivers, or from speakers using piezoelectric or electrostatic systems, or any of several other sorts.
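The crossover idea can be sketched with the simplest possible split: a one-pole low-pass filter tracks the slow (low-frequency) part of the signal for the woofer, and whatever is left over goes to the tweeter. This is a toy illustration, not the higher-order filter design used in real crossovers, and the smoothing coefficient `alpha` is an arbitrary assumption.

```python
def one_pole_split(samples, alpha):
    """Crude first-order crossover: a one-pole low-pass extracts the
    low band; the residual is the complementary high band.
    `alpha` (between 0 and 1) sets the crossover point."""
    low, high, state = [], [], 0.0
    for x in samples:
        state += alpha * (x - state)  # one-pole low-pass step
        low.append(state)
        high.append(x - state)        # complementary high band
    return low, high

# A sustained (DC-like) input migrates into the low band over time,
# and the two bands always sum back to the original signal.
low, high = one_pole_split([1.0, 1.0, 1.0, 1.0], alpha=0.5)
```

Because the two bands are exact complements, summing the woofer and tweeter outputs reconstructs the input, which is one desirable property real crossover designs also aim for.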
Johann Philipp Reis installed an electric loudspeaker in his telephone in 1861; it was capable of reproducing clear tones. Alexander Graham Bell patented his first electric loudspeaker as part of his telephone in 1876, which was followed in 1877 by an improved version from Ernst Siemens. In 1898, Horace Short patented a design for a loudspeaker driven by compressed air; he sold the rights to Charles Parsons.
In signal processing, data compression, source coding, or bit-rate reduction involves encoding information using fewer bits than the original representation. Compression can be lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy; no information is lost in lossless compression. Lossy compression reduces bits by removing unnecessary or less important information. The process of reducing the size of a data file is referred to as data compression; in the context of data transmission, it is called source coding, in opposition to channel coding. Compression is useful because it reduces the resources required to store and transmit data. Computational resources are consumed in the compression process and, usually, in the reversal of the process, so data compression is subject to a space–time complexity trade-off. Lossless data compression algorithms usually exploit statistical redundancy to represent data without losing any information, so that the process is reversible.
Lossless compression is possible because most real-world data exhibits statistical redundancy. For example, an image may have areas of color that do not change over several pixels; instead of coding "red pixel, red pixel, ...", the data may be encoded as "279 red pixels". This is a basic example of run-length encoding; there are many schemes to reduce file size by eliminating redundancy. The Lempel–Ziv (LZ) compression methods are among the most popular algorithms for lossless storage. DEFLATE is a variation on LZ optimized for decompression speed and compression ratio, but compression can be slow; DEFLATE is used in PKZIP and PNG, while LZW is used in GIF images. LZ methods use a table-based compression model where table entries are substituted for repeated strings of data; for most LZ methods, this table is generated dynamically from earlier data in the input, and the table itself is often Huffman encoded. Current LZ-based coding schemes that perform well are Brotli and LZX; LZX is used in Microsoft's CAB format. The best modern lossless compressors use probabilistic models, such as prediction by partial matching.
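The run-length idea described above can be written in a few lines. This is a toy illustration of the principle, not the encoding used by any particular file format: runs of identical symbols collapse into (symbol, count) pairs, and decoding expands them back, losing nothing.

```python
def run_length_encode(data):
    """Collapse runs of repeated symbols into (symbol, count) pairs."""
    encoded = []
    for symbol in data:
        if encoded and encoded[-1][0] == symbol:
            encoded[-1] = (symbol, encoded[-1][1] + 1)  # extend current run
        else:
            encoded.append((symbol, 1))                 # start a new run
    return encoded

def run_length_decode(encoded):
    """Expand (symbol, count) pairs back into the original sequence."""
    return [symbol for symbol, count in encoded for _ in range(count)]

# The "279 red pixels" example from the text:
pixels = ["red"] * 279 + ["blue"]
packed = run_length_encode(pixels)
```

Here 280 pixel values compress to just two pairs, and decoding recovers the input exactly, which is what makes the scheme lossless.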
The Burrows–Wheeler transform can also be viewed as a form of statistical modelling. The basic task of grammar-based codes is constructing a context-free grammar deriving a single string; Sequitur and Re-Pair are practical grammar compression algorithms for which software is publicly available. In a further refinement of probabilistic modelling, statistical estimates can be coupled with arithmetic coding, a more modern coding technique that uses the mathematical calculations of a finite-state machine to produce a string of encoded bits from a series of input data symbols.
An audio power amplifier is an electronic amplifier that strengthens low-power audio signals to a level suitable for driving loudspeakers.
This includes both amplifiers used in home audio systems and musical instrument amplifiers such as guitar amplifiers. The power amplifier is the final electronic stage in a typical audio playback chain before the signal is sent to the loudspeakers. The inputs can come from any number of sources, such as record players, CD players, and digital audio players. Most audio power amplifiers require these low-level inputs to be at line level. The audio amplifier was invented in 1909 by Lee De Forest when he invented the triode vacuum tube, a three-terminal device with a grid that can modulate the flow of electrons from the filament to the plate. The triode vacuum amplifier was used to make the first AM radio. Early audio power amplifiers were based on vacuum tubes, and some of these achieved notably high audio quality. Audio power amplifiers based on transistors became practical with the availability of inexpensive transistors in the late 1960s, and since the 1970s most modern audio amplifiers have been based on solid-state devices. Transistor-based amplifiers are lighter in weight, more reliable, and require less maintenance than tube amplifiers.
In the 2010s, there are still audio enthusiasts, audio engineers, and music producers who prefer tube-based amplifiers. Key design parameters for audio power amplifiers are frequency response, gain, noise, and distortion. These are interdependent: increasing gain often leads to increases in noise, and while negative feedback reduces the gain, it also reduces distortion. Most audio amplifiers are linear amplifiers operating in class AB. Until the 1970s, most amplifiers were tube amplifiers, which used vacuum tubes; during the 1970s, tube amps were increasingly replaced with transistor-based amplifiers, which were lighter in weight and more reliable. One alternative to a separate preamp is to simply use passive volume and switching controls, sometimes integrated into a power amplifier to form an integrated amplifier. The final stage of amplification, after the preamplifiers, is the output stage; for this reason, the design choices made around the output device or devices, such as the class of operation of the output devices, are often taken as the description of the whole power amplifier.
For example, a Class B amplifier will probably have just the power output devices operating in cutoff for half of each cycle, while the other devices operate in Class A. In a transformerless output stage, the devices are essentially in series with the supply and the output load. For some years following the introduction of solid-state amplifiers, their perceived sound did not have the excellent audio quality of the best valve amplifiers, and this led audiophiles to believe that "tube sound" or "valve sound" had an intrinsic quality due to the vacuum tube technology itself. In 1970, Matti Otala published a paper on the origin of a previously unobserved form of distortion, transient intermodulation distortion (TIM); TIM distortion was found to occur during very rapid increases in amplifier output voltage.
The field of computer music can trace its roots back to the origins of electronic music, and the very first experiments and innovations with electronic instruments at the turn of the 20th century. The world's first computer to play music was CSIRAC, which was designed and built by Trevor Pearcey; mathematician Geoff Hill programmed CSIRAC to play popular musical melodies from the very early 1950s. In 1951 it publicly played the Colonel Bogey March, of which no known recordings exist; however, CSIRAC played standard repertoire and was not used to extend musical thinking or composition practice, which is current computer-music practice. The oldest known recordings of computer-generated music were played by the Ferranti Mark 1 computer; the music program was written by Christopher Strachey. During a session recorded by the BBC, the machine managed to work its way through Baa Baa Black Sheep and God Save the King. A further major 1950s development was the origin of digital sound synthesis by computer: Max Mathews at Bell Laboratories developed the influential MUSIC I program and its descendants, further popularising computer music through a 1963 article in Science.
In Japan, experiments in computer music date back to 1962; this resulted in a piece entitled TOSBAC Suite, influenced by the Illiac Suite. Later Japanese computer music compositions include a piece by Kenjiro Ezaki presented during Osaka Expo '70; Ezaki published an article called "Contemporary Music and Computers" in 1970. Early computer-music programs typically did not run in real time: programs would run for hours or days, on multimillion-dollar computers, to generate a few minutes of music. One way around this was to use a hardware sequencer system, most notably the Roland MC-8 Microcomposer. In addition to the Yamaha DX7, the advent of inexpensive digital chips and microcomputers opened the door to real-time generation of computer music. By the early 1990s, the performance of microprocessor-based computers reached the point that real-time generation of music using more general programs and algorithms became possible. Interesting sounds must have a fluidity and changeability that allows them to remain fresh to the ear; advances in computing power and software for the manipulation of digital media have dramatically affected the way computer music is generated and performed.
Current-generation microcomputers are powerful enough to perform very sophisticated audio synthesis using a variety of algorithms. Computer-generated music is music composed by, or with the extensive aid of, a computer; there is also a genre of music that is organized by, and synthesized with, computers. Later, composers such as Gottfried Michael Koenig had computers generate the sounds of the composition as well as the score: Koenig produced algorithmic composition programs which were a generalisation of his own serial composition practice. This is not exactly similar to Xenakis's work, as he used mathematical abstractions; Koenig's software translated the calculation of mathematical equations into codes which represented musical notation.
Magnetic tape is a medium for magnetic recording, made of a thin, magnetizable coating on a long, narrow strip of plastic film. It was developed in Germany, based on magnetic wire recording. Devices that record and play back audio and video using magnetic tape are tape recorders and video tape recorders; a device that stores computer data on magnetic tape is a tape drive. Magnetic tape revolutionized broadcast and recording: when all radio was live, it allowed programming to be recorded, and at a time when gramophone records were recorded in one take, it allowed recordings to be made in multiple parts, which were mixed and edited with tolerable loss in quality. It was a key technology in early computer development, allowing unparalleled amounts of data to be mechanically created and stored for long periods. Nowadays, other technologies can perform the functions of magnetic tape, and in many cases these technologies are replacing tape. Despite this, innovation in the technology continues, with companies such as Sony still developing new tape formats. Over the years, magnetic tape made in the 1970s and 1980s can suffer from a type of deterioration called sticky-shed syndrome.
Caused by hydrolysis of the binder in the tape, it can render the tape unusable. The oxide side of a tape is the surface that can be magnetically manipulated by a tape head; this is the side that stores the information, while the other side is simply a substrate that holds the tape together. The name originates from the fact that the coated side of most tapes is made of an oxide of iron. Magnetic tape was invented for recording sound by Fritz Pfleumer in 1928 in Germany, based on the invention of magnetic wire recording by Oberlin Smith in 1888. Pfleumer's invention used a ferric oxide powder coating on a long strip of paper. This invention was developed by the German electronics company AEG, which manufactured the recording machines, and BASF, which manufactured the tape. In 1933, working for AEG, Eduard Schuller developed the ring-shaped tape head; previous head designs were needle-shaped and tended to shred the tape. An important discovery made in this period was the technique of AC biasing. Due to escalating political tensions and the outbreak of World War II, these developments were largely kept secret. A wide variety of recorders and formats have developed since, most significantly reel-to-reel, and the practice of recording and editing audio using magnetic tape rapidly established itself as an obvious improvement over previous methods.
Many saw the potential of making the same improvements in recording television. Television signals are similar to audio signals, but a major difference is that video signals use far more bandwidth, so existing audio tape recorders could not practically capture a video signal.
Digital electronics or digital circuits are electronics that handle digital signals, discrete bands of analog levels, rather than the continuous ranges used in analog electronics. All levels within a band of values represent the same information state. In most cases the number of states is two, and they are represented by two voltage bands: one near a reference value, and the other near the supply voltage. These correspond to the false and true values of the Boolean domain, respectively. Digital techniques are useful because it is easier to get an electronic device to switch into one of a number of known states than to accurately reproduce a continuous range of values. Digital electronic circuits are made from large assemblies of logic gates. The binary number system was refined by Gottfried Wilhelm Leibniz, who established that by using the binary system, the principles of arithmetic and logic could be combined. Digital logic as we know it was the brain-child of George Boole; Boole died young, but his ideas lived on. In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits. Eventually, vacuum tubes replaced relays for logic operations.
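The two-band idea above can be illustrated with a small sketch that interprets an analog voltage as a logic state. The band edges and 5 V supply here are illustrative assumptions, not the thresholds of any real logic family.

```python
def logic_state(voltage, v_supply=5.0):
    """Interpret an analog voltage as a digital state using two bands:
    near the reference (0 V) decodes as False, near the supply as True.
    The band edges (30% and 70% of supply) are illustrative only."""
    if voltage <= 0.3 * v_supply:
        return False
    if voltage >= 0.7 * v_supply:
        return True
    return None  # forbidden region: no valid logic state

# Any voltage inside a band decodes to the same state, which is why
# digital circuits tolerate noise that analog circuits cannot.
noisy_low = logic_state(0.4)
noisy_high = logic_state(4.6)
```

This is the practical payoff described in the text: small analog disturbances within a band do not change the decoded state, so information survives each switching stage intact.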
Lee De Forest's modification of the Fleming valve in 1907 can be used as an AND logic gate. Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of the Tractatus Logico-Philosophicus. Walther Bothe, inventor of the coincidence circuit, received part of the 1954 Nobel Prize in Physics. Mechanical analog computers started appearing in the first century and were later used in the medieval era for astronomical calculations. In World War II, mechanical analog computers were used for specialized military applications such as calculating torpedo aiming. During this time the first electronic digital computers were developed; originally they were the size of a large room, consuming as much power as several hundred modern personal computers. The Z3, a computer designed by Konrad Zuse and finished in 1941, was the world's first working programmable, fully automatic digital computer. The development of purely electronic digital computation was facilitated by the invention of the vacuum tube in 1904 by John Ambrose Fleming. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, and the bipolar junction transistor was invented in 1947.
From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the second generation of computers.
In electronics, a digital-to-analog converter (DAC) is a device that converts a digital signal into an analog signal; an analog-to-digital converter performs the reverse function. There are several DAC architectures, and the suitability of a DAC for a particular application is determined by three main parameters: resolution, maximum sampling frequency, and accuracy. Due to the complexity and the need for precisely matched components, all but the most specialized DACs are implemented as integrated circuits. Digital-to-analog conversion can degrade a signal, so a DAC should be specified that has insignificant errors in terms of the application. DACs are commonly used in music players to convert digital data streams into analog audio signals. They are also used in televisions and mobile phones to convert digital video data into analog video signals which connect to the screen drivers to display monochrome or color images. These two applications use DACs at opposite ends of the speed/resolution trade-off: the audio DAC is a low-speed, high-resolution type, while the video DAC is a high-speed, low- to medium-resolution type.
Discrete DACs would typically be extremely high-speed, low-resolution, power-hungry types; very high-speed test equipment, especially sampling oscilloscopes, may use discrete DACs. A DAC converts an abstract finite-precision number into a physical quantity; in particular, DACs are often used to convert finite-precision time series data to a continually varying physical signal. A conventional practical DAC converts the numbers into a piecewise constant function made up of a sequence of rectangular functions that is modeled with the zero-order hold. Other DAC methods produce a pulse-density modulated output that can be filtered to produce a smoothly varying signal. As per the Nyquist–Shannon sampling theorem, a DAC can reconstruct the signal from the sampled data provided that its bandwidth meets certain requirements. Digital sampling introduces quantization error that manifests as low-level noise added to the reconstructed signal. Instead of outputting impulses, a conventional practical DAC updates the analog voltage at uniform sampling intervals, which is then interpolated via a reconstruction filter to continuously varying levels.
The effect of this is that the voltage is held in time at the current value until the next input number is latched. This is equivalent to a zero-order hold operation and has an effect on the frequency response of the reconstructed signal. The fact that DACs output a sequence of piecewise constant values or rectangular pulses causes multiple harmonics above the Nyquist frequency; these are removed with a low-pass filter acting as a reconstruction filter in applications that require it. Other DAC methods produce a pulse-density modulated signal that can likewise be filtered to produce a smoothly varying signal. DACs and ADCs are part of a technology that has contributed greatly to the digital revolution. To illustrate, consider a typical long-distance telephone call: the caller's voice is converted into an analog electrical signal by a microphone, which is converted to a digital stream by an ADC.
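The zero-order hold behaviour described above can be modeled directly: each input code maps to a voltage that is simply repeated until the next sample arrives, producing the characteristic staircase waveform. The 8-bit scale, 5 V full-scale range, and hold length below are illustrative assumptions.

```python
def zero_order_hold(codes, lsb_volts, repeat):
    """Model an ideal DAC output stage: each code maps to a voltage
    (code * LSB) that is held constant for `repeat` fine time steps."""
    output = []
    for code in codes:
        output.extend([code * lsb_volts] * repeat)  # hold until next latch
    return output

# Three 8-bit codes on a 5 V scale, each held for 4 fine time steps,
# yield a 12-step staircase waveform.
staircase = zero_order_hold([0, 128, 255], lsb_volts=5.0 / 256, repeat=4)
```

A reconstruction filter would then smooth this staircase into a continuously varying signal, removing the harmonics above the Nyquist frequency that the flat steps introduce.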
A telephone line or telephone circuit is a single-user circuit on a telephone communication system. Telephone lines are used to deliver landline telephone service and Digital Subscriber Line (DSL) service to the premises. Telephone overhead lines are connected to the public switched telephone network. Modern lines may run underground, and may carry analog or digital signals to the exchange. Often the customer end of that wire pair is connected to a data access arrangement, while the telephone company end is connected to a telephone hybrid. In most cases, two wires for each telephone line run from a home or other small building to a local telephone exchange. The wires between the junction box and the exchange are known as the local loop; the larger network of wires running to an exchange is known as the access network. The vast majority of houses in the U.S. are wired with 6-position modular jacks with four conductors wired to the junction box with copper wires. Those wires may be connected back to two telephone lines at the local telephone exchange, thus making those jacks RJ14 jacks.
More often, only two of the wires are connected to the exchange as one line, and the others are left unconnected; in that case, the jacks in the house are RJ11 jacks. Inside the walls of the house, between the house's outside junction box and the interior wall jacks, the most common telephone cable in new houses is Category 5 cable: four pairs of 24 AWG solid copper. Inside large buildings, and in the cables that run to the telephone company point of presence, many more wire pairs are bundled together.