A record producer or music producer oversees and manages the sound recording and production of a band or performer's music, which may range from recording one song to recording a lengthy concept album. A producer has varying roles during the recording process: they may gather musical ideas for the project, collaborate with the artists to select cover tunes or original songs by the artist/group, and work with artists to help them improve their songs, lyrics or arrangements. A producer may also select session musicians to play rhythm section accompaniment parts or solos, co-write material, propose changes to the song arrangements, and coach the singers and musicians in the studio. The producer supervises the entire process from preproduction, through the sound recording and mixing stages, and, in some cases, all the way to the audio mastering stage; the producer may perform these roles themselves, or help select the engineer and provide suggestions to the engineer. The producer may also pay session musicians and engineers and ensure that the entire project is completed within the record label's budget.
A record producer or music producer has a broad role in overseeing and managing the recording and production of a band or performer's music. A producer's many roles may include, but are not limited to, gathering ideas for the project, composing music for the project, selecting songs or session musicians, proposing changes to the song arrangements, coaching the artist and musicians in the studio, controlling the recording sessions, and supervising the entire process through audio mixing and, in some cases, to the audio mastering stage. Producers often take on a wider entrepreneurial role, with responsibility for the budget, schedules and negotiations. Writer Chris Deville explains it this way: "Sometimes a producer functions like a creative consultant — someone who helps a band achieve a certain aesthetic, or who comes up with the perfect violin part to complement the vocal melody, or who insists that a chorus should be a bridge. Other times a producer will build a complete piece of music from the ground up and present the finished product to a vocalist, like Metro Boomin supplying Future with readymade beats or Jack Antonoff letting Taylor Swift add lyrics and melody to an otherwise-finished 'Out Of The Woods.'" The artist of an album may not be the record producer or music producer for his or her own album.
While both contribute creatively, the official credit of "record producer" may depend on the record contract. Christina Aguilera, for example, did not receive record producer credits until many albums into her career. In the 2010s, the producer role is sometimes divided among up to three different individuals: executive producer, vocal producer and music producer. An executive producer oversees project finances, a vocal producer oversees the vocal production, and a music producer oversees the creative process of recording and mixing; the music producer is often a competent arranger, musician or songwriter who can bring fresh ideas to a project. As well as making any songwriting and arrangement adjustments, the producer selects and/or collaborates with the mixing engineer, who takes the raw recorded tracks and edits and modifies them with hardware and software tools to create a stereo or surround sound "mix" of all the individual voices, sounds and instruments, which is in turn given further adjustment by a mastering engineer for the various distribution media.
The producer oversees the recording engineer, who concentrates on the technical aspects of recording. Noted producer Phil Ek described his role as "the person who creatively guides or directs the process of making a record", like a director would a movie. Indeed, in Bollywood music, the designation is music director; the music producer's job is to create and mold a piece of music. The scope of responsibility may be one or two songs or an artist's entire album, in which case the producer will develop an overall vision for the album and how the various songs may interrelate. At the beginning of the record industry, the producer's role was technically limited to recording, in one take, artists performing live; the immediate predecessors to record producers were the artists and repertoire executives of the late 1920s and 1930s who oversaw the "pop" product and led session orchestras. That was the case with Ben Selvin at Columbia Records, Nathaniel Shilkret at Victor Records and Bob Haring at Brunswick Records.
By the end of the 1930s, the first professional recording studios not owned by the major companies were established, separating the roles of A&R man and producer, although it was not until the late 1940s that the term "producer" became used in the industry. The role of producers changed progressively over the 1960s due to technology; the development of multitrack recording caused a major change in the recording process. Before multitracking, all the elements of a song had to be performed simultaneously, and all of the singers and musicians had to be assembled in a large studio where the performance was recorded. With multitrack recording, the "bed tracks" (rhythm section accompaniment parts such as the bassline and rhythm guitar) could be recorded first, and the vocals and solos could be added later using as many "takes" as necessary; it was no longer necessary to get all the players in the studio at the same time. A pop band could record their backing tracks one week, a horn section could be brought in a week later to add horn shots and punches, and a string section could be brought in a week after that.
Multitrack recording had another pro
Abbey Road Studios
Abbey Road Studios is a recording studio at 3 Abbey Road, St John's Wood, City of Westminster, England. It was established in November 1931 by the Gramophone Company, a predecessor of British music company EMI, which owned it until Universal Music took control of part of EMI in 2013. Abbey Road Studios is most notable as the 1960s venue for innovative recording techniques adopted by the Beatles, Pink Floyd, the Hollies and others. One of its earliest world-famous clients was Paul Robeson, who recorded there in December 1931 and went on to record many of his best-known songs there. Towards the end of 2009, the studio came under threat of sale to property developers. However, the British Government protected the site, granting it English Heritage Grade II listed status in 2010, thereby preserving the building from any major alterations. Originally a nine-bedroom Georgian townhouse built in 1831 on the footpath leading to Kilburn Abbey, the building was later converted to flats, where its most well-known resident was Maundy Gregory.
In 1929, the Gramophone Company converted it into studios. The property benefited from a large garden behind the townhouse, which permitted a much larger building to be constructed to the rear. Pathé filmed the opening of the studios in November 1931, when Edward Elgar conducted the London Symphony Orchestra in recording sessions of his music. In 1934, the inventor of stereo sound, Alan Blumlein, recorded Mozart's Jupiter Symphony, conducted by Thomas Beecham, at the studios. The neighbouring house is owned by the studio and used to house musicians; during the mid-20th century, the studio was extensively used by the leading British conductor Sir Malcolm Sargent, whose house was just around the corner from the studio building. The Gramophone Company merged with the Columbia Graphophone Company to form Electric and Musical Industries in 1931, and the studios became known as EMI Recording Studios. In 1936, cellist Pablo Casals became the first to record Johann Sebastian Bach's Cello Suites No. 1 & 2 there, at the behest of EMI head Fred Gaisberg.
The recordings went on to spur a revolution among cellists. In 1958, Studio Two at Abbey Road became a centre for rock and roll music when Cliff Richard and the Drifters recorded "Move It" there, along with other pop material. Abbey Road Studios is most closely associated with the Beatles, who recorded all of their albums and hits there between 1962 and 1970 using the four-track REDD mixing console designed by Peter K. Burkowitz; the Beatles named their 1969 album Abbey Road after the street, and the studio was renamed Abbey Road Studios in 1970. Iain Macmillan took the album's cover photograph outside the studios, with the result that the nearby zebra crossing has become a place of pilgrimage for Beatles fans. It has been a tradition for visitors to pay homage to the band by writing on the wall in front of the building, though it is painted over every three months. In December 2010, the zebra crossing at Abbey Road was given Grade II listed status. Pink Floyd recorded most of their late-1960s to mid-1970s albums there, returning in 1988 only for mixing and overdubbing on subsequent albums.
Notable producers and sound engineers who have worked at Abbey Road include George Martin, Geoff Emerick, Norman "Hurricane" Smith, Ken Scott, Mike Stone, Alan Parsons, Peter Vince, Malcolm Addey, Peter Brown, Richard Langham, Phil McDonald, John Kurlander, Richard Lush and Ken Townsend, who invented the groundbreaking studio effect known as automatic double tracking. The chief mastering engineer at Abbey Road was Chris "Vinyl" Blair, who started his career as a tape deck operator. In 1979, EMI commissioned the British jazz fusion band Morrissey-Mullen to record Britain's first digitally recorded single at Abbey Road Studios. From 18 July to 11 September 1983, the public had a rare opportunity to see inside the legendary Studio Two, where the Beatles made most of their records. While a new mixing console was being installed in the control room, the studio was used to host a video presentation called The Beatles at Abbey Road. The soundtrack to the video included a number of recordings that were not made commercially available until the release of The Beatles Anthology project over a decade later.
The Red Hot Chili Peppers used a photograph of the band walking across the zebra crossing naked on the front of The Abbey Road E.P., released in 1988. In September 2005, American hip-hop artist Kanye West, backed by a 17-piece female string orchestra, performed songs from his first two studio albums at Abbey Road Studios. Recordings of these live renditions formed his live album, Late Orchestration, released in April 2006; the cover art for the album makes use of the famous zebra crossing, with West's trademark 'Dropout Bear' seen walking across it. In June 2011, South Korean boy band Shinee performed at the studio as part of its Japanese debut showcase, in partnership with EMI and the group's local record label SM Entertainment, becoming the first Asian artist to perform in the studio. In November 2011, Australian recording artist Kylie Minogue recorded some of her most famous songs with a full orchestra at Abbey Road Studios; the resulting album, The Abbey Road Sessions, was released in October 2012.
In September 2012, with the takeover of EMI, the studio became the property of Universal Music. It was not one of the entities. In February 2017, a rare BTR-3 tape recorder used at Abbey Road was found by members of
Multitrack recording —also known as multitracking, double tracking, or tracking—is a method of sound recording developed in 1955 that allows for the separate recording of multiple sound sources, or of sound sources recorded at different times, to create a cohesive whole. Multitracking became possible in the mid-1950s, when the idea of recording different audio channels to separate discrete "tracks" on the same reel-to-reel tape was developed. A "track" was a separate channel recorded to its own discrete area on the tape, whereby the relative sequence of recorded events would be preserved and playback would be simultaneous or synchronized. Prior to the development of multitracking, the sound recording process required all of the singers, band instrumentalists, and/or orchestra accompanists to perform at the same time in the same space. Multitrack recording was a significant technical improvement, as it allowed studio engineers to record all of the instruments and vocals for a piece of music separately.
Multitracking allowed the engineer to adjust the levels and tone of each individual track and, if necessary, redo certain tracks or overdub parts of a track to correct errors or get a better "take." As well, different electronic effects such as reverb could be applied to specific tracks, such as the lead vocals, while not being applied to other tracks where the effect would not be desirable. Multitrack recording was much more than a technical innovation. In the 1980s and 1990s, computers provided means by which both sound recording and reproduction could be digitized, revolutionizing audio recording and distribution. In the 2000s, multitracking hardware and software for computers became of sufficient quality to be used for high-end audio recordings both by professional sound engineers and by bands recording without studios, using widely available programs that can run on a high-end laptop computer. Though magnetic tape has not been replaced as a recording medium, the advantages of non-linear editing and recording have resulted in digital systems largely superseding tape.
In the 2010s, with digital multitracking being the dominant technology, the original word "track" is still used by audio engineers. Multi-tracking can be achieved with analogue recording, tape-based equipment, digital equipment that relies on tape storage of recorded digital data and hard disk-based systems employing a computer and audio recording software. Multi-track recording devices vary in their specifications, such as the number of simultaneous tracks available for recording at any one time. With the introduction of SMPTE timecode in the early 1970s, engineers began to use computers to synchronize separate audio and video playback, or multiple audio tape machines. In this system, one track of each machine carried the timecode signal, while the remaining tracks were available for sound recording; some large studios were able to link multiple 24-track machines together. An extreme example of this occurred in 1982, when the rock group Toto recorded parts of Toto IV on three synchronized 24-track machines.
This setup theoretically provided for up to 69 audio tracks (three 24-track machines, minus one timecode track per machine), far more than necessary for most recording projects. For computer-based systems, the trend in the 2000s was towards unlimited numbers of record/playback tracks, although available RAM and CPU power do limit this from machine to machine. Moreover, on computer-based systems, the number of tracks that can be recorded simultaneously is limited by the number of discrete analog or digital inputs on the sound card. When recording, audio engineers can select which track on the device will be used for each instrument, voice, or other input, and can blend two instruments onto one track to vary the music and sound options available. At any given point on the tape, any of the tracks on the recording device can be recording or playing back using sel-sync (Selective Synchronous) recording; this allows an artist to record onto track 2 while listening to tracks 1, 3 and 7, singing or playing an accompaniment to the performance recorded on those tracks.
They might record an alternate version on track 4 while listening to the other tracks. All the tracks can be played back in perfect synchrony, as if they had been played and recorded together; this can be repeated until all of the available tracks have been used or, in some cases, reused. During mixdown, a separate set of higher-fidelity playback heads is used. Before all tracks are filled, any number of existing tracks can be "bounced" into one or two tracks and the original tracks erased, making room for fresh recording. In 1963, the Beatles were using twin-track for Please Please Me; the Beatles' producer George Martin used this technique extensively to achieve multiple-track results while still being limited to multiple four-track machines, until an eight-track machine became available during the recording of the Beatles' White Album. The Beach Boys' Pet Sounds made innovative use of multitracking with 8-tra
Charles Thompson IV is an American singer and guitarist. He is best known as the frontman of the influential alternative rock band Pixies, with whom he performs under the stage name Black Francis. Following the band's breakup in 1993, he embarked on a solo career under the name Frank Black. After releasing two albums with the record label 4AD and one with American Recordings, he left the label and formed a new band, Frank Black and the Catholics; he re-adopted the name Black Francis in 2007. His vocal style has varied from a screaming, yowling delivery as lead vocalist of the Pixies to a more measured and melodic style in his solo career. His cryptic lyrics explore unconventional subjects, such as surrealism and biblical violence, along with science fiction and surf culture. His use of atypical meter signatures, loud–quiet dynamics, and a distinct preference for live-to-two-track recording during his time with the Catholics give him a distinct style within alternative rock. Thompson regrouped the Pixies in early 2004 and continues to release solo records and tour as a solo artist.
Charles Thompson was born in Massachusetts. His father was a bar owner, and Thompson lived in Los Angeles, California, as a baby because his father wanted to "learn more about the restaurant and bar business." Thompson was introduced to music at a young age. His first guitar was his mother's, a Yamaha classical guitar bought with money from his father's bar tips, which he started to play at age "11 or 12." Thompson's family moved around, living first with his father and later with his stepfather, a religious man who "pursued real estate on both coasts". When Thompson was 12, his mother and stepfather joined an evangelical church tied to the Pentecostal denomination Assemblies of God, a move that influenced many of the songs he later wrote with the Pixies, which refer to the Bible. He discovered the music of Christian rock singer-songwriter Larry Norman at 13, when Norman played at a religious summer camp that Thompson attended. Norman's music influenced Thompson to the extent that he named the Pixies' first EP, and a lyric in the band's song "Levitate Me", after one of Norman's catchphrases: "Come on, pilgrim!"
Thompson described the music he listened to during his youth: "I used to hang out with some misfits. We were the 'we listen to odd-ball music' kids. I wasn't hanging out at all-ages shows or trying to get into clubs to see bands; I was buying records at used record stores and borrowing them from the library. You just saw Emerson, Lake & Palmer records. So I didn't know music, but it was a good thing that I didn't know it, that I instead listened to a lot of '60s records and this religious music." Thompson lived in an apartment in Massachusetts. Just before his senior year, his family moved to Westport, where he received a Teenager of the Year award—the title of a solo album. During this time, Thompson composed several songs that appeared later in his career, including "Here Comes Your Man" from Doolittle and "Velvety Instrumental Version." After graduating from high school in 1983, Thompson studied at the University of Massachusetts Amherst, majoring in anthropology. Thompson shared a room with another roommate for a semester before moving in with future Pixies guitarist Joey Santiago.
The two shared an interest in rock music, and Santiago introduced Thompson to 1970s punk and the music of David Bowie. It was at this time that Thompson discovered The Cars, a band he described as "very influential on me and the Pixies." In his second year of college, Thompson embarked on a trip to San Juan, Puerto Rico, as part of an exchange program. He spent six months in an apartment with a "weird, psycho roommate," who served as a direct inspiration for the Pixies song "Crackity Jones." Thompson failed to learn to speak Spanish formally, and left his studies after debating whether he would go to New Zealand to view Halley's Comet or start a rock band. He wrote a letter urging Santiago, with the words "we gotta do it, now is the time Joe," to join him in a band upon his return to Boston. Soon after returning to Massachusetts, Thompson dropped out of college and moved to Boston with Santiago. He spent 1985 working in a warehouse, "managing buttons on teddy bears," composing songs on his acoustic guitar and writing lyrics on the subway.
In January 1986, Thompson formed the Pixies with Santiago. Bassist Kim Deal was recruited a week later via a classified advertisement placed in a Boston paper, which requested a bassist "into Hüsker Dü and Peter, Paul and Mary." Drummer David Lovering was hired on the recommendation of Deal's husband. In 1987, the Pixies released an 18-track demo tape referred to as The Purple Tape. Thompson's father assisted the band financially; the Purple Tape led to a recording contract with the English independent record label 4AD. For the release of the mini album Come On Pilgrim, Thompson adopted the alias "Black Francis", a name inspired by his father: "he had been saving that name in case he had another son." In 1988, the Pixies recorded their debut album Surfer Rosa. Thompson wrote and sang on all the tracks, with the exception of the single "Gigantic", co-written and sung by Deal. To support the album, the band undertook a European tour.
Audio time stretching and pitch scaling
Time stretching is the process of changing the speed or duration of an audio signal without affecting its pitch. Pitch scaling is the opposite: the process of changing the pitch without affecting the speed. It is not to be confused with the simpler process of pitch shifting, which affects both pitch and speed by slowing down or speeding up a recording. These processes are used, for instance, to match the pitches and tempos of two pre-recorded clips for mixing when the clips cannot be reperformed or resampled. They are also used to create effects such as increasing the range of an instrument. The simplest way to change the duration or pitch of a digital audio clip is through sample rate conversion. This is a mathematical operation that rebuilds a continuous waveform from its samples and then samples that waveform again at a different rate; when the new samples are played at the original sampling frequency, the audio clip sounds faster or slower. The frequencies in the sample are always scaled at the same rate as the speed, transposing the perceived pitch up or down in the process.
In other words, slowing down the recording lowers the pitch, while speeding it up raises it. This is analogous to speeding up or slowing down an analogue recording, like a phonograph record or tape, creating the Chipmunk effect. With this method the two effects cannot be separated: a drum track containing no pitched instruments can be moderately sample-rate converted for tempo without adverse effects, but a pitched track cannot. In 1978, Rabiner and Schafer put forth an alternate solution that works in the time domain: attempt to find the period of a given section of the wave using a pitch detection algorithm, then crossfade one period into another. This is called time-domain harmonic scaling, or the synchronized overlap-add method; it performs somewhat faster than the phase vocoder on slower machines, but fails when the autocorrelation mis-estimates the period of a signal with complicated harmonics. Adobe Audition appears to address this by looking for the period closest to a centre period that the user specifies, which should be an integer multiple of the tempo, between 30 Hz and the lowest bass frequency.
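The coupling of speed and pitch under sample rate conversion can be sketched in a few lines of Python. This is only a minimal illustration under stated assumptions: real converters use band-limited interpolation rather than the linear interpolation shown here, and the function name and toy clip are invented for the example.

```python
# Naive sample-rate conversion by linear interpolation. Playing the
# resampled clip back at the ORIGINAL rate changes speed and pitch
# together: factor > 1 makes it longer (slower) and lower in pitch.
def resample(samples, factor):
    """Return `samples` resampled to len(samples) * factor points."""
    n_out = int(len(samples) * factor)
    out = []
    for i in range(n_out):
        pos = i / factor                  # fractional position in the input
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[j + 1] if j + 1 < len(samples) else samples[j]
        out.append(a * (1 - frac) + b * frac)  # linear interpolation
    return out

clip = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]  # one cycle of a toy wave
slow = resample(clip, 2.0)  # twice as long: one octave lower at playback
print(len(slow))            # 16
```

Because the waveform's period doubles along with its duration, the "Chipmunk effect" in reverse falls out of the arithmetic automatically; there is no way to change one without the other using this method alone.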
This is much more limited in scope than phase-vocoder-based processing, but can be made much less processor-intensive, for real-time applications. It provides the most coherent results for single-pitched sounds like voice or musically monophonic instrument recordings. High-end commercial audio processing packages either combine the two techniques, or use other techniques based on the wavelet transform or artificial neural network processing, producing the highest-quality time stretching. In order to preserve an audio signal's pitch when stretching or compressing its duration, many time-scale modification (TSM) procedures follow a frame-based approach. Given an original discrete-time audio signal, this strategy's first step is to split the signal into short analysis frames of fixed length. The analysis frames are spaced by a fixed number of samples, called the analysis hop size H_a ∈ ℕ. To achieve the actual time-scale modification, the analysis frames are temporally relocated to have a synthesis hop size H_s ∈ ℕ.
This frame relocation results in a modification of the signal's duration by a stretching factor of α = H_s / H_a. However, simply superimposing the unmodified analysis frames results in undesired artifacts such as phase discontinuities or amplitude fluctuations. To prevent these kinds of artifacts, the analysis frames are adapted to form synthesis frames prior to the reconstruction of the time-scale-modified output signal; the strategy for deriving the synthesis frames from the analysis frames is a key difference among TSM procedures. One way of stretching the length of a signal without affecting the pitch is to build a phase vocoder, after Flanagan and Portnoff. The basic first step is to compute the instantaneous frequency/amplitude relationship of the signal using the STFT, the discrete Fourier transform of a short and smoothly windowed block of samples. The phase vocoder handles sinusoid components well, but early implementations introduced considerable smearing on transient waveforms at all non-integer compression/expansion rates, which renders the results phasey and diffuse.
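The frame relocation scheme just described can be sketched as a plain overlap-add (OLA) procedure in Python. This is a hedged illustration, not a full TSM implementation: it relocates Hann-windowed analysis frames from hop H_a to hop H_s and normalizes by the window sum, but omits the phase adaptation that a phase vocoder or WSOLA adds, so on tonal material it will exhibit exactly the artifacts the text mentions. All names and parameter values are invented for the sketch.

```python
import math

def ola_stretch(x, alpha, frame_len=256, ha=64):
    """Plain overlap-add time-scale modification.
    Analysis hop `ha`, synthesis hop hs = round(alpha * ha),
    so the output is roughly alpha times the input duration."""
    hs = max(1, round(alpha * ha))
    # Hann analysis/synthesis window
    win = [0.5 - 0.5 * math.cos(2 * math.pi * n / frame_len)
           for n in range(frame_len)]
    n_frames = (len(x) - frame_len) // ha + 1
    out = [0.0] * ((n_frames - 1) * hs + frame_len)
    norm = [1e-12] * len(out)          # accumulated window sum
    for k in range(n_frames):
        a = k * ha                     # analysis frame position
        s = k * hs                     # relocated synthesis position
        for n in range(frame_len):
            out[s + n] += x[a + n] * win[n]
            norm[s + n] += win[n]
    # Normalize by the window sum to suppress amplitude fluctuation
    return [o / w for o, w in zip(out, norm)]

x = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]  # 1 s, 8 kHz
y = ola_stretch(x, alpha=1.5)   # ~1.5 times longer, same sample values range
```

The stretching factor α = H_s / H_a shows up directly as the ratio of output to input length; the window-sum normalization is the simplest form of the "synthesis frame" adaptation described above.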
Recent improvements allow better-quality results at all compression/expansion ratios, but a residual smearing effect still remains. The phase vocoder technique can also be used to perform pitch shifting, timbre manipulation and other unusual modifications, all of which can be changed as a function of time. Another method for time stretching relies on a spectral model of the signal. In this method, peaks are identified in frames using the STFT of the signal, and sinusoidal "tracks" are created by connecting peaks in adjacent frames; the tracks are then re-synthesized at a new time scale. This method can yield good results on both polyphonic and percussive material, especiall
In music, a chorus effect occurs when individual sounds with approximately the same timing and similar pitches converge and are perceived as one. While similar sounds coming from multiple sources can occur naturally, as in the case of a choir or string orchestra, the effect can also be simulated using an electronic effects unit or signal processing device. When the effect is produced successfully, none of the constituent sounds are perceived as being out of tune. It is characteristic of sounds with a rich, shimmering quality that would be absent if the sound came from a single source; the shimmer occurs because of beating. The chorus effect is especially easy to hear when listening to a choir or string ensemble: a choir has multiple people singing each part, and a string ensemble has multiple violinists and multiples of other stringed instruments. Although most acoustic instruments cannot produce a chorus effect by themselves, some instruments produce it as part of their own design; the effect can make these acoustic instruments sound fuller and louder than a single tone generator would.
Some examples: Piano - each hammer strikes a course of multiple strings tuned to nearly the same pitch. Professional piano tuners control the mistuning of each string to add movement without losing clarity; in some poorly maintained instruments, the effect is more prominent. Santur - as on the piano, the player strikes a course of multiple strings tuned to nearly the same pitch; as the instrument is tuned by the musicians themselves, the chorus effect is more pronounced than on the piano. 12-string guitar, bajo sexto and Greek bouzouki - courses with pairs of strings, tuned in octaves and unisons, create a distinctive complex shimmer. In the 12-string guitar, this effect is accentuated by the use of open and modal tunings, such as open-G and DADGAD. Colombian tiple, guitarrón chileno and tricordia - courses of three strings, tuned in octaves and unisons, create a more complex shimmer and a fuller effect. Mandolin and oud - courses with pairs of identically-tuned strings, as opposed to the octaves and unisons of the 12-string guitar.
Accordion - two or three reed blocks tuned to nearly the same pitch produce a unique and distinctive sound exclusive to the accordion. Pipe organ - the voix céleste is an organ stop consisting of either one or two ranks of pipes out of tune; the term celeste refers to a rank of pipes detuned so as to produce a beating effect when combined with a tuned rank. It is also used to refer to a compound stop of two or more ranks in which the ranks are detuned relative to each other. While the open strings of a standard-tuned guitar cannot produce a chorus effect, one can be obtained by the use of alternative tunings. The chorus effect can also be simulated by signal processing equipment. The signal processor may be software running on a computer, software running in a digital effect processor, or an analog effect processor. If the processor is hardware-based, it may be packaged as a pedal, a rack-mount module, a table-top device, built into an instrument amplifier, or built into some electronic instruments, such as synthesizers, electronic pianos and Hammond organs.
Regardless of the technology or form factor, the processor achieves the effect by taking an audio signal and mixing it with one or more delayed, pitch-modulated copies of itself. The pitch of the added voices is modulated by an LFO, which makes the overall effect similar to that of a flanger, except with longer delays and without feedback. In the case of the synthesizer, the effect can be achieved by using multiple detuned oscillators for each note, or by passing all the notes played through a separate electronic chorus circuit. Stereo chorus effect processors produce the same effect, but it is varied between the left and right channels by offsetting the delay or phase of the LFO; the effect is thereby enhanced because sounds are produced from multiple locations in the stereo field. Used on instruments like "clean" electric guitar and keyboards, it can yield dreamy or ambient sounds. Commercial chorus effect devices include controls that enable them to be used to produce delay, reverberation, or other related effects that use similar hardware, rather than as chorus effects.
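The delay-plus-LFO structure described above can be sketched in Python. This is a minimal single-voice mono chorus for illustration only: the parameter values (a roughly 20 ms base delay swept by ±5 ms at 0.8 Hz) are typical but invented for the example, and a real implementation would use a better fractional-delay interpolator, multiple voices, and stereo LFO offsets.

```python
import math

def chorus(x, rate=8000, lfo_hz=0.8, base_ms=20.0, depth_ms=5.0, mix=0.5):
    """Mix the dry signal with one delayed copy whose delay time is
    swept by a low-frequency oscillator (LFO), which modulates the
    copy's pitch slightly as the delay shrinks and grows."""
    out = []
    for n, dry in enumerate(x):
        lfo = math.sin(2 * math.pi * lfo_hz * n / rate)
        delay = (base_ms + depth_ms * lfo) * rate / 1000.0  # in samples
        pos = n - delay
        j = int(pos)
        if j < 0:
            wet = 0.0                       # before the delay line fills
        else:
            frac = pos - j
            nxt = x[j + 1] if j + 1 < len(x) else x[j]
            wet = x[j] * (1 - frac) + nxt * frac  # fractional-delay read
        out.append((1 - mix) * dry + mix * wet)
    return out

tone = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(8000)]
wet = chorus(tone)
```

With longer delays and no feedback path this is exactly the flanger relationship noted above: shortening the base delay and feeding some output back into the delay line turns the same code into a flanger.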
In spite of the name, most electronic chorus effects do not emulate the acoustic ensemble effect; instead, they create a moving electronic shimmer. Although the electronic chorus effect can be obtained in the multiple ways mentioned above, some devices have acquired a high status among musicians in the "effect pedal" form. Boss CE-1 - released in 1976, it was one of the first chorus effect pedals commercially available, based on the same circuit from the Roland Jazz C
Distortion and overdrive are forms of audio signal processing used to alter the sound of amplified electric musical instruments by increasing their gain, producing a "fuzzy", "growling", or "gritty" tone. Distortion is most commonly used with the electric guitar, but may be used with other electric instruments such as the bass guitar, electric piano and Hammond organ. Guitarists playing electric blues originally obtained an overdriven sound by turning up their vacuum tube-powered guitar amplifiers to high volumes, which caused the signal to distort. While overdriven tube amps are still used to obtain overdrive in the 2010s in genres like blues and rockabilly, a number of other ways to produce distortion have been developed since the 1960s, such as distortion effect pedals. The growling tone of distorted electric guitar is a key part of many genres, including blues and many rock music genres, notably hard rock, punk rock, hardcore punk, acid rock and heavy metal music. The effects alter the instrument sound by clipping the signal, adding sustain and harmonic and inharmonic overtones, and leading to a compressed sound that is described as "warm" or "dirty", depending on the type and intensity of distortion used.
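The clipping operation named above is easy to demonstrate digitally. The sketch below shows two invented helper functions, a hard clipper that slices the waveform flat (closer to a harsh fuzz) and a tanh soft clipper that rounds the peaks off gradually (closer to tube-style overdrive); both add overtones and compress the signal's dynamics. This is an illustrative waveshaper only, not a model of any particular amplifier or pedal.

```python
import math

def hard_clip(x, threshold=0.3):
    """Flatten any sample beyond +/- threshold: abrupt, 'fuzzy' clipping."""
    return [max(-threshold, min(threshold, s)) for s in x]

def soft_clip(x, drive=4.0):
    """tanh waveshaper: compresses peaks smoothly instead of slicing them,
    giving a 'warmer' overdrive character. Normalized so a full-scale
    input still peaks at 1.0."""
    return [math.tanh(drive * s) / math.tanh(drive) for s in x]

sine = [math.sin(2 * math.pi * n / 64) for n in range(64)]  # one test cycle
hard = hard_clip(sine)
soft = soft_clip(sine)
```

Turning a tube amplifier up past its design limit behaves more like the soft curve; a slashed speaker cone or faulty circuit behaves more like the hard one, which is part of why the accidental-damage recordings discussed below sound so aggressive.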
The terms distortion and overdrive are often used interchangeably. Fuzz is a particular form of extreme distortion originally created by guitarists using faulty equipment, emulated since the 1960s by a number of "fuzzbox" effects pedals. Distortion and fuzz can be produced by effects pedals, pre-amplifiers, power amplifiers, speakers, and by digital amplifier modeling devices and audio software; these effects are used with electric guitars, electric basses, and electronic keyboards, and occasionally as a special effect with vocals. While distortion of this kind is created intentionally as a musical effect, sound engineers sometimes take steps to avoid distortion when using PA systems to amplify vocals or when playing back prerecorded music. The first guitar amplifiers were low-fidelity and would produce distortion when their volume was increased beyond their design limit or if they sustained minor damage. Around 1945, Western-swing guitarist Junior Barnard began experimenting with a rudimentary humbucker pick-up and a small amplifier to obtain his signature "low-down and dirty" bluesy sound.
Many electric blues guitarists, including Chicago bluesmen such as Elmore James and Buddy Guy, experimented in order to get a guitar sound that paralleled the rawness of blues singers such as Muddy Waters and Howlin' Wolf, replacing their original pick-ups with the powerful Valco "Chicagoan" pick-ups created for lap-steel guitars, to obtain a louder and fatter tone. In early rock music, Goree Carter's "Rock Awhile" featured an over-driven electric guitar style similar to that of Chuck Berry several years later, as did Joe Hill Louis' "Boogie in the Park". In the early 1950s, pioneering rock guitarist Willie Johnson of Howlin' Wolf′s band began deliberately increasing gain beyond its intended levels to produce "warm" distorted sounds. Guitar Slim experimented with distorted overtones, which can be heard in his hit electric blues song "The Things That I Used to Do". Chuck Berry's 1955 classic "Maybellene" features a guitar solo with warm overtones created by his small valve amplifier. Pat Hare produced distorted power chords on his electric guitar for records such as James Cotton's "Cotton Crop Blues" as well as his own "I'm Gonna Murder My Baby", creating "a grittier, more ferocious electric guitar sound," accomplished by turning the volume knob on his amplifier "all the way to the right until the speaker was screaming." In the mid-1950s, guitar distortion sounds started to evolve based on sounds created earlier in the decade by accidental damage to amps, such as in the popular early recording of the 1951 Ike Turner and the Kings of Rhythm song "Rocket 88", where guitarist Willie Kizart used a vacuum tube amplifier that had a speaker cone damaged in transport.
Rock guitarists began intentionally "doctoring" amplifiers and speakers in order to emulate this form of distortion. In 1956, guitarist Paul Burlison of the Johnny Burnette Trio deliberately dislodged a vacuum tube in his amplifier to record "The Train Kept A-Rollin'" after a reviewer raved about the sound Burlison's damaged amplifier produced during a live performance. According to other sources, Burlison's amp had a broken loudspeaker cone. Pop-oriented producers were horrified by that eerie "two-tone" sound, quite clean in the treble register but distorted in the bass, but Burnette insisted on releasing the sessions, arguing that "that guitar sounds like a nice horn section". In the late 1950s, guitarist Link Wray began intentionally manipulating his amplifiers' vacuum tubes to create a "noisy" and "dirty" sound for his solos after an accidental discovery. Wray also poked holes in his speaker cones with pencils to further distort his tone, used electronic echo chambers and the then-recent powerful and "fat" Gibson humbucker pickups, and employed controlled "feedback".
The resultant sound can be heard on his influential 1958 instrumentals, "Rumble" and "Rawhide". In 1961, Grady Martin scored a hit with a fuzzy tone caused by a faulty preamplifier that distorted his guitar playing on the Marty Robbins song "Don't Worry"; later that year Martin recorded an instrumental tune under his own name, using the same faulty pr