The ear canal is a pathway running from the outer ear to the middle ear. The adult human ear canal extends from the pinna to the eardrum and is about 2.5 centimetres in length and 0.7 centimetres in diameter, though its size and shape vary among individuals. The canal is divided into two parts. The elastic cartilage part forms the outer third; its cartilage is continuous with the cartilage framework of the pinna, and this portion contains small hairs and specialized sweat glands, called apocrine glands, which produce cerumen. The bony part forms the inner two thirds and is only a ring in the newborn. The layer of epithelium covering the bony portion is much thinner, and therefore more sensitive, than that of the cartilaginous portion. The canal runs from behind and above, downward and forward, and is oval in cross-section.
Due to its relative exposure to the outside world, the ear canal is susceptible to disease and other disorders, including:

Atresia of the ear canal
Cerumen impaction
Bone exposure, caused by the wearing away of skin in the canal
Auditory canal osteoma
Cholesteatoma
Contact dermatitis of the ear canal
Fungal infection
Ear mites in animals
Ear myiasis, a rare infestation by maggots
Foreign body in the ear
Granuloma, a scar caused by tympanostomy tubes
Otitis externa, bacterial inflammation of the ear canal
Stenosis, a gradual closing of the canal

Earwax, known as cerumen, is a yellowish, waxy substance secreted in the ear canal. It plays an important role in the human ear canal, assisting in cleaning and lubrication and providing some protection from bacteria and insects. Excess or impacted cerumen can press against the eardrum and/or occlude the external auditory canal, impairing hearing and causing conductive hearing loss. If left untreated, cerumen impaction can increase the risk of developing an infection within the ear canal.
In biological morphology and anatomy, a sulcus is a furrow or fissure. It may be a groove in the surface of a limb or an organ, most notably in the surface of the brain, but also in the lungs, certain muscles, and bones, among other places. Many sulci are the product of a surface fold or junction, such as in the gums, where they fold around the neck of the tooth. In invertebrate zoology, a sulcus is a fold, groove, or boundary at the edges of sclerites or between segments. Examples include:

anterior interventricular sulcus
calcaneal sulcus
coronal sulcus
femoral sulcus, or intercondylar fossa of femur
gingival sulcus
gluteal sulcus
interlabial sulci
intermammary sulcus
intertubercular sulcus, the groove between the lesser and greater tubercles of the humerus
lacrimal sulcus
malleolar sulcus
patellar sulcus, or intercondylar fossa of femur
posterior interventricular sulcus
preauricular sulcus
radial sulcus
sagittal sulcus
separatoral sulcus
sigmoid sulcus
sulcus arteriæ vertebralis
sulcus subtarsalis, in the eyelid
sulcus tubae auditivae
tympanic sulcus
urethral sulcus

See also: fissure, sinus, sulcus sign.
Posterior auricular artery
The posterior auricular artery is a small artery that arises from the external carotid artery, above the digastric muscle and stylohyoid muscle, opposite the apex of the styloid process. It ascends posteriorly beneath the parotid gland, along the styloid process of the temporal bone, between the cartilage of the ear and the mastoid process of the temporal bone, along the lateral side of the head. The posterior auricular artery gives off the stylomastoid artery and small branches to the auricle, and supplies blood to the scalp posterior to the auricle. See also: anterior auricular branches of the superficial temporal artery; posterior auricular nerve.
Dominance in genetics is a relationship between alleles of one gene, in which the effect on phenotype of one allele masks the contribution of a second allele at the same locus. The first allele is dominant and the second allele is recessive. For genes on an autosome, the alleles and their associated traits are autosomal dominant or autosomal recessive. Dominance is a key concept in Mendelian inheritance and classical genetics; typically, the dominant allele codes for a functional protein whereas the recessive allele does not. A classic example of dominance is the inheritance of seed shape in peas. Round peas are associated with allele R, and wrinkled peas with allele r. In this case, three combinations of alleles are possible: RR, Rr, and rr. RR individuals have round peas and rr individuals have wrinkled peas. In Rr individuals the R allele masks the presence of the r allele, so these individuals also have round peas. Thus, allele R is dominant to allele r, and allele r is recessive to allele R. This use of upper-case letters for dominant alleles and lower-case letters for recessive alleles is a widely followed convention.
More generally, where a gene exists in two allelic versions, three combinations of alleles are possible: AA, Aa, and aa. If AA and aa individuals show different forms of some trait, and Aa individuals show the same phenotype as AA individuals, then allele A is said to dominate, be dominant to, or show dominance over allele a, and a is said to be recessive to A. Dominance is not inherent to an allele or its phenotype; it is a relationship between two alleles and their associated phenotypes. An allele may be dominant for a particular aspect of phenotype but not for other aspects influenced by the same gene. Dominance differs from epistasis, a relationship in which an allele of one gene affects the expression of an allele of a different gene. The concept of dominance was introduced by Gregor Johann Mendel. Though Mendel, "the father of genetics", first used the term in the 1860s, it was not widely known until the early twentieth century. Mendel observed that, for a variety of traits of garden peas having to do with the appearance of seeds, seed pods, and plants, there were two discrete phenotypes, such as round versus wrinkled seeds, yellow versus green seeds, red versus white flowers, or tall versus short plants.
When bred separately, the plants always produced the same phenotype, generation after generation. However, when lines with different phenotypes were crossed, one and only one of the parental phenotypes showed up in the offspring. When these hybrid plants were themselves crossed, the offspring showed the two original phenotypes in a characteristic 3:1 ratio, the more common phenotype being that of the parental hybrid plants. Mendel reasoned that each parent in the first cross was a homozygote for a different allele, that each contributed one allele to the offspring with the result that all of these hybrids were heterozygotes, and that one of the two alleles in the hybrid cross dominated expression of the other: A masked a. The final cross between two heterozygotes would produce AA, Aa, and aa offspring in a 1:2:1 genotype ratio, with the first two classes showing the A phenotype and the last showing the a phenotype, thereby producing the 3:1 phenotype ratio. Mendel did not use the terms gene, phenotype, genotype, and heterozygote, all of which were introduced later.
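The 1:2:1 genotype and 3:1 phenotype ratios described above can be reproduced with a short Punnett-square enumeration. The following is a toy illustration (the function names are ours, not from any genetics library), using the R/r pea-shape alleles:

```python
from itertools import product
from collections import Counter

def cross(parent1, parent2):
    """Enumerate all equally likely offspring genotypes of a cross.

    Genotypes are two-letter strings, e.g. 'Rr'; uppercase = dominant allele.
    """
    offspring = Counter()
    for a, b in product(parent1, parent2):
        # Sort so 'rR' and 'Rr' are counted as the same heterozygote.
        offspring[''.join(sorted((a, b)))] += 1
    return offspring

def phenotype(genotype):
    # Complete dominance: one dominant (uppercase) allele masks the recessive.
    return 'round' if any(c.isupper() for c in genotype) else 'wrinkled'

# Cross of two heterozygotes, as in Mendel's pea experiment.
genotypes = cross('Rr', 'Rr')
print(genotypes)            # Counter({'Rr': 2, 'RR': 1, 'rr': 1}) -> 1:2:1
phenotypes = Counter()
for g, n in genotypes.items():
    phenotypes[phenotype(g)] += n
print(phenotypes)           # Counter({'round': 3, 'wrinkled': 1}) -> 3:1
```

Crossing a heterozygote with a homozygous recessive (`cross('Rr', 'rr')`) instead yields the 1:1 test-cross ratio.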
He did, however, introduce the notation of capital and lowercase letters for dominant and recessive alleles, still in use today. Most animals and some plants have paired chromosomes and are described as diploid: they have two versions of each chromosome, one contributed by the mother's ovum and the other by the father's sperm. Gametes, which are haploid, are created through meiosis; they fuse during fertilization in sexual reproduction into a new single-cell zygote, which divides multiple times, resulting in a new organism with the same number of pairs of chromosomes in each cell as its parents. Each chromosome of a matching pair is structurally similar to the other and has a similar DNA sequence. The DNA in each chromosome functions as a series of discrete genes that influence various traits. Thus, each gene has a corresponding homologue, which may exist in different versions called alleles; the alleles at the same locus on the two homologous chromosomes may be different. The blood type of a human is determined by a gene that creates an A, B, AB, or O blood type and is located on the long arm of chromosome nine.
There are three different alleles that can be present at this locus, but only two can be present in any individual, one inherited from the mother and one from the father. If the two alleles of a given gene are identical, the organism is called a homozygote and is said to be homozygous with respect to that gene; if they are different, it is a heterozygote and is heterozygous. The genetic makeup of an organism, either at a single locus or over all its genes collectively, is called its genotype. The genotype of an organism directly and indirectly affects its molecular and other traits, which individually or collectively are called its phenotype. At heterozygous gene loci, the two alleles interact to produce the phenotype. In complete dominance, the effect of one allele in a heterozygous genotype masks the effect of the other; the allele that masks the other is said to be dominant, and the masked allele recessive.
Head-related transfer function
A head-related transfer function (HRTF), sometimes known as the anatomical transfer function, is a response that characterizes how an ear receives a sound from a point in space. As sound strikes the listener, the size and shape of the head, the ear canal, the density of the head, and the shape of the nasal and oral cavities all transform the sound and affect how it is perceived, boosting some frequencies and attenuating others. Generally speaking, the HRTF boosts frequencies from 2–5 kHz, with a primary resonance of +17 dB at 2,700 Hz, but the response curve is more complex than a single bump: it affects a broad frequency spectrum and varies from person to person. A pair of HRTFs for the two ears can be used to synthesize a binaural sound that seems to come from a particular point in space. Some consumer home entertainment products designed to reproduce surround sound from stereo headphones use HRTFs, and some forms of HRTF processing have been included in computer software to simulate surround sound playback from loudspeakers.
Humans have just two ears, but can locate sounds in three dimensions: in range, in direction above and below, in front and to the rear, as well as to either side. This is possible because the brain, the inner ear, and the external ears work together to make inferences about location. This ability to localize sound sources may have developed in humans and their ancestors as an evolutionary necessity: the eyes can only see a fraction of the world around a viewer, and vision is hampered in darkness, while the ability to localize a sound source works in all directions, to varying accuracy, regardless of the surrounding light. Humans estimate the location of a source by taking cues derived from one ear (monaural cues), and by comparing cues received at both ears (difference or binaural cues). Among the difference cues are time differences of arrival and intensity differences. The monaural cues come from the interaction between the sound source and the human anatomy, in which the original source sound is modified before it enters the ear canal for processing by the auditory system.
These modifications encode the source location and may be captured via an impulse response which relates the source location and the ear location. This impulse response is termed the head-related impulse response (HRIR). Convolution of an arbitrary source sound with the HRIR converts the sound to that which would have been heard by the listener if it had been played at the source location, with the listener's ear at the receiver location. HRIRs have been used to produce virtual surround sound. The HRTF is the Fourier transform of the HRIR. The HRTFs for the left and right ear describe the filtering of a sound source before it is perceived at the left and right ears as xL and xR, respectively. The HRTF can also be described as the modifications to a sound from a direction in free air to the sound as it arrives at the eardrum. These modifications include the shape of the listener's outer ear, the shape of the listener's head and body, the acoustic characteristics of the space in which the sound is played, and so on. All these characteristics influence how a listener can tell what direction a sound is coming from.
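The convolution step described above can be sketched in a few lines of NumPy. This is a minimal illustration only: the HRIR taps below are made-up placeholders, not measured data; a real application would load a measured HRIR pair for the desired source direction.

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Convolve a mono signal with left/right HRIRs to get a binaural pair."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return left, right

# Illustrative (not measured) HRIRs: the right ear's response is delayed
# and attenuated relative to the left, as for a source on the listener's left.
hrir_l = np.array([0.0, 1.0, 0.5, 0.1])
hrir_r = np.array([0.0, 0.0, 0.0, 0.6, 0.3, 0.05])

mono = np.random.default_rng(0).standard_normal(1024)
xL, xR = spatialize(mono, hrir_l, hrir_r)   # binaural signals at each ear
```

Played over headphones, xL and xR would carry the interaural time and level cues encoded in the HRIR pair.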
In the AES69-2015 standard, the Audio Engineering Society has defined the SOFA file format for storing spatially oriented acoustic data such as head-related transfer functions. SOFA software libraries and files are collected at the SOFA Conventions website. The associated mechanism varies between individuals, as their ear shapes differ. The HRTF describes how a given sound-wave input is filtered by the diffraction and reflection properties of the head and torso before the sound reaches the transduction machinery of the eardrum and inner ear. Biologically, the source-location-specific prefiltering effects of these external structures aid in the neural determination of source location, particularly the determination of the source's elevation. Linear systems analysis defines the transfer function as the complex ratio between the output signal spectrum and the input signal spectrum as a function of frequency. Blauert defined the transfer function as the free-field transfer function. Other terms include free-field to eardrum transfer function and the pressure transformation from the free field to the eardrum.
Less specific descriptions include the pinna transfer function, the outer ear transfer function, the pinna response, or the directional transfer function. The transfer function H(f) of any linear time-invariant system at frequency f is:

H(f) = Output(f) / Input(f)

One method used to obtain the HRTF from a given source location is therefore to measure the head-related impulse response, h(t), at the eardrum for an impulse Δ(t) placed at the source. The HRTF H(f) is the Fourier transform of the HRIR h(t). Even when measured for a "dummy head" of idealized geometry, HRTFs are complicated functions of frequency and the three spatial variables. For distances greater than 1 m from the head, the HRTF can be said to attenuate inversely with range; it is this far-field HRTF that has most often been measured. At closer range, the difference in level observed between the ears can grow quite large in the low-frequency region, within which negligible level differences are observed in the far field. HRTFs are measured in an anechoic chamber to minimize the influence of early reflections and reverberation on the measured response.
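The two equivalent definitions above, H as the spectral ratio Output/Input and H as the Fourier transform of the impulse response, can be checked numerically. This is a toy sketch: the filter taps stand in for a real measured HRIR, and circular convolution is used so that the spectral ratio is exact.

```python
import numpy as np

# Toy "head" filter standing in for a real HRIR (illustrative values only).
hrir = np.array([0.0, 0.9, 0.4, -0.2, 0.05])
N = 64

# Measuring with a unit impulse at the source recovers the HRIR directly:
impulse = np.zeros(N)
impulse[0] = 1.0
measured = np.convolve(impulse, hrir)[:N]

# The HRTF is the Fourier transform of the (zero-padded) HRIR.
H = np.fft.rfft(measured)

# Equivalently, for any input signal, H(f) = Output(f) / Input(f).
x = np.random.default_rng(2).standard_normal(N)
# Circular convolution via the frequency domain, so the ratio below is exact.
y = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(hrir, n=N), n=N)
H_ratio = np.fft.rfft(y) / np.fft.rfft(x)
```

Both routes give the same complex spectrum H up to floating-point error, which is why impulse (or swept-sine) measurements at the eardrum suffice to characterize the whole filter.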
Great auricular nerve
The great auricular nerve originates from the cervical plexus and is composed of branches of spinal nerves C2 and C3. It provides sensory innervation for the skin over the parotid gland and mastoid process, and for both surfaces of the outer ear. It is the largest of the ascending branches of the cervical plexus. It arises from the second and third cervical nerves, winds around the posterior border of the sternocleidomastoideus, and, after perforating the deep fascia, ascends upon that muscle beneath the platysma to the parotid gland, where it divides into an anterior and a posterior branch. The anterior branch is distributed to the skin of the face over the parotid gland and communicates in the substance of the gland with the facial nerve. The posterior branch supplies the skin over the mastoid process and on the back of the auricula, except at its upper part; it communicates with the lesser occipital nerve, the auricular branch of the vagus, and the posterior auricular branch of the facial nerve. This article incorporates text in the public domain from page 926 of the 20th edition of Gray's Anatomy.
Sound localization is a listener's ability to identify the location or origin of a detected sound in direction and distance. The term may also refer to the methods used in acoustical engineering to simulate the placement of an auditory cue in a virtual 3D space. The sound localization mechanisms of the mammalian auditory system have been extensively studied. The auditory system uses several cues for sound source localization, including time and level differences between the two ears, spectral information, timing analysis, correlation analysis, and pattern matching. These cues are also used by other animals, though there may be differences in usage, and there are localization cues which are absent in the human auditory system, such as the effects of ear movements. Animals with the ability to localize sound have a clear evolutionary advantage. Sound is the perceptual result of mechanical vibrations traveling through a medium such as air or water. Through the mechanisms of compression and rarefaction, sound waves travel through the air, bounce off the pinna and concha of the exterior ear, and enter the ear canal.
The sound waves vibrate the tympanic membrane, causing the three bones of the middle ear to vibrate, which sends the energy through the oval window and into the cochlea, where it is changed into a chemical signal by hair cells in the organ of Corti, which synapse onto spiral ganglion fibers that travel through the cochlear nerve into the brain. In vertebrates, inter-aural time differences are known to be calculated in the superior olivary nucleus of the brainstem. According to Jeffress, this calculation relies on delay lines: neurons in the superior olive which accept innervation from each ear via connecting axons of different lengths. Some cells are more directly connected to one ear than to the other, and thus are specific for a particular inter-aural time difference. This theory is equivalent to the mathematical procedure of cross-correlation. However, because Jeffress' theory is unable to account for the precedence effect, in which only the first of multiple identical sounds is used to determine the sounds' location, it cannot fully explain the response.
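The equivalence between the Jeffress delay-line model and cross-correlation can be sketched as follows. This is a toy illustration (white noise with an artificial 5-sample delay, not physiological data): the lag at which the cross-correlation peaks is the estimated inter-aural time difference.

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the inter-aural time difference (in seconds) by cross-correlation.

    This is the mathematical procedure the Jeffress delay-line model is
    equivalent to: the best-matching internal delay marks the ITD.
    A positive result means the left-ear signal lags (source toward the right).
    """
    corr = np.correlate(left, right, mode='full')
    lag = np.argmax(corr) - (len(right) - 1)   # peak position -> lag in samples
    return lag / fs

# Toy stimulus: white noise reaching the right ear first, and the left
# ear 5 samples later, as for a source on the listener's right.
fs = 44100
signal = np.random.default_rng(0).standard_normal(1000)
right = signal
left = np.concatenate([np.zeros(5), signal])[:1000]

itd = estimate_itd(left, right, fs)   # ~5/44100 s, about 113 microseconds
```

Real ITDs for human head sizes span roughly up to several hundred microseconds, so sample-accurate lags at audio rates are sufficient resolution for this kind of estimate.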
Furthermore, a number of recent physiological observations made in the midbrain and brainstem of small mammals have shed considerable doubt on the validity of Jeffress' original ideas. Neurons sensitive to inter-aural level differences (ILDs) are excited by stimulation of one ear and inhibited by stimulation of the other ear, such that the response magnitude of the cell depends on the relative strengths of the two inputs, which in turn depend on the sound intensities at the ears. In the auditory midbrain nucleus, the inferior colliculus, many ILD-sensitive neurons have response functions that decline steeply from maximum to zero spikes as a function of ILD. However, there are also many neurons with much shallower response functions that do not decline to zero spikes. Most mammals are adept at resolving the location of a sound source using interaural time differences and interaural level differences. However, no such time or level differences exist for sounds originating along the circumference of circular conical slices, where the cone's axis lies along the line between the two ears.
Sound waves originating at any point along a given circumference slant height will have ambiguous perceptual coordinates. That is to say, the listener will be incapable of determining whether the sound originated from the back, top, bottom, or anywhere else along the circumference at the base of a cone at any given distance from the ear. The importance of these ambiguities is vanishingly small for sound sources very close to or very far from the subject, but it is the intermediate distances that are most important in terms of fitness. These ambiguities can be removed by tilting the head, which introduces a shift in both the amplitude and phase of the sound waves arriving at each ear. This translates the vertical orientation of the interaural axis horizontally, thereby leveraging the mechanism of localization on the horizontal plane. Moreover, even with no alteration in the angle of the interaural axis, the hearing system can capitalize on interference patterns generated by the pinnae, the torso, and even the temporary re-purposing of a hand as an extension of the pinna.
As with other sensory stimuli, perceptual disambiguation is accomplished through the integration of multiple sensory inputs, especially visual cues. Having localized a sound within the circumference of a circle at some perceived distance, visual cues serve to fix the location of the sound. Moreover, prior knowledge of the location of the sound-generating agent will assist in resolving its current location. Sound localization is the process of determining the location of a sound source. Objectively speaking, the major goal of sound localization is to simulate a specific sound field, including the acoustic sources, the listener, and the media and environments of sound propagation. The brain utilizes subtle differences in intensity and timing cues to allow us to localize sound sources. In this section, to better understand the human auditory mechanism, we will discuss human ear localization theory. Localization can be described in terms of three-dimensional position: the azimuth or horizontal angle, the elevation or vertical angle, and the distance (or, for moving sources, velocity).
The azimuth of a sound is signaled by the difference in arrival times between the ears, by the relative amplitude of high-frequency sounds, by the asymmetrical spectral reflections from various part