Oscillation is the repetitive variation, typically in time, of some measure about a central value or between two or more different states. The term vibration is used to describe mechanical oscillation. Familiar examples of oscillation include a swinging pendulum and alternating current. Oscillations occur not only in mechanical systems but in dynamic systems in every area of science: for example, the beating of the human heart, business cycles in economics, predator–prey population cycles in ecology, geothermal geysers in geology, the vibration of strings in guitars and other string instruments, the periodic firing of nerve cells in the brain, and the periodic swelling of Cepheid variable stars in astronomy. The simplest mechanical oscillating system is a weight attached to a linear spring, subject only to weight and spring tension. Such a system may be approximated on an air table or ice surface. At rest, the system is in an equilibrium state. If the system is displaced from the equilibrium, there is a net restoring force on the mass, tending to bring it back to equilibrium.
However, in moving the mass back to the equilibrium position, it has acquired momentum which keeps it moving beyond that position, establishing a new restoring force in the opposite sense. If a constant force such as gravity is added to the system, the point of equilibrium is shifted. The time taken for one oscillation to occur is referred to as the oscillatory period. Systems where the restoring force on a body is directly proportional to its displacement, such as the spring–mass system, are described mathematically by the simple harmonic oscillator, and the resulting regular periodic motion is known as simple harmonic motion. In the spring–mass system, oscillations occur because, at the static equilibrium displacement, the mass has kinetic energy, which is converted into potential energy stored in the spring at the extremes of its path. The spring–mass system illustrates some common features of oscillation, namely the existence of an equilibrium and the presence of a restoring force which grows stronger the further the system deviates from equilibrium.
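The restoring-force-and-momentum cycle just described can be sketched numerically. The following is a minimal illustration, not taken from the text: it integrates m·x'' = −k·x with a semi-implicit Euler step (the mass, spring constant, and step size are arbitrary choices) and estimates the period, which for simple harmonic motion should come out near 2π√(m/k).

```python
import math

def estimate_period(m=1.0, k=4.0, x0=1.0, dt=1e-4, max_steps=200_000):
    """Integrate m*x'' = -k*x from rest at x0 and estimate the period.

    Starting from rest at maximum displacement, the velocity first turns
    from negative to non-negative after half a period, so the period is
    twice the time of that crossing."""
    x, v = x0, 0.0
    t = 0.0
    prev_v = v
    for _ in range(max_steps):
        v += -(k / m) * x * dt   # restoring force proportional to displacement
        x += v * dt
        t += dt
        if prev_v < 0.0 <= v:    # velocity turned from negative to non-negative
            return 2.0 * t       # half a period has elapsed
        prev_v = v
    raise RuntimeError("no crossing found; increase max_steps")
```

With m = 1 and k = 4 the theoretical period is 2π√(1/4) = π, and the numerical estimate lands within a small fraction of a percent of that.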
All real-world oscillator systems are thermodynamically irreversible: dissipative processes such as friction or electrical resistance continually convert some of the energy stored in the oscillator into heat in the environment. This is called damping. Thus, oscillations tend to decay with time unless there is some net source of energy into the system; the simplest description of this decay process is the exponential decay of the damped harmonic oscillator. In addition, an oscillating system may be subject to some external force, as when an AC circuit is connected to an outside power source. In this case the oscillation is said to be driven. Some systems can also be excited by energy transfer from the environment, typically when the system is embedded in some fluid flow. For example, the phenomenon of flutter in aerodynamics occurs when an arbitrarily small displacement of an aircraft wing results in an increase in the angle of attack of the wing on the air flow and a consequent increase in lift coefficient, leading to a still greater displacement.
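The decay of a damped oscillator can be illustrated with a short numerical sketch (the parameter values here are arbitrary, chosen only for illustration). Adding a drag force −c·x' to the spring–mass equation makes successive displacement maxima shrink by a constant ratio, i.e. the envelope decays exponentially.

```python
def damped_peaks(m=1.0, k=4.0, c=0.2, x0=1.0, dt=1e-4, t_max=20.0):
    """Integrate m*x'' = -k*x - c*x' and return successive displacement maxima."""
    x, v = x0, 0.0
    t = 0.0
    peaks = []
    rising = False
    while t < t_max:
        a = (-k * x - c * v) / m    # restoring force plus dissipative drag
        v += a * dt
        x += v * dt
        t += dt
        if rising and v <= 0.0:     # velocity turned downward: a local maximum
            peaks.append(x)
        rising = v > 0.0
    return peaks
```

With these values each maximum is about 73% of the previous one, matching the theoretical factor exp(−c·T_d/(2m)) over one damped period T_d; the constancy of the ratio is what makes the decay exponential.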
At sufficiently large displacements, the stiffness of the wing dominates, providing the restoring force that enables an oscillation. The harmonic oscillator and the systems it models have a single degree of freedom. More complicated systems have more degrees of freedom, for example two masses connected by three springs. In such cases, the behavior of each variable influences that of the others; this leads to a coupling of the oscillations of the individual degrees of freedom. For example, two pendulum clocks mounted on a common wall will tend to synchronise; this phenomenon was first observed by Christiaan Huygens in 1665. The apparent motion of the compound oscillations appears complicated, but a more economical, computationally simpler and conceptually deeper description is given by resolving the motion into normal modes. A special case is the coupled oscillator, where energy alternates between two forms of oscillation. A well-known example is the Wilberforce pendulum, where the oscillation alternates between the elongation of a vertical spring and the rotation of an object at the end of that spring.
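As a concrete, illustrative instance of normal modes (not taken from the text), consider two equal masses joined to each other and to fixed walls by three equal springs. The coupled equations of motion reduce to a 2×2 eigenvalue problem whose eigenvalues are the squared mode frequencies: an in-phase mode at √(k/m) and an out-of-phase mode at √(3k/m).

```python
import math

def normal_mode_frequencies(m=1.0, k=1.0):
    """Mode frequencies for wall-spring-mass-spring-mass-spring-wall.

    Equations of motion:
        m*x1'' = -2k*x1 + k*x2
        m*x2'' =  k*x1 - 2k*x2
    The dynamical matrix [[2k/m, -k/m], [-k/m, 2k/m]] is symmetric with
    equal diagonal entries a and off-diagonal entries b, so its
    eigenvalues (the omega^2 of the two modes) are a - |b| and a + |b|."""
    a, b = 2.0 * k / m, -k / m
    low, high = a - abs(b), a + abs(b)
    return math.sqrt(low), math.sqrt(high)
```

The low mode has both masses moving together (the middle spring never stretches); the high mode has them moving oppositely, so the middle spring adds stiffness.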
As the number of degrees of freedom becomes arbitrarily large, a system approaches a continuum; such systems have an infinite number of normal modes, and their oscillations occur in the form of waves that can characteristically propagate. The mathematics of oscillation deals with the quantification of the amount that a sequence or function tends to move between extremes. There are several related notions: oscillation of a sequence of real numbers, oscillation of a real-valued function at a point, and oscillation of a function on an interval. See also: crystal oscillator, neutron star oscillations, the cyclic model, neutral particle oscillation (e.g. neutrino oscillation), the quantum harmonic oscillator, and cellular automaton oscillators.
Numerical analysis is the study of algorithms that use numerical approximation for the problems of mathematical analysis. Numerical analysis finds application in all fields of engineering and the physical sciences, and in the 21st century the life sciences, social sciences, medicine and the arts have also adopted elements of scientific computation. As an aspect of mathematics and computer science that generates and implements algorithms, the field has grown along with computing power: the revolution in computing has raised the use of realistic mathematical models in science and engineering, and complex numerical analysis is required to provide solutions to these more involved models of the world. Ordinary differential equations appear in celestial mechanics, for example. Before the advent of modern computers, numerical methods often depended on hand interpolation in large printed tables. Since the mid-20th century, computers calculate the required functions instead; the same interpolation formulas nevertheless continue to be used as part of the software algorithms for solving differential equations.
One of the earliest mathematical writings is a Babylonian tablet from the Yale Babylonian Collection, which gives a sexagesimal numerical approximation of the square root of 2, the length of the diagonal in a unit square. Being able to compute the sides of a triangle is important, for instance, in astronomy and construction. Numerical analysis continues this long tradition of practical mathematical calculations. Much like the Babylonian approximation of the square root of 2, modern numerical analysis does not seek exact answers, because exact answers are often impossible to obtain in practice. Instead, much of numerical analysis is concerned with obtaining approximate solutions while maintaining reasonable bounds on errors. The overall goal of the field of numerical analysis is the design and analysis of techniques to give approximate but accurate solutions to hard problems, the variety of which is suggested by the following: Advanced numerical methods are essential in making numerical weather prediction feasible.
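The Babylonian approach can be reproduced in a few lines. The sketch below uses Heron's iteration (repeatedly averaging a guess x with 2/x), which is widely believed to be close in spirit to the method behind the tablet value; the sexagesimal digits 1;24,51,10 are the ones recorded on the tablet YBC 7289.

```python
def heron_sqrt(s, x0=1.0, iterations=6):
    """Heron's (Babylonian) method: each step averages x with s/x.

    The iteration converges quadratically to sqrt(s)."""
    x = x0
    for _ in range(iterations):
        x = 0.5 * (x + s / x)
    return x

# sqrt(2) in sexagesimal as recorded on the tablet: 1;24,51,10
tablet_value = 1 + 24 / 60 + 51 / 60**2 + 10 / 60**3
```

The tablet's value agrees with √2 to about six decimal places, remarkable for hand computation.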
Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of ordinary differential equations. Car companies can improve the crash safety of their vehicles by using computer simulations of car crashes; such simulations consist of solving partial differential equations numerically. Hedge funds use tools from all fields of numerical analysis to attempt to calculate the value of stocks and derivatives more precisely than other market participants. Airlines use sophisticated optimization algorithms to decide ticket prices, crew assignments and fuel needs; such algorithms were developed within the overlapping field of operations research. Insurance companies use numerical programs for actuarial analysis. The rest of this section outlines several important themes of numerical analysis. The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was in use more than 2000 years ago. Many great mathematicians of the past were preoccupied by numerical analysis, as is obvious from the names of important algorithms like Newton's method, the Lagrange interpolation polynomial, Gaussian elimination, or Euler's method.
To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients. Using these tables, calculated out to 16 decimal places or more for some functions, one could look up values to plug into the formulas given and achieve good numerical estimates of some functions. The canonical work in the field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus page book of a very large number of commonly used formulas and functions and their values at many points. The function values are no longer very useful when a computer is available, but the large listing of formulas can still be handy. The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s; it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done.
Direct methods compute the solution to a problem in a finite number of steps. These methods would give the precise answer if they were performed in infinite-precision arithmetic. Examples include Gaussian elimination, the QR factorization method for solving systems of linear equations, and the simplex method of linear programming. In practice, finite precision is used and the result is an approximation of the true solution. In contrast to direct methods, iterative methods are not expected to terminate in a finite number of steps. Starting from an initial guess, iterative methods form successive approximations that converge to the exact solution only in the limit. A convergence test, often involving the residual, is specified in order to decide when a sufficiently accurate solution has been found. Even using infinite-precision arithmetic, these methods would not, in general, reach the solution within a finite number of steps. Examples include Newton's method, the bisection method, and Jacobi iteration. In computational matrix algebra, iterative methods are generally needed for large problems.
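The contrast between the two kinds of method can be made concrete with a small sketch of Jacobi iteration, one of the iterative methods named above. The 2×2 system below is an arbitrary, diagonally dominant example; the loop stops when the residual b − Ax falls below a tolerance, exactly the kind of convergence test described in the text.

```python
def jacobi(A, b, tol=1e-10, max_iter=10_000):
    """Jacobi iteration for A x = b with a residual-based stopping test.

    Convergence is guaranteed when A is strictly diagonally dominant."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        # each component is updated using only the previous iterate
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        # max-norm of the residual b - A x_new
        residual = max(abs(b[i] - sum(A[i][j] * x_new[j] for j in range(n)))
                       for i in range(n))
        x = x_new
        if residual < tol:
            break
    return x
```

A direct method like Gaussian elimination would solve the same system in a fixed number of arithmetic operations; the iterative method instead refines a guess until the residual test is satisfied.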
ArXiv is a repository of electronic preprints approved for posting after moderation, but not full peer review. It consists of scientific papers in the fields of mathematics, physics, astronomy, electrical engineering, computer science, quantitative biology, statistics, mathematical finance and economics, which can be accessed online. In many fields of mathematics and physics, almost all scientific papers are self-archived on the arXiv repository. Begun on August 14, 1991, arXiv.org passed the half-million-article milestone on October 3, 2008, and had hit a million by the end of 2014. By October 2016 the submission rate had grown to more than 10,000 per month. ArXiv was made possible by the compact TeX file format, which allowed scientific papers to be transmitted over the Internet and rendered client-side. Around 1990, Joanne Cohn began emailing physics preprints to colleagues as TeX files, but the number of papers being sent soon filled mailboxes to capacity. Paul Ginsparg recognized the need for central storage, and in August 1991 he created a central repository mailbox stored at the Los Alamos National Laboratory which could be accessed from any computer.
Additional modes of access were soon added: FTP in 1991, Gopher in 1992, and the World Wide Web in 1993. The term e-print was adopted to describe the articles. The repository began as a physics archive, called the LANL preprint archive, but soon expanded to include astronomy, mathematics, computer science, quantitative biology and, most recently, statistics. Its original domain name was xxx.lanl.gov. Due to LANL's lack of interest in the expanding technology, in 2001 Ginsparg changed institutions to Cornell University and changed the name of the repository to arXiv.org. It is now hosted principally by Cornell, with eight mirrors around the world. Its existence was one of the precipitating factors that led to the current movement in scientific publishing known as open access. Mathematicians and scientists regularly upload their papers to arXiv.org for worldwide access and sometimes for review before they are published in peer-reviewed journals. Ginsparg was awarded a MacArthur Fellowship in 2002 for his establishment of arXiv. The annual budget for arXiv was approximately $826,000 for 2013 to 2017, funded jointly by Cornell University Library, the Simons Foundation and annual fee income from member institutions.
This model arose in 2010, when Cornell sought to broaden the financial funding of the project by asking institutions to make annual voluntary contributions based on the amount of download usage by each institution. Each member institution pledges a five-year funding commitment to support arXiv. Based on institutional usage ranking, the annual fees are set in four tiers from $1,000 to $4,400. Cornell's goal is to raise at least $504,000 per year through membership fees generated by 220 institutions. In September 2011, Cornell University Library took overall administrative and financial responsibility for arXiv's operation and development. Ginsparg was quoted in the Chronicle of Higher Education as saying it "was supposed to be a three-hour tour, not a life sentence". However, Ginsparg remains on the arXiv Scientific Advisory Board and on the arXiv Physics Advisory Committee. Although arXiv is not peer reviewed, a collection of moderators for each area review the submissions; the lists of moderators for many sections of arXiv are publicly available, but moderators for most of the physics sections remain unlisted.
Additionally, an "endorsement" system was introduced in 2004 as part of an effort to ensure content is relevant and of interest to current research in the specified disciplines. Under the system, for categories that use it, an author must be endorsed by an established arXiv author before being allowed to submit papers to those categories. Endorsers are not asked to review the paper for errors, but only to check whether the paper is appropriate for the intended subject area. New authors from recognized academic institutions receive automatic endorsement, which in practice means that they do not need to deal with the endorsement system at all. However, the endorsement system has attracted criticism for allegedly restricting scientific inquiry. A majority of the e-prints are also submitted to journals for publication, but some work, including some influential papers, remains available only as e-prints and is never published in a peer-reviewed journal. A well-known example of the latter is an outline of a proof of Thurston's geometrization conjecture, including the Poincaré conjecture as a particular case, uploaded by Grigori Perelman in November 2002.
Perelman appears content to forgo the traditional peer-reviewed journal process, stating: "If anybody is interested in my way of solving the problem, it's all there – let them go and read about it". Despite this non-traditional method of publication, other mathematicians recognized this work by offering the Fields Medal and Clay Mathematics Millennium Prize to Perelman, both of which he refused. Papers can be submitted in any of several formats, including LaTeX and PDF printed from a word processor other than TeX or LaTeX. The submission is rejected by the arXiv software if generating the final PDF file fails, if any image file is too large, or if the total size of the submission is too large. ArXiv now allows one to store and modify an incomplete submission and only finalize the submission when ready; the time stamp on the article is set when the submission is finalized. The standard access route is through the arXiv.org website or one of several mirrors.
Quantum harmonic oscillator
The quantum harmonic oscillator is the quantum-mechanical analog of the classical harmonic oscillator. Because an arbitrary potential can be approximated as a harmonic potential in the vicinity of a stable equilibrium point, it is one of the most important model systems in quantum mechanics. Furthermore, it is one of the few quantum-mechanical systems for which an exact, analytical solution is known. The Hamiltonian of the particle is

\hat{H} = \frac{\hat{p}^2}{2m} + \frac{1}{2} k \hat{x}^2 = \frac{\hat{p}^2}{2m} + \frac{1}{2} m \omega^2 \hat{x}^2,

where m is the particle's mass, k is the force constant, ω = √(k/m) is the angular frequency of the oscillator, x̂ is the position operator, and p̂ is the momentum operator. The first term in the Hamiltonian represents the kinetic energy of the particle, and the second term represents its potential energy, as in Hooke's law. One may write the time-independent Schrödinger equation

\hat{H} \, |\psi\rangle = E \, |\psi\rangle,

where E denotes a to-be-determined real number that will specify a time-independent energy level, or eigenvalue, and the solution |ψ⟩ denotes that level's energy eigenstate.
One may solve the differential equation representing this eigenvalue problem in the coordinate basis, for the wave function ⟨x|ψ⟩ = ψ(x), using a spectral method. It turns out that there is a family of solutions. In this basis, they amount to Hermite functions,

\psi_n(x) = \frac{1}{\sqrt{2^n \, n!}} \left( \frac{m\omega}{\pi\hbar} \right)^{1/4} e^{-\frac{m\omega x^2}{2\hbar}} \, H_n\!\left( \sqrt{\frac{m\omega}{\hbar}} \, x \right), \qquad n = 0, 1, 2, \ldots

The functions H_n are the physicists' Hermite polynomials,

H_n(z) = (-1)^n \, e^{z^2} \frac{d^n}{dz^n}\!\left( e^{-z^2} \right).

The corresponding energy levels are

E_n = \hbar\omega \left( n + \tfrac{1}{2} \right).

This energy spectrum is noteworthy for three reasons. First, the energies are quantized. Second, these discrete energy levels are equally spaced, unlike in the Bohr model of the atom or the particle in a box. Third, the lowest achievable energy is not equal to the minimum of the potential well, but ħω/2 above it; this is called the zero-point energy. Because of the zero-point energy, the position and momentum of the oscillator in the ground state are not fixed, but have a small range of variance, in accordance with the Heisenberg uncertainty principle. The ground state probability density is concentrated at the origin, which means the particle spends most of its time at the bottom of the potential well, as one would expect for a state with little energy.
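The three noteworthy features of the spectrum follow directly from the formula for the energy levels; written out:

```latex
E_n = \hbar\omega\left(n + \tfrac{1}{2}\right), \quad n = 0, 1, 2, \ldots
\qquad\Longrightarrow\qquad
\begin{aligned}
E_{n+1} - E_n &= \hbar\omega && \text{(equal spacing, independent of } n\text{)},\\
E_0 &= \tfrac{1}{2}\hbar\omega > 0 && \text{(zero-point energy above the well minimum)}.
\end{aligned}
```

Quantization itself is visible in the discrete index n: no energies between consecutive levels are allowed.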
As the energy increases, the probability density peaks at the classical "turning points", where the state's energy coincides with the potential energy. This is consistent with the classical harmonic oscillator, in which the particle spends more of its time near the turning points, where it is moving the slowest; the correspondence principle is thus satisfied. Moreover, special nondispersive wave packets with minimum uncertainty, called coherent states, oscillate like classical objects.
Diatomic molecules are molecules composed of only two atoms, of the same or different chemical elements. The prefix di- is of Greek origin, meaning "two". If a diatomic molecule consists of two atoms of the same element, such as hydrogen or oxygen, it is said to be homonuclear. Otherwise, if a diatomic molecule consists of two different atoms, such as carbon monoxide or nitric oxide, the molecule is said to be heteronuclear. The only chemical elements that form stable homonuclear diatomic molecules at standard temperature and pressure (STP) are the gases hydrogen, nitrogen, oxygen, fluorine and chlorine. The noble gases are also gases at STP, but they are monatomic. The homonuclear diatomic gases and noble gases together are called "elemental gases" or "molecular gases", to distinguish them from other gases that are chemical compounds. At slightly elevated temperatures, the halogens bromine and iodine also form diatomic gases. All halogens have been observed as diatomic molecules, except for astatine, which is uncertain. The mnemonics BrINClHOF, pronounced "Brinklehof", HONClBrIF, pronounced "Honkelbrif", and HOFBrINCl have been coined to aid recall of the list of diatomic elements.
Other elements form diatomic molecules when evaporated, but these diatomic species repolymerize when cooled. Heating elemental phosphorus gives diphosphorus, P2. Sulfur vapor is mostly disulfur, S2. Dilithium is known in the gas phase, as is dirubidium. Ditungsten and dimolybdenum form with sextuple bonds in the gas phase. The bond in a homonuclear diatomic molecule is non-polar. All other diatomic molecules are chemical compounds of two different elements. Many elements can combine to form heteronuclear diatomic molecules, depending on temperature and pressure; some examples include the gases carbon monoxide, nitric oxide and hydrogen chloride. Many 1:1 binary compounds are not normally considered diatomic because they are polymeric at room temperature, but they form diatomic molecules when evaporated, for example gaseous MgO, SiO and many others. Hundreds of diatomic molecules have been identified in the environment of the Earth, in the laboratory, and in interstellar space. About 99% of the Earth's atmosphere is composed of two species of diatomic molecules: nitrogen and oxygen.
The natural abundance of hydrogen in the Earth's atmosphere is only of the order of parts per million, but H2 is the most abundant diatomic molecule in the universe. The interstellar medium is indeed dominated by hydrogen. Diatomic elements played an important role in the elucidation of the concepts of element and molecule in the 19th century, because some of the most common elements, such as hydrogen and nitrogen, occur as diatomic molecules. John Dalton's original atomic hypothesis assumed that all elements were monatomic and that the atoms in compounds would have the simplest atomic ratios with respect to one another. For example, Dalton assumed water's formula to be HO, giving the atomic weight of oxygen as eight times that of hydrogen, instead of the modern value of about 16. As a consequence, confusion existed regarding atomic weights and molecular formulas for about half a century. As early as 1805, Gay-Lussac and von Humboldt showed that water is formed of two volumes of hydrogen and one volume of oxygen, and by 1811 Amedeo Avogadro had arrived at the correct interpretation of water's composition, based on what is now called Avogadro's law and the assumption of diatomic elemental molecules.
However, these results were mostly ignored until 1860, partly due to the belief that atoms of one element would have no chemical affinity toward atoms of the same element, and partly due to apparent exceptions to Avogadro's law that were not explained until later in terms of dissociating molecules. At the 1860 Karlsruhe Congress on atomic weights, Cannizzaro resurrected Avogadro's ideas and used them to produce a consistent table of atomic weights, which mostly agree with modern values. These weights were an important prerequisite for the discovery of the periodic law by Dmitri Mendeleev and Lothar Meyer. Diatomic molecules are normally in their lowest or ground state, which conventionally is known as the X state. When a gas of diatomic molecules is bombarded by energetic electrons, some of the molecules may be excited to higher electronic states, as occurs, for example, in the natural aurora. Such excitation can also occur when the gas absorbs light or other electromagnetic radiation. The excited states are unstable and naturally relax back to the ground state.
Over various short time scales after the excitation, transitions occur from higher to lower electronic states and ultimately to the ground state, and in each transition a photon is emitted. This emission is known as fluorescence. Successively higher electronic states are conventionally named A, B, C, etc. The excitation energy must be greater than or equal to the energy of the electronic state in order for the excitation to occur. In quantum theory, an electronic state of a diatomic molecule is represented by the molecular term symbol ^{2S+1}Λ, where S is the total electronic spin quantum number and Λ is the projection of the orbital angular momentum along the internuclear axis.
In mathematics, a parabola is a plane curve that is mirror-symmetrical and approximately U-shaped. It fits several superficially different mathematical descriptions, which can all be proved to define the same curves. One description of a parabola involves a point (the focus) and a line (the directrix); the focus does not lie on the directrix. The parabola is the locus of points in that plane that are equidistant from both the directrix and the focus. Another description of a parabola is as a conic section, created from the intersection of a right circular conical surface and a plane parallel to another plane that is tangential to the conical surface. The line perpendicular to the directrix and passing through the focus is called the "axis of symmetry". The point where the parabola intersects its axis of symmetry is called the "vertex" and is the point where the parabola is most sharply curved. The distance between the vertex and the focus, measured along the axis of symmetry, is the "focal length". The "latus rectum" is the chord of the parabola that is parallel to the directrix and passes through the focus.
Parabolas can open up, down, left, right, or in some other arbitrary direction. Any parabola can be repositioned and rescaled to fit exactly on any other parabola; that is, all parabolas are geometrically similar. Parabolas have the property that, if they are made of material that reflects light, then light which travels parallel to the axis of symmetry of a parabola and strikes its concave side is reflected to its focus, regardless of where on the parabola the reflection occurs. Conversely, light that originates from a point source at the focus is reflected into a parallel beam, leaving the parabola parallel to the axis of symmetry. The same effects occur with sound and other forms of energy. This reflective property is the basis of many practical uses of parabolas. The parabola has many important applications, from a parabolic antenna or parabolic microphone to automobile headlight reflectors and the design of ballistic missiles. They are frequently used in physics and many other areas. The earliest known work on conic sections was by Menaechmus in the fourth century BC.
He discovered a way to solve the problem of doubling the cube using parabolas. The area enclosed by a parabola and a line segment, the so-called "parabola segment", was computed by Archimedes via the method of exhaustion in the third century BC, in his The Quadrature of the Parabola. The name "parabola" is due to Apollonius. It means "application", referring to the "application of areas" concept that has a connection with this curve, as Apollonius had proved. The focus–directrix property of the parabola and other conic sections is due to Pappus. Galileo showed that the path of a projectile follows a parabola, a consequence of uniform acceleration due to gravity. The idea that a parabolic reflector could produce an image was already well known before the invention of the reflecting telescope. Designs were proposed in the early to mid-seventeenth century by many mathematicians, including René Descartes, Marin Mersenne and James Gregory. When Isaac Newton built the first reflecting telescope in 1668, he skipped using a parabolic mirror because of the difficulty of fabrication, opting for a spherical mirror.
Parabolic mirrors are used in most modern reflecting telescopes and in satellite dishes and radar receivers. A parabola can be defined geometrically as a set of points in the Euclidean plane: a parabola is a set of points such that for any point P of the set, the distance |PF| to a fixed point F, the focus, is equal to the distance |Pl| to a fixed line l, the directrix. The midpoint V of the perpendicular from the focus F onto the directrix l is called the vertex, and the line FV is the axis of symmetry of the parabola. If one introduces Cartesian coordinates such that F = (0, f), f > 0, and the directrix has the equation y = −f, one obtains for a point P = (x, y) from |PF|² = |Pl|² the equation

x² + (y − f)² = (y + f)².

Solving for y yields

y = x²/(4f).

This parabola is U-shaped, opening upward. The horizontal chord through the focus is called the latus rectum; one half of it is the semi-latus rectum. The latus rectum is parallel to the directrix.
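The focus–directrix property derived above is easy to check numerically. This short sketch (the function names are ad hoc) samples points on y = x²/(4f) and confirms that each is equidistant from the focus (0, f) and the directrix y = −f.

```python
import math

def parabola_point(x, f):
    """Point on the parabola y = x^2 / (4f) with focal length f > 0."""
    return (x, x * x / (4.0 * f))

def distance_to_focus(point, f):
    x, y = point
    return math.hypot(x, y - f)   # the focus sits at (0, f)

def distance_to_directrix(point, f):
    _, y = point
    return abs(y + f)             # the directrix is the horizontal line y = -f
```

The two distances agree exactly (up to floating-point rounding) for every x, which is precisely the locus definition of the parabola.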
Musical acoustics or music acoustics is a branch of acoustics concerned with researching and describing the physics of music – how sounds are employed to make music. Examples of areas of study are the function of musical instruments, the human voice, computer analysis of melody, and the clinical use of music in music therapy. Topics of study include the physics of musical instruments, the frequency range of music, Fourier analysis, computer analysis of musical structure, the synthesis of musical sounds, and music cognition based on physics. Whenever two different pitches are played at the same time, their sound waves interact with each other – the highs and lows in the air pressure reinforce each other to produce a different sound wave. Any repeating sound wave that is not a sine wave can be modeled by many different sine waves of the appropriate frequencies and amplitudes. In humans the hearing apparatus can isolate these tones and hear them distinctly. When two or more tones are played at once, the variation of air pressure at the ear "contains" the pitches of each, and the ear and/or brain isolate and decode them into distinct tones.
When the original sound sources are perfectly periodic, the note consists of several related sine waves called the fundamental and the harmonics, partials, or overtones. The sounds have harmonic frequency spectra. The lowest frequency present is the fundamental, which is the frequency at which the entire wave vibrates. The overtones vibrate faster than the fundamental, but must vibrate at integer multiples of the fundamental frequency for the total wave to be exactly the same each cycle. Real instruments are close to periodic, but the frequencies of the overtones are slightly imperfect, so the shape of the wave changes slightly over time. Variations in air pressure against the ear drum, and the subsequent physical and neurological processing and interpretation, give rise to the subjective experience called sound. Most sound that people recognize as musical is dominated by periodic or regular vibrations rather than non-periodic ones; the transmission of these variations through air is via a sound wave. In a very simple case, the sound of a sine wave, considered the most basic model of a sound waveform, causes the air pressure to increase and decrease in a regular fashion and is heard as a pure tone.
Pure tones can be produced by tuning forks or whistling. The rate at which the air pressure oscillates is the frequency of the tone, measured in oscillations per second, called hertz. Frequency is the primary determinant of the perceived pitch. The frequency of musical instruments can change with altitude due to changes in air pressure. [A chart of note frequencies originally appeared here; it displayed pitches only down to C0, though some pipe organs, such as the Boardwalk Hall Auditorium Organ, extend down to C−1, and the fundamental frequency of the subcontrabass tuba is B♭−1.] The fundamental is the lowest frequency in the series. Overtones are the other sinusoidal components present at frequencies above the fundamental. All of the frequency components that make up the total waveform, including the fundamental and the overtones, are called partials. Together they form the harmonic series. Overtones that are perfect integer multiples of the fundamental are called harmonics. When an overtone is near to being harmonic, but not exact, it is sometimes called a harmonic partial, although such overtones are often referred to simply as harmonics.
Sometimes overtones are created that are not anywhere near a harmonic; these are just called partials or inharmonic overtones. The fundamental frequency is considered the first partial, and the numbering of the partials and harmonics is then usually the same. But if there are inharmonic partials, the numbering no longer coincides. Overtones are numbered as they appear above the fundamental, so strictly speaking, the first overtone is the second partial. As this can result in confusion, usually only harmonics are referred to by their numbers, and overtones and partials are described by their relationships to those harmonics. When a periodic wave is composed of a fundamental and only odd harmonics, the summed wave is half-wave symmetric. If the wave has any even harmonics, it is asymmetrical. Conversely, a system that changes the shape of the wave creates additional harmonics; this is called a non-linear system. If it affects the wave symmetrically, only odd harmonics are produced. If it affects the wave asymmetrically, at least one even harmonic is produced. If two notes are played with frequency ratios that are simple fractions, the composite wave is still periodic, with a short period, and the combination sounds consonant.
For instance, a note vibrating at 200 Hz and a note vibrating at 300 Hz add together to make a wave that repeats at 100 Hz: every 1/100 of a second, the 300 Hz wave repeats three times and the 200 Hz wave repeats twice. Note that the total wave repeats at 100 Hz, but there is no actual 100 Hz sinusoidal component. Additionally, the two notes have many of the same partials. For instance, a note with a fundamental frequency of 200 Hz has harmonics at 400, 600, 800, 1000, 1200, … Hz, while a note with a fundamental frequency of 300 Hz has harmonics at 600, 900, 1200, 1500, … Hz. The two notes share the partials at 600 Hz, 1200 Hz, and every other multiple of 600 Hz, which contributes to the consonance of the interval.
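The 2:3 example can be checked directly. The sketch below (an illustration, not part of the original text) sums unit-amplitude sines at 200 Hz and 300 Hz and confirms that the composite repeats every 1/100 of a second, even though neither component is a 100 Hz sinusoid.

```python
import math

def composite(t, f1=200.0, f2=300.0):
    """Sum of two sine partials with a 2:3 frequency ratio.

    The sum is periodic with period 1 / gcd(f1, f2) = 1/100 s: in that
    interval the 200 Hz wave completes two cycles and the 300 Hz wave
    completes three."""
    return math.sin(2.0 * math.pi * f1 * t) + math.sin(2.0 * math.pi * f2 * t)
```

Shifting the argument by 1/100 s leaves the sum unchanged at every sample point, which is exactly the periodicity described above.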