The Doppler effect is the change in frequency or wavelength of a wave in relation to an observer who is moving relative to the wave source. It is named after the Austrian physicist Christian Doppler, who described the phenomenon in 1842. A common example of Doppler shift is the change of pitch heard when a vehicle sounding a horn approaches and recedes from an observer. Compared to the emitted frequency, the received frequency is higher during the approach, identical at the instant of passing by, and lower during the recession. The reason for the Doppler effect is that when the source of the waves is moving towards the observer, each successive wave crest is emitted from a position closer to the observer than the crest of the previous wave. Therefore, each wave takes slightly less time to reach the observer than the previous wave. Hence, the time between the arrivals of successive wave crests at the observer is reduced, causing an increase in the frequency. While they are traveling, the distance between successive wave fronts is reduced, so the waves "bunch together".
Conversely, if the source of waves is moving away from the observer, each wave is emitted from a position farther from the observer than the previous wave, so the arrival time between successive waves is increased, reducing the frequency. The distance between successive wave fronts is increased, so the waves "spread out". For waves that propagate in a medium, such as sound waves, the velocities of the observer and of the source are measured relative to the medium in which the waves are transmitted; the total Doppler effect may therefore result from motion of the source, motion of the observer, or motion of the medium. Each of these effects is analyzed separately. For waves which do not require a medium, such as light or gravity in general relativity, only the relative difference in velocity between the observer and the source needs to be considered. Doppler first proposed this effect in 1842 in his treatise "Über das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels"; the hypothesis was tested for sound waves by Buys Ballot in 1845.
He confirmed that the sound's pitch was higher than the emitted frequency when the sound source approached him, and lower than the emitted frequency when the sound source receded from him. Hippolyte Fizeau independently discovered the same phenomenon for electromagnetic waves in 1848. In Britain, John Scott Russell made an experimental study of the Doppler effect. In classical physics, where the speeds of the source and the receiver relative to the medium are lower than the velocity of waves in the medium, the relationship between observed frequency f and emitted frequency f_0 is given by:

f = ((c + v_r) / (c + v_s)) f_0

where c is the velocity of waves in the medium, v_r is the velocity of the receiver relative to the medium (positive if the receiver is moving towards the source, negative in the other direction), and v_s is the velocity of the source relative to the medium (positive if the source is moving away from the receiver, negative in the other direction). An equivalent formula, easier to remember, is:

f / v_wr = f_0 / v_ws = 1 / λ

where v_wr is the wave's velocity relative to the receiver, v_ws is the wave's velocity relative to the source, and λ is the wavelength. The above formula assumes that the source is either directly approaching or receding from the observer. If the source approaches the observer at an angle, the observed frequency that is first heard is higher than the object's emitted frequency.
Thereafter, there is a monotonic decrease in the observed frequency as the source gets closer to the observer, through equality when it is coming from a direction perpendicular to the relative motion, and a continued monotonic decrease as it recedes from the observer. When the observer is close to the path of the object, the transition from high to low frequency is abrupt; when the observer is far from the path of the object, the transition from high to low frequency is gradual. If the speeds v_s and v_r are small compared to the speed of the wave, the relationship between observed frequency f and emitted frequency f_0 is approximately

f ≈ (1 + Δv/c) f_0,  that is,  Δf = (Δv/c) f_0

where Δf = f − f_0 and Δv is the velocity of the receiver relative to the source (positive when the source and the receiver are moving towards each other).
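As an illustration of the classical formula, the following sketch evaluates the observed frequency for a source moving through a medium. The horn frequency, vehicle speed, and speed of sound are invented example values, and the sign convention follows the one described above.

```python
def doppler_observed(f0, c, v_receiver=0.0, v_source=0.0):
    """Classical Doppler shift for waves in a medium.

    Sign convention: v_receiver is positive when the receiver moves
    towards the source; v_source is positive when the source moves
    away from the receiver. Speeds are assumed well below c.
    """
    return f0 * (c + v_receiver) / (c + v_source)

# A 440 Hz horn on a vehicle moving at 25 m/s past a stationary listener
# (speed of sound taken as 343 m/s in air; values are illustrative):
f_approach = doppler_observed(440.0, 343.0, v_source=-25.0)  # source approaching
f_recede = doppler_observed(440.0, 343.0, v_source=+25.0)    # source receding
print(round(f_approach, 1), round(f_recede, 1))  # higher on approach, lower receding
```

For small speeds this reduces to the approximation Δf ≈ (Δv/c) f_0 discussed above.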
A variable star is a star whose brightness as seen from Earth fluctuates. This variation may be caused by a change in emitted light or by something blocking the light, so variable stars are classified as either intrinsic variables, whose luminosity actually changes, or extrinsic variables, whose apparent changes in brightness are due to changes in the amount of their light that can reach Earth. Many, possibly most, stars have at least some variation in luminosity: the energy output of our Sun, for example, varies by about 0.1% over an 11-year solar cycle. An ancient Egyptian calendar of lucky and unlucky days composed some 3,200 years ago may be the oldest preserved historical document of the discovery of a variable star, the eclipsing binary Algol. In modern astronomy, the first variable star was identified in 1638, when Johannes Holwarda noticed that Omicron Ceti pulsated in a cycle taking 11 months; this discovery, combined with supernovae observed in 1572 and 1604, proved that the starry sky was not eternally invariable as Aristotle and other ancient philosophers had taught.
In this way, the discovery of variable stars contributed to the astronomical revolution of the sixteenth and early seventeenth centuries. The second variable star to be described was the eclipsing variable Algol, by Geminiano Montanari in 1669. Chi Cygni was identified in 1686 by G. Kirch, and R Hydrae in 1704 by G. D. Maraldi. By 1786 ten variable stars were known. John Goodricke discovered Beta Lyrae. Since 1850 the number of known variable stars has increased rapidly, especially after 1890, when it became possible to identify variable stars by means of photography. The latest edition of the General Catalogue of Variable Stars lists more than 46,000 variable stars in the Milky Way, as well as 10,000 in other galaxies and over 10,000 'suspected' variables. The most common kinds of variability involve changes in brightness, but other types of variability also occur, in particular changes in the spectrum. By combining light curve data with observed spectral changes, astronomers are able to explain why a particular star is variable.
Variable stars are analysed using photometry, spectrophotometry and spectroscopy. Measurements of their changes in brightness can be plotted to produce light curves. For regular variables, the period of variation and its amplitude can be well established. Peak brightnesses in the light curve are known as maxima. Amateur astronomers can do useful scientific study of variable stars by visually comparing the star with other stars within the same telescopic field of view whose magnitudes are known and constant. By estimating the variable's magnitude and noting the time of observation, a visual light curve can be constructed; the American Association of Variable Star Observers collects such observations from participants around the world and shares the data with the scientific community. From the light curve the following data are derived: Are the brightness variations periodic, irregular, or unique? What is the period of the brightness fluctuations? What is the shape of the light curve? From the spectrum the following data are derived: What kind of star is it: what is its temperature and its luminosity class? Is it a single star, or a binary? Does the spectrum change with time?
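For a regular variable, the period can be estimated directly from photometric measurements. The sketch below applies simple phase-dispersion minimization to a synthetic, unevenly sampled light curve; the period, amplitude, noise level, and sampling are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
true_period = 5.37                        # days (assumed, for this synthetic star)
t = np.sort(rng.uniform(0, 100, 300))     # unevenly sampled observation times
mag = 8.0 + 0.4 * np.sin(2 * np.pi * t / true_period) + rng.normal(0, 0.02, t.size)

def phase_dispersion(t, mag, period, nbins=10):
    """Mean within-bin variance of the phase-folded light curve."""
    phase = (t / period) % 1.0
    bins = np.minimum((phase * nbins).astype(int), nbins - 1)
    return np.mean([mag[bins == b].var() for b in range(nbins) if np.any(bins == b)])

# The correct period folds the curve cleanly, minimizing scatter within bins.
candidates = np.linspace(4.0, 7.0, 3001)
best = candidates[np.argmin([phase_dispersion(t, mag, p) for p in candidates])]
print(f"estimated period: {best:.2f} days")   # close to the assumed 5.37 d
```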
Changes in brightness may depend on the part of the spectrum observed. If the wavelengths of spectral lines are shifted, this points to movements; strong magnetic fields on the star betray themselves in the spectrum; and abnormal emission or absorption lines may be an indication of a hot stellar atmosphere or of gas clouds surrounding the star. In a few cases it is possible to make pictures of a stellar disk; these may show darker spots on its surface. Combining light curves with spectral data gives a clue as to the changes that occur in a variable star. For example, evidence for a pulsating star is found in its shifting spectrum, because its surface periodically moves toward and away from us with the same frequency as its changing brightness. About two-thirds of all variable stars appear to be pulsating. In the 1930s astronomer Arthur Stanley Eddington showed that the mathematical equations that describe the interior of a star may lead to instabilities that cause a star to pulsate; the most common type of instability is related to oscillations in the degree of ionization in the outer, convective layers of the star.
Suppose the star is in the swelling phase. Its outer layers expand, and because of the decreasing temperature the degree of ionization decreases. This makes the gas more transparent, and thus makes it easier for the star to radiate its energy; this in turn will make the star start to contract. As the gas is thereby compressed, it is heated and the degree of ionization again increases. This makes the gas more opaque, energy is trapped below the surface, and the star swells once more, repeating the cycle.
A giant star is a star with a larger radius and luminosity than a main-sequence star of the same surface temperature. Giants lie above the main sequence on the Hertzsprung–Russell diagram and correspond to luminosity classes II and III; the terms giant and dwarf were coined by Ejnar Hertzsprung about 1905 for stars of quite different luminosity despite similar temperature or spectral type. Giant stars have radii up to a few hundred times that of the Sun and luminosities between 10 and a few thousand times that of the Sun. Stars still more luminous than giants are referred to as supergiants and hypergiants. A hot, luminous main-sequence star may also be referred to as a giant, but any main-sequence star is properly called a dwarf, no matter how large and luminous it is. A star becomes a giant after all the hydrogen available for fusion at its core has been depleted and, as a result, it leaves the main sequence. The behaviour of a post-main-sequence star depends on its mass. For a star with a mass above about 0.25 solar masses, once the core is depleted of hydrogen it contracts and heats up so that hydrogen starts to fuse in a shell around the core.
The portion of the star outside the shell expands and cools, but with only a small increase in luminosity, and the star becomes a subgiant. The inert helium core continues to grow and increase in temperature as it accretes helium from the shell, but in stars up to about 10–12 M☉ it does not become hot enough to start helium burning. Instead, after just a few million years the core reaches the Schönberg–Chandrasekhar limit, collapses, and may become degenerate. This causes the outer layers to expand further and generates a strong convective zone that brings heavy elements to the surface in a process called the first dredge-up. This strong convection increases the transport of energy to the surface, the luminosity increases, and the star moves onto the red-giant branch, where it will stably burn hydrogen in a shell for a substantial fraction of its entire life. The core continues to gain mass and increase in temperature, whereas there is some mass loss in the outer layers. If the star's mass, when on the main sequence, was below 0.4 M☉, it will never reach the central temperatures necessary to fuse helium.
It will therefore remain a hydrogen-fusing red giant until it runs out of hydrogen, at which point it will become a helium white dwarf. According to stellar evolution theory, however, no star of such low mass can have evolved to that stage within the age of the Universe. In stars above about 0.4 M☉ the core temperature eventually reaches 10^8 K, and helium will begin to fuse to carbon and oxygen in the core by the triple-alpha process. When the core is degenerate, helium fusion begins explosively, but most of the energy goes into lifting the degeneracy and the core becomes convective; the energy generated by helium fusion reduces the pressure in the surrounding hydrogen-burning shell, which reduces its energy-generation rate. The overall luminosity of the star decreases, its outer envelope contracts again, and the star moves from the red-giant branch to the horizontal branch. When the core helium is exhausted, a star with up to about 8 M☉ has a carbon–oxygen core that becomes degenerate and starts helium burning in a shell.
As with the earlier collapse of the helium core, this starts convection in the outer layers, triggers a second dredge-up, and causes a dramatic increase in size and luminosity. This is the asymptotic giant branch (AGB), analogous to the red-giant branch but more luminous, with a hydrogen-burning shell contributing most of the energy. Stars only remain on the AGB for around a million years, becoming increasingly unstable until they exhaust their fuel, go through a planetary nebula phase, and become a carbon–oxygen white dwarf. Main-sequence stars with masses above about 12 M☉ are already very luminous, and they move horizontally across the HR diagram when they leave the main sequence, briefly becoming blue giants before they expand further into blue supergiants. They start core-helium burning before the core becomes degenerate and develop smoothly into red supergiants without a strong increase in luminosity. At this stage they have comparable luminosities to bright AGB stars although they have much higher masses, but they will further increase in luminosity as they burn heavier elements and eventually become a supernova.
Stars in the 8–12 M☉ range have somewhat intermediate properties and have been called super-AGB stars. They largely follow the tracks of lighter stars through the RGB, HB, and AGB phases, but are massive enough to initiate core carbon burning and even some neon burning. They form oxygen–magnesium–neon cores, which may collapse in an electron-capture supernova, or they may leave behind an oxygen–neon white dwarf. O-class main-sequence stars are already highly luminous; the giant phase for such stars is a brief phase of increased size and luminosity before developing a supergiant spectral luminosity class. Type O giants may be more than a hundred thousand times as luminous as the Sun, brighter than many supergiants. Classification is complex and difficult, with small differences between luminosity classes and a continuous range of intermediate forms. The most massive stars develop giant or supergiant spectral features while still burning hydrogen in their cores, due to mixing of heavy elements to the surface and the high luminosity, which produces a powerful stellar wind and causes the star's atmosphere to expand.
A star whose initial mass is less than 0.25 M☉ will not become a giant star at all. For most of th
Proper motion is the astronomical measure of the observed changes in the apparent places of stars or other celestial objects in the sky, as seen from the center of mass of the Solar System, compared to the abstract background of the more distant stars. The components of proper motion in the equatorial coordinate system are given in the direction of right ascension and of declination, and their combined value is computed as the total proper motion. It has the dimensions of angle per time, typically measured in arcseconds per year or milliarcseconds per year. Knowledge of the proper motion and radial velocity allows calculation of the true stellar motion, or velocity in space, with respect to the Sun, and, by coordinate transformation, the motion with respect to the Milky Way. Proper motion is not entirely "proper", because it includes a component due to the motion of the Solar System itself. Over the course of centuries, stars appear to maintain nearly fixed positions with respect to each other, so that they form the same constellations over historical time.
Ursa Major or Crux, for example, look nearly the same now as they did hundreds of years ago. However, precise long-term observations show that the constellations change shape, albeit slowly, and that each star has an independent motion; this motion is caused by the movement of the stars relative to the Solar System. The Sun travels in a nearly circular orbit about the center of the Milky Way at a speed of about 220 km/s at a radius of 8 kpc from the center, which can be taken as the rate of rotation of the Milky Way itself at this radius. The proper motion is a two-dimensional vector and is thus defined by two quantities: its position angle and its magnitude. The first quantity indicates the direction of the proper motion on the celestial sphere, and the second quantity is the motion's magnitude, expressed in arcseconds per year or milliarcseconds per year. Proper motion may alternatively be defined by the angular changes per year in the star's right ascension and declination, using a constant epoch in defining these. The components of proper motion are, by convention, arrived at as follows.
Suppose an object moves from coordinates (α₁, δ₁) to coordinates (α₂, δ₂) in a time Δt. The proper motions are given by:

μ_α = (α₂ − α₁) / Δt,  μ_δ = (δ₂ − δ₁) / Δt.

The magnitude of the proper motion μ is given by the Pythagorean theorem:

μ² = μ_δ² + μ_α² · cos²δ,  or equivalently  μ² = μ_δ² + μ_α*²,

where δ is the declination. The factor cos²δ accounts for the fact that the radius from the axis of the sphere to its surface varies as cos δ, becoming, for example, zero at the pole. Thus, the component of velocity parallel to the equator corresponding to a given angular change in α is smaller the further north the object's location. The change μ_α, which must be multiplied by cos δ to become a component of the proper motion, is sometimes called the "proper motion in right ascension", and μ_δ the "proper motion in declination". If the proper motion in right ascension has been converted by cos δ, the result is designated μ_α*. For example, the proper motion results in right ascension in the Hipparcos Catalogue have been converted in this way. Hence, the individual proper motions in right ascension and declination are made equivalent for straightforward calculations of various other stellar motions.
The position angle θ is related to these components by:

μ sin θ = μ_α cos δ = μ_α*,  μ cos θ = μ_δ.

Motions in equatorial coordinates can be converted to motions in galactic coordinates. For the majority of stars seen in the sky, the observed proper motions are small and unremarkable; such stars are either faint or distant, have changes of below 10 milliarcseconds per year, and do not appear to move appreciably over many millennia. A few do have significant motions, and are called high-proper-motion stars. Motions can be in seemingly random directions. Two or more stars, double stars or open star clusters, which are moving in similar directions exhibit so-called shared or common proper motion, suggesting they may be gravitationally attached or share similar motion in space. Barnard's Star has the largest proper motion of all stars, moving at 10.3 seconds of arc per year.
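The component relations above can be sketched numerically. The inputs in the example below are arbitrary illustrative values, not catalogue data.

```python
import math

def total_proper_motion(mu_alpha, mu_delta, delta_deg):
    """Total proper motion and position angle from components.

    mu_alpha: change in right ascension per year (not yet scaled by cos δ);
    mu_delta: change in declination per year; delta_deg: declination in degrees.
    Returns (mu, theta), with theta in degrees measured from north through east.
    """
    mu_alpha_star = mu_alpha * math.cos(math.radians(delta_deg))  # μ_α · cos δ
    mu = math.hypot(mu_alpha_star, mu_delta)                      # μ² = μ_δ² + μ_α*²
    theta = math.degrees(math.atan2(mu_alpha_star, mu_delta)) % 360.0
    return mu, theta

# Example: at δ = 60°, a raw μ_α of 100 mas/yr contributes only 50 mas/yr
mu, theta = total_proper_motion(100.0, 0.0, 60.0)
print(round(mu, 6), round(theta, 1))  # 50.0 mas/yr, due east (θ = 90°)
```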
The parsec is a unit of length used to measure large distances to astronomical objects outside the Solar System. A parsec is defined as the distance at which one astronomical unit subtends an angle of one arcsecond, which corresponds to 648000/π astronomical units. One parsec is equal to about 31 trillion kilometres or 19 trillion miles; the nearest star, Proxima Centauri, is about 1.3 parsecs from the Sun. Most of the stars visible to the unaided eye in the night sky are within 500 parsecs of the Sun. The parsec unit was first suggested in 1913 by the British astronomer Herbert Hall Turner. Named as a portmanteau of the parallax of one arcsecond, it was defined to make calculations of astronomical distances from only their raw observational data quick and easy for astronomers. For this reason, it is the unit preferred in astronomy and astrophysics, though the light-year remains prominent in popular science texts and common usage. Although parsecs are used for the shorter distances within the Milky Way, multiples of parsecs are required for the larger scales in the universe, including kiloparsecs for the more distant objects within and around the Milky Way, megaparsecs for mid-distance galaxies, and gigaparsecs for many quasars and the most distant galaxies.
In August 2015, the IAU passed Resolution B2, which, as part of the definition of a standardized absolute and apparent bolometric magnitude scale, mentioned an existing explicit definition of the parsec as 648000/π astronomical units, or 3.08567758149137×10^16 metres. This corresponds to the small-angle definition of the parsec found in many contemporary astronomical references. The parsec is defined as being equal to the length of the longer leg of an elongated imaginary right triangle in space. The two dimensions on which this triangle is based are its shorter leg, of length one astronomical unit, and the subtended angle of the vertex opposite that leg, measuring one arcsecond. Applying the rules of trigonometry to these two values, the unit length of the other leg of the triangle can be derived. One of the oldest methods used by astronomers to calculate the distance to a star is to record the difference in angle between two measurements of the position of the star in the sky: the first measurement is taken from the Earth on one side of the Sun, and the second is taken half a year later, when the Earth is on the opposite side of the Sun.
The distance between the two positions of the Earth when the two measurements were taken is twice the distance between the Earth and the Sun. The difference in angle between the two measurements is twice the parallax angle, formed by lines from the Sun and Earth to the star at the distant vertex; the distance to the star could be calculated using trigonometry. The first successful published direct measurements of an object at interstellar distances were undertaken by German astronomer Friedrich Wilhelm Bessel in 1838, who used this approach to calculate the 3.5-parsec distance of 61 Cygni. The parallax of a star is defined as half of the angular distance that a star appears to move relative to the celestial sphere as Earth orbits the Sun. Equivalently, it is the subtended angle, from that star's perspective, of the semimajor axis of the Earth's orbit; the star, the Sun and the Earth form the corners of an imaginary right triangle in space: the right angle is the corner at the Sun, the corner at the star is the parallax angle.
The length of the side opposite the parallax angle is the distance from the Earth to the Sun (defined as one astronomical unit), and the length of the adjacent side gives the distance from the Sun to the star. Therefore, given a measurement of the parallax angle, along with the rules of trigonometry, the distance from the Sun to the star can be found. A parsec is defined as the length of the side adjacent to the vertex occupied by a star whose parallax angle is one arcsecond. The use of the parsec as a unit of distance follows naturally from Bessel's method, because the distance in parsecs can be computed simply as the reciprocal of the parallax angle in arcseconds. No trigonometric functions are required in this relationship because the small angles involved mean that the approximate solution of the skinny triangle can be applied. Though it may have been used before, the term parsec was first mentioned in an astronomical publication in 1913. Astronomer Royal Frank Watson Dyson expressed concern about the need for a name for that unit of distance.
He proposed the name astron, but mentioned that Carl Charlier had suggested siriometer and Herbert Hall Turner had proposed parsec. It was Turner's proposal that stuck. In the standard parallax diagram, S represents the Sun, and E the Earth at one point in its orbit; thus the distance ES is one astronomical unit. The angle SDE is one arcsecond, so by definition D is a point in space at a distance of one parsec from the Sun. Through trigonometry, the distance SD is calculated as follows:

SD = ES / tan 1″ ≈ ES / 1″ = 1 au / ((1/(60 × 60)) × (π/180)) = (648000/π) au ≈ 206264.81 au
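This calculation is easy to verify numerically. The sketch below uses the IAU value of the astronomical unit, checks the small-angle approximation against the full trigonometric form, and then applies Bessel's reciprocal rule to a parallax of 0.286 arcseconds (a roughly modern value for 61 Cygni, used here purely as an example).

```python
import math

AU_M = 1.495978707e11            # astronomical unit in metres (IAU 2012 definition)

parsec_au = 648000 / math.pi     # exact small-angle definition, in au
parsec_m = parsec_au * AU_M

# Full trigonometric version: SD = ES / tan(1"); the difference from the
# small-angle result is far below any measurable precision.
one_arcsec_rad = math.radians(1.0 / 3600.0)
parsec_trig_m = AU_M / math.tan(one_arcsec_rad)

print(f"1 pc ~ {parsec_au:.2f} au ~ {parsec_m:.6e} m")
# Bessel's method: distance in parsecs is the reciprocal of the parallax
# in arcseconds, e.g. a parallax of 0.286" gives about 3.5 pc.
print(round(1.0 / 0.286, 2), "pc")
```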
Astronomical spectroscopy is the study of astronomy using the techniques of spectroscopy to measure the spectrum of electromagnetic radiation, including visible light and radio, which radiates from stars and other celestial objects. A stellar spectrum can reveal many properties of stars, such as their chemical composition, density, distance and relative motion using Doppler shift measurements. Spectroscopy is also used to study the physical properties of many other types of celestial objects such as planets, nebulae and active galactic nuclei. Astronomical spectroscopy is used to measure three major bands of radiation: visible light, radio, and X-ray. While all spectroscopy looks at specific areas of the spectrum, different methods are required to acquire the signal depending on the frequency. Ozone and molecular oxygen absorb light with wavelengths under 300 nm, meaning that X-ray and ultraviolet spectroscopy require the use of a satellite telescope or rocket-mounted detectors. Radio signals have much longer wavelengths than optical signals, and require the use of antennas or radio dishes.
Infrared light is absorbed by atmospheric water and carbon dioxide, so while the equipment is similar to that used in optical spectroscopy, satellites are required to record much of the infrared spectrum. Physicists have been looking at the solar spectrum since Isaac Newton first used a simple prism to observe the refractive properties of light. In the early 1800s Joseph von Fraunhofer used his skills as a glass maker to create very pure prisms, which allowed him to observe 574 dark lines in a seemingly continuous spectrum. Soon after this, he combined a telescope and prism to observe the spectrum of Venus, the Moon, and various stars such as Betelgeuse. The resolution of a prism, however, is limited by its size. This issue was resolved in the early 1900s with the development of high-quality reflection gratings by J. S. Plaskett at the Dominion Observatory in Ottawa, Canada. Light striking a mirror will reflect at the same angle; however, a small portion of the light will be refracted at a different angle. By creating a "blazed" grating which utilizes a large number of parallel mirrors, the small portion of light can be focused and visualized.
These new spectroscopes were more detailed than a prism, required less light, and could be focused on a specific region of the spectrum by tilting the grating. The limitation of a blazed grating is the width of the mirrors, which can only be ground a finite amount before focus is lost. To overcome this limitation, holographic gratings were developed. Volume phase holographic gratings use a thin film of dichromated gelatin on a glass surface, which is subsequently exposed to a wave pattern created by an interferometer. This wave pattern sets up a reflection pattern similar to the blazed gratings, but utilizing Bragg diffraction, a process where the angle of reflection is dependent on the arrangement of the atoms in the gelatin. The holographic gratings can have up to 6000 lines/mm and can be up to twice as efficient in collecting light as blazed gratings. Because they are sealed between two sheets of glass, holographic gratings are very versatile, potentially lasting decades before needing replacement. Light dispersed by the grating or prism in a spectrograph can be recorded by a detector.
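Once a spectrum is recorded by the detector, its wavelength scale must be tied to reference lines of known wavelength, such as those of a gas-discharge lamp. A minimal sketch of such a dispersion fit follows; the mercury line wavelengths are real lamp lines, but the pixel centroids are synthetic values invented for illustration.

```python
import numpy as np

known_lines_nm = np.array([435.833, 546.074, 576.960, 579.066])  # Hg lamp lines
pixel_pos = (known_lines_nm - 400.0) * 10.0  # synthetic measured line centroids

# Fit a low-order polynomial mapping detector pixel position to wavelength.
coeffs = np.polyfit(pixel_pos, known_lines_nm, deg=2)

def pixel_to_wavelength(px):
    """Calibrated wavelength (nm) at detector pixel position px."""
    return np.polyval(coeffs, px)

print(round(float(pixel_to_wavelength(1000.0)), 3))  # 500.0 on this synthetic scale
```

In practice the fit would use many more lines, and flux calibration against a standard star would follow, as described below.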
Photographic plates were used to record spectra until electronic detectors were developed; today optical spectrographs most often employ charge-coupled devices. The wavelength scale of a spectrum can be calibrated by observing the spectrum of emission lines of known wavelength from a gas-discharge lamp; the flux scale of a spectrum can be calibrated as a function of wavelength by comparison with an observation of a standard star, with corrections for atmospheric absorption of light. Radio astronomy was founded with the work of Karl Jansky in the early 1930s. While working for Bell Labs, he built a radio antenna to look at potential sources of interference for transatlantic radio transmissions. One of the sources of noise discovered came not from Earth, but from the center of the Milky Way, in the constellation Sagittarius. In 1942, J. S. Hey captured the Sun's radio frequency using military radar receivers. Radio spectroscopy started with the discovery of the 21-centimeter H I line in 1951. Radio interferometry was pioneered in 1946, when Joseph Lade Pawsey, Ruby Payne-Scott and Lindsay McCready used a single antenna atop a sea cliff to observe 200 MHz solar radiation.
Two incident beams, one directly from the Sun and the other reflected from the sea surface, generated the necessary interference. The first multi-receiver interferometer was built in the same year by Martin Ryle and Vonberg. In 1960, Ryle and Antony Hewish published the technique of aperture synthesis to analyze interferometer data. The aperture synthesis process, which involves autocorrelating and discrete Fourier transforming the incoming signal, recovers both the spatial and frequency variation in flux. The result is a 3D image. For this work, Ryle and Hewish were jointly awarded the 1974 Nobel Prize in Physics. Newton used a prism to split white light into a spectrum of color, and Fraunhofer's high-quality prisms allowed scientists to see dark lines of an unknown origin. In the 1850s, Gustav Kirchhoff and Robert Bunsen described the phenomena behind these dark lines.
Margin of error
The margin of error is a statistic expressing the amount of random sampling error in a survey's results. The larger the margin of error, the less confidence one should have that the poll's reported results are close to the "true" figures. Margin of error is positive whenever a population is incompletely sampled and the outcome measure has positive variance; the term "margin of error" is used in non-survey contexts to indicate observational error in reporting measured quantities. Margin of error is defined as the "radius" of a confidence interval for a particular statistic from a survey. One example is the percent of people who prefer product A versus product B; when a single, global margin of error is reported for a survey, it refers to the maximum margin of error for all reported percentages using the full sample from the survey. If the statistic is a percentage, this maximum margin of error can be calculated as the radius of the confidence interval for a reported percentage of 50%; the margin of error has been described as an "absolute" quantity, equal to a confidence interval radius for the statistic.
For example, if the true value is 50 percentage points, and the statistic has a confidence interval radius of 5 percentage points, then we say the margin of error is 5 percentage points. As another example, if the true value is 50 people, and the statistic has a confidence interval radius of 5 people, we might say the margin of error is 5 people. In some cases, the margin of error is not expressed as an "absolute" quantity. For example, suppose the true value is 50 people, and the statistic has a confidence interval radius of 5 people. If we use the "absolute" definition, the margin of error would be 5 people. If we use the "relative" definition, then we express this absolute margin of error as a percent of the true value. Hence, in this case, the absolute margin of error is 5 people, but the "percent relative" margin of error is 10%. Often, the distinction is not explicitly made, yet it is usually apparent from context. Like confidence intervals, the margin of error can be defined for any desired confidence level, but usually a level of 90%, 95% or 99% is chosen.
This level is the confidence that a margin of error around the reported percentage would include the "true" percentage. Hence, for example, we can be confident, at the 95% level, that out of every 100 simple random samples taken from a given population, 95 of them will contain the true percentage or other statistic under investigation, within the margin of error associated with each. Along with the confidence level, the sample design for a survey, in particular its sample size, determines the magnitude of the margin of error. A larger sample size produces a smaller margin of error, all else remaining equal. If exact confidence intervals are used, the margin of error takes into account both sampling error and non-sampling error. If an approximate confidence interval is used, the margin of error may only take random sampling error into account; it does not represent other potential sources of error or bias, such as a non-representative sample design, poorly phrased questions, people lying or refusing to respond, the exclusion of people who could not be contacted, or miscounts and miscalculations.
An example from the 2004 U.S. presidential campaign will be used to illustrate concepts throughout this article. According to an October 2, 2004 survey by Newsweek, 47% of registered voters would vote for John Kerry/John Edwards if the election were held on that day, 45% would vote for George W. Bush/Dick Cheney, and 2% would vote for Ralph Nader/Peter Camejo; the size of the sample was 1,013. Unless otherwise stated, the remainder of this article uses a 95% level of confidence. Polls involve taking a sample from a certain population. In the case of the Newsweek poll, the population of interest is the population of people who will vote. Because it is impractical to poll everyone who will vote, pollsters take smaller samples that are intended to be representative, that is, a random sample of the population. It is possible that pollsters sample 1,013 voters who happen to vote for Bush when in fact the population is evenly split between Bush and Kerry, but this is extremely unlikely given that the sample is random.
Sampling theory provides methods for calculating the probability that the poll results differ from reality by more than a certain amount due to chance. This theory and some Bayesian assumptions suggest that the "true" percentage will probably be close to 47%. The more people that are sampled, the more confident pollsters can be that the "true" percentage is close to the observed percentage. The margin of error is a measure of how close the results are likely to be. However, the margin of error only accounts for random sampling error, so it is blind to systematic errors that may be introduced by non-response or by interactions between the survey and subjects' memory, motivation and knowledge. This section will discuss the standard error of a percentage, the corresponding confidence interval, and connect these two concepts to the margin of error. For simplicity, the calculations here assume the poll was based on a simple random sample from a large population. The standard error of a reported proportion or percentage p measures its accuracy, and is the estimated standard deviation of that percentage.
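Under these simple-random-sample assumptions, the margin of error is just a critical value times the standard error of the proportion. A minimal sketch follows, using the Newsweek sample size from above; the 1.96 critical value corresponds to 95% confidence.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a proportion from a simple random sample.

    p: observed proportion, n: sample size, z: critical value (1.96 for
    95% confidence). This accounts for random sampling error only;
    non-sampling error is ignored, as noted above.
    """
    return z * math.sqrt(p * (1.0 - p) / n)

# The maximum margin of error for the n = 1013 poll occurs at p = 0.5:
moe = margin_of_error(0.5, 1013)
print(f"{100 * moe:.1f} percentage points")  # about 3.1
```

Note that the margin at a reported 47% is slightly smaller than this maximum, since p(1 − p) peaks at p = 0.5.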