Davidson College is a private liberal arts college in Davidson, North Carolina, with a historic 665-acre main campus and a 110-acre lake campus on Lake Norman. The college has graduated 23 Rhodes Scholars. Davidson annually enrolls about 1,950 students from 40 countries. Of those students, nearly 80 percent study abroad and about 25 percent participate in 19 NCAA Division I sports. Students may choose from 26 majors and 17 interdisciplinary minors, as well as other interdisciplinary studies; the college is governed by an honor code, and the majority of students, about 93 percent, live on campus for all four years. Princeton Review and U.S. News & World Report regard Davidson's admission process as "most selective". For the class of 2022, Davidson received 5,712 applications and accepted 1,104; the yield rate was 46.8%, and 85% of accepted freshmen reporting rank were in the top 10% of their high school classes. The middle 50% range of SAT scores for admitted students was 640–720 for the new Evidence-Based Reading & Writing and 650–730 for Math, while the ACT Composite range was 29–33.
Caucasians represented 67.1% of the incoming class, and 44.5% of enrolled freshmen were from the South. The 2019 annual ranking by U.S. News & World Report rates Davidson College as the 10th best among "National Liberal Arts Colleges" in America and 3rd in "Best Undergraduate Teaching" in the nation. For 2016, Davidson College was ranked 25th overall on Forbes' list of "America's Top Colleges" and 1st in the South. In 2018, Kiplinger's Personal Finance rated Davidson College as the #1 best college for value across all colleges and universities in America. An institution of higher learning of the Presbyterian Church (USA), Davidson College was founded in 1837 by the Concord Presbytery after purchasing 469 acres of land from William Lee Davidson II, son of Revolutionary War commander Brigadier General William Lee Davidson, for whom the college is named. Church records show a meeting on May 13, 1835, followed by subsequent meetings, in which members of the Concord Presbytery made plans to purchase and perform initial construction on the land, with land payments starting January 1 of the following year.
The first students graduated from Davidson in 1840 and received diplomas with the newly created college seal designed by Peter Stuart Ney, believed by some to be Napoleon's Marshal Ney. In the 1850s, Davidson overcame financial difficulty by instituting "The Scholarship Plan," a program that allowed Davidson hopefuls to purchase a scholarship for $100, which could be redeemed in exchange for full tuition to Davidson until the 1870s. The college's financial situation improved in 1856 with a $250,000 donation by Maxwell Chambers, making Davidson the wealthiest college south of Princeton. The Chambers Building was erected to commemorate this gift. On November 28, 1921, the Chambers Building was destroyed in a fire but was reconstructed eight years later with funds provided by a generous gift from the Rockefeller family; the Chambers Building continues to be the primary academic building on campus. In 1923, the Gamma chapter in North Carolina of Phi Beta Kappa was established at Davidson. Over 1,500 men and 500 women have been initiated into Davidson's chapter of Phi Beta Kappa.
In 1924, James Duke formed the Duke Endowment, which has provided millions of dollars to the college, including a $15 million pledge in 2007 to assist with the elimination of student loans. On May 5, 1972, the trustees voted to allow women to enroll at Davidson as degree students for the first time. Women had attended classes before, though without degree privileges: the first women to attend classes at Davidson were the five daughters of its president, the Rev. John Lycan Kirkpatrick, permitted to attend in order to increase the size of the student body during the American Civil War. Art major Marianna "Missy" Woodward became the first woman to graduate from Davidson, in 1973, as the only woman in a class of 217. In early 2005, the College's Board of Trustees voted in a 31–5 decision to allow 20% of the board to be non-Christian. John Belk, the former mayor of Charlotte and one of the heirs of Belk Department Store, resigned in protest after more than six decades of affiliation with the college.
Belk nonetheless continued his strong relationship with his alma mater and was honored in March 2006 at the Tenth Anniversary Celebration of the Belk Scholarship. In 2007, Davidson eliminated the need for students to take out loans to pay for their tuition: all demonstrated need is met through grants, student employment, and parental contribution. The college claims to be the first liberal arts college in the United States to do this.
A Rydberg atom is an excited atom with one or more electrons that have a high principal quantum number. These atoms have a number of peculiar properties, including an exaggerated response to electric and magnetic fields, long decay periods, and electron wavefunctions that approximate, under some conditions, classical orbits of electrons about the nuclei. The core electrons shield the outer electron from the electric field of the nucleus such that, from a distance, the electric potential looks identical to that experienced by the electron in a hydrogen atom. In spite of its shortcomings, the Bohr model of the atom is useful in explaining these properties. Classically, an electron in a circular orbit of radius r, about a hydrogen nucleus of charge +e, obeys Newton's second law: F = ma ⇒ ke²/r² = mv²/r, where k = 1/(4πε₀). Orbital angular momentum is quantized in units of ħ: mvr = nħ. Combining these two equations leads to Bohr's expression for the orbital radius in terms of the principal quantum number n: r = n²ℏ²/(ke²m).
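Bohr's expression for the orbital radius can be checked numerically. The sketch below evaluates r = n²ℏ²/(ke²m) with rounded CODATA constant values; the function name is illustrative.

```python
# Physical constants (SI units; CODATA values, rounded)
HBAR = 1.054571817e-34      # reduced Planck constant, J*s
K_E = 8.9875517923e9        # Coulomb constant k = 1/(4*pi*eps0), N*m^2/C^2
E_CHARGE = 1.602176634e-19  # elementary charge, C
M_E = 9.1093837015e-31      # electron mass, kg

def bohr_radius(n):
    """Orbital radius r = n^2 * hbar^2 / (k * e^2 * m) from the Bohr model."""
    return n**2 * HBAR**2 / (K_E * E_CHARGE**2 * M_E)

# n = 1 recovers the familiar Bohr radius a0 ~ 0.529 angstrom;
# a Rydberg state with n = 100 is 10^4 times larger.
print(bohr_radius(1))    # ~5.29e-11 m
print(bohr_radius(100))  # ~5.29e-7 m
```

The n² dependence of the radius is what the next paragraph exploits.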
It is now apparent why Rydberg atoms have such peculiar properties: the radius of the orbit scales as n² and the geometric cross-section as n⁴. Thus Rydberg atoms are large, with loosely bound valence electrons easily perturbed or ionized by collisions or external fields. Because the binding energy of a Rydberg electron is proportional to 1/r and hence falls off like 1/n², the energy level spacing falls off like 1/n³, leading to ever more closely spaced levels converging on the first ionization energy. These closely spaced Rydberg states form what is referred to as the Rydberg series. Figure 2 shows some of the energy levels of the lowest three values of orbital angular momentum in lithium. The existence of the Rydberg series was first demonstrated in 1885, when Johann Balmer discovered a simple empirical formula for the wavelengths of light associated with transitions in atomic hydrogen. Three years later, the Swedish physicist Johannes Rydberg presented a generalized and more intuitive version of Balmer's formula that came to be known as the Rydberg formula.
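These scaling laws are easy to tabulate. A minimal sketch for hydrogen-like levels E_n = −Ry/n² (Ry ≈ 13.6057 eV; the helper names are illustrative):

```python
RYDBERG_EV = 13.605693  # Rydberg unit of energy, in eV

def binding_energy_ev(n):
    """Binding energy of level n: falls off as 1/n^2."""
    return RYDBERG_EV / n**2

def level_spacing_ev(n):
    """Spacing to the next level up: ~2*Ry/n^3 for large n."""
    return binding_energy_ev(n) - binding_energy_ev(n + 1)

# Going from n = 10 to n = 100, the binding energy shrinks 100-fold,
# while the level spacing shrinks nearly 1000-fold (~881x exactly,
# approaching the asymptotic 1/n^3 law), so the levels converge.
for n in (10, 100):
    print(n, binding_energy_ev(n), level_spacing_ev(n))
```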
This formula indicated the existence of an infinite series of ever more closely spaced discrete energy levels converging on a finite limit. This series was qualitatively explained in 1913 by Niels Bohr with his semiclassical model of the hydrogen atom, in which quantized values of angular momentum lead to the observed discrete energy levels. A full quantitative derivation of the observed spectrum was obtained by Wolfgang Pauli in 1926, following the development of quantum mechanics by Werner Heisenberg and others. The only stable state of a hydrogen-like atom is the ground state with n = 1. The study of Rydberg states requires a reliable technique for exciting ground-state atoms to states with a large value of n. Much early experimental work on Rydberg atoms relied on the use of collimated beams of fast electrons incident on ground-state atoms. Inelastic scattering processes can use the electron kinetic energy to increase the atoms' internal energy, exciting them to a broad range of different states, including many high-lying Rydberg states: e⁻ + A → A* + e⁻.
Because the electron can retain any arbitrary amount of its initial kinetic energy, this process always results in a population with a broad spread of different energies. Another mainstay of early Rydberg atom experiments relied on charge exchange between a beam of ions and a population of neutral atoms of another species, resulting in the formation of a beam of excited atoms: A⁺ + B → A* + B⁺. Again, because the kinetic energy of the interaction can contribute to the final internal energies of the constituents, this technique populates a broad range of energy levels. The arrival of tunable dye lasers in the 1970s allowed a much greater level of control over populations of excited atoms. In optical excitation, the incident photon is absorbed by the target atom, and the photon energy specifies the final state. The problem of producing single-state, mono-energetic populations of Rydberg atoms thus becomes the somewhat simpler problem of controlling the frequency of the laser output: A + γ → A*. This form of direct optical excitation is limited to experiments with the alkali metals, because the ground-state binding energy in other species is too high to be accessible with most laser systems.
For atoms with a large valence electron binding energy, the excited states of the Rydberg series are inaccessible with conventional laser systems. Initial collisional excitation can make up the energy shortfall allowing optical excitation to be used to select the final state. Although the initial step excites to a broad range of intermediate states, the precision inherent in the optical excitation process means that the laser light only in
In statistical mechanics, entropy is an extensive property of a thermodynamic system. It is related to the number Ω of microscopic configurations that are consistent with the macroscopic quantities that characterize the system. Under the assumption that each microstate is equally probable, the entropy S is the natural logarithm of the number of microstates, multiplied by the Boltzmann constant kB. Formally, S = kB ln Ω. Macroscopic systems have a very large number Ω of possible microscopic configurations. For example, the entropy of an ideal gas is proportional to the number of gas molecules N. Twenty liters of gas at room temperature and atmospheric pressure has N ≈ 6×10²³. At equilibrium, each of the Ω ≈ e^N configurations can be regarded as random and equally likely. The second law of thermodynamics states that an isolated system's entropy never decreases; such systems spontaneously evolve towards the state with maximum entropy. Non-isolated systems may lose entropy, provided their environment's entropy increases by at least that amount so that the total entropy increases.
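Because Ω is astronomically large for macroscopic systems, it is more practical to work with ln Ω directly. A small sketch of S = kB ln Ω, using the numbers from the paragraph above (the function name is illustrative):

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def boltzmann_entropy(ln_omega):
    """S = k_B * ln(Omega); takes ln(Omega) directly, since Omega itself
    overflows floating point for macroscopic systems."""
    return K_B * ln_omega

# For the 20 L of gas above, Omega ~ e^N with N ~ 6e23, so ln(Omega) ~ N
# and the entropy is of order k_B * N.
N = 6e23
print(boltzmann_entropy(N))  # ~8.3 J/K
```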
Entropy is a function of the state of the system, so the change in entropy of a system is determined by its initial and final states. In the idealization that a process is reversible, the entropy does not change, while irreversible processes always increase the total entropy. Because it is determined by the number of random microstates, entropy is related to the amount of additional information needed to specify the exact physical state of a system, given its macroscopic specification. For this reason, it is often said that entropy is an expression of the disorder, or randomness, of a system, or of the lack of information about it; the concept of entropy plays a central role in information theory. Boltzmann's constant, and therefore entropy, has dimensions of energy divided by temperature, with units of joules per kelvin in the International System of Units. The entropy of a substance is usually given as an intensive property: either entropy per unit mass or entropy per unit amount of substance. The French mathematician Lazare Carnot proposed in his 1803 paper Fundamental Principles of Equilibrium and Movement that in any machine the accelerations and shocks of the moving parts represent losses of moment of activity.
In other words, in any natural process there exists an inherent tendency towards the dissipation of useful energy. Building on this work, in 1824 Lazare's son Sadi Carnot published Reflections on the Motive Power of Fire, which posited that in all heat-engines, whenever "caloric" falls through a temperature difference, work or motive power can be produced from the actions of its fall from a hot to a cold body; he made the analogy with water falling through a water wheel. This was an early insight into the second law of thermodynamics. Carnot based his views of heat partly on the early 18th-century "Newtonian hypothesis" that both heat and light were types of indestructible forms of matter, which are attracted and repelled by other matter, and partly on the contemporary views of Count Rumford, who showed that heat could be created by friction, as when cannon bores are machined. Carnot reasoned that if the body of the working substance, such as a body of steam, is returned to its original state at the end of a complete engine cycle, then "no change occurs in the condition of the working body".
The first law of thermodynamics, deduced from the heat-friction experiments of James Joule in 1843, expresses the concept of energy and its conservation in all processes. In the 1850s and 1860s, German physicist Rudolf Clausius objected to the supposition that no change occurs in the working body, and gave this "change" a mathematical interpretation by questioning the nature of the inherent loss of usable heat when work is done, e.g. heat produced by friction. Clausius described entropy as the transformation-content, i.e. dissipative energy use, of a thermodynamic system or working body of chemical species during a change of state. This was in contrast to earlier views, based on the theories of Isaac Newton, that heat was an indestructible particle that had mass. Later, scientists such as Ludwig Boltzmann, Josiah Willard Gibbs, and James Clerk Maxwell gave entropy a statistical basis. In 1877, Boltzmann visualized a probabilistic way to measure the entropy of an ensemble of ideal gas particles, in which he defined entropy to be proportional to the natural logarithm of the number of microstates such a gas could occupy.
Henceforth, the essential problem in statistical thermodynamics, according to Erwin Schrödinger, has been to determine the distribution of a given amount of energy E over N identical systems. Carathéodory linked entropy with a mathematical definition of irreversibility, in terms of trajectories and integrability. There are two related definitions of entropy: the thermodynamic definition and the statistical mechanics definition; the classical thermodynamics definition was developed first. In the classical thermodynamics viewpoint, the system is composed of large numbers of constituents, and the state of the system is described by the average thermodynamic properties of those constituents.
A pump is a device that moves fluids, or sometimes slurries, by mechanical action. Pumps can be classified into three major groups according to the method they use to move the fluid: direct lift, displacement, and gravity pumps. Pumps operate by some mechanism and consume energy to perform mechanical work moving the fluid. Pumps operate via many energy sources, including manual operation, electricity, engines, or wind power, and come in many sizes, from microscopic for use in medical applications to large industrial pumps. Mechanical pumps serve in a wide range of applications, such as pumping water from wells, aquarium filtering, pond filtering and aeration, in the car industry for water-cooling and fuel injection, and in the energy industry for pumping oil and natural gas or for operating cooling towers. In the medical industry, pumps are used for biochemical processes in developing and manufacturing medicine, and as artificial replacements for body parts, in particular the artificial heart and penile prosthesis. When a casing contains only one revolving impeller, it is called a single-stage pump.
When a casing contains two or more revolving impellers, it is called a double- or multi-stage pump. In biology, many different types of chemical and biomechanical pumps have evolved. Mechanical pumps may be submerged in the fluid they pump or placed external to the fluid. Pumps can be classified by their method of displacement into positive displacement pumps, impulse pumps, velocity pumps, gravity pumps, steam pumps, and valveless pumps. There are two basic types of pumps: positive displacement and centrifugal. Although axial-flow pumps are classified as a separate type, they have the same operating principles as centrifugal pumps. A positive displacement pump makes a fluid move by trapping a fixed amount and forcing that trapped volume into the discharge pipe. Some positive displacement pumps use an expanding cavity on the suction side and a decreasing cavity on the discharge side. Liquid flows into the pump as the cavity on the suction side expands, and the liquid flows out of the discharge as the cavity collapses; the volume is constant through each cycle of operation.
Positive displacement pumps, unlike centrifugal or roto-dynamic pumps, theoretically can produce the same flow at a given speed no matter what the discharge pressure. Thus, positive displacement pumps are constant-flow machines. However, a slight increase in internal leakage as the pressure increases prevents a truly constant flow rate. A positive displacement pump must not operate against a closed valve on the discharge side of the pump because, unlike centrifugal pumps, it has no shutoff head. A positive displacement pump operating against a closed discharge valve continues to produce flow, and the pressure in the discharge line increases until the line bursts, the pump is damaged, or both. A relief or safety valve on the discharge side of the positive displacement pump is therefore necessary. The relief valve can be internal or external. The pump manufacturer has the option to supply internal relief or safety valves; the internal valve is used only as a safety precaution. An external relief valve in the discharge line, with a return line back to the suction line or supply tank, provides increased safety.
A positive displacement pump can be further classified according to the mechanism used to move the fluid:

- Rotary-type positive displacement: internal gear, shuttle block, flexible vane or sliding vane, circumferential piston, flexible impeller, helical twisted roots, or liquid-ring pumps
- Reciprocating-type positive displacement: piston pumps, plunger pumps, or diaphragm pumps
- Linear-type positive displacement: rope pumps and chain pumps

Rotary pumps move fluid using a rotating mechanism that creates a vacuum that captures and draws in the liquid. Advantages: Rotary pumps are efficient because they can handle viscous fluids with higher flow rates as viscosity increases. Drawbacks: The nature of the pump requires close clearances between the rotating pump and the outer edge, making it rotate at a slow, steady speed. If rotary pumps are operated at high speeds, the fluids cause erosion, which causes enlarged clearances that liquid can pass through, reducing efficiency. Rotary positive displacement pumps fall into three main types:

- Gear pumps – a simple type of rotary pump where the liquid is pushed between two gears
- Screw pumps – the internals of this pump are two screws turning against each other to pump the liquid
- Rotary vane pumps – similar to scroll compressors, these have a cylindrical rotor encased in a similarly shaped housing.
As the rotor orbits, the vanes trap fluid between the rotor and the casing, drawing the fluid through the pump. Reciprocating pumps move the fluid using one or more oscillating pistons, plungers, or membranes, while valves restrict fluid motion to the desired direction. In order for suction to take place, the pump must first pull the plunger in an outward motion to decrease pressure in the chamber. Once the plunger pushes back, it will increase the pressure in the chamber, and the inward pressure of the plunger will open the discharge valve and release the fluid into the delivery pipe at a high velocity. Pumps in this category range from simplex, with one cylinder, to in some cases quad (four) cylinders, or more. Many reciprocating-type pumps are triplex (three cylinder); they can be either single-acting, with suction during one direction of piston motion and discharge on the other, or double-acting, with suction and discharge in both directions. The pumps can be powered manually, by air or steam
Temperature is a physical quantity expressing hot and cold. It is measured with a thermometer calibrated in one or more temperature scales; the most commonly used scales are the Celsius, Fahrenheit, and Kelvin scales. The kelvin is the unit of temperature in the International System of Units, in which temperature is one of the seven fundamental base quantities; the Kelvin scale is widely used in science and technology. Theoretically, the coldest a system can be is when its temperature is absolute zero, at which point the thermal motion in matter would be zero. However, an actual physical system or object can never attain a temperature of absolute zero. Absolute zero is denoted as 0 K on the Kelvin scale, −273.15 °C on the Celsius scale, and −459.67 °F on the Fahrenheit scale. For an ideal gas, temperature is proportional to the average kinetic energy of the random microscopic motions of the constituent microscopic particles. Temperature is important in all fields of natural science, including physics, Earth science, and biology, as well as most aspects of daily life.
Many physical processes are affected by temperature, such as:

- physical properties of materials, including the phase, solubility, vapor pressure, and electrical conductivity
- the rate and extent to which chemical reactions occur
- the amount and properties of thermal radiation emitted from the surface of an object
- the speed of sound, which is a function of the square root of the absolute temperature

Temperature scales differ in two ways: the point chosen as zero degrees, and the magnitudes of incremental units or degrees on the scale. The Celsius scale is used for common temperature measurements in most of the world. It is an empirical scale, developed by historical progress, which led to its zero point 0 °C being defined by the freezing point of water, with additional degrees defined so that 100 °C was the boiling point of water, both at sea-level atmospheric pressure. Because of the 100-degree interval, it was called a centigrade scale. Since the standardization of the kelvin in the International System of Units, the Celsius scale has been redefined in terms of the equivalent fixing points on the Kelvin scale, so that a temperature increment of one degree Celsius is the same as an increment of one kelvin, though the two scales differ by an additive offset of 273.15.
The United States uses the Fahrenheit scale, on which water freezes at 32 °F and boils at 212 °F at sea-level atmospheric pressure. Many scientific measurements use the Kelvin temperature scale, named in honor of the Scots-Irish physicist who first defined it. It is an absolute temperature scale: its zero point, 0 K, is defined to coincide with the coldest physically possible temperature, and its degrees are defined through thermodynamics. The temperature of absolute zero occurs at 0 K = −273.15 °C, and the freezing point of water at sea-level atmospheric pressure occurs at 273.15 K = 0 °C. The International System of Units defines a scale and unit for the kelvin, or thermodynamic temperature, by using the reliably reproducible temperature of the triple point of water as a second reference point. The triple point is a singular state with its own unique and invariant temperature and pressure, along with, for a fixed mass of water in a vessel of fixed volume, an autonomically and stably self-determining partition into three mutually contacting phases (liquid, vapour, and solid), dynamically depending only on the total internal energy of the mass of water.
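The fixed points relating the three scales amount to simple affine conversions. A minimal sketch (function names are illustrative):

```python
def celsius_to_kelvin(c):
    return c + 273.15           # one degree Celsius = one kelvin, offset 273.15

def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32       # a Fahrenheit degree is 5/9 of a Celsius degree

def fahrenheit_to_kelvin(f):
    return (f - 32) * 5 / 9 + 273.15

# Fixed points cited in the text:
print(celsius_to_kelvin(0))        # 273.15 (water freezes)
print(celsius_to_fahrenheit(0))    # 32.0
print(celsius_to_fahrenheit(100))  # 212.0 (water boils)
print(celsius_to_kelvin(-273.15))  # 0.0 (absolute zero)
```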
For historical reasons, the triple point temperature of water is fixed at 273.16 units of the measurement increment. There is a variety of kinds of temperature scale; it may be convenient to classify them as empirically based and theoretically based. Empirical temperature scales are older, while theoretically based scales arose in the middle of the nineteenth century. Empirically based temperature scales rely directly on measurements of simple physical properties of materials. For example, the length of a column of mercury, confined in a glass-walled capillary tube, is dependent on temperature and is the basis of the useful mercury-in-glass thermometer. Such scales are valid only within convenient ranges of temperature; for example, above the boiling point of mercury, a mercury-in-glass thermometer is impracticable. Most materials expand with temperature increase, but some materials, such as water, contract with temperature increase over some specific range, and there they are hardly useful as thermometric materials. A material is of no use as a thermometer near one of its phase-change temperatures, for example its boiling point.
In spite of these restrictions, most generally used practical thermometers are of the empirically based kind. Empirical thermometry was notably used for calorimetry, which contributed to the discovery of thermodynamics. Nevertheless, empirical thermometry has serious drawbacks when judged as a basis for theoretical physics. Empirically based thermometers, beyond their base as simple direct measurements of ordinary physical properties of thermometric materials, can be re-calibrated by use of theoretical physical reasoning, and this can extend their range of adequacy. Theoretically based temperature scales are based directly on theoretical arguments, especially those of thermodynamics, kinetic theory, and quantum mechanics; they rely on theoretical properties of idealized materials. They are more or less comparable with feasible physical devices and materials. Theoretically based temperature scales are used to provide calibrating standards for practi
The electron is a subatomic particle, symbol e⁻ or β⁻, whose electric charge is negative one elementary charge. Electrons belong to the first generation of the lepton particle family and are thought to be elementary particles because they have no known components or substructure. The electron has a mass that is approximately 1/1836 that of the proton. Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value, expressed in units of the reduced Planck constant, ħ. As it is a fermion, no two electrons can occupy the same quantum state, in accordance with the Pauli exclusion principle. Like all elementary particles, electrons exhibit properties of both particles and waves: they can collide with other particles and can be diffracted like light. The wave properties of electrons are easier to observe with experiments than those of other particles like neutrons and protons because electrons have a lower mass and hence a longer de Broglie wavelength for a given energy. Electrons play an essential role in numerous physical phenomena, such as electricity, magnetism, and thermal conductivity, and they also participate in gravitational and weak interactions.
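The mass-wavelength relationship can be made concrete with the non-relativistic de Broglie formula λ = h/√(2mE). The sketch below compares an electron and a proton at the same kinetic energy, using rounded CODATA constants; the function name is illustrative.

```python
import math

H = 6.62607015e-34             # Planck constant, J*s (exact in the 2019 SI)
M_ELECTRON = 9.1093837015e-31  # electron mass, kg
M_PROTON = 1.67262192369e-27   # proton mass, kg
EV = 1.602176634e-19           # joules per electronvolt

def de_broglie_wavelength(mass_kg, kinetic_energy_ev):
    """Non-relativistic de Broglie wavelength: lambda = h / sqrt(2*m*E)."""
    return H / math.sqrt(2 * mass_kg * kinetic_energy_ev * EV)

# At 100 eV, the electron's wavelength is ~sqrt(1836) ~ 43x the proton's,
# which is why electron diffraction is so much easier to observe.
print(de_broglie_wavelength(M_ELECTRON, 100))  # ~1.23e-10 m
print(de_broglie_wavelength(M_PROTON, 100))    # ~2.86e-12 m
```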
Since an electron has charge, it has a surrounding electric field, and if that electron is moving relative to an observer, it will generate a magnetic field. Electromagnetic fields produced from other sources will affect the motion of an electron according to the Lorentz force law. Electrons radiate or absorb energy in the form of photons when they are accelerated. Laboratory instruments are capable of trapping individual electrons as well as electron plasma by the use of electromagnetic fields. Special telescopes can detect electron plasma in outer space. Electrons are involved in many applications such as electronics, cathode ray tubes, electron microscopes, radiation therapy, gaseous ionization detectors, and particle accelerators. Interactions involving electrons with other subatomic particles are of interest in fields such as chemistry and nuclear physics. The Coulomb force interaction between the positive protons within atomic nuclei and the negative electrons without allows the composition of the two, known as atoms.
Ionization or differences in the proportions of negative electrons versus positive nuclei change the binding energy of an atomic system. The exchange or sharing of electrons between two or more atoms is the main cause of chemical bonding. In 1838, British natural philosopher Richard Laming first hypothesized the concept of an indivisible quantity of electric charge to explain the chemical properties of atoms. Irish physicist George Johnstone Stoney named this charge 'electron' in 1891, and J. J. Thomson and his team of British physicists identified it as a particle in 1897. Electrons can participate in nuclear reactions, such as nucleosynthesis in stars, where they are known as beta particles. Electrons can be created through beta decay of radioactive isotopes and in high-energy collisions, for instance when cosmic rays enter the atmosphere. The antiparticle of the electron is called the positron. When an electron collides with a positron, both particles can be annihilated, producing gamma ray photons.
The ancient Greeks noticed that amber attracted small objects when rubbed with fur. Along with lightning, this phenomenon is one of humanity's earliest recorded experiences with electricity. In his 1600 treatise De Magnete, the English scientist William Gilbert coined the New Latin term electrica, to refer to those substances with a property similar to that of amber, which attract small objects after being rubbed. Both electric and electricity are derived from the Latin ēlectrum, which came from the Greek word for amber, ἤλεκτρον. In the early 1700s, Francis Hauksbee and French chemist Charles François du Fay independently discovered what they believed were two kinds of frictional electricity: one generated from rubbing glass, the other from rubbing resin. From this, du Fay theorized that electricity consists of two electrical fluids, vitreous and resinous, that are separated by friction and that neutralize each other when combined. American scientist Ebenezer Kinnersley also independently reached the same conclusion. A decade later, Benjamin Franklin proposed that electricity was not from different types of electrical fluid, but a single electrical fluid showing an excess or deficit.
He gave them the modern charge nomenclature of positive and negative respectively. Franklin thought of the charge carrier as being positive, but he did not correctly identify which situation was a surplus of the charge carrier and which a deficit. Between 1838 and 1851, British natural philosopher Richard Laming developed the idea that an atom is composed of a core of matter surrounded by subatomic particles that had unit electric charges. Beginning in 1846, German physicist Wilhelm Eduard Weber theorized that electricity was composed of positively and negatively charged fluids, and that their interaction was governed by the inverse square law. After studying the phenomenon of electrolysis in 1874, Irish physicist George Johnstone Stoney suggested that there existed a "single definite quantity of electricity", the charge of a monovalent ion, and he was able to estimate the value of this elementary charge e by means of Faraday's laws of electrolysis. However, Stoney believed these charges were permanently attached to atoms and could not be removed. In 1881, German physicist Hermann von Helmholtz argued that both positive and negative charges were divided into elementary parts, each of which "behaves like atoms of electricity".
In 1891, Stoney coined the term electron to describe these elementary charges.
Ambiguity is a type of meaning in which a phrase, statement, or resolution is not explicitly defined, making several interpretations plausible. A common aspect of ambiguity is uncertainty; it is thus an attribute of any idea or statement whose intended meaning cannot be definitively resolved according to a rule or process with a finite number of steps. The concept of ambiguity is contrasted with vagueness: in ambiguity, specific and distinct interpretations are permitted, whereas with vague information it is difficult to form any interpretation at the desired level of specificity. Context may play a role in resolving ambiguity; for example, the same piece of information may be ambiguous in one context and unambiguous in another. Lexical ambiguity is contrasted with semantic ambiguity. The former represents a choice between a finite number of known and meaningful context-dependent interpretations; the latter represents a choice between any number of possible interpretations, none of which may have a standard agreed-upon meaning.
This form of ambiguity is closely related to vagueness. Linguistic ambiguity can be a problem in law, because the interpretation of written documents and oral agreements is of paramount importance. The lexical ambiguity of a word or phrase pertains to its having more than one meaning in the language to which the word belongs. "Meaning" here refers to whatever should be captured by a good dictionary. For instance, the word "bank" has several distinct lexical definitions, including "financial institution" and "edge of a river". Or consider "apothecary": one could say "I bought herbs from the apothecary", which could mean one spoke to the apothecary himself (the pharmacist) or simply went to the apothecary (the shop). Usually, the context in which an ambiguous word is used makes it evident which of the meanings is intended. If, for instance, someone says "I buried $100 in the bank", most people would not think someone used a shovel to dig in the mud. However, some linguistic contexts do not provide sufficient information to disambiguate a used word. Lexical ambiguity can be addressed by algorithmic methods that automatically associate the appropriate meaning with a word in context, a task referred to as word sense disambiguation.
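The word-sense-disambiguation task just mentioned can be illustrated with a toy version of the classic Lesk approach: choose the sense whose dictionary gloss shares the most words with the surrounding sentence. The glosses below are illustrative stand-ins for a real lexicon, not entries from an actual dictionary.

```python
# Toy word sense disambiguation via a simplified Lesk algorithm.
# The glosses are hypothetical examples, not real dictionary entries.

GLOSSES = {
    "bank": {
        "financial institution": "institution that accepts deposits of money and makes loans",
        "edge of a river": "sloping land of mud and grass beside a river or stream",
    }
}

def disambiguate(word, sentence):
    """Return the sense of `word` whose gloss overlaps most with the sentence."""
    context = set(sentence.lower().split())
    best_sense, best_overlap = None, -1
    for sense, gloss in GLOSSES[word].items():
        overlap = len(context & set(gloss.split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(disambiguate("bank", "I buried $100 in the mud by the river"))
# → edge of a river
```

Real systems use far richer signals (part of speech, word embeddings, sense inventories such as WordNet), but the principle is the same: the context supplies the evidence that resolves the lexical ambiguity.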
The use of multi-defined words requires the author or speaker to clarify their context, and sometimes to elaborate on their specific intended meaning. The goal of clear, concise communication is that the receiver have no misunderstanding about what was meant to be conveyed. An exception to this could include a politician whose "weasel words" and obfuscation are necessary to gain support from multiple constituents with mutually exclusive, conflicting desires from their candidate of choice; ambiguity is thus a powerful tool of political science. More problematic are words whose senses express closely related concepts. "Good", for example, can mean "useful" or "functional", "exemplary", "pleasing", "moral", "righteous", etc.: "I have a good daughter" does not make clear which of these senses is intended. The various ways to apply prefixes and suffixes can also create ambiguity. Semantic ambiguity happens when a sentence contains an ambiguous word or phrase, that is, a word or phrase with more than one meaning. In "We saw her duck", the word "duck" can refer either to the person's bird, or to a motion she made.
Syntactic ambiguity arises when a sentence can have two different meanings because of the structure of the sentence, its syntax. This is often due to a modifying expression, such as a prepositional phrase, whose application is unclear. "He ate the cookies on the couch", for example, could mean that he ate those cookies that were on the couch, or it could mean that he was sitting on the couch when he ate the cookies. "To get in, you will need an entrance fee of $10 or your voucher and your driver's license." This could mean that you need EITHER ten dollars OR BOTH your voucher and your license; or it could mean that you need your license AND EITHER ten dollars OR a voucher. Only rewriting the sentence, or placing appropriate punctuation, can resolve a syntactic ambiguity. For the notion of, and theoretic results about, syntactic ambiguity in artificial, formal languages, see Ambiguous grammar. Spoken language can contain many more types of ambiguity, called phonological ambiguities, where there is more than one way to compose a set of sounds into words.
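The two readings of the entrance-fee sentence can be checked mechanically by treating it as a Boolean expression with two possible bracketings. This is a small illustrative sketch (the variable names are my own), showing that the readings genuinely diverge on some inputs:

```python
# The ambiguous requirement "$10 or your voucher and your license"
# admits two groupings, which disagree on some combinations of inputs.
from itertools import product

def reading_a(ten, voucher, lic):
    # Reading A: $10, OR (voucher AND license)
    return ten or (voucher and lic)

def reading_b(ten, voucher, lic):
    # Reading B: license AND ($10 OR voucher)
    return lic and (ten or voucher)

# Enumerate all valuations and report where the two readings differ.
for ten, voucher, lic in product([False, True], repeat=3):
    if reading_a(ten, voucher, lic) != reading_b(ten, voucher, lic):
        print(ten, voucher, lic)  # e.g. $10 in hand but no license
```

The loop finds the disagreements: with ten dollars but no license, reading A admits you while reading B does not. This is the same phenomenon an ambiguous grammar exhibits, where one token string has two distinct parse trees.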
For example, "ice cream" and "I scream". Such ambiguity is resolved according to the context. A mishearing of such, based on incorrectly resolved ambiguity, is called a mondegreen. Metonymy involves the use of the name of a subcomponent part as an abbreviation, or jargon, for the name of the whole object. In modern vocabulary, critical semiotics, metonymy encompasses any ambiguous word substitution, based on contextual contiguity, or a function or process that an object performs, such as "sweet ride" to refer to a nice car. Metonym miscommunication is considered a primary mechanism of linguistic humor. Philosophers