1.
Emission spectrum
–
The energy of the emitted photon is equal to the energy difference between the two states. There are many possible electron transitions for each atom, and each transition has a characteristic energy difference. This collection of different transitions, leading to different radiated wavelengths, makes each element's emission spectrum unique. Spectroscopy can therefore be used to identify the elements in matter of unknown composition; similarly, the emission spectra of molecules can be used in the chemical analysis of substances. The frequency of the emitted light is a function of the energy of the transition: since energy must be conserved, the energy difference between the two states equals the energy carried off by the photon. The energy states of the transitions can lead to emission over a very large range of frequencies. For example, visible light is emitted by the coupling of electronic states in atoms and molecules, whereas nuclear shell transitions can emit high-energy gamma rays. The emittance of an object quantifies how much light it emits; this may be related to other properties of the object through the Stefan–Boltzmann law. For most substances, the amount of emission varies with temperature, and precise measurements at many wavelengths allow a substance to be identified via emission spectroscopy. The quantum-mechanical problem is treated using time-dependent perturbation theory and leads to the result known as Fermi's golden rule. This description has been superseded by quantum electrodynamics, although the semi-classical version remains useful for most practical computations. When the electrons in an atom are excited, for example by heating, and then fall back down out of the excited state, the energy is re-emitted in the form of a photon. The wavelength of the photon is determined by the difference in energy between the two states, and these emitted photons form the element's spectrum.
The fact that only certain colors appear in an element's atomic emission spectrum means that only certain frequencies of light are emitted. Each of these frequencies is related to energy by the formula E photon = h ν, where E photon is the energy of the photon, ν is its frequency, and h is Planck's constant; it follows that only photons with specific energies are emitted by the atom. The principle of the emission spectrum explains the varied colors of neon signs. The frequencies of light that an atom can emit depend on the states its electrons can be in; when excited, an electron moves to a higher energy level or orbital.
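The relation E photon = h ν can be sketched numerically; the frequency value below is illustrative, and the function name is ours:

```python
# Photon energy from frequency, E = h * nu (a minimal illustration)
h = 6.626e-34  # Planck constant, J*s (rounded CODATA value)

def photon_energy(frequency_hz):
    """Energy in joules of a single photon of the given frequency."""
    return h * frequency_hz

# Green light near 540 THz carries a few times 10^-19 J per photon
E = photon_energy(5.4e14)
```

Two states separated by exactly this energy difference would emit at exactly this frequency, which is why an element's set of transition energies fixes its spectrum.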
2.
Absorption spectroscopy
–
Absorption spectroscopy refers to spectroscopic techniques that measure the absorption of radiation, as a function of frequency or wavelength, due to its interaction with a sample. The sample absorbs energy, i.e. photons, from the radiating field; the intensity of the absorption varies as a function of frequency, and this variation is the absorption spectrum. Absorption spectroscopy is performed across the electromagnetic spectrum; infrared and ultraviolet-visible spectroscopy are particularly common in analytical applications. Absorption spectroscopy is also employed in studies of molecular and atomic physics and in astronomical spectroscopy. There is a range of experimental approaches for measuring absorption spectra. The most common arrangement is to direct a beam of radiation at a sample and detect what passes through it. The transmitted energy can be used to calculate the absorption; the source, sample arrangement, and detection technique vary significantly depending on the frequency range and the purpose of the experiment. A material's absorption spectrum is the fraction of incident radiation absorbed by the material over a range of frequencies. The absorption spectrum is primarily determined by the atomic and molecular composition of the material: radiation is more likely to be absorbed at frequencies that match the energy difference between two quantum mechanical states of the molecules. The absorption that occurs due to a transition between two states is referred to as an absorption line, and a spectrum is typically composed of many lines. The frequencies where absorption lines occur, as well as their relative intensities, primarily depend on the electronic and molecular structure of the sample. The frequencies will also depend on the interactions between molecules in the sample, the crystal structure in solids, and several environmental factors. The lines also have a width and shape that are determined by the spectral density or the density of states of the system.
Absorption lines are classified by the nature of the quantum mechanical change induced in the molecule or atom. Rotational lines, for instance, occur when the rotational state of a molecule is changed; they are found in the microwave spectral region. Vibrational lines correspond to changes in the vibrational state of the molecule and are typically found in the infrared region. Electronic lines correspond to a change in the electronic state of an atom or molecule and are typically found in the visible and ultraviolet regions. X-ray absorptions are associated with the excitation of inner-shell electrons in atoms. These changes can also be combined, leading to new absorption lines at the combined energy of the two changes.
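As a toy illustration of the fraction-absorbed definition above, an absorption spectrum can be tabulated frequency by frequency; the intensity numbers here are invented for illustration, and a real measurement would also have to account for reflection and scattering losses:

```python
def absorbed_fraction(incident, transmitted):
    """Fraction of incident radiation absorbed: (I0 - I) / I0."""
    return (incident - transmitted) / incident

# Transmitted intensity measured at a few frequencies (Hz -> arbitrary units);
# a dip in transmission appears as a peak in absorption, i.e. an absorption line.
incident = 100.0
transmitted = {4.0e14: 98.0, 5.0e14: 40.0, 6.0e14: 97.0}
spectrum = {nu: absorbed_fraction(incident, i) for nu, i in transmitted.items()}
# The strong line near 5.0e14 Hz has an absorbed fraction of 0.6
```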
3.
Fraunhofer lines
–
In physics and optics, the Fraunhofer lines are a set of spectral lines named after the German physicist Joseph von Fraunhofer. The lines were originally observed as dark features in the optical spectrum of the Sun. In 1802, the English chemist William Hyde Wollaston was the first person to note the appearance of a number of dark features in the solar spectrum. In 1814, Fraunhofer independently rediscovered the lines and began a systematic study; in all, he mapped over 570 lines, designating the principal features with the letters A through K and weaker lines with other letters. Modern observations of sunlight can detect many thousands of lines. About 45 years later, Kirchhoff and Bunsen noticed that several Fraunhofer lines coincide with characteristic emission lines identified in the spectra of heated elements, and it was correctly deduced that the dark lines in the solar spectrum are caused by absorption by chemical elements in the solar atmosphere. Some of the features were identified as telluric lines originating from absorption by oxygen molecules in the Earth's atmosphere. The Fraunhofer lines are typical spectral absorption lines: dark lines, narrow regions of decreased intensity, that are the result of photons being absorbed as light passes from the source to the detector. In the Sun, the Fraunhofer lines are a result of gas in the photosphere, which is colder than the inner regions and absorbs light emitted from those regions. The D1 and D2 lines form the sodium doublet, the centre wavelength of which is given the designation letter D. This historical designation has stuck and is given to all the transitions between the ground state and the first excited state of the other alkali atoms as well. The D1 and D2 lines correspond to the fine-structure splitting of the excited states. This may be confusing because the excited state for this transition is the P-state of the alkali.
Similarly, there is ambiguity with reference to the e-line, since it can refer to lines of both iron and mercury; to resolve ambiguities that arise in usage, ambiguous Fraunhofer line designations are preceded by the element with which they are associated. Because of their well-defined wavelengths, Fraunhofer lines are often used to characterize the refractive index and dispersion properties of optical materials. See also: Abbe number, a measure of glass dispersion defined using Fraunhofer lines; Timeline of solar astronomy; Spectrum analysis; Myles W. Jackson, Joseph von Fraunhofer and the Craft of Precision Optics.
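Since the Abbe number mentioned above is defined from refractive indices measured at three Fraunhofer lines (d, F, and C), it makes a compact example; the index values below are illustrative of a crown-type glass, not measured data:

```python
def abbe_number(n_d, n_F, n_C):
    """Abbe number V_d = (n_d - 1) / (n_F - n_C), with the refractive index
    measured at the Fraunhofer d (587.6 nm), F (486.1 nm) and C (656.3 nm) lines."""
    return (n_d - 1.0) / (n_F - n_C)

# Illustrative crown-glass indices; a higher V_d means lower dispersion
V = abbe_number(n_d=1.5168, n_F=1.5224, n_C=1.5143)  # roughly 64
```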
4.
Rayleigh scattering
–
Rayleigh scattering is the predominantly elastic scattering of light by particles much smaller than the wavelength of the radiation; it does not change the internal state of the material and is, hence, a parametric process. The particles may be individual atoms or molecules. It can also occur when light travels through transparent solids and liquids. Rayleigh scattering results from the electric polarizability of the particles: the oscillating electric field of a light wave acts on the charges within a particle, and the particle therefore becomes a small radiating dipole whose radiation we see as scattered light. Rayleigh scattering of sunlight in the atmosphere causes diffuse sky radiation, which is the reason for the blue color of the sky. The amount of scattering is inversely proportional to the fourth power of the wavelength. Scattering by individual molecules can also have an inelastic contribution involving changes in the rotational state of the molecules; this contribution has the same wavelength dependency as the elastic part. Scattering by particles similar to, or larger than, the wavelength of light is treated by Mie theory; Rayleigh scattering applies to particles that are small with respect to the wavelength of light, while anomalous diffraction theory applies to optically soft but larger particles. The size of a particle is often parameterized by the ratio x = 2πr/λ, where r is its characteristic length and λ is the wavelength. The wavelength dependence is characteristic of dipole scattering, and the volume dependence will apply to any scattering mechanism. Objects with x ≫ 1 act as geometric shapes, scattering light according to their projected area. At the intermediate x ≃ 1 of Mie scattering, interference effects develop through phase variations over the object's surface. Rayleigh scattering applies to the case when the scattering particle is very small and the whole surface re-radiates with the same phase. For example, the major constituent of the atmosphere, nitrogen, has a Rayleigh cross section of 5.1×10−31 m2 at a wavelength of 532 nm.
This means that at atmospheric pressure, where there are about 2×1025 molecules per cubic meter, a fraction of roughly 10−5 of the light is scattered for every meter of travel. The strong wavelength dependence of the scattering means that shorter (blue) wavelengths are scattered more strongly than longer (red) wavelengths. This results in the blue light coming from all regions of the sky. Rayleigh scattering is a good approximation of the manner in which light scattering occurs within various media for which the scattering particles have a small size parameter. A portion of the beam of light coming from the sun scatters off molecules of gas in the atmosphere; here, Rayleigh scattering primarily occurs through sunlight's interaction with randomly located air molecules.
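The inverse-fourth-power wavelength dependence described above is easy to quantify; the wavelengths below are representative choices for blue and red light:

```python
def rayleigh_ratio(short_nm, long_nm):
    """Relative Rayleigh scattering strength of two wavelengths,
    using intensity proportional to 1 / wavelength^4."""
    return (long_nm / short_nm) ** 4

# Blue (~450 nm) versus red (~700 nm): blue is scattered roughly 6x more
# strongly, which is why diffuse sky radiation looks blue.
ratio = rayleigh_ratio(450.0, 700.0)
```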
5.
Atom
–
An atom is the smallest constituent unit of ordinary matter that has the properties of a chemical element. Every solid, liquid, gas, and plasma is composed of neutral or ionized atoms. Atoms are very small; typical sizes are around 100 picometers. Atoms are small enough that attempting to predict their behavior using classical physics, as if they were billiard balls, gives noticeably incorrect results due to quantum effects. Through the development of physics, atomic models have incorporated quantum principles to better explain and predict this behavior. Every atom is composed of a nucleus and one or more electrons bound to the nucleus. The nucleus is made of one or more protons and typically a similar number of neutrons. Protons and neutrons are called nucleons; more than 99.94% of an atom's mass is in the nucleus. The protons have a positive electric charge and the electrons have a negative electric charge. If the numbers of protons and electrons are equal, the atom is electrically neutral; if an atom has more or fewer electrons than protons, it has an overall negative or positive charge, respectively, and is called an ion. The electrons of an atom are attracted to the protons in the nucleus by the electromagnetic force. The number of protons in the nucleus defines to what chemical element the atom belongs, while the number of neutrons defines the isotope of the element. The number of electrons influences the magnetic properties of an atom. Atoms can attach to one or more other atoms by chemical bonds to form compounds such as molecules. The ability of atoms to associate and dissociate is responsible for most of the changes observed in nature. The idea that matter is made up of discrete units is a very old one, appearing in many ancient cultures such as Greece. The word atom was coined by ancient Greek philosophers; however, these ideas were founded in philosophical and theological reasoning rather than evidence and experimentation. As a result, their views on what atoms look like and how they behave were incorrect.
They also could not convince everybody, so atomism was but one of a number of competing theories on the nature of matter. It was not until the 19th century that the idea was embraced and refined by scientists; in the early 1800s, John Dalton used the concept of atoms to explain why elements always react in ratios of small whole numbers.
6.
Molecule
–
A molecule is an electrically neutral group of two or more atoms held together by chemical bonds. Molecules are distinguished from ions by their lack of electrical charge; however, in quantum physics, organic chemistry, and biochemistry, the term molecule is often used less strictly, also being applied to polyatomic ions. In the kinetic theory of gases, the term molecule is often used for any gaseous particle regardless of its composition; according to this definition, noble gas atoms are considered molecules, as they are in fact monatomic molecules. A molecule may be homonuclear, that is, consisting of atoms of one element, as with oxygen, or it may be heteronuclear, a chemical compound composed of more than one element. Atoms and complexes connected by non-covalent interactions, such as hydrogen bonds or ionic bonds, are typically not considered single molecules. Molecules as components of matter are common in organic substances, and they also make up most of the oceans and atmosphere. However, no typical molecule can be defined for ionic crystals and covalent crystals; the theme of repeated unit-cellular structure also holds for most condensed phases with metallic bonding, which means that solid metals are also not made of molecules. In glasses, atoms may also be held together by chemical bonds with no presence of any definable molecule. The science of molecules is called molecular chemistry or molecular physics; in practice, however, this distinction is vague. In molecular sciences, a molecule consists of a stable system composed of two or more atoms. Polyatomic ions may sometimes be thought of as electrically charged molecules. The term unstable molecule is used for very reactive species, i.e. short-lived assemblies of electrons and nuclei. According to Merriam-Webster and the Online Etymology Dictionary, the word molecule derives from the Latin moles, or small unit of mass: molecule, "extremely minute particle", from French molécule, from New Latin molecula, diminutive of Latin moles "mass". A vague meaning at first, the vogue for the word can be traced to the philosophy of Descartes.
The definition of the molecule has evolved as knowledge of the structure of molecules has increased; earlier definitions were less precise, defining molecules as the smallest particles of pure chemical substances that still retain their composition and chemical properties. Molecules are held together by either covalent bonding or ionic bonding. Several types of non-metal elements exist only as molecules in the environment; for example, hydrogen only exists as the hydrogen molecule. A molecule of a compound is made out of two or more elements. A covalent bond is a chemical bond that involves the sharing of electron pairs between atoms.
7.
Incandescent light bulb
–
An incandescent light bulb, incandescent lamp or incandescent light globe is an electric light with a wire filament heated to such a high temperature that it glows with visible light. The filament, heated by passing an electric current through it, is protected from oxidation by a glass or quartz bulb that is filled with inert gas or evacuated. In a halogen lamp, filament evaporation is slowed by a chemical process that redeposits metal vapor onto the filament. The light bulb is supplied with electric current by feed-through terminals or wires embedded in the glass. Most bulbs are used in a socket which provides mechanical support and electrical connections. Incandescent bulbs are manufactured in a wide range of sizes, light outputs, and voltage ratings, from 1.5 volts to about 300 volts. They require no external regulating equipment and have low manufacturing costs, but only a small fraction of the energy they consume is emitted as visible light; the remaining energy is converted into heat. The luminous efficacy of a typical incandescent bulb is about 16 lumens per watt. Some applications of the incandescent bulb deliberately use the heat generated by the filament; such applications include incubators, brooding boxes for poultry, heat lights for reptile tanks, infrared heating for industrial heating and drying processes, lava lamps, and the Easy-Bake Oven toy. In addressing the question of who invented the incandescent lamp, historians Robert Friedel and Paul Israel list 22 inventors of incandescent lamps prior to Joseph Swan and Thomas Edison; historian Thomas Hughes has attributed Edison's success to his development of an entire, integrated system of electric lighting. In 1761, Ebenezer Kinnersley demonstrated heating a wire to incandescence; it was not bright enough, nor did it last long enough, to be practical, but it was the precedent behind the efforts of scores of experimenters over the next 75 years. Over the first three-quarters of the 19th century, many experimenters worked with various combinations of platinum or iridium wires and carbon rods; many of these devices were demonstrated and some were patented.
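Luminous efficacy, quoted above as about 16 lumens per watt, is simply emitted luminous flux divided by electrical power; the flux figure below is back-calculated from that efficacy for a hypothetical 60 W bulb, not a measured value:

```python
def luminous_efficacy(luminous_flux_lm, power_w):
    """Luminous efficacy in lumens per watt."""
    return luminous_flux_lm / power_w

# A 60 W bulb at 16 lm/W would emit 960 lm (illustrative numbers)
efficacy = luminous_efficacy(960.0, 60.0)  # 16.0 lm/W
```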
In 1835, James Bowman Lindsay demonstrated a constant electric light at a public meeting in Dundee. He stated that he could read a book by its light at a distance of one and a half feet. However, having perfected the device to his own satisfaction, he turned to the problem of wireless telegraphy and did not develop the electric light any further. His claims are not well documented, although he is credited in Challoner et al. with being the inventor of the incandescent light bulb. In 1838, Belgian lithographer Marcellin Jobard invented an incandescent light bulb with a vacuum atmosphere using a carbon filament. In 1840, British scientist Warren de la Rue enclosed a coiled platinum filament in a vacuum tube; although a workable design, the cost of the platinum made it impractical for commercial use. In 1841, Frederick de Moleyns of England was granted the first patent for an incandescent lamp. In 1845, American John W. Starr acquired a patent for his incandescent light bulb involving the use of carbon filaments. He died shortly after obtaining the patent, and his invention was never produced commercially; little else is known about him. In 1851, Jean Eugène Robert-Houdin publicly demonstrated incandescent light bulbs on his estate in Blois; his light bulbs are on display in the museum of the Château de Blois.
8.
Compact fluorescent lamp
–
A compact fluorescent lamp (CFL) uses a tube which is curved or folded to fit into the space of an incandescent bulb, and a compact electronic ballast in the base of the lamp. Compared to general-service incandescent lamps giving the same amount of visible light, CFLs use one-fifth to one-third the electric power. A CFL has a higher purchase price than an incandescent lamp. Like all fluorescent lamps, CFLs contain toxic mercury, which complicates their disposal; in many countries, governments have banned the disposal of CFLs together with regular garbage. These countries have established special collection systems for CFLs and other hazardous waste. CFLs radiate a spectral power distribution that is different from that of incandescent lamps. White LED lamps now compete with CFLs for high-efficiency house lighting. The parent of the modern fluorescent lamp was invented in the late 1890s by Peter Cooper Hewitt. The Cooper Hewitt lamps were used for photographic studios and industries. Edmund Germer, Friedrich Meyer, and Hans Spanner patented a high-pressure vapor lamp in 1927. George Inman later teamed with General Electric to create a practical fluorescent lamp, sold in 1938. Circular and U-shaped lamps were devised to reduce the length of fluorescent light fixtures. The first fluorescent light bulb and fixture were displayed to the public at the 1939 New York World's Fair. The spiral CFL was invented in 1976 by Edward E. Hammer. Although the design met its goals, it would have cost GE about $25 million to build new factories to produce the lamps, and thus the invention was shelved. The design was eventually copied by others. In 1995, helical CFLs, manufactured in China, became commercially available; since that time, their sales have steadily increased. In 1980, Philips introduced its model SL, which was a screw-in lamp with an integral magnetic ballast. The lamp used a folded T4 tube and stable tri-color phosphors; this was the first successful screw-in replacement for an incandescent lamp.
In 1985, Osram started selling its model EL lamp, which was the first CFL to include an electronic ballast. In 2016, General Electric announced the phase-out of CFL production. LED prices had dropped steadily, falling well below $5 for a basic bulb in 2015, and as a result customers had been migrating toward LEDs. CFLs were also having difficulty qualifying for the Energy Star rating under newer regulations. There are two types of CFLs: integrated and non-integrated lamps. Integrated lamps combine the tube and ballast in a single unit; these lamps allow consumers to replace incandescent lamps easily with CFLs.
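The one-fifth to one-third power figure quoted above gives a quick rule of thumb for sizing a CFL replacement; the 60 W example below is ours:

```python
def cfl_power_range(incandescent_watts):
    """Approximate CFL wattage for the same light output, using the
    one-fifth to one-third power-consumption rule of thumb."""
    return (incandescent_watts / 5.0, incandescent_watts / 3.0)

low, high = cfl_power_range(60.0)  # roughly 12 W to 20 W
```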
9.
Quantum mechanics
–
Quantum mechanics, including quantum field theory, is a branch of physics which is the fundamental theory of nature at the small scales and low energies of atoms and subatomic particles. Classical physics, the physics existing before quantum mechanics, derives from quantum mechanics as an approximation valid only at large scales. Early quantum theory was profoundly reconceived in the mid-1920s. The reconceived theory is formulated in various specially developed mathematical formalisms; in one of them, a mathematical function, the wave function, provides information about the probability amplitude of position, momentum, and other physical properties of a particle. In 1803, Thomas Young, an English polymath, performed the famous double-slit experiment that he later described in a paper titled On the nature of light. This experiment played a major role in the general acceptance of the wave theory of light. In 1838, Michael Faraday discovered cathode rays. Planck's hypothesis that energy is radiated and absorbed in discrete quanta precisely matched the observed patterns of black-body radiation. In 1896, Wilhelm Wien empirically determined a distribution law of black-body radiation; Ludwig Boltzmann independently arrived at this result by considerations of Maxwell's equations. However, it was valid only at high frequencies and underestimated the radiance at low frequencies. Later, Planck corrected this model using Boltzmann's statistical interpretation of thermodynamics and proposed what is now called Planck's law. Following Max Planck's solution in 1900 to the black-body radiation problem, Albert Einstein offered a quantum-based theory to explain the photoelectric effect. Among the first to study quantum phenomena in nature were Arthur Compton and C. V. Raman. Robert Andrews Millikan studied the photoelectric effect experimentally, and Albert Einstein developed a theory for it. In 1913, Peter Debye extended Niels Bohr's theory of atomic structure, introducing elliptical orbits.
This phase is known as the old quantum theory. According to Planck, each energy element is proportional to its frequency, E = h ν, where h is Planck's constant. Planck cautiously insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself. In fact, he considered his quantum hypothesis a mathematical trick to get the right answer rather than a sizable discovery. However, Einstein interpreted the quanta realistically in his explanation of the photoelectric effect, and he won the 1921 Nobel Prize in Physics for this work. Lower energy/frequency means increased time and vice versa: photons of differing frequencies all deliver the same amount of action, but do so in varying time intervals. High-frequency waves are damaging to human tissue because they deliver their action packets concentrated in time. The Copenhagen interpretation of Niels Bohr became widely accepted. In the mid-1920s, developments in quantum mechanics led to its becoming the standard formulation for atomic physics. In the summer of 1925, Bohr and Heisenberg published results that closed the old quantum theory. Out of deference to their particle-like behavior in certain processes and measurements, light quanta came to be called photons.
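The claim above that quanta of all frequencies deliver the same amount of action can be checked directly: the energy E = h ν of one quantum, multiplied by the wave period 1/ν, is always h. A minimal sketch:

```python
h = 6.626e-34  # Planck constant, J*s

def quantum_action(frequency_hz):
    """(h * nu) * (1 / nu): energy of one quantum times its wave period."""
    energy = h * frequency_hz
    period = 1.0 / frequency_hz
    return energy * period

# The product equals h whether the frequency is a radio wave or a gamma ray,
# but high frequencies deliver that action in a much shorter time interval.
```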
10.
Atomic nucleus
–
The atomic nucleus is the small, dense region consisting of protons and neutrons at the center of an atom. After the discovery of the neutron in 1932, models for a nucleus composed of protons and neutrons were quickly developed by Dmitri Ivanenko and Werner Heisenberg. Almost all of the mass of an atom is located in the nucleus. Protons and neutrons are bound together to form a nucleus by the nuclear force. The diameter of the nucleus is in the range of 1.75 fm for hydrogen to about 15 fm for the heaviest atoms; these dimensions are much smaller than the diameter of the atom itself, by a factor of about 23,000 to about 145,000. The branch of physics concerned with the study and understanding of the atomic nucleus, including its composition and the forces which bind it together, is called nuclear physics. The nucleus was discovered in 1911, as a result of Ernest Rutherford's efforts to test Thomson's plum pudding model of the atom; the electron had already been discovered earlier by J. J. Thomson himself. Knowing that atoms are electrically neutral, Thomson postulated that there must be a source of positive charge as well. In his plum pudding model, Thomson suggested that an atom consisted of negative electrons randomly scattered within a sphere of positive charge. When Rutherford and his colleagues fired alpha particles at thin metal foil, to his surprise, many of the particles were deflected at very large angles. This justified the idea of an atom with a dense center of positive charge. The term nucleus is from the Latin word nucleus, a diminutive of nux. In 1844, Michael Faraday used the term to refer to the central point of an atom. The modern atomic meaning was proposed by Ernest Rutherford in 1912. The adoption of the term nucleus in atomic theory, however, was not immediate; in 1916, for example, Gilbert N. Lewis still described the atom in terms of a kernel and an outer shell. The nuclear strong force extends far enough from each baryon so as to bind the neutrons and protons together against the repulsive electrical force between the positively charged protons. The nuclear strong force has a very short range, and essentially drops to zero just beyond the edge of the nucleus.
The collective action of the positively charged nucleus is to hold the electrically negative charged electrons in their orbits about the nucleus. The collection of negatively charged electrons orbiting the nucleus displays an affinity for certain configurations. Which chemical element an atom represents is determined by the number of protons in the nucleus; the neutral atom will have an equal number of electrons orbiting that nucleus. Individual chemical elements can create more stable electron configurations by combining to share their electrons, and it is that sharing of electrons to create stable electronic orbits about the nucleus that appears to us as the chemistry of our macro world. Protons define the entire charge of a nucleus, and hence its chemical identity; neutrons are electrically neutral, but contribute to the mass of a nucleus to nearly the same extent as the protons. Neutrons explain the phenomenon of isotopes – varieties of the same chemical element which differ only in their atomic mass. Protons and neutrons are sometimes viewed as two different quantum states of the same particle, the nucleon.
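The point above about isotopes can be stated compactly: protons fix the element, neutrons fix the isotope, and the mass number is their sum. A trivial sketch:

```python
def mass_number(protons, neutrons):
    """Mass number A = Z + N; Z sets the chemical identity, N the isotope."""
    return protons + neutrons

# Two isotopes of carbon (Z = 6): same chemistry, different atomic mass
carbon_12 = mass_number(6, 6)
carbon_13 = mass_number(6, 7)
```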
11.
Photon
–
A photon is an elementary particle, the quantum of the electromagnetic field, including electromagnetic radiation such as light, and the force carrier for the electromagnetic force. The photon has zero rest mass and always moves at the speed of light. Like all elementary particles, photons are currently best explained by quantum mechanics and exhibit wave–particle duality, exhibiting properties of both waves and particles. For example, a photon may be refracted by a lens and exhibit wave interference with itself, yet the quanta in a light wave cannot be spatially localized. The modern concept of the photon was developed gradually by Albert Einstein in the early 20th century to explain experimental observations that did not fit the classical wave model of light. The benefit of the photon model was that it accounted for the frequency dependence of light's energy. The photon model accounted for anomalous observations, including the properties of black-body radiation, that semiclassical models had failed to explain; in those models, light was described by Maxwell's equations while material objects emitted and absorbed light in quantized amounts. In 1926, the optical physicist Frithiof Wolfers and the chemist Gilbert N. Lewis coined the name photon for these particles. After Arthur H. Compton won the Nobel Prize in 1927 for his scattering studies, most scientists accepted that light quanta have an independent existence. In the Standard Model of particle physics, photons and other elementary particles are described as a necessary consequence of physical laws having a certain symmetry at every point in spacetime, and the intrinsic properties of particles, such as charge and mass, are determined by this symmetry. The photon concept has been applied to photochemistry, high-resolution microscopy, and measurements of molecular distances. Recently, photons have been studied as elements of quantum computers. In 1900, the German physicist Max Planck was studying black-body radiation and suggested that the energy carried by electromagnetic waves could only be released in packets of energy.
In his 1901 article in Annalen der Physik he called these packets energy elements. The word quanta was used before 1900 to mean particles or amounts of different quantities, including electricity. In 1905, Albert Einstein suggested that electromagnetic waves could only exist as discrete wave-packets. He called such a wave-packet the light quantum. The name photon derives from the Greek word for light, φῶς. Arthur Compton used photon in 1928, referring to Gilbert N. Lewis, who had coined the term. The name was suggested initially as a unit related to the illumination of the eye and the resulting sensation of light, and was used later in a physiological context. Although Wolfers's and Lewis's theories were contradicted by many experiments and never accepted, the new name was adopted by most physicists. In physics, a photon is usually denoted by the symbol γ.
12.
Electron
–
The electron is a subatomic particle, symbol e− or β−, with a negative elementary electric charge. Electrons belong to the first generation of the lepton particle family. The electron has a mass that is approximately 1/1836 that of the proton. Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value, expressed in units of the reduced Planck constant. As it is a fermion, no two electrons can occupy the same quantum state, in accordance with the Pauli exclusion principle. Like all elementary particles, electrons exhibit properties of both particles and waves: they can collide with other particles and can be diffracted like light. Since an electron has charge, it has a surrounding electric field. Electromagnetic fields produced from other sources will affect the motion of an electron according to the Lorentz force law, and electrons radiate or absorb energy in the form of photons when they are accelerated. Laboratory instruments are capable of trapping individual electrons as well as electron plasma by the use of electromagnetic fields; special telescopes can detect electron plasma in outer space. Electrons are involved in applications such as electronics, welding, cathode ray tubes, electron microscopes, radiation therapy, lasers, and gaseous ionization detectors. Interactions involving electrons with other subatomic particles are of interest in fields such as chemistry. The Coulomb force interaction between the positive protons within atomic nuclei and the negative electrons without allows the composition of the two known as atoms; ionization or differences in the proportions of negative electrons versus positive nuclei changes the binding energy of an atomic system. The exchange or sharing of electrons between two or more atoms is the cause of chemical bonding. In 1838, British natural philosopher Richard Laming first hypothesized the concept of an indivisible quantity of electric charge to explain the chemical properties of atoms.
Irish physicist George Johnstone Stoney named this charge electron in 1891. Electrons can also participate in nuclear reactions, such as nucleosynthesis in stars, where they are known as beta particles. Electrons can be created through beta decay of radioactive isotopes and in high-energy collisions. The antiparticle of the electron is called the positron; it is identical to the electron except that it carries electric charge of the opposite sign. When an electron collides with a positron, both particles can be totally annihilated, producing gamma ray photons. The ancient Greeks noticed that amber attracted small objects when rubbed with fur; along with lightning, this phenomenon is one of humanity's earliest recorded experiences with electricity. In his 1600 treatise De Magnete, the English scientist William Gilbert coined the New Latin term electricus. Both electric and electricity are derived from the Latin ēlectrum, which came from the Greek word for amber, ἤλεκτρον.
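The Lorentz force law mentioned above determines how external fields move an electron; here is a minimal component-wise sketch with illustrative field values (the helper names are ours):

```python
# F = q * (E + v x B), SI units
Q_ELECTRON = -1.602e-19  # electron charge, C

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def lorentz_force(q, E, v, B):
    """Force on a charge q with velocity v in electric field E and magnetic field B."""
    vxB = cross(v, B)
    return tuple(q * (E[i] + vxB[i]) for i in range(3))

# Electron moving along +x through a magnetic field along +z (illustrative values):
# v x B points along -y, and the negative charge flips the force to +y.
F = lorentz_force(Q_ELECTRON, (0.0, 0.0, 0.0), (1.0e6, 0.0, 0.0), (0.0, 0.0, 1.0e-3))
```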
13.
Electron configuration
–
In atomic physics and quantum chemistry, the electron configuration is the distribution of electrons of an atom or molecule in atomic or molecular orbitals. For example, the configuration of the neon atom is 1s2 2s2 2p6. Electronic configurations describe electrons as each moving independently in an orbital; mathematically, configurations are described by Slater determinants or configuration state functions. Knowledge of the configuration of different atoms is useful in understanding the structure of the periodic table of elements. It is also useful for describing the chemical bonds that hold atoms together; in bulk materials, this same idea helps explain the peculiar properties of lasers and semiconductors. An electron shell is the set of allowed states that share the same principal quantum number, n. An atom's nth electron shell can accommodate 2n2 electrons, e.g. the first shell can accommodate 2 electrons and the second shell 8 electrons. A subshell is the set of states defined by a common azimuthal quantum number, ℓ, within a shell. The values ℓ = 0, 1, 2, 3 correspond to the s, p, d, and f labels; for example, the 3d subshell has n = 3 and ℓ = 2. The maximum number of electrons that can be placed in a subshell is given by 2(2ℓ + 1); this gives two electrons in an s subshell, six electrons in a p subshell, ten electrons in a d subshell, and fourteen electrons in an f subshell. Physicists and chemists use a standard notation to indicate the electron configurations of atoms. For atoms, the notation consists of a sequence of atomic subshell labels with the number of electrons assigned to each subshell placed as a superscript. For example, hydrogen has one electron in the s-orbital of the first shell, so its configuration is written 1s1. Lithium has two electrons in the 1s-subshell and one in the 2s-subshell, so its configuration is written 1s2 2s1; phosphorus is as follows: 1s2 2s2 2p6 3s2 3p3.
For atoms with many electrons this notation can become lengthy, so an abbreviated notation is used. Phosphorus, for instance, is in the third period: it differs from neon, whose configuration is 1s2 2s2 2p6, only by the presence of a third shell, and may be abbreviated as [Ne] 3s2 3p3. This convention is useful because it is the electrons in the outermost shell that most determine the chemistry of the element. For a given configuration, the order of writing the orbitals is not completely fixed, since only the orbital occupancies have physical significance. For example, the configuration of the titanium ground state can be written as either 4s2 3d2 or 3d2 4s2. The first notation follows the order based on the Madelung rule for the configurations of neutral atoms; 4s is filled before 3d in the sequence Ar, K, Ca, Sc, Ti. The superscript 1 for a singly occupied subshell is not compulsory. It is quite common to see the letters of the orbital labels written in an italic or slanting typeface, although the International Union of Pure and Applied Chemistry recommends a normal typeface
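The filling rules above (shells of 2n2 electrons, subshell capacities 2(2ℓ + 1), and Madelung's n + ℓ ordering) are mechanical enough to sketch in code. The following illustration (mine, not from the article) builds aufbau configurations for neutral atoms; it deliberately ignores the real-world exceptions such as chromium and copper.

```python
# A minimal sketch of ground-state electron configurations using the
# Madelung (n + l) ordering rule. Exceptions like Cr and Cu are ignored.

SUBSHELL_LETTERS = "spdf"

def madelung_order(max_n=7):
    """Subshells (n, l) sorted by n + l, ties broken by lower n."""
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(min(n, 4))]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def electron_configuration(z):
    """Aufbau configuration for atomic number z (valid up to z = 118)."""
    parts = []
    for n, l in madelung_order():
        if z <= 0:
            break
        capacity = 2 * (2 * l + 1)      # subshell capacity 2(2l + 1)
        take = min(capacity, z)
        parts.append(f"{n}{SUBSHELL_LETTERS[l]}{take}")
        z -= take
    return " ".join(parts)

print(electron_configuration(15))  # 1s2 2s2 2p6 3s2 3p3 (phosphorus)
```

Running it for titanium (Z = 22) ends in "4s2 3d2", reproducing the Madelung ordering discussed above, in which 4s fills before 3d.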
14.
Gas
–
Gas is one of the four fundamental states of matter. A pure gas may be made up of individual atoms, elemental molecules made from one type of atom, or compound molecules made from a variety of atoms. A gas mixture, such as air, contains a variety of pure gases. What distinguishes a gas from liquids and solids is the vast separation of the individual gas particles. This separation usually makes a colorless gas invisible to the human observer. The interaction of gas particles in the presence of electric and gravitational fields is considered negligible, as indicated by the constant velocity vectors in the image. One type of commonly known gas is steam. The gaseous state of matter is found between the liquid and plasma states, the latter of which provides the upper temperature boundary for gases. Bounding the lower end of the temperature scale lie degenerative quantum gases, which are gaining increasing attention; high-density atomic gases supercooled to incredibly low temperatures are classified by their statistical behavior as either a Bose gas or a Fermi gas. For a comprehensive listing of these states of matter, see the list of states of matter. The only chemical elements that are stable multi-atom homonuclear molecules at standard temperature and pressure are hydrogen, nitrogen, oxygen, and the halogens fluorine and chlorine. These gases, when grouped together with the monatomic noble gases, are called elemental gases. Alternatively, they are known as molecular gases to distinguish them from molecules that are also chemical compounds. The word gas is a neologism first used by the early 17th-century Flemish chemist J. B. van Helmont; according to Paracelsus's terminology, chaos meant something like ultra-rarefied water. An alternative story is that Van Helmont's word is corrupted from gahst, signifying a ghost or spirit. Four physical characteristics (pressure, volume, number of particles, and temperature) were repeatedly observed by scientists such as Robert Boyle, Jacques Charles, John Dalton, Joseph Gay-Lussac and Amedeo Avogadro for a variety of gases in various settings. Their detailed studies ultimately led to a mathematical relationship among these properties expressed by the ideal gas law.
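The relationship mentioned above, the ideal gas law PV = nRT, lends itself to a small worked example. The sketch below (my own illustration, not part of the article) solves for whichever of the four quantities is left unspecified.

```python
# Sketch of the ideal gas law PV = nRT, solving for the one quantity
# passed as None. All values are in SI units.

R = 8.314462618  # molar gas constant, J/(mol*K)

def ideal_gas(p=None, v=None, n=None, t=None):
    """Return the single missing quantity in PV = nRT."""
    missing = [name for name, val in
               (("p", p), ("v", v), ("n", n), ("t", t)) if val is None]
    if len(missing) != 1:
        raise ValueError("exactly one of p, v, n, t must be None")
    if p is None:
        return n * R * t / v
    if v is None:
        return n * R * t / p
    if n is None:
        return p * v / (R * t)
    return p * v / (n * R)

# One mole at standard atmospheric pressure and 273.15 K:
volume = ideal_gas(p=101325.0, v=None, n=1.0, t=273.15)  # ~0.0224 m^3
```

The result, about 22.4 litres per mole, is the familiar molar volume of an ideal gas at standard conditions.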
Gas particles are widely separated from one another, and consequently have weaker intermolecular bonds than liquids or solids. These intermolecular forces result from electrostatic interactions between gas particles. Like-charged areas of different gas particles repel, while oppositely charged regions attract one another; transient, randomly induced charges exist across the non-polar covalent bonds of molecules, and the electrostatic interactions caused by them are referred to as Van der Waals forces. The interaction of these forces varies within a substance, which determines many of the physical properties unique to each gas. A comparison of boiling points for compounds formed by ionic and covalent bonds leads us to this conclusion. The drifting smoke particles in the image provide some insight into low-pressure gas behavior
15.
Helium
–
Helium is a chemical element with symbol He and atomic number 2. It is a colorless, odorless, tasteless, non-toxic, inert, monatomic gas, and its boiling point is the lowest among all the elements. Its abundance in the Sun and in Jupiter is similar to its abundance in the universe at large. This is due to the very high nuclear binding energy of helium-4 with respect to the next three elements after helium. This helium-4 binding energy also accounts for why it is a product of both nuclear fusion and radioactive decay. Most helium in the universe is helium-4, and is believed to have formed during the Big Bang. Large amounts of new helium are being created by nuclear fusion of hydrogen in stars. Helium is named for the Greek god of the Sun, Helios. It was first detected as an unknown yellow spectral line signature in sunlight during a solar eclipse in 1868 by French astronomer Jules Janssen. Janssen is jointly credited with detecting the element along with Norman Lockyer; Janssen observed during the solar eclipse of 1868, while Lockyer observed from Britain. Lockyer was the first to propose that the line was due to a new element. The formal discovery of the element was made in 1895 by two Swedish chemists, Per Teodor Cleve and Nils Abraham Langlet, who found helium emanating from the uranium ore cleveite. In 1903, large reserves of helium were found in natural gas fields in parts of the United States. Liquid helium is used in cryogenics, particularly in the cooling of superconducting magnets; a well-known but minor use is as a lifting gas in balloons and airships. As with any gas whose density differs from that of air, inhaling a small volume of helium temporarily changes the timbre and quality of the human voice. On Earth it is relatively rare: 5.2 ppm by volume in the atmosphere. Most terrestrial helium present today is created by the natural radioactive decay of heavy radioactive elements. Previously, terrestrial helium, a non-renewable resource because once released into the atmosphere it readily escapes into space, was thought to be in short supply.
The first evidence of helium, a yellow spectral line, was observed on August 18, 1868; the line was detected by French astronomer Jules Janssen during a total solar eclipse in Guntur, India. This line was initially assumed to be sodium. Norman Lockyer observed the same yellow line in the solar spectrum later that year and concluded that it was caused by an element in the Sun unknown on Earth. Lockyer and English chemist Edward Frankland named the element with the Greek word for the Sun, ἥλιος
16.
Thallium
–
Thallium is a chemical element with symbol Tl and atomic number 81. This soft gray post-transition metal is not found free in nature; when isolated, thallium resembles tin, but discolors when exposed to air. Chemists William Crookes and Claude-Auguste Lamy discovered thallium independently in 1861; both used the newly developed method of flame spectroscopy, in which thallium produces a notable green spectral line. Thallium, from Greek θαλλός, thallos, meaning a green shoot or twig, was named by Crookes. It was isolated by both Lamy and Crookes in 1862, Lamy by electrolysis and Crookes by precipitation and melting of the resultant powder. Crookes exhibited it as a powder precipitated by zinc at the International Exhibition, which opened on 1 May of that year. Thallium tends to oxidize to the +3 and +1 oxidation states as ionic salts. The +3 state resembles that of the other elements in group 13. Commercially, however, thallium is produced not from potassium ores but as a by-product of refining heavy-metal sulfide ores. Approximately 60–70% of thallium production is used in the electronics industry, and the remainder is used in the pharmaceutical industry and in glass manufacturing. It is also used in infrared detectors. The radioisotope thallium-201 is used in small, nontoxic amounts as an agent in a nuclear medicine scan, during one type of nuclear cardiac stress test. Soluble thallium salts are toxic, and they were historically used in rat poisons. Use of these compounds has been restricted or banned in many countries. Notably, thallium poisoning results in hair loss. Because of its popularity as a murder weapon, thallium has gained notoriety as "the poisoner's poison". A thallium atom has 81 electrons, arranged in the electron configuration [Xe] 4f14 5d10 6s2 6p1. However, due to the inert pair effect, the 6s electron pair is relativistically stabilised, and it is more difficult to get these electrons involved in chemical bonding than for the lighter elements.
Thallium is malleable and sectile enough to be cut with a knife at room temperature. It has a metallic luster that, when exposed to air, quickly tarnishes to a bluish-gray tinge, resembling lead. It may be preserved by immersion in oil; a heavy layer of oxide builds up on thallium if left in air. In the presence of water, thallium hydroxide is formed. Sulfuric and nitric acid dissolve thallium rapidly to make the sulfate and nitrate salts, while hydrochloric acid forms an insoluble thallium chloride layer. Thallium has 25 isotopes, with atomic masses that range from 184 to 210; 203Tl and 205Tl are the only stable isotopes and make up nearly all of natural thallium. 204Tl is the most stable radioisotope, with a half-life of 3.78 years; it is made by the neutron activation of stable thallium in a nuclear reactor. Thallium-201 is the isotope used for thallium nuclear cardiac stress tests. Thallium compounds resemble the corresponding aluminium compounds
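A half-life, such as the 3.78 years quoted above for 204Tl, translates directly into a decay curve via N(t) = N0 · 2^(−t / t½). The snippet below (my own illustration; only the 3.78-year figure comes from the text) computes the surviving fraction of a sample.

```python
# Sketch of exponential radioactive decay: the fraction of a sample
# remaining after time t is 2**(-t / t_half).
# The 3.78-year half-life of 204Tl is taken from the text above.

TL204_HALF_LIFE_YEARS = 3.78

def remaining_fraction(t, t_half):
    """Fraction of a radioisotope left after time t (units match t_half)."""
    return 2.0 ** (-t / t_half)

# After one half-life, half of a 204Tl sample remains:
frac = remaining_fraction(3.78, TL204_HALF_LIFE_YEARS)  # 0.5
```

After two half-lives (7.56 years) a quarter remains, and after ten half-lives less than 0.1% of the original 204Tl is left.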
17.
Cerium
–
Cerium is a soft, ductile, silvery-white metallic chemical element with symbol Ce and atomic number 58. Cerium tarnishes when exposed to air, and it is soft enough to be cut with a knife. Cerium is the second element in the lanthanide series, and while it often shows the +3 oxidation state characteristic of the series, it also has a stable +4 state. It is also considered to be one of the rare earth elements. Cerium has no known biological role and is not very toxic. It is the most common of the lanthanides, followed by neodymium, lanthanum, and praseodymium. It is the 26th most abundant element, making up 66 ppm of the Earth's crust, half as much as chlorine and five times as much as lead. The first of the lanthanides to be discovered, cerium was found in Bastnäs, Sweden by Jöns Jakob Berzelius and Wilhelm Hisinger in 1803; it was first isolated by Carl Gustaf Mosander in 1839. Today, cerium and its compounds have a variety of uses; for example, cerium metal is used in ferrocerium lighters for its pyrophoric properties. In the periodic table, cerium appears between the lanthanides lanthanum to its left and praseodymium to its right, and above the actinide thorium. It is a ductile metal with a hardness similar to that of silver. Its 58 electrons are arranged in the configuration [Xe] 4f1 5d1 6s2, of which the four outer electrons are valence electrons. The stable form below 726 °C down to approximately room temperature is γ-cerium; the dhcp form, β-cerium, is the equilibrium structure from approximately room temperature down to −150 °C. The fcc form, α-cerium, exists below about −150 °C and has a density of 8.16 g/cm3; other solid phases occurring only at high pressures are shown on the phase diagram. Both the γ and β forms are stable at room temperature. Cerium has a variable electronic structure, which gives rise to dual valency states; for example, a volume change of about 10% occurs when cerium is subjected to high pressures or low temperatures. It appears that the valence changes from about 3 to 4 when it is cooled or compressed.
At lower temperatures, the behavior of cerium is complicated by the slow rates of transformation between these phases
18.
Star
–
A star is a luminous sphere of plasma held together by its own gravity. The nearest star to Earth is the Sun. Many other stars are visible to the naked eye from Earth during the night, appearing as a multitude of fixed luminous points in the sky due to their immense distance from Earth. Historically, the most prominent stars were grouped into constellations and asterisms, and astronomers have assembled star catalogues that identify the known stars and provide standardized stellar designations. However, most of the stars in the Universe, including all stars outside our galaxy, are invisible to the naked eye from Earth; indeed, most are invisible from Earth even through the most powerful telescopes. Almost all naturally occurring elements heavier than helium are created by stellar nucleosynthesis during the star's lifetime; near the end of its life, a star can also contain degenerate matter. Astronomers can determine the mass, age, metallicity, and many other properties of a star by observing its motion through space, its luminosity, and its spectrum. The total mass of a star is the main factor that determines its evolution. Other characteristics of a star, including diameter and temperature, change over its life, while the star's environment affects its rotation. A plot of the temperature of many stars against their luminosities is known as a Hertzsprung–Russell diagram. Plotting a particular star on that diagram allows the age and evolutionary state of that star to be determined. A star's life begins with the gravitational collapse of a gaseous nebula of material composed primarily of hydrogen, along with helium. When the stellar core is sufficiently dense, hydrogen becomes steadily converted into helium through nuclear fusion. The remainder of the star's interior carries energy away from the core through a combination of radiative and convective heat transfer processes.
The star's internal pressure prevents it from collapsing further under its own gravity. A star with mass greater than 0.4 times the Sun's will expand to become a red giant when the hydrogen fuel in its core is exhausted. In some cases, it will fuse heavier elements at the core or in shells around the core. As the star expands it throws off a part of its mass, enriched with those heavier elements, into the interstellar environment, to be recycled later as new stars. Meanwhile, the core becomes a stellar remnant: a white dwarf, a neutron star, or, if it is sufficiently massive, a black hole. Binary and multi-star systems consist of two or more stars that are gravitationally bound and generally move around each other in stable orbits. When two such stars have a relatively close orbit, their gravitational interaction can have a significant impact on their evolution. Stars can form part of a much larger gravitationally bound structure, such as a star cluster or a galaxy. Historically, stars have been important to civilizations throughout the world
19.
Electromagnetic spectrum
–
The electromagnetic spectrum is the collective term for all known frequencies and their linked wavelengths of the known photons. The electromagnetic spectrum of an object has a different meaning: it is the characteristic distribution of electromagnetic radiation emitted or absorbed by that particular object. Visible light lies toward the shorter end of the spectrum, with wavelengths from 400 to 700 nanometres. The limit for long wavelengths is the size of the universe itself; until the middle of the 20th century it was believed by most physicists that the spectrum was infinite and continuous. Nearly all types of electromagnetic radiation can be used for spectroscopy, to study and characterize matter. Other technological uses are described under electromagnetic radiation. For most of history, visible light was the only known part of the electromagnetic spectrum. The ancient Greeks recognized that light traveled in straight lines and studied some of its properties. The study of light continued, and during the 16th and 17th centuries conflicting theories regarded light as either a wave or a particle. The first discovery of electromagnetic radiation other than visible light came in 1800, when William Herschel discovered infrared radiation. He was studying the temperature of different colors by moving a thermometer through light split by a prism. He noticed that the highest temperature was beyond red, and he theorized that this temperature change was due to "calorific rays", a type of light ray that could not be seen. The next year, Johann Ritter, working at the other end of the spectrum, noticed invisible rays beyond the violet. These behaved similarly to visible light rays, but were beyond them in the spectrum; they were later renamed ultraviolet radiation. During the 1860s James Maxwell developed four partial differential equations for the electromagnetic field. Two of these equations predicted the possibility and behavior of waves in the field. Analyzing the speed of these theoretical waves, Maxwell realized that they must travel at a speed that was about the known speed of light.
This startling coincidence in value led Maxwell to make the inference that light itself is a type of electromagnetic wave. Maxwell's equations predicted an infinite number of frequencies of electromagnetic waves, all traveling at the speed of light; this was the first indication of the existence of the entire electromagnetic spectrum. Maxwell's predicted waves included waves at very low frequencies compared to infrared. In 1886, the physicist Heinrich Hertz built an apparatus to generate and detect what are now called radio waves. Hertz found the waves and was able to infer that they traveled at the speed of light. Hertz also demonstrated that the new radiation could be both reflected and refracted by various dielectric media, in the same manner as light; for example, Hertz was able to focus the waves using a lens made of tree resin. In a later experiment, Hertz similarly produced and measured the properties of microwaves
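Every region of the spectrum discussed here is tied together by the single relation c = λν. The short sketch below (my own illustration, not from the article) converts between wavelength and frequency; applied to the visible band it reproduces the 400–700 nm range quoted earlier.

```python
# Sketch of the relation c = wavelength * frequency, which links every
# region of the electromagnetic spectrum.

C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_m(frequency_hz):
    """Vacuum wavelength (m) for a given frequency (Hz)."""
    return C / frequency_hz

def frequency_hz(wavelength):
    """Frequency (Hz) for a given vacuum wavelength (m)."""
    return C / wavelength

# Green light at 540 THz has a wavelength of roughly 555 nm:
green_nm = wavelength_m(5.4e14) * 1e9
```

The same two functions cover radio waves (3 kHz maps to 100 km) and gamma rays (10 pm maps to about 3 × 10^19 Hz); only the exponents change.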
20.
Radio wave
–
Radio waves are a type of electromagnetic radiation with wavelengths in the electromagnetic spectrum longer than infrared light. Radio waves have frequencies from as high as 300 GHz to as low as 3 kHz, though some definitions describe waves above 1 or 3 GHz as microwaves. At 300 GHz the corresponding wavelength is 1 mm, and at 3 kHz it is 100 km. Like all other electromagnetic waves, they travel at the speed of light. Naturally occurring radio waves are generated by lightning or by astronomical objects. Artificially generated radio waves are produced by radio transmitters and received by radio receivers; the radio spectrum is divided into a number of radio bands on the basis of frequency, allocated to different uses. Radio waves were first predicted by mathematical work done in 1867 by Scottish mathematical physicist James Clerk Maxwell, who noticed wavelike properties of light and similarities in electrical and magnetic observations. Radio waves were first used for communication in the mid 1890s by Guglielmo Marconi. Different frequencies experience different combinations of propagation phenomena in the Earth's atmosphere, making certain radio bands more useful for specific purposes than others. Line-of-sight propagation does not necessarily require a cleared sight path; at lower frequencies radio waves can pass through buildings and foliage. It is the only method of propagation possible at microwave frequencies and above. On the surface of the Earth, line-of-sight propagation is limited by the visual horizon to about 40 miles. This is the method used by cell phones, cordless phones, walkie-talkies, wireless networks, and FM and television broadcasting. Indirect propagation: radio waves can reach points beyond the line of sight by diffraction, which allows a radio wave to bend around obstructions such as a building edge, a vehicle, or a turn in a hall. Radio waves also reflect from surfaces such as walls, floors, ceilings, and vehicles; these effects are used in short-range radio communication systems.
Ground waves allow mediumwave and longwave broadcasting stations to have coverage areas beyond the horizon. The nonzero resistance of the earth absorbs energy from ground waves, so as they propagate the waves lose power and the wavefronts bend over at an angle to the surface. As the frequency decreases, the losses decrease and the achievable range increases. Military very low frequency and extremely low frequency communication systems can communicate over most of the Earth, and with submarines hundreds of feet underwater. Tropospheric propagation: in the VHF and UHF bands, radio waves can travel somewhat beyond the horizon due to refraction in the troposphere. This is due to changes in the refractive index of air with temperature and pressure. At times, radio waves can travel up to 500–1000 km due to tropospheric ducting, but these effects are variable and not as reliable as ionospheric propagation, below. Ionospheric propagation: radio waves directed at an angle into the sky can be reflected by the ionosphere and return to Earth beyond the horizon; by using multiple skips, communication at intercontinental distances can be achieved
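The roughly 40-mile line-of-sight limit mentioned above comes from the geometry of a curved Earth. A standard approximation (my own illustration; the formula is not given in the text) is d ≈ √(2Rh) for an antenna of height h, which shows why raising an antenna extends its range.

```python
# Sketch of the geometric horizon distance d = sqrt(2 * R * h) for an
# antenna of height h metres on an Earth of radius R. Real radio
# horizons are slightly longer because of atmospheric refraction.

import math

EARTH_RADIUS_M = 6.371e6  # mean Earth radius, m

def horizon_distance_km(antenna_height_m):
    """Approximate geometric horizon distance in kilometres."""
    return math.sqrt(2 * EARTH_RADIUS_M * antenna_height_m) / 1000.0

# A 100 m broadcast tower sees a horizon roughly 36 km away:
d = horizon_distance_km(100.0)
```

Because the distance grows as the square root of height, quadrupling the antenna height only doubles the range, which is why tall masts give diminishing returns.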
21.
Gamma ray
–
Gamma ray, denoted by the lower-case Greek letter gamma (γ), is penetrating electromagnetic radiation of a kind arising from the radioactive decay of atomic nuclei. It consists of photons in the highest observed range of photon energy. Paul Villard, a French chemist and physicist, discovered gamma radiation in 1900 while studying radiation emitted by radium. In 1903, Ernest Rutherford named this radiation gamma rays; Rutherford had previously discovered two other types of radioactive decay, which he named alpha and beta rays. Gamma rays are able to ionize atoms and are thus biologically hazardous. Gamma emission accompanies the decay of a nucleus from a high energy state to a lower energy state. Natural sources of gamma rays on Earth are observed in the decay of radionuclides, and there are rare terrestrial natural sources such as lightning strikes and terrestrial gamma-ray flashes. Gamma rays are also produced by a number of astronomical processes; however, a large fraction of such astronomical gamma rays are screened by Earth's atmosphere and can only be detected by spacecraft. Gamma rays typically have frequencies above 10 exahertz, and therefore have energies above 100 keV and wavelengths less than 10 picometers; however, this is not a strict definition, but rather only a rule-of-thumb description for natural processes. Electromagnetic radiation from the radioactive decay of nuclei is referred to as gamma rays no matter its energy; this radiation commonly has energy of a few hundred keV. In astronomy, gamma rays are defined by their energy, and no production process needs to be specified. The energies of gamma rays from astronomical sources range to over 10 TeV; a notable example is the extremely powerful bursts of high-energy radiation referred to as long duration gamma-ray bursts, of energies higher than can be produced by radioactive decay. These bursts of gamma rays are thought to be due to the collapse of stars called hypernovae. The first gamma ray source to be discovered historically was the radioactive decay process called gamma decay.
In this type of decay, an excited nucleus emits a gamma ray almost immediately upon formation. Paul Villard, a French chemist and physicist, discovered gamma radiation in 1900; however, Villard did not consider naming the rays as a different fundamental type. Rutherford also noted that gamma rays were not deflected by a magnetic field, another property making them unlike alpha and beta rays. Gamma rays were first thought to be particles with mass, like alpha and beta rays; Rutherford initially believed that they might be extremely fast beta particles, but their failure to be deflected by a magnetic field indicated that they had no charge. In 1914, gamma rays were observed to be reflected from crystal surfaces. Rutherford and his coworker Edward Andrade measured the wavelengths of gamma rays from radium, and found that they were similar to X-rays, but with shorter wavelengths and higher frequency. This was eventually recognized as giving them more energy per photon
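The rule-of-thumb boundaries quoted above (energies above 100 keV, wavelengths below 10 pm) follow from the photon energy relation E = hc/λ. The sketch below (my own illustration, not from the article) performs that conversion in keV.

```python
# Sketch of the photon energy relation E = h * c / wavelength,
# expressed in keV, as used for the rule-of-thumb gamma-ray boundary.

H = 6.62607015e-34      # Planck constant, J*s
C = 299_792_458.0       # speed of light, m/s
EV = 1.602176634e-19    # joules per electronvolt

def photon_energy_kev(wavelength):
    """Photon energy in keV for a vacuum wavelength in metres."""
    return H * C / wavelength / EV / 1000.0

# A 10 pm photon carries roughly 124 keV, consistent with the
# ~100 keV rule of thumb quoted in the text:
e_10pm = photon_energy_kev(10e-12)
```

Shrinking the wavelength by a factor of ten raises the energy by the same factor, so a 1 pm photon already carries over 1 MeV, well inside the gamma-ray regime.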
22.
Visible spectrum
–
The visible spectrum is the portion of the electromagnetic spectrum that is visible to the human eye. Electromagnetic radiation in this range of wavelengths is called visible light or simply light. A typical human eye will respond to wavelengths from about 390 to 700 nm; in terms of frequency, this corresponds to a band in the vicinity of 430–770 THz. The spectrum does not, however, contain all the colors that the human eye can distinguish. Unsaturated colors such as pink, or purple variations such as magenta, are absent, for example, because they can be made only by a mix of multiple wavelengths. Colors containing only one wavelength are called pure colors or spectral colors. Visible wavelengths pass through the optical window, the region of the electromagnetic spectrum that allows wavelengths to pass largely unattenuated through the Earth's atmosphere. An example of this phenomenon is that clean air scatters blue light more than red wavelengths. The optical window is also referred to as the visible window because it overlaps the human visible response spectrum. The near infrared window lies just out of the range of human vision, as does the medium wavelength infrared window. In the 13th century, Roger Bacon theorized that rainbows were produced by a process similar to the passage of light through glass or crystal. In the 17th century, Isaac Newton discovered that prisms could disassemble and reassemble white light; he was the first to use the word spectrum in this sense in print, in 1671, in describing his experiments in optics. The result is that red light is bent less sharply than violet as it passes through the prism. Newton divided the spectrum into seven named colors: red, orange, yellow, green, blue, indigo, and violet. The human eye is relatively insensitive to indigo's frequencies, and some people who have otherwise-good vision cannot distinguish indigo from blue and violet. For this reason, some commentators, including Isaac Asimov, have suggested that indigo should not be regarded as a color in its own right.
However, the evidence indicates that what Newton meant by indigo and blue does not correspond to the modern meanings of those color words. Comparing Newton's observation of prismatic colors to a color image of the visible light spectrum shows that indigo corresponds to what is today called blue, whereas his blue corresponds to cyan. In the 18th century, Goethe wrote about optical spectra in his Theory of Colours. Goethe used the word spectrum to designate a ghostly optical afterimage, as did Schopenhauer in On Vision and Colors. Goethe argued that the continuous spectrum was a compound phenomenon. Where Newton narrowed the beam of light to isolate the phenomenon, Goethe observed that a wider aperture produces not a spectrum but rather reddish-yellow and blue-cyan edges with white between them; the spectrum appears only when these edges are close enough to overlap. In the early 19th century, Thomas Young was the first to measure the wavelengths of different colors of light; the connection between the visible spectrum and color vision was explored by Young and Hermann von Helmholtz
23.
Calcium
–
Calcium is a chemical element with symbol Ca and atomic number 20. Calcium is a soft grayish-yellow alkaline earth metal and the fifth-most-abundant element by mass in the Earth's crust. The ion Ca2+ is also the fifth-most-abundant dissolved ion in seawater by both molarity and mass, after sodium, chloride, magnesium, and sulfate. Free calcium metal is too reactive to occur in nature. Calcium is produced in supernova nucleosynthesis. Calcium is an essential element in living organisms. It is the most abundant metal by mass in many animals, and it is an important constituent of bone and teeth. In cell biology, the movement of the calcium ion into and out of the cytoplasm functions as a signal for many cellular processes. Calcium carbonate and calcium citrate are often taken as dietary supplements. Calcium is on the World Health Organization's List of Essential Medicines. Calcium has a wide variety of applications, almost all of which are associated with calcium compounds and salts. Calcium metal is used as a deoxidizer, desulfurizer, and decarbonizer for the production of ferrous and nonferrous alloys; in steelmaking and the production of iron, Ca reacts with oxygen. Calcium carbonate is used in manufacturing cement and mortar, lime, and limestone, and aids production in the glass industry; it also has chemical and optical uses, as mineral specimens and in toothpastes, for example. Calcium hydroxide solution is used to detect the presence of carbon dioxide in a gas sample bubbled through the solution: the solution turns cloudy where CO2 is present. Calcium arsenate is used in insecticides. Calcium carbide is used to make acetylene gas and various plastics. Calcium chloride is used in ice removal and dust control on dirt roads, as a conditioner for concrete, as an additive in canned tomatoes, and to provide body for automobile tires. Calcium citrate is used as a food preservative. Calcium cyclamate is used as a sweetening agent in several countries; in the United States, it has been outlawed as a suspected carcinogen. Calcium gluconate is used as a food additive and in vitamin pills.
Calcium hypochlorite is used as a swimming pool disinfectant, as a bleaching agent, and as an ingredient in deodorant. Calcium permanganate is used in rocket propellant, in textile production, and as a water sterilizing agent. Calcium phosphate is used as a supplement for animal feed and fertilizer, in the production of dough and yeast products, and in the manufacture of glass. Calcium phosphide is used in fireworks, rodenticide, and torpedoes. Calcium sulfate is used as common blackboard chalk, as well as, in its hemihydrate form, Plaster of Paris
24.
Ion
–
An ion is an atom or a molecule in which the total number of electrons is not equal to the total number of protons, giving the atom or molecule a net positive or negative electrical charge. Ions can be created by chemical or physical means. In chemical terms, if an atom loses one or more electrons, it has a net positive charge and is known as a cation; if an atom gains electrons, it has a net negative charge and is known as an anion. Ions consisting of only a single atom are atomic or monatomic ions. Because of their opposite electric charges, cations and anions attract each other and readily form ionic compounds, such as salts. In the case of ionization of a medium, such as a gas, ion pairs are created by ion impact, and each pair consists of a free electron and a positive ion. The word ion comes from the Greek word ἰόν, ion, "going"; this term was introduced by English physicist and chemist Michael Faraday in 1834 for the then-unknown species that goes from one electrode to the other through an aqueous medium. Faraday also introduced the word anion for a negatively charged ion and cation for a positively charged one. In Faraday's nomenclature, cations were so named because they were attracted to the cathode in a galvanic device. Arrhenius's explanation was that, in forming a solution, the salt dissociates into Faraday's ions; Arrhenius proposed that ions formed even in the absence of an electric current. Ions in their gas-like state are highly reactive and do not occur in large amounts on Earth, except in flames, lightning, electrical sparks, and other plasmas. These gas-like ions rapidly interact with ions of opposite charge to give neutral molecules or ionic salts. These stabilized species are commonly found in the environment at low temperatures. A common example is the ions present in seawater, which are derived from the dissolved salts. Electrons, due to their smaller mass and thus larger space-filling properties as matter waves, determine the size of atoms.
Thus, anions are larger than the parent molecule or atom, as the excess electrons repel each other; conversely, cations are generally smaller than the corresponding parent atom or molecule due to the smaller size of their electron cloud. One particular cation, that of hydrogen, contains no electrons and thus consists of a single proton, very much smaller than the parent hydrogen atom. Since the electric charge on a proton is equal in magnitude to the charge on an electron, the net charge on an ion is equal to its number of protons minus its number of electrons. An anion, from the Greek word ἄνω, meaning "up", is an ion with more electrons than protons, giving it a net negative charge. A cation, from the Greek word κατά, meaning "down", is an ion with fewer electrons than protons, giving it a net positive charge. There are additional names used for ions with multiple charges
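The proton-versus-electron bookkeeping above reduces to a one-line rule: net charge = protons − electrons. The tiny sketch below (my own illustration, not from the article) applies it to classify a species as cation, anion, or neutral.

```python
# Sketch of ion classification from proton and electron counts:
# net charge = protons - electrons decides cation vs anion vs neutral.

def classify(protons, electrons):
    """Return (kind, net_charge) for a given particle count."""
    charge = protons - electrons
    if charge > 0:
        return "cation", charge
    if charge < 0:
        return "anion", charge
    return "neutral", 0

print(classify(11, 10))  # ('cation', 1)   e.g. Na+
print(classify(17, 18))  # ('anion', -1)   e.g. Cl-
print(classify(1, 0))    # ('cation', 1)   the bare proton, H+
```

The last example is the hydrogen cation discussed above: one proton and no electrons at all.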
25.
Roman numeral
–
The numeric system represented by Roman numerals originated in ancient Rome and remained the usual way of writing numbers throughout Europe well into the Late Middle Ages. Numbers in this system are represented by combinations of letters from the Latin alphabet; Roman numerals, as used today, are based on seven symbols. The use of Roman numerals continued long after the decline of the Roman Empire. The numbers 1 to 10 are usually expressed in Roman numerals as follows: I, II, III, IV, V, VI, VII, VIII, IX, X. Numbers are formed by combining symbols and adding the values, so II is two and XIII is thirteen. Symbols are placed left to right in order of value. Modern examples include the year 2014 written as MMXIV, the year of the games of the XXII Olympic Winter Games. The standard forms described above reflect typical modern usage rather than a universally accepted convention. Usage in ancient Rome varied greatly and remained inconsistent in medieval times. Roman inscriptions, especially in official contexts, seem to show a preference for additive forms such as IIII and VIIII instead of subtractive forms such as IV and IX. Both methods appear in documents from the Roman era, even within the same document; double subtractives also occur, such as XIIX or even IIXX instead of XVIII. Sometimes V and L are not used, with instances such as IIIIII instead of VI. Such variation and inconsistency continued through the medieval period and into modern times. Clock faces that use Roman numerals normally show IIII for four o'clock but IX for nine o'clock; however, this is far from universal: for example, the clock on the Palace of Westminster in London uses IV. Similarly, at the beginning of the 20th century, different representations of 900 appeared in several inscribed dates. For instance, 1910 is shown on Admiralty Arch, London, as MDCCCCX rather than MCMX. Although Roman numerals came to be written with letters of the Roman alphabet, they were originally independent symbols. The Etruscans, for example, used their own symbols
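The additive and subtractive composition rules described above can be sketched in code. This is a minimal illustration, not a historically complete converter: it produces only the standard modern forms, with IV, IX, XL, XC, CD, and CM as the subtractive pairs, and the names `PAIRS` and `to_roman` are ours.

```python
# Symbols are written largest-first and their values added; the six
# subtractive pairs handle 4, 9, 40, 90, 400, and 900.
PAIRS = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
         (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n: int) -> str:
    """Convert a positive integer (1-3999) to standard-form Roman numerals."""
    out = []
    for value, symbol in PAIRS:
        count, n = divmod(n, value)
        out.append(symbol * count)
    return "".join(out)

print(to_roman(2014))   # MMXIV
print(to_roman(1910))   # MCMX
print(to_roman(13))     # XIII
```

Note that the additive variants mentioned in the text (IIII, VIIII, MDCCCCX) would require relaxing the subtractive pairs; the sketch deliberately emits only the standard forms.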
26.
Chemical element
–
A chemical element or element is a species of atoms having the same number of protons in their atomic nuclei. There are 118 elements that have been identified, of which the first 94 occur naturally on Earth, with the remaining 24 being synthetic elements. There are 80 elements that have at least one stable isotope and 38 that have exclusively radioactive isotopes. Iron is the most abundant element making up the Earth as a whole, while oxygen is the most common element in the Earth's crust. Chemical elements constitute all of the ordinary matter of the universe. The two lightest elements, hydrogen and helium, were formed in the Big Bang and are the most common elements in the universe. The next three elements were formed mostly by cosmic ray spallation, and are rarer than those that follow. Formation of elements with atomic numbers from 6 to 26 occurred and continues to occur in main sequence stars via stellar nucleosynthesis; the high abundance of oxygen, silicon, and iron on Earth reflects their common production in such stars. The term element is used for atoms with a given number of protons as well as for a pure chemical substance consisting of a single element. A single element can form multiple substances differing in their structure; when different elements are chemically combined, with the atoms held together by chemical bonds, they form chemical compounds. Only a minority of elements are found uncombined as relatively pure minerals; among the more common of such native elements are copper, silver, gold, carbon, and sulfur. All but a few of the most inert elements, such as noble gases and noble metals, are usually found on Earth in chemically combined form, while about 32 of the elements occur on Earth in native uncombined forms. For example, atmospheric air is primarily a mixture of nitrogen, oxygen, and argon. The history of the discovery and use of the elements began with primitive human societies that found native elements like carbon, sulfur, copper and gold. 
Later civilizations extracted elemental copper, tin, lead and iron from their ores by smelting, using charcoal; alchemists and chemists subsequently identified many more, and almost all of the naturally occurring elements were known by 1900. Save for unstable radioactive elements with short half-lives, all of the elements are available industrially. Almost all other elements found in nature were made by various natural methods of nucleosynthesis. On Earth, small amounts of new atoms are produced in nucleogenic reactions, or in cosmogenic processes. Of the 94 naturally occurring elements, those with atomic numbers 1 through 82 each have at least one stable isotope; isotopes considered stable are those for which no radioactive decay has yet been observed. Elements with atomic numbers 83 through 94 are unstable to the point that radioactive decay of all isotopes can be detected; the very heaviest elements undergo radioactive decay with half-lives so short that they are not found in nature and must be synthesized
27.
Iron
–
Iron is a chemical element with symbol Fe and atomic number 26. It is a metal in the first transition series, and it is by mass the most common element on Earth, forming much of Earth's outer and inner core. It is the fourth most common element in the Earth's crust. Like the other group 8 elements, ruthenium and osmium, iron exists in a wide range of oxidation states, −2 to +6, although +2 and +3 are the most common. Elemental iron occurs in meteoroids and other low oxygen environments, but is reactive to oxygen; fresh iron surfaces appear lustrous silvery-gray, but oxidize in normal air to give hydrated iron oxides, commonly known as rust. Unlike many other metals that form passivating oxide layers, iron oxides occupy more volume than the metal and thus flake off, exposing fresh surfaces to corrosion. Iron metal has been used since ancient times, although copper alloys, which have lower melting temperatures, were used even earlier in human history. Pure iron is soft, but is unobtainable by smelting because it is significantly hardened and strengthened by impurities, in particular carbon. A certain proportion of carbon produces steel, which may be up to 1000 times harder than pure iron. Crude iron metal is produced in blast furnaces, where ore is reduced by coke to pig iron; further refinement with oxygen reduces the carbon content to the correct proportion to make steel. Steels and iron alloys formed with other metals are by far the most common industrial metals because they have a great range of desirable properties. Iron chemical compounds have many uses: iron oxide mixed with aluminium powder can be ignited to create a thermite reaction, used in welding and purifying ores. Iron forms binary compounds with the halogens and the chalcogens; among its organometallic compounds is ferrocene, the first sandwich compound discovered. Iron plays an important role in biology, forming complexes with oxygen in hemoglobin and myoglobin. 
Iron is also the metal at the site of many important redox enzymes dealing with cellular respiration and oxidation and reduction in plants. A human male of average height has about 4 grams of iron in his body; this iron is distributed throughout the body in hemoglobin, tissues, muscles, bone marrow, blood proteins, enzymes, ferritin, hemosiderin, and transport in plasma. The mechanical properties of iron and its alloys can be evaluated using a variety of tests, including the Brinell test and the Rockwell test; the data on iron is so consistent that it is often used to calibrate measurements or to compare tests. An increase in the carbon content will cause a significant increase in the hardness. Maximum hardness of 65 Rc is achieved with a 0.6% carbon content. Because of the softness of iron, it is much easier to work with than its heavier congeners ruthenium and osmium. Because of its significance for planetary cores, the properties of iron at high pressures and temperatures have also been studied extensively
28.
Wavelength
–
In physics, the wavelength of a sinusoidal wave is the spatial period of the wave—the distance over which the wave's shape repeats, and thus the inverse of the spatial frequency. Wavelength is commonly designated by the Greek letter lambda (λ). The concept can also be applied to periodic waves of non-sinusoidal shape, and the term wavelength is also applied to modulated waves. Wavelength depends on the medium that a wave travels through. Examples of wave-like phenomena are sound waves, light, water waves and periodic electrical signals in a conductor. A sound wave is a variation in air pressure, while in light and other electromagnetic radiation the strength of the electric and magnetic fields varies; water waves are variations in the height of a body of water. In a crystal lattice vibration, atomic positions vary. Wavelength is a measure of the distance between repetitions of a shape feature such as peaks, valleys, or zero-crossings, not a measure of how far any given particle moves. For example, in waves over deep water a particle near the water's surface moves in a circle of the same diameter as the wave height. The range of wavelengths or frequencies for wave phenomena is called a spectrum; the name originated with the visible light spectrum but now can be applied to the entire electromagnetic spectrum as well as to a sound spectrum or vibration spectrum. In linear media, any wave pattern can be described in terms of the independent propagation of sinusoidal components. The wavelength λ of a sinusoidal waveform traveling at constant speed v is given by λ = v/f, where f is the frequency. In a dispersive medium, the phase speed itself depends upon the frequency of the wave, making the relationship between wavelength and frequency nonlinear. In the case of electromagnetic radiation—such as light—in free space, the speed is the speed of light. Thus the wavelength of a 100 MHz electromagnetic wave is about 3 metres. The wavelength of visible light ranges from deep red, roughly 700 nm, to violet, roughly 400 nm. 
For sound waves in air, the speed of sound is 343 m/s; the wavelengths of sound frequencies audible to the human ear are thus between approximately 17 m and 17 mm, respectively. Note that the wavelengths in audible sound are much longer than those in visible light. A standing wave is an undulatory motion that stays in one place. A sinusoidal standing wave includes stationary points of no motion, called nodes; the upper figure shows three standing waves in a box. The walls of the box require the wave to have nodes at the walls, determining which wavelengths are allowed. The stationary wave can be viewed as the sum of two traveling sinusoidal waves of oppositely directed velocities. Consequently, wavelength, period, and wave velocity are related just as for a traveling wave; for example, the speed of light can be determined from observation of standing waves in a metal box containing an ideal vacuum. In that case, the wavenumber k, the magnitude of the wave vector, is still in the same relationship with wavelength as shown above
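The relation λ = v/f, together with the two examples in the text (a 100 MHz electromagnetic wave in free space and the audible range of sound at 343 m/s), can be checked directly. The helper name `wavelength` is ours:

```python
# λ = v/f: wavelength from propagation speed and frequency.
c = 299_792_458.0        # speed of light in vacuum, m/s
v_sound = 343.0          # speed of sound in air, m/s

def wavelength(speed: float, frequency: float) -> float:
    return speed / frequency

print(wavelength(c, 100e6))        # ≈ 3.0 m   (100 MHz radio wave)
print(wavelength(v_sound, 20.0))   # ≈ 17 m    (20 Hz, lowest audible tone)
print(wavelength(v_sound, 20e3))   # ≈ 17 mm   (20 kHz, highest audible tone)
```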
29.
Hydrogen
–
Hydrogen is a chemical element with chemical symbol H and atomic number 1. With a standard atomic weight of about 1.008, hydrogen is the lightest element in the periodic table. Its monatomic form (H) is the most abundant chemical substance in the Universe; non-remnant stars are mainly composed of hydrogen in the plasma state. The most common isotope of hydrogen, termed protium, has one proton and no neutrons. The universal emergence of atomic hydrogen first occurred during the recombination epoch. At standard temperature and pressure, hydrogen is a colorless, odorless, tasteless, non-toxic, nonmetallic gas. Since hydrogen readily forms covalent compounds with most nonmetallic elements, most of the hydrogen on Earth exists in molecular forms such as water or organic compounds. Hydrogen plays an important role in acid–base reactions because most acid–base reactions involve the exchange of protons between soluble molecules. In ionic compounds, hydrogen can take the form of a negative charge, when it is known as a hydride. The hydrogen cation is written as H+ and is treated as though composed of a bare proton. Hydrogen gas was first artificially produced in the early 16th century by the reaction of acids on metals. Industrial production is mainly from steam reforming natural gas, and less often from more energy-intensive methods such as the electrolysis of water. Most hydrogen is used near the site of its production, the two largest uses being fossil fuel processing and ammonia production, mostly for the fertilizer market. Hydrogen is a concern in metallurgy as it can embrittle many metals, complicating the design of pipelines and storage tanks. Hydrogen gas is flammable and will burn in air at a very wide range of concentrations between 4% and 75% by volume. The enthalpy of combustion is −286 kJ/mol: 2 H2 + O2 → 2 H2O + 572 kJ. Hydrogen gas forms explosive mixtures with air in concentrations from 4–74%, and the explosive reactions may be triggered by spark, heat, or sunlight. 
The hydrogen autoignition temperature, the temperature of spontaneous ignition in air, is 500 °C. The detection of a burning hydrogen leak may require a flame detector; such leaks can be very dangerous. Hydrogen flames in other conditions are blue, resembling blue natural gas flames. The destruction of the Hindenburg airship was a notorious example of hydrogen combustion and the cause is still debated. The visible orange flames in that incident were the result of a mixture of hydrogen and oxygen combined with carbon compounds from the airship skin. H2 reacts with every oxidizing element. The ground state energy level of the electron in a hydrogen atom is −13.6 eV, which is equivalent to an ultraviolet photon of roughly 91 nm wavelength. The energy levels of hydrogen can be calculated fairly accurately using the Bohr model of the atom; however, the atomic electron and proton are held together by electromagnetic force, while planets and celestial objects are held by gravity. More sophisticated treatments allow for the effects of special relativity
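The quoted figures can be reproduced from the Bohr model: the energy levels are E_n = −13.6 eV/n², and a photon carrying the full 13.6 eV ground-state binding energy has λ = hc/E ≈ 91 nm. A short sketch (the function name `energy_level` is ours):

```python
# Bohr-model hydrogen energy levels, E_n = -13.6 eV / n^2.
H_PLANCK = 4.135667696e-15   # Planck constant, eV*s
C_LIGHT = 2.99792458e8       # speed of light, m/s

def energy_level(n: int) -> float:
    """Energy of hydrogen level n in eV (Bohr model)."""
    return -13.6 / n**2

# Wavelength of a photon whose energy equals the ground-state binding energy:
wavelength_m = H_PLANCK * C_LIGHT / (-energy_level(1))
print(round(wavelength_m * 1e9, 1), "nm")   # ≈ 91.2 nm, in the ultraviolet
```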
30.
Hydrogen spectral series
–
The emission spectrum of atomic hydrogen is divided into a number of spectral series, with wavelengths given by the Rydberg formula. These observed spectral lines are due to the electron making transitions between two energy levels in the atom. The classification of the series by the Rydberg formula was important in the development of quantum mechanics. The spectral series are important in astronomical spectroscopy for detecting the presence of hydrogen and calculating red shifts. A hydrogen atom consists of an electron orbiting its nucleus; the electromagnetic force between the electron and the nuclear proton leads to a set of quantum states for the electron, each with its own energy. These states were visualized by the Bohr model of the atom as being distinct orbits around the nucleus. Each energy state, or orbit, is designated by an integer, n. Spectral emission occurs when an electron transitions, or jumps, from a higher energy state to a lower energy state. To distinguish the two states, the lower energy state is commonly designated as n′, and the higher energy state is designated as n. The energy of an emitted photon corresponds to the energy difference between the two states. Because the energy of each state is fixed, the energy difference between them is fixed, and the transition will always produce a photon with the same energy. The spectral lines are grouped into series according to n′; lines are named sequentially starting from the longest wavelength/lowest frequency of the series, using Greek letters within each series. For example, the 2 → 1 line is called Lyman-alpha. There are emission lines from hydrogen that fall outside of these series, such as the 21 cm line; these emission lines correspond to much rarer atomic events such as hyperfine transitions. The fine structure also results in single spectral lines appearing as two or more closely grouped thinner lines, due to relativistic corrections. 
Meaningful values are returned only when n is greater than n′. Note that this equation is valid for all hydrogen-like species, i.e. atoms having only a single electron; the particular case of hydrogen spectral lines is given by Z = 1. The Lyman series is named after its discoverer, Theodore Lyman, who discovered the lines from 1906 to 1914. All the wavelengths in the Lyman series are in the ultraviolet band. The Balmer series is named after Johann Balmer, who discovered the Balmer formula, an empirical equation to predict the series, in 1885. Balmer lines are referred to as H-alpha, H-beta, H-gamma and so on. Four of the Balmer lines are in the visible part of the spectrum, with wavelengths longer than 400 nm. Parts of the Balmer series can be seen in the solar spectrum; H-alpha is an important line used in astronomy to detect the presence of hydrogen
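The Rydberg formula described above, 1/λ = R·Z²·(1/n′² − 1/n²), can be evaluated for the named lines. A minimal sketch (the function name `line_nm` is ours, and Z = 1 for hydrogen):

```python
# Rydberg formula for hydrogen-like atoms: 1/lambda = R * Z^2 * (1/n'^2 - 1/n^2),
# meaningful only for n > n'.
R = 1.0973731568e7    # Rydberg constant, m^-1

def line_nm(n_upper: int, n_lower: int, Z: int = 1) -> float:
    """Wavelength in nm of the transition n_upper -> n_lower."""
    inv_lambda = R * Z**2 * (1 / n_lower**2 - 1 / n_upper**2)
    return 1e9 / inv_lambda

print(round(line_nm(2, 1), 1))   # ≈ 121.5 nm, Lyman-alpha (ultraviolet)
print(round(line_nm(3, 2), 1))   # ≈ 656.1 nm, H-alpha (visible red)
```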
31.
Balmer series
–
The Balmer series, or Balmer lines, in atomic physics is the designation of one of a set of six named series describing the spectral line emissions of the hydrogen atom. The Balmer series is calculated using the Balmer formula, an empirical equation discovered by Johann Balmer in 1885. There are several prominent ultraviolet Balmer lines with wavelengths shorter than 400 nm; the series continues with an infinite number of lines whose wavelengths approach a limit of 364.6 nm in the ultraviolet. The Balmer series is characterized by the electron transitioning from n ≥ 3 to n = 2, where n refers to the principal quantum number of the electron. The transitions are named sequentially by Greek letter: n = 3 to n = 2 is called H-α, 4 to 2 is H-β, 5 to 2 is H-γ, and 6 to 2 is H-δ. Although physicists were aware of atomic emissions before 1885, they lacked a tool to predict where the spectral lines should appear. The Balmer equation predicts the four visible spectral lines of hydrogen with high accuracy. The familiar red H-alpha spectral line of hydrogen gas, which is the transition from the shell n = 3 to the Balmer series shell n = 2, is one of the conspicuous colours of the universe. It contributes a bright red line to the spectra of emission or ionisation nebulae, like the Orion Nebula; in true-colour pictures, these nebulae have a distinctly pink colour from the combination of visible Balmer lines that hydrogen emits. Later, it was discovered that when the lines of the hydrogen spectrum are examined at very high resolution, they are found to be closely spaced doublets. This splitting is called fine structure. It was also found that excited electrons could jump to the Balmer series n = 2 from orbitals where n was greater than 6, emitting shades of violet when doing so. Balmer noticed that a single number had a relation to every line in the hydrogen spectrum that was in the visible light region: when any integer higher than 2 was squared and then divided by itself squared minus 4, multiplying the quotient by that number gave the wavelength of a line. His number also proved to be the limit of the series. 
The Balmer equation could be used to find the wavelength of the absorption/emission lines and was originally presented as λ = B·n²/(n² − m²), where B is a constant with the value of 3.6450682×10⁻⁷ m or 364.50682 nm, m is equal to 2, and n is an integer such that n > m. In 1888 the physicist Johannes Rydberg generalized the Balmer equation for all transitions of hydrogen: 1/λ = RH·(1/m² − 1/n²), where λ is the wavelength of the absorbed/emitted light and RH is the Rydberg constant for hydrogen. The Rydberg constant is seen to be equal to 4/B in Balmer's formula. Other characteristics of a star that can be determined by close analysis of its spectrum include surface gravity and composition. The Balmer lines are seen in the spectra of a wide variety of astronomical objects
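Balmer's original empirical formula λ = B·n²/(n² − 4) can be evaluated directly for the four visible lines; as n grows, the factor n²/(n² − 4) tends to 1, so the wavelengths approach the series limit B = 364.50682 nm mentioned above. The function name `balmer_nm` is ours:

```python
# Balmer's empirical formula: lambda = B * n^2 / (n^2 - 4), with m = 2 fixed.
B = 364.50682   # Balmer's constant, nm; also the series limit as n -> infinity

def balmer_nm(n: int) -> float:
    return B * n**2 / (n**2 - 4)

for n, name in [(3, "H-alpha"), (4, "H-beta"), (5, "H-gamma"), (6, "H-delta")]:
    print(name, round(balmer_nm(n), 1), "nm")
# H-alpha ≈ 656.1, H-beta ≈ 486.0, H-gamma ≈ 433.9, H-delta ≈ 410.1 nm
```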
32.
Sharp series
–
The sharp series is a series of spectral lines in the atomic emission spectrum caused when electrons jump between the lowest p orbital and s orbitals of an atom. The spectral lines include some in the visible light, and they extend into the ultraviolet. The lines get closer and closer together as the frequency increases, never exceeding the series limit. The sharp series was important in the development of the understanding of electron shells and subshells in atoms, and it has given the letter s to the s atomic orbital or subshell. The sharp series has a limit given by ν = R/(2 + p)² − R/(m + s)², with m = 2, 3, 4, 5, 6, …. The series is caused by transitions from the lowest P state to higher energy S orbitals. The terms can have different designations: mS for single line systems, mσ for doublets and ms for triplets. Since the P state is not the lowest energy level for the atom, the sharp series will not show up as absorption in a cool gas. The Rydberg correction is largest for the S term, as the electron penetrates the inner core of electrons more. The limit for the series corresponds to electron emission, where the electron has so much energy that it escapes the atom. Even though the series is called sharp, the lines may not be sharp. In alkali metals the P terms are split into 2P3/2 and 2P1/2. This causes the spectral lines to be doublets, with a constant spacing between the two parts of the double line. The sharp series used to be called the second subordinate series, with the diffuse series being the first subordinate. The sharp series limit is the same as the diffuse series limit; in the late 1800s these two were termed supplementary series. Arthur Schuster stated this relationship in 1896, but in the next issue of the journal he realised that Rydberg had published the idea a few months earlier. Rydberg–Schuster law: using wave numbers, the difference between the sharp and diffuse series limits and the principal series limit is the same as the first transition in the principal series; this difference is the lowest P level. 
Runge's law: using wave numbers, the difference between the sharp series limit and the fundamental series limit is the same as the first transition in the diffuse series. This difference is the lowest D level energy. The sodium sharp series has wave numbers given by ν_s = R/(3 + p)² − R/(n + s)², n = 4, 5, 6, …. The sodium diffuse series has wave numbers given by ν_d = R/(3 + p)² − R/(n + d)², n = 4, 5, 6, …. When n tends to infinity, the diffuse and sharp series end up with the same limit. A sharp series of triplet lines is designated by series letter s and formula 1p−ms; the sharp series of singlet lines has series letter S and formula 1P−mS
33.
Diffuse series
–
The diffuse series is a series of spectral lines in the atomic emission spectrum caused when electrons jump between the lowest p orbital and d orbitals of an atom. The total orbital angular momentum changes between 1 and 2. The spectral lines include some in the visible light, and may extend into the ultraviolet or near infrared. The lines get closer and closer together as the frequency increases, never exceeding the series limit. The diffuse series was important in the development of the understanding of electron shells and subshells in atoms, and it has given the letter d to the d atomic orbital or subshell. The diffuse series has values given by ν = R/(2 + p)² − R/(m + d)², with m = 3, 4, 5, 6, …. The series is caused by transitions from the lowest P state to higher energy D orbitals. The terms can have different designations: mD for single line systems, mδ for doublets and md for triplets. Since the electron in the D subshell state is not the lowest energy level for the atom, the diffuse series will not show up as absorption in a cool gas. The limit for the series corresponds to electron emission, where the electron has so much energy that it escapes the atom. In alkali metals the P terms are split into 2P3/2 and 2P1/2, and this causes the spectral lines to be doublets, with a constant spacing between the two parts of the double line. This splitting is called fine structure; the splitting is larger for atoms with higher atomic number, and it decreases towards the series limit. Another splitting occurs on the redder line of the doublet. This is because of splitting in the D level, into nd 2D3/2 and nd 2D5/2; splitting in the D level is smaller than in the P level, and it reduces as the series limit is approached. The diffuse series used to be called the first subordinate series, with the sharp series being the second subordinate. 
The diffuse series limit is the same as the sharp series limit; in the late 1800s these two were termed supplementary series. Spectral lines of the diffuse series are split into three lines in what is called fine structure. These lines cause the overall line to look diffuse. The reason this happens is that both the P and D levels are split into two closely spaced energies: P is split into P1/2 and P3/2, and D is split into D3/2 and D5/2. Only three of the possible four transitions can take place, because the angular momentum change cannot have a magnitude greater than one. Arthur Schuster stated the relationship between the series limits, but in the next issue of the journal he realised that Rydberg had published the idea a few months earlier
34.
Uncertainty principle
–
The formal inequality relating the standard deviation of position σx and the standard deviation of momentum σp was derived by Earle Hesse Kennard later that year and by Hermann Weyl in 1928. Heisenberg offered such an observer effect at the quantum level as a physical explanation of quantum uncertainty. The uncertainty principle, however, actually states a fundamental property of quantum systems, and is not a statement about the observational success of current technology. Since the uncertainty principle is such a basic result in quantum mechanics, typical experiments in quantum mechanics routinely observe aspects of it. Certain experiments, however, may deliberately test a particular form of the uncertainty principle as part of their research program. These include, for example, tests of number–phase uncertainty relations in superconducting or quantum optics systems. Applications dependent on the uncertainty principle for their operation include extremely low-noise technology such as that required in gravitational wave interferometers. The uncertainty principle is not readily apparent on the scales of everyday experience, so it is helpful to demonstrate how it applies to more easily understood physical situations. Two alternative frameworks for quantum physics offer different explanations for the uncertainty principle. The wave mechanics picture of the uncertainty principle is more visually intuitive: a nonzero function and its Fourier transform cannot both be sharply localized. In matrix mechanics, the mathematical formulation of quantum mechanics, any pair of non-commuting self-adjoint operators representing observables are subject to similar uncertainty limits. An eigenstate of an observable represents the state of the wavefunction for a certain measurement value; for example, if a measurement of an observable A is performed, then the system is in a particular eigenstate Ψ of that observable. According to the de Broglie hypothesis, every object in the universe is a wave, and the position of the particle is described by a wave function Ψ. 
The time-independent wave function of a plane wave of wavenumber k0 or momentum p0 is ψ(x) ∝ e^(i k0 x) = e^(i p0 x / ℏ). In the case of the plane wave, |ψ|² is a uniform distribution. In other words, the particle position is extremely uncertain in the sense that it could be essentially anywhere along the wave packet. The figures to the right show how, with the addition of many plane waves, the wave packet can become more localized. In mathematical terms, we say that φ(p) is the Fourier transform of ψ(x) and that x and p are conjugate variables. Adding together all of these plane waves comes at a cost, namely the momentum has become less precise. One way to quantify the precision of the position and momentum is the standard deviation σ. Since |ψ|² is a probability density function for position, the precision of the position is improved, i.e. reduced σx, by using many plane waves, thereby weakening the precision of the momentum, i.e. increased σp. Another way of stating this is that σx and σp have an inverse relationship or are at least bounded from below
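The trade-off described above can be demonstrated numerically: superposing plane waves e^(ikx) with Gaussian momentum amplitudes produces a localized packet whose widths satisfy σx·σk ≈ 1/2, the minimum the uncertainty relation allows (in units with ℏ = 1, so p = k). This is a sketch under those assumptions; the grid sizes and the value σk = 0.25 are arbitrary choices of ours:

```python
import numpy as np

# Superpose plane waves with Gaussian weights phi(k) centered on k0,
# then measure the position spread of the resulting packet |psi(x)|^2.
x = np.linspace(-20, 20, 2001)
k = np.linspace(0, 4, 801)
dx, dk = x[1] - x[0], k[1] - k[0]
k0, sigma_k = 2.0, 0.25

phi = np.exp(-(k - k0) ** 2 / (4 * sigma_k ** 2))        # |phi|^2 has std sigma_k
psi = (phi[:, None] * np.exp(1j * np.outer(k, x))).sum(axis=0) * dk

prob = np.abs(psi) ** 2
prob /= prob.sum() * dx                                   # normalize |psi|^2
sigma_x = np.sqrt((x ** 2 * prob).sum() * dx)             # <x> = 0 by symmetry

print(round(sigma_x * sigma_k, 2))   # ≈ 0.5, the Kennard bound with hbar = 1
```

A Gaussian packet saturates the bound; any narrower φ(k) would widen |ψ(x)|², and vice versa.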
35.
Spontaneous emission
–
Spontaneous emission is the process in which a quantum mechanical system transitions from an excited energy state to a lower energy state and emits a quantum in the form of a photon. Spontaneous emission is responsible for most of the light we see all around us. If atoms are excited by means other than heating, the spontaneous emission is called luminescence, and there are different forms of luminescence depending on how the excited atoms are produced. If the excitation is effected by the absorption of radiation, the spontaneous emission is called fluorescence. Sometimes molecules have a metastable level and continue to fluoresce long after the exciting radiation is turned off; this is called phosphorescence. Figurines that glow in the dark are phosphorescent. Lasers start via spontaneous emission, then during continuous operation work by stimulated emission. Spontaneous emission cannot be explained by classical theory and is fundamentally a quantum process. Contemporary physicists, when asked to give a physical explanation for spontaneous emission, generally invoke the zero-point energy of the electromagnetic field. In 1963 the Jaynes–Cummings model was developed, describing the system of a two-level atom interacting with a quantized field mode within an optical cavity. It gave the nonintuitive prediction that the rate of spontaneous emission could be controlled depending on the boundary conditions of the surrounding vacuum field. These experiments gave rise to cavity quantum electrodynamics, the study of effects of mirrors and cavities on radiative corrections. If a light source is in an excited state with energy E2, it may spontaneously decay to a lower lying level with energy E1. The photon will have angular frequency ω and an energy ℏω, with E2 − E1 = ℏω. Note that ℏω = hν, where h is the Planck constant and ν is the linear frequency. The phase of the photon in spontaneous emission is random, as is the direction in which the photon propagates; this is not true for stimulated emission. In the rate equation, A21 is a proportionality constant for this transition in this particular light source. 
The constant is referred to as the Einstein A coefficient, and has units of s⁻¹. The number of excited states N thus decays exponentially with time, similar to radioactive decay. After one lifetime, the number of excited states decays to 36.8% of its original value. The radiative decay rate Γrad is inversely proportional to the lifetime τ21: A21 = Γ21 = 1/τ21. Spontaneous transitions were not explainable within the framework of the Schrödinger equation, in which the energy levels were quantized
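The exponential decay implied by the rate equation, N(t) = N0·e^(−t/τ), reproduces the 36.8% figure after one lifetime. A minimal sketch; the function name and the 16 ns lifetime are hypothetical illustration values:

```python
import math

# N(t) = N0 * exp(-t / tau), with tau = 1/A21 the radiative lifetime.
def excited_population(n0: float, t: float, tau: float) -> float:
    return n0 * math.exp(-t / tau)

tau = 16e-9   # hypothetical 16 ns radiative lifetime
# After one lifetime the population falls to 1/e of its initial value:
print(round(excited_population(1.0, tau, tau), 3))   # 0.368, i.e. 36.8%
```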
36.
Auger effect
–
The Auger effect is a physical phenomenon in which the filling of an inner-shell vacancy of an atom is accompanied by the emission of an electron from the same atom. When a core electron is removed, leaving a vacancy, an electron from a higher energy level may fall into the vacancy, and the released energy can eject a second electron, the Auger electron, from the atom. The effect was first discovered by Lise Meitner in 1922; Pierre Victor Auger independently discovered the effect shortly after and is credited with the discovery in most of the scientific community. The energies of the emitted electrons depend on the type of atom and the environment in which the atom was located, so the resulting spectra can be used to determine the identity of the emitting atoms. Auger recombination is a similar Auger effect which occurs in semiconductors: an electron and an electron hole can recombine, giving up their energy to an electron in the conduction band; the reverse effect is known as impact ionization. The French physicist Pierre Victor Auger independently discovered it in 1923 upon analysis of a Wilson cloud chamber experiment, in which high-energy X-rays were applied to ionize gas particles and the photoelectric electrons were observed. See also: Auger electron spectroscopy, Coster–Kronig transition, electron capture, radiative Auger effect, charge carrier generation and recombination, Auger therapy
37.
Cauchy distribution
–
The Cauchy distribution, named after Augustin Cauchy, is a continuous probability distribution. It is also known, especially among physicists, as the Lorentz distribution, Cauchy–Lorentz distribution, or Lorentz function. The Cauchy distribution f(x; x0, γ) is the distribution of the x-intercept of a ray issuing from (x0, γ) with a uniformly distributed angle; it is also the distribution of the ratio of two independent normally distributed Gaussian random variables. The Cauchy distribution is used in statistics as the canonical example of a pathological distribution, since both its expected value and its variance are undefined. The Cauchy distribution does not have finite moments of order greater than or equal to one, and it has no moment generating function. Its importance in physics is the result of it being the solution to the differential equation describing forced resonance. In mathematics, it is closely related to the Poisson kernel. Many mechanisms cause homogeneous broadening, most notably collision broadening, which gives spectral lines this Lorentzian shape. It is one of the few distributions that is stable and has a probability density function that can be expressed analytically, the others being the normal distribution and the Lévy distribution. Functions with the form of the Cauchy distribution were studied by mathematicians in the 17th century; as such, the name of the distribution is a case of Stigler's law of eponymy. Poisson noted that if the mean of observations following such a distribution were taken, the mean error did not converge to any finite number; as such, Laplace's use of the central limit theorem with such a distribution was inappropriate, as it assumed a finite mean and variance. Despite this, Poisson did not regard the issue as important, in contrast to Bienaymé. The parameter γ is also equal to half the interquartile range and is sometimes called the probable error. Augustin-Louis Cauchy exploited such a density function in 1827 with a scale parameter. 
The maximum value or amplitude of the Cauchy PDF is 1/(πγ), located at x = x0. In physics, a three-parameter Lorentzian function is often used: f(x; x0, γ, I) = I γ²/((x − x0)² + γ²), where I is the height of the peak. The three-parameter Lorentzian function indicated is not, in general, a probability density function, since it does not integrate to 1, except in the special case where I = 1/(πγ). The cumulative distribution function is F(x; x0, γ) = (1/π) arctan((x − x0)/γ) + 1/2, and it follows that the first and third quartiles are x0 − γ and x0 + γ, and hence the interquartile range is 2γ. In its standard form, it is the maximum entropy probability distribution for a random variate X for which E[ln(1 + X²)] = ln 4. The Cauchy distribution is an example of a distribution which has no mean, variance or higher moments defined. Its mode and median are well defined and are both equal to x0. When U and V are two independent normally distributed random variables with expected value 0 and variance 1, the ratio U/V has the standard Cauchy distribution.
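The two facts above — that the ratio of two independent standard normals is standard Cauchy, and that γ equals half the interquartile range — can be checked empirically. A minimal sketch in Python (standard library only; the sample size and seed are arbitrary choices):

```python
import random

random.seed(0)

# Ratio of two independent standard normal variables -> standard Cauchy
# (x0 = 0, gamma = 1)
n = 200_000
samples = sorted(random.gauss(0, 1) / random.gauss(0, 1) for _ in range(n))

median = samples[n // 2]
q1, q3 = samples[n // 4], samples[3 * n // 4]
iqr = q3 - q1

# Median and mode equal x0 = 0; interquartile range equals 2 * gamma = 2
print(f"median ~ {median:.3f}, IQR ~ {iqr:.3f}")
```

Computing the sample mean of the same draws would, by contrast, give wildly different values from run to run, since the mean of a Cauchy distribution is undefined.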
38.
Doppler effect
–
The Doppler effect is the change in frequency or wavelength of a wave for an observer moving relative to its source. It is named after the Austrian physicist Christian Doppler, who proposed it in 1842 in Prague. A common example of Doppler shift is the change of pitch heard when a vehicle sounding a siren or horn approaches, passes, and recedes from an observer: compared to the emitted frequency, the received frequency is higher during the approach, identical at the instant of passing by, and lower during the recession. When the source of the waves is moving towards the observer, each successive wave crest is emitted from a position closer to the observer than the previous one; therefore, each wave takes slightly less time to reach the observer than the previous wave. Hence, the time between the arrival of successive wave crests at the observer is reduced, causing an increase in the frequency; while they are travelling, the distance between successive wave fronts is reduced, so the waves bunch together. Conversely, when the source moves away from the observer, the distance between wave fronts is increased, so the waves spread out. For waves that propagate in a medium, such as sound waves, the total Doppler effect may result from motion of the source, motion of the observer, or motion of the medium, and each of these effects is analyzed separately. For waves which do not require a medium, such as light or gravity in general relativity, only the relative difference in velocity between the observer and the source needs to be considered. Doppler first proposed this effect in 1842 in his treatise Über das farbige Licht der Doppelsterne und einiger anderer Gestirne des Himmels. The hypothesis was tested for sound waves by Buys Ballot in 1845; he confirmed that the pitch was higher than the emitted frequency when the sound source approached him. Hippolyte Fizeau independently discovered the same phenomenon for electromagnetic waves in 1848, and in Britain, John Scott Russell made an experimental study of the Doppler effect.
The frequency is decreased if either is moving away from the other. The formula assumes that the source is either directly approaching or receding from the observer. If the source approaches the observer at an angle, the frequency that is first heard is higher than the object's emitted frequency. When the observer is close to the path of the object, the transition from high to low frequency is abrupt; when the observer is far from the path of the object, the transition is gradual. To understand what happens, consider the following analogy. Someone throws one ball every second at a man; assume that balls travel with constant velocity. If the thrower is stationary, the man will receive one ball every second. However, if the thrower is moving towards the man, he will receive balls more frequently because the balls will be less spaced out.
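The quantitative version of the ball analogy is the standard classical Doppler formula for waves in a medium (not quoted in this excerpt): f = f0 (c + v_r)/(c + v_s), where c is the wave speed in the medium, v_r the receiver's speed toward the source, and v_s the source's speed away from the receiver. A short sketch; the 343 m/s speed of sound and 440 Hz siren are illustrative values:

```python
def doppler_observed(f_emitted, c, v_receiver=0.0, v_source=0.0):
    """Classical Doppler shift for waves in a medium.

    c          -- wave speed in the medium (m/s)
    v_receiver -- receiver speed, positive when moving toward the source
    v_source   -- source speed, positive when moving away from the receiver
    """
    return f_emitted * (c + v_receiver) / (c + v_source)

C_SOUND = 343.0  # m/s, dry air at about 20 degrees C

# A 440 Hz siren: an approaching source sounds higher, a receding one lower
f_approach = doppler_observed(440.0, C_SOUND, v_source=-30.0)  # closing at 30 m/s
f_recede = doppler_observed(440.0, C_SOUND, v_source=+30.0)    # receding at 30 m/s
print(f"approaching: {f_approach:.1f} Hz, receding: {f_recede:.1f} Hz")
```

The sign convention here is one common textbook choice; with it, passing the observer flips the perceived pitch from above to below the emitted 440 Hz, just as described for the siren.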
39.
Gaussian function
–
In mathematics, a Gaussian function, often simply referred to as a Gaussian, is a function of the form f(x) = a e^(−(x − b)²/(2c²)) for arbitrary real constants a, b and non-zero c. It is named after the mathematician Carl Friedrich Gauss. The graph of a Gaussian is a characteristic symmetric bell curve shape. The parameter a is the height of the peak, b is the position of the center of the peak, and c controls the width of the bell. Gaussian functions arise by composing the exponential function with a quadratic function. The Gaussian functions are thus those functions whose logarithm is a quadratic function. The parameter c is related to the full width at half maximum (FWHM) of the peak according to FWHM = 2√(2 ln 2) c ≈ 2.35482 c. The full width at tenth of maximum (FWTM) for a Gaussian could be of interest and is FWTM = 2√(2 ln 10) c ≈ 4.29193 c. Gaussian functions are analytic, and their limit as x → ±∞ is 0. Gaussian functions are among those functions that are elementary but lack elementary antiderivatives. Gaussian functions centered at zero minimize the Fourier uncertainty principle. The product of two Gaussian functions is a Gaussian, and the convolution of two Gaussian functions is also a Gaussian, with variance being the sum of the original variances; the product of two Gaussian probability density functions, though, is not in general a Gaussian PDF. Taking the Fourier transform of a Gaussian function with parameters a = 1, b = 0 and c yields another Gaussian function, with parameters c, b = 0 and 1/c; so in particular the Gaussian functions with b = 0 and c = 1 are kept fixed by the Fourier transform. A physical realization is that of the diffraction pattern, for example. The integral ∫ from −∞ to ∞ of a e^(−(x − b)²/(2c²)) dx for some real constants a, b and c > 0 can be calculated by putting it into the form of a Gaussian integral. First, the constant a can simply be factored out of the integral; next, the variable of integration is changed from x to y = x − b, and the result is a c √(2π). Consequently, the level sets of a two-dimensional Gaussian will always be ellipses.
A particular example of a two-dimensional Gaussian function is f(x, y) = A exp(−((x − x0)²/(2σx²) + (y − y0)²/(2σy²))). Here the coefficient A is the amplitude, (x0, y0) is the center and σx, σy are the x and y spreads of the blob. The figure on the right was created using A = 1, x0 = 0, y0 = 0, σx = σy = 1. The volume under the Gaussian function is given by V = ∫ from −∞ to ∞ ∫ from −∞ to ∞ f(x, y) dx dy = 2π A σx σy
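The FWHM relation and the value of the one-dimensional integral can be verified numerically. A small sketch (the parameter values a = 2, b = 1, c = 3 are arbitrary illustrative choices):

```python
import math

def gaussian(x, a, b, c):
    """1-D Gaussian f(x) = a * exp(-(x - b)**2 / (2 * c**2))."""
    return a * math.exp(-(x - b) ** 2 / (2 * c ** 2))

a, b, c = 2.0, 1.0, 3.0

# Half-maximum points sit at b +/- c*sqrt(2 ln 2), so FWHM = 2*sqrt(2 ln 2)*c
fwhm = 2 * math.sqrt(2 * math.log(2)) * c
assert abs(gaussian(b + fwhm / 2, a, b, c) - a / 2) < 1e-12

# Riemann sum over [b - 10c, b + 10c]; the closed form is a * c * sqrt(2*pi)
step = 0.001
total = sum(gaussian(b + k * step, a, b, c) for k in range(-30_000, 30_000)) * step
exact = a * c * math.sqrt(2 * math.pi)
print(f"numeric ~ {total:.4f}, exact = {exact:.4f}")
```

Truncating the integral at ten widths from the center is harmless here, since the integrand has decayed to about e^(−50) by that point.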
40.
Density
–
The density, or more precisely, the volumetric mass density, of a substance is its mass per unit volume. The symbol most often used for density is ρ (the lower-case Greek letter rho), although the Latin letter D can also be used. Mathematically, density is defined as mass divided by volume: ρ = m/V, where ρ is the density, m is the mass, and V is the volume. In some cases, density is loosely defined as weight per unit volume. For a pure substance the density has the same numerical value as its mass concentration. Different materials usually have different densities, and density may be relevant to buoyancy and purity. Osmium and iridium are the densest known elements at standard conditions for temperature and pressure, but certain chemical compounds may be denser. The relative density of a material is the ratio of its density to that of water; thus a relative density less than one means that the substance floats in water. The density of a material varies with temperature and pressure. This variation is typically small for solids and liquids but much greater for gases. Increasing the pressure on an object decreases the volume of the object and thus increases its density. Increasing the temperature of a substance generally decreases its density by increasing its volume. In most materials, heating the bottom of a fluid results in convection of the heat from the bottom to the top, because heating decreases the density of the fluid; this causes it to rise relative to more dense unheated material. The reciprocal of the density of a substance is occasionally called its specific volume, a term sometimes used in thermodynamics. Density is an intensive property in that increasing the amount of a substance does not increase its density. In the well-known tale of the golden wreath, Archimedes knew that the irregularly shaped wreath could be crushed into a cube whose volume could be calculated easily and compared with the mass. Upon this discovery, he leapt from his bath and ran naked through the streets shouting, "Eureka!" As a result, the term eureka entered common parlance and is used today to indicate a moment of enlightenment. The story first appeared in written form in Vitruvius' books of architecture, two centuries after it supposedly took place.
Some scholars have doubted the accuracy of this tale, saying among other things that the method would have required precise measurements that would have been difficult to make at the time. From the equation for density, mass density has units of mass divided by volume. As there are units of mass and volume covering many different magnitudes, there are a large number of units for mass density in use. The SI unit of kilogram per cubic metre (kg/m³) and the cgs unit of gram per cubic centimetre (g/cm³) are probably the most commonly used units for density; 1,000 kg/m³ equals 1 g/cm³. In industry, other larger or smaller units of mass and/or volume are often more practical; see below for a list of some of the most common units of density
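The definition ρ = m/V, the kg/m³ to g/cm³ conversion, and the floating criterion can be sketched in a few lines. The water and ice densities used are standard approximate values, chosen only for illustration:

```python
def density(mass_kg, volume_m3):
    """Volumetric mass density: rho = m / V, in kg/m^3."""
    return mass_kg / volume_m3

# One litre (0.001 m^3) of water has a mass of about 1 kg,
# so rho_water = 1000 kg/m^3 = 1 g/cm^3 (divide by 1000 to convert)
rho_water = density(1.0, 0.001)
print(rho_water, "kg/m^3 =", rho_water / 1000, "g/cm^3")

# Relative density = rho / rho_water; below 1 means the substance floats.
# Ice (~917 kg/m^3) floats in water.
rho_ice = 917.0
print("relative density of ice:", rho_ice / rho_water)
```

The conversion factor of 1,000 follows directly from the units: a cubic metre holds 10⁶ cubic centimetres and a kilogram is 10³ grams.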
41.
Temperature
–
A temperature is an objective comparative measurement of hot or cold, measured by a thermometer. Several scales and units exist for measuring temperature, the most common being Celsius (°C), Fahrenheit (°F), and, especially in science, Kelvin (K). Absolute zero is denoted as 0 K on the Kelvin scale, −273.15 °C on the Celsius scale, and −459.67 °F on the Fahrenheit scale. The kinetic theory offers a valuable but limited account of the behavior of the materials of macroscopic bodies, especially of fluids. Temperature is important in all fields of science including physics, geology, chemistry, atmospheric sciences and medicine. The Celsius scale is used for temperature measurements in most of the world; on it, water freezes at 0 °C and boils at 100 °C at sea-level atmospheric pressure, and because of this 100 degree interval it is called a centigrade scale. The United States commonly uses the Fahrenheit scale, on which water freezes at 32 °F and boils at 212 °F at sea-level atmospheric pressure. Many scientific measurements use the Kelvin temperature scale, named in honor of the Scottish physicist who first defined it; it is a thermodynamic or absolute temperature scale. Its zero point, 0 K, is defined to coincide with the coldest physically possible temperature, and its degrees are defined through thermodynamics. Absolute zero occurs at 0 K = −273.15 °C. For historical reasons, the triple point temperature of water is fixed at 273.16 units of the measurement increment. Temperature is one of the principal quantities in the study of thermodynamics. There is a variety of kinds of temperature scale, and it may be convenient to classify them as empirically and theoretically based. Empirical temperature scales are historically older, while theoretically based scales arose in the middle of the nineteenth century. Empirically based temperature scales rely directly on measurements of simple physical properties of materials. For example, the length of a column of mercury, confined in a capillary tube, is dependent largely on temperature. Such scales are valid only within convenient ranges of temperature.
For example, above the boiling point of mercury, a mercury-in-glass thermometer is impracticable. A material is of no use as a thermometer near one of its phase-change temperatures. In spite of these restrictions, most generally used practical thermometers are of the empirically based kind. Especially, empirical thermometry was used for calorimetry, which contributed greatly to the discovery of thermodynamics. Nevertheless, empirical thermometry has serious drawbacks when judged as a basis for theoretical physics. Theoretically based temperature scales are based directly on theoretical arguments, especially those of thermodynamics and kinetic theory; they rely on theoretical properties of idealized devices and materials
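The fixed points quoted above (freezing and boiling of water, absolute zero) pin down the conversions between the three scales. A minimal sketch of those conversions:

```python
def celsius_to_kelvin(t_c):
    """K = C + 273.15 (Kelvin and Celsius share the same increment)."""
    return t_c + 273.15

def celsius_to_fahrenheit(t_c):
    """F = C * 9/5 + 32 (180 Fahrenheit degrees span freezing to boiling)."""
    return t_c * 9 / 5 + 32

# Water freezes at 0 C = 273.15 K = 32 F, boils at 100 C = 373.15 K = 212 F
print(celsius_to_kelvin(0), celsius_to_fahrenheit(0))
print(celsius_to_kelvin(100), celsius_to_fahrenheit(100))

# Absolute zero: 0 K = -273.15 C, which is about -459.67 F
print(celsius_to_fahrenheit(-273.15))
```

The 9/5 factor reflects that the same physical interval (freezing to boiling of water at sea-level pressure) spans 100 Celsius degrees but 180 Fahrenheit degrees.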
42.
Stable distribution
–
In probability theory, a distribution is stable if a linear combination of two independent copies of a random variable with that distribution has the same distribution, up to location and scale. The stable distribution family is also sometimes referred to as the Lévy alpha-stable distribution, after Paul Lévy, the first mathematician to have studied it. Of the four parameters defining the family, most attention has focused on the stability parameter α. Stable distributions have 0 < α ≤ 2, with the upper bound corresponding to the normal distribution. The distributions have undefined variance for α < 2, and undefined mean for α ≤ 1. The importance of stable probability distributions is that they are attractors for properly normed sums of independent and identically distributed random variables. The normal distribution defines a family of stable distributions: by the classical central limit theorem the properly normed sum of a set of random variables, each with finite variance, will tend towards a normal distribution as the number of variables grows. Without the finite variance assumption, the limit may be a stable distribution that is not normal. Mandelbrot referred to such distributions as stable Paretian distributions, after Vilfredo Pareto. A non-degenerate distribution is a stable distribution if it satisfies the following property: let X1 and X2 be independent copies of a random variable X; then X is said to be stable if for any constants a > 0 and b > 0 the random variable aX1 + bX2 has the same distribution as cX + d for some constants c > 0 and d. The distribution is said to be strictly stable if this holds with d = 0. The normal distribution, the Cauchy distribution, and the Lévy distribution all have the above property. Although the probability density function for a general stable distribution cannot be written analytically, the general characteristic function can be. Notice that in this context the usual skewness is not well defined, as for α < 2 the distribution does not admit 2nd or higher moments. The reason the characteristic function gives a stable distribution is that the characteristic function for the sum of two independent random variables equals the product of the two corresponding characteristic functions; adding two random variables from a stable distribution gives something with the same values of α and β.
The value of the characteristic function at some value t is the complex conjugate of its value at −t, as it should be so that the probability distribution function will be real. When α < 1 and β = 1, the distribution is supported on [μ, ∞). The above definition is only one of the parametrizations in use for stable distributions; it is the most common but is not continuous in the parameters at α = 1. In either parametrization one can make a linear transformation of the variable to get a random variable whose density is in the standard form. In the first parametrization, if the mean exists then it is equal to μ. A stable distribution is therefore specified by the above four parameters. It can be shown that any non-degenerate stable distribution has a density function
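The defining property can be checked empirically for the standard Cauchy distribution (the stable case α = 1, β = 0): the sum X1 + X2 of two independent copies has the same distribution as 2X, i.e. c = 2 and d = 0. A small sketch using inverse-CDF sampling; the seed and sample size are arbitrary choices:

```python
import math
import random

random.seed(1)

def std_cauchy():
    """Inverse-CDF sampling: tan(pi*(U - 1/2)) has the standard Cauchy law."""
    return math.tan(math.pi * (random.random() - 0.5))

n = 200_000
sums = sorted(std_cauchy() + std_cauchy() for _ in range(n))

# X1 + X2 should be Cauchy with scale 2, so its interquartile range is
# twice the standard Cauchy IQR of 2, i.e. about 4
iqr = sums[3 * n // 4] - sums[n // 4]
print(f"IQR of X1 + X2 ~ {iqr:.2f} (expect ~4)")
```

The interquartile range is used instead of the variance precisely because, as noted above, stable distributions with α < 2 have no finite variance.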