1.
Electromagnetism
–
Electromagnetism is a branch of physics involving the study of the electromagnetic force, a type of physical interaction that occurs between electrically charged particles. The electromagnetic force usually exhibits electromagnetic fields, such as electric fields, magnetic fields, and light. The other three fundamental interactions are the strong interaction, the weak interaction, and gravitation. The word electromagnetism is a compound of two Greek terms: ἤλεκτρον, ēlektron, "amber", and μαγνῆτις λίθος, magnētis lithos, which means "Magnesian stone". The electromagnetic force plays a major role in determining the internal properties of most objects encountered in daily life. Ordinary matter takes its form as a result of forces between individual atoms and molecules in matter, and these forces are a manifestation of the electromagnetic force. Electrons are bound by the electromagnetic force to atomic nuclei, and it determines their orbital shapes. The electromagnetic force also governs the processes involved in chemistry, which arise from interactions between the electrons of neighboring atoms. There are numerous mathematical descriptions of the electromagnetic field. In classical electrodynamics, electric fields are described in terms of electric potential and electric current. Although electromagnetism is considered one of the four fundamental forces, at high energy the weak force and the electromagnetic force are unified as a single electroweak force. Early in the history of the universe, as it cooled, this unified force broke into the two separate forces. Originally, electricity and magnetism were considered to be two separate forces. Magnetic poles attract or repel one another in a manner similar to positive and negative charges, and always exist as pairs: every north pole is yoked to a south pole. An electric current inside a wire creates a corresponding magnetic field outside the wire. Its direction depends on the direction of the current in the wire.
A current is induced in a loop of wire when it is moved toward or away from a magnetic field, or when a magnet is moved towards or away from it. While preparing for a lecture on 21 April 1820, Hans Christian Ørsted made a surprising observation. As he was setting up his materials, he noticed a compass needle deflect away from north when the electric current from the battery he was using was switched on. At the time of discovery, Ørsted did not suggest any explanation of the phenomenon; however, three months later he began more intensive investigations.
2.
Latin
–
Latin is a classical language belonging to the Italic branch of the Indo-European languages. The Latin alphabet is derived from the Etruscan and Greek alphabets. Latin was originally spoken in Latium, in the Italian Peninsula; through the power of the Roman Republic, it became the dominant language of the region. Vulgar Latin developed into the Romance languages, such as Italian, Portuguese, Spanish, French, and Romanian. Latin, Italian and French have contributed many words to the English language, and Latin and Ancient Greek roots are used in theology, biology, and medicine. By the late Roman Republic, Old Latin had been standardised into Classical Latin; Vulgar Latin was the colloquial form spoken during the same time, attested in inscriptions and in the works of comic playwrights like Plautus and Terence. Late Latin is the form of the language used from the 3rd century onward; later, Early Modern Latin and Modern Latin evolved. Latin was used as the language of international communication, scholarship, and science until well into the 18th century, when it began to be supplanted by vernaculars. Ecclesiastical Latin remains the language of the Holy See and of the Roman Rite of the Catholic Church. Today, many students, scholars and members of the Catholic clergy speak Latin fluently, and it is taught in primary, secondary and postsecondary educational institutions around the world. The language has been passed down through various forms. Some inscriptions have been published in an internationally agreed, monumental, multivolume series, the Corpus Inscriptionum Latinarum. Authors and publishers vary, but the format is about the same: volumes detailing inscriptions with a critical apparatus stating the provenance. The reading and interpretation of these inscriptions is the subject matter of the field of epigraphy. The works of several hundred ancient authors who wrote in Latin have survived in whole or in part, and they are in part the subject matter of the field of classics.
Works rendered into Latin include The Cat in the Hat and a book of fairy tales; additional resources include phrasebooks and resources for rendering everyday phrases and concepts into Latin, such as Meissner's Latin Phrasebook. The Latin influence in English has been significant at all stages of its insular development. From the 16th to the 18th centuries, English writers cobbled together huge numbers of new words from Latin and Greek words, dubbed "inkhorn terms", as if they had spilled from a pot of ink. Many of these words were used once by the author and then forgotten, but many of the most common polysyllabic English words are of Latin origin through the medium of Old French. Romance words make up respectively 59%, 20% and 14% of the English, German and Dutch vocabularies, and those figures can rise dramatically when only non-compound and non-derived words are included. Accordingly, Romance words make up roughly 35% of the vocabulary of Dutch. Roman engineering had the same effect on scientific terminology as a whole.
3.
Chi (letter)
–
Chi is the 22nd letter of the Greek alphabet, pronounced /ˈkaɪ/ or /ˈkiː/ in English. Its value in Ancient Greek was an aspirated velar stop /kʰ/. In Koine Greek and later dialects it became a fricative, along with Θ and Φ; in front of low or back vowels and consonants, it is pronounced as a voiceless velar fricative, as in German ach. Chi is romanized as ⟨ch⟩ in most systematic transliteration conventions; in addition, in Modern Greek it is often also romanized as ⟨h⟩ or ⟨x⟩ in informal practice. In the system of Greek numerals, it has a value of 600. In ancient times, some local forms of the Greek alphabet used chi instead of xi to represent the /ks/ sound. This was borrowed into the early Latin language, which led to the use of the letter X for that sound in Latin. Chi was also included in the Cyrillic script as the letter Х. In the International Phonetic Alphabet, the minuscule chi is the symbol for the voiceless uvular fricative. Chi is the basis for the name of the literary chiastic structure and of chiasmus. In Plato's Timaeus, it is explained that the two bands that form the soul of the world cross each other like the letter Χ. Plato's analogy, along with several other examples of chi as a symbol, occurs in Thomas Browne's discourse The Garden of Cyrus. Chi or X is often used to abbreviate the name Christ; when fused within a single typespace with the Greek letter rho, it is called the labarum and used to represent the person of Jesus Christ. These characters are used only as mathematical symbols; stylized Greek text should be encoded using the normal Greek letters, with markup and formatting to indicate text style. In statistics, the term chi-squared or χ2 has various uses, including the chi-squared distribution and the chi-squared test. In algebraic topology, chi is used to represent the Euler characteristic of a surface. In neurology, the optic chiasm is named for the letter chi because of its Χ-shape. In chemistry, the mole fraction and electronegativity may be denoted by the lowercase χ.
In rhetoric, both chiastic structure and the figure of speech chiasmus derive their names from the shape of the letter chi. In engineering, chi is used as a symbol for the reduction factor of relevant buckling loads in EN 1993. In graph theory, a lowercase chi is used to represent a graph's chromatic number.
4.
Chemical bond
–
A chemical bond is a lasting attraction between atoms that enables the formation of chemical compounds. The bond may result from the electrostatic force of attraction between atoms with opposite charges, as in ionic bonds, or through the sharing of electrons, as in covalent bonds. Since opposite charges attract via an electromagnetic force, the negatively charged electrons orbiting the nucleus and the positively charged protons in the nucleus attract each other. An electron positioned between two nuclei will be attracted to both of them, and the nuclei will be attracted toward electrons in this position; this attraction constitutes the chemical bond. This phenomenon limits the distance between nuclei and atoms in a bond. In general, strong chemical bonding is associated with the sharing or transfer of electrons between the participating atoms. All bonds can be explained by quantum theory, but, in practice, simplification rules allow chemists to predict the strength, directionality, and polarity of bonds; the octet rule and VSEPR theory are two examples. Electrostatics are used to describe bond polarities and the effects they have on chemical substances. A chemical bond is an attraction between atoms. This attraction may be seen as the result of different behaviors of the outermost, or valence, electrons of atoms. These behaviors merge into each other seamlessly in various circumstances, so that there is no clear line to be drawn between them. However, it remains useful and customary to differentiate between different types of bond, which result in different properties of condensed matter. In the simplest view of a covalent bond, one or more electrons are drawn into the space between the two atomic nuclei, and energy is released by bond formation. This is not a simple reduction in energy, because the attraction of the two electrons to the two protons is offset by the electron-electron and proton-proton repulsions. In a polar covalent bond, one or more electrons are unequally shared between two nuclei.
Such weak intermolecular bonds give organic molecular substances, such as waxes and oils, their soft bulk character; by contrast, the melting points of covalent polymers and networks increase greatly. In a simplified view of an ionic bond, the bonding electron is not shared at all, but transferred. In this type of bond, the outer atomic orbital of one atom has a vacancy which allows the addition of one or more electrons. These newly added electrons potentially occupy a lower energy state than they experience in a different atom; thus, one nucleus offers a more tightly bound position to an electron than does another nucleus, with the result that one atom may transfer an electron to the other. This transfer causes one atom to assume a net positive charge, and the other a net negative charge. The bond then results from electrostatic attraction between the positively and negatively charged ions. Ionic bonds may be seen as extreme examples of polarization in covalent bonds.
5.
Energy level
–
A quantum mechanical system or particle that is bound—that is, confined spatially—can only take on certain discrete values of energy. This contrasts with classical particles, which can have any energy; the discrete values are called energy levels. The energy spectrum of a system with such discrete energy levels is said to be quantized. In chemistry and atomic physics, a shell, or principal energy level, may be thought of as the orbit of one or more electrons around an atom's nucleus. The closest shell to the nucleus is called the 1 shell, followed by the 2 shell, then the 3 shell, and so on; the shells correspond with the principal quantum numbers or are labeled alphabetically with the letters used in X-ray notation. Each shell can contain only a fixed number of electrons: the first shell can hold up to two electrons, the second shell can hold up to eight electrons, and the third shell can hold up to 18. The general formula is that the nth shell can in principle hold up to 2n2 electrons. Since electrons are electrically attracted to the nucleus, an atom's electrons will generally occupy outer shells only if the more inner shells have already been completely filled by other electrons. However, this is not a strict requirement: atoms may have two or even three incomplete outer shells. For an explanation of why electrons exist in these shells, see electron configuration. If the potential energy is set to zero at infinite distance from the atomic nucleus or molecule, the usual convention, then bound electron states have negative potential energy. If an atom, ion, or molecule is at the lowest possible energy level, it and its electrons are said to be in the ground state. If it is at a higher energy level, it is said to be excited. If more than one quantum state is at the same energy, that energy level is degenerate, and the states are then called degenerate energy levels. Quantized energy levels result from the relation between a particle's energy and its wavelength.
For a confined particle such as an electron in an atom, only stationary states with energies corresponding to integral numbers of wavelengths can exist; for other states the waves interfere destructively, resulting in zero probability density. Elementary examples that show mathematically how energy levels come about are the particle in a box and the quantum harmonic oscillator. The first evidence of quantization in atoms was the observation of spectral lines in light from the sun in the early 1800s by Joseph von Fraunhofer and William Hyde Wollaston. The notion of energy levels was proposed in 1913 by the Danish physicist Niels Bohr in the Bohr theory of the atom. The modern quantum mechanical theory, giving an explanation of these energy levels in terms of the Schrödinger equation, was advanced by Erwin Schrödinger and Werner Heisenberg in 1926. When the electron is bound to the atom at any closer value of n, the electron's energy is lower. In what follows, assume there is one electron in a given atomic orbital in a hydrogen-like atom.
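The 2n2 shell-capacity rule above can be sketched in code. Note that the simple inside-out filling order used here is an illustrative assumption; real atoms follow the Aufbau ordering of subshells, which interleaves shells for heavier elements.

```python
# Illustrative sketch (not from the article): electron shell capacities
# from the formula 2*n^2, plus naive inside-out filling of shells.

def shell_capacity(n: int) -> int:
    """Maximum number of electrons the nth shell can hold: 2n^2."""
    return 2 * n * n

def fill_shells(electrons: int) -> list[int]:
    """Fill shells in order of increasing n (a simplification; real
    atoms follow the Aufbau ordering of subshells)."""
    shells = []
    n = 1
    while electrons > 0:
        placed = min(electrons, shell_capacity(n))
        shells.append(placed)
        electrons -= placed
        n += 1
    return shells

print([shell_capacity(n) for n in (1, 2, 3, 4)])  # [2, 8, 18, 32]
print(fill_shells(11))  # sodium (11 electrons): [2, 8, 1]
```

The first two capacities, 2 and 8, match the first- and second-shell limits stated in the text.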
6.
Paramagnetism
–
Paramagnetic materials are weakly attracted by an externally applied magnetic field and form internal, induced magnetic fields in the direction of the applied field. In contrast with this behavior, diamagnetic materials are repelled by magnetic fields and form induced magnetic fields in the direction opposite to that of the applied magnetic field. Paramagnetic materials include most chemical elements and some compounds; they have a relative magnetic permeability greater than or equal to 1. The magnetic moment induced by the applied field is linear in the field strength and rather weak; it typically requires a sensitive balance to detect the effect. Paramagnetic materials have a small, positive susceptibility to magnetic fields; these materials are slightly attracted by a magnetic field, and the material does not retain the magnetic properties when the external field is removed. Paramagnetic properties are due to the presence of unpaired electrons. Paramagnetic materials include magnesium, molybdenum, lithium, and tantalum. Unlike ferromagnets, paramagnets do not retain any magnetization in the absence of an externally applied magnetic field, because thermal motion randomizes the spin orientations. Thus the total magnetization drops to zero when the applied field is removed. Even in the presence of the field there is only a small induced magnetization, because only a small fraction of the spins will be oriented by the field. This fraction is proportional to the field strength, and this explains the linear dependency. Constituent atoms or molecules of paramagnetic materials have permanent magnetic moments; the permanent moment generally is due to the spin of unpaired electrons in atomic or molecular electron orbitals. In pure paramagnetism, the dipoles do not interact with one another and are randomly oriented in the absence of an external field due to thermal agitation. When a magnetic field is applied, the dipoles will tend to align with the applied field; however, the true origins of the alignment can only be understood via the quantum-mechanical properties of spin and angular momentum. Paramagnetic behavior can also be observed in ferromagnetic materials that are above their Curie temperature.
At these temperatures, the thermal energy simply overcomes the interaction energy between the spins. In conductive materials the electrons are delocalized, that is, they travel through the solid more or less as free electrons. Conductivity can be understood in a band structure picture as arising from the incomplete filling of energy bands. In an ordinary nonmagnetic conductor the conduction band is identical for both spin-up and spin-down electrons.
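The linear, temperature-dependent response described above can be sketched numerically. The temperature dependence here uses Curie's law, χ = C/T, a standard result for ideal paramagnets that is implied by, but not stated in, the text; the Curie constant C and the field values are made-up illustrative numbers.

```python
# Sketch of the linear paramagnetic response M = chi * H, with the
# susceptibility given by Curie's law chi = C / T (standard result,
# assumed here; C is a material-specific Curie constant).

def curie_susceptibility(C: float, T: float) -> float:
    """Volume susceptibility of an ideal paramagnet at temperature T (K)."""
    return C / T

def magnetization(C: float, T: float, H: float) -> float:
    """Induced magnetization, linear in the applied field H (A/m)."""
    return curie_susceptibility(C, T) * H

# Doubling the temperature halves the induced magnetization,
# since thermal motion randomizes more of the spins:
m_cold = magnetization(C=1.0e-3, T=300.0, H=1.0e4)
m_hot = magnetization(C=1.0e-3, T=600.0, H=1.0e4)
print(m_cold / m_hot)  # 2.0
```

Removing the field (H = 0) gives zero magnetization, matching the statement that paramagnets retain no magnetization without an applied field.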
7.
Diamagnetism
–
Diamagnetic materials are repelled by a magnetic field; an applied magnetic field creates an induced magnetic field in them in the opposite direction, causing a repulsive force. In contrast, paramagnetic and ferromagnetic materials are attracted by a magnetic field. Diamagnetism is a quantum mechanical effect that occurs in all materials; when it is the only contribution to the magnetism, the material is called diamagnetic. In paramagnetic and ferromagnetic substances, the weak diamagnetic force is overcome by the attractive force of magnetic dipoles in the material. The magnetic permeability of diamagnetic materials is less than μ0, the permeability of vacuum. Diamagnetism was first discovered when Sebald Justinus Brugmans observed in 1778 that bismuth and antimony were repelled by magnetic fields. In 1845, Michael Faraday demonstrated that it was a property of matter, and he adopted the term diamagnetism after it was suggested to him by William Whewell. Diamagnetism, to a greater or lesser degree, is a property of all materials; for materials that show some other form of magnetism, the diamagnetic contribution becomes negligible. Substances that mostly display diamagnetic behaviour are termed diamagnetic materials, or diamagnets. The magnetic susceptibility values of various molecular fragments are called Pascal's constants. Diamagnetic materials have a relative magnetic permeability less than or equal to 1, and therefore a magnetic susceptibility less than or equal to 0. This means that diamagnetic materials are repelled by magnetic fields; however, since diamagnetism is such a weak property, its effects are not observable in everyday life. For example, the susceptibility of diamagnets such as water is χv = −9.05×10−6. The most strongly diamagnetic material is bismuth, with χv = −1.66×10−4. Nevertheless, these values are orders of magnitude smaller than the magnetism exhibited by paramagnets and ferromagnets. Note that because χv is derived from the ratio of the internal magnetic field to the applied field, it is a dimensionless value. All conductors exhibit an effective diamagnetism when they experience a changing magnetic field.
The Lorentz force on electrons causes them to circulate, forming eddy currents; the eddy currents then produce an induced magnetic field opposite the applied field, resisting the conductor's motion. Superconductors may be considered perfect diamagnets, because they expel all magnetic fields due to the Meissner effect; however, this effect is not due to eddy currents, as in ordinary diamagnetic materials. If a powerful magnet is covered with a layer of water, then the field of the magnet significantly repels the water, and this causes a slight dimple in the water's surface that may be seen by its reflection. Diamagnets may be levitated in stable equilibrium in a magnetic field. Earnshaw's theorem seems to preclude the possibility of static magnetic levitation. However, Earnshaw's theorem applies only to objects with positive susceptibilities; these are attracted to field maxima, which do not exist in free space. Diamagnets are attracted to field minima, and there can be a field minimum in free space.
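The susceptibility values quoted above can be used to illustrate just how weak diamagnetism is: even for bismuth, the relative permeability μr = 1 + χv differs from 1 by less than 0.02%.

```python
# Comparing the diamagnetic susceptibilities quoted in the text:
# water (chi_v = -9.05e-6) vs bismuth (chi_v = -1.66e-4), and the
# corresponding relative permeability mu_r = 1 + chi_v.

chi_water = -9.05e-6
chi_bismuth = -1.66e-4

mu_r_water = 1 + chi_water      # just below 1
mu_r_bismuth = 1 + chi_bismuth  # also just below 1

print(f"mu_r (water)   = {mu_r_water:.8f}")
print(f"mu_r (bismuth) = {mu_r_bismuth:.8f}")
print(f"bismuth is ~{chi_bismuth / chi_water:.0f}x more diamagnetic than water")
```

Both permeabilities are only slightly below 1, consistent with the statement that these values are orders of magnitude smaller than the responses of paramagnets and ferromagnets.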
8.
Permeability (electromagnetism)
–
In electromagnetism, permeability is the measure of the ability of a material to support the formation of a magnetic field within itself. Hence, it is the degree of magnetization that a material obtains in response to an applied magnetic field. Magnetic permeability is typically represented by the Greek letter µ; the term was coined in September 1885 by Oliver Heaviside. The reciprocal of magnetic permeability is magnetic reluctivity. In SI units, permeability is measured in henries per meter (H/m), or equivalently in newtons per ampere squared (N/A2). The magnetic constant has the exact value µ0 = 4π×10−7 H/m, and its relation to permeability appears in B = μH, where the permeability, µ, is a scalar if the medium is isotropic, or a second-rank tensor for an anisotropic medium. In general, permeability is not a constant, as it can vary with the position in the medium, the frequency of the applied field, humidity, and temperature. In a nonlinear medium, the permeability can depend on the strength of the magnetic field. Permeability as a function of frequency can take on real or complex values. In ferromagnetic materials, the relationship between B and H exhibits both non-linearity and hysteresis: B is not a single-valued function of H, but depends also on the history of the material. For these materials it is useful to consider the incremental permeability, defined as ΔB = μΔ ΔH. Permeability is the inductance per unit length. In SI units, permeability is measured in henries per metre; the auxiliary magnetic field H has dimensions of current per unit length and is measured in units of amperes per metre. The product µH thus has dimensions of inductance times current per unit area. But inductance is magnetic flux per unit current, so the product has dimensions of magnetic flux per unit area, that is, magnetic flux density. This is the magnetic field B, which is measured in webers per square metre. B is related to the Lorentz force on a moving charge q: F = qv × B. A magnetic dipole is a circulation of electric current.
The dipole moment has dimensions of current times area, with units of ampere square-metre. The H field at a distance from a dipole has magnitude proportional to the dipole moment divided by distance cubed, which has dimensions of current per unit length. Relative permeability, denoted by the symbol µr, is the ratio of the permeability of a medium to the permeability of free space µ0: μr = μ/μ0. In terms of relative permeability, the magnetic susceptibility is χm = μr − 1. The number χm is a dimensionless quantity, sometimes called volumetric or bulk susceptibility, to distinguish it from χp, the mass susceptibility. Diamagnetism is the property of an object that causes it to create a magnetic field in opposition to an externally applied magnetic field.
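The SI relations above (B = μH, μr = μ/μ0, χm = μr − 1) can be sketched directly; the field-strength value in the example is an arbitrary illustrative number.

```python
import math

# Sketch of the permeability relations from the text:
#   B = mu * H,  mu = mu_r * mu_0,  chi_m = mu_r - 1
# mu_0 = 4*pi*1e-7 H/m is the (classical) exact magnetic constant.

MU_0 = 4 * math.pi * 1e-7  # H/m

def B_field(mu_r: float, H: float) -> float:
    """Flux density B in teslas for field strength H in A/m, linear medium."""
    return mu_r * MU_0 * H

def susceptibility(mu_r: float) -> float:
    """Volume magnetic susceptibility chi_m = mu_r - 1."""
    return mu_r - 1

# Vacuum (mu_r = 1): chi_m = 0 and B = mu_0 * H.
print(susceptibility(1.0))   # 0.0
print(B_field(1.0, 1000.0))  # ~1.2566e-3 T
```

A diamagnet has μr slightly below 1 (χm < 0), a paramagnet slightly above 1 (χm > 0), matching the two preceding sections.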
9.
Magnetization
–
In classical electromagnetism, magnetization or magnetic polarization is the vector field that expresses the density of permanent or induced magnetic dipole moments in a magnetic material. Magnetization is not always uniform within a body, but rather varies between different points. It can be compared to electric polarization, which is the measure of the corresponding response of a material to an electric field in electrostatics. Physicists and engineers usually define magnetization as the quantity of magnetic moment per unit volume; it is represented by a pseudovector M. This is better illustrated through the following relation: m = ∭ M dV, where m is the magnetic moment and the integral is taken over the volume of the body. These definitions of P and M as moments per unit volume are widely adopted. The M-field is measured in amperes per meter (A/m) in SI units. The magnetization is often not listed as a material parameter for commercially available ferromagnets; instead, the parameter that is listed is the residual flux density, denoted Br. Physicists often need the magnetization to calculate the moment of a ferromagnet, using the formula m = (1/μ0) Br V, where V is the volume of the magnet and μ0 = 4π×10−7 H/m is the permeability of vacuum. The behavior of magnetic fields, electric fields, charge density, and current density is described by Maxwell's equations; the role of the magnetization is described below. The magnetization defines the auxiliary magnetic field H as H = B/μ0 − M (equivalently, B = μ0(H + M)), which is convenient for various calculations. The vacuum permeability μ0 is, by definition, 4π×10−7 V·s/(A·m). A relation between M and H exists in many materials. In diamagnets and paramagnets, the relation is linear: M = χm H, where χm is called the volume magnetic susceptibility. In ferromagnets there is no one-to-one correspondence between M and H because of magnetic hysteresis. The magnetization M makes a contribution to the current density J, known as the magnetization current. It is important to note that there is no such thing as a magnetic charge, but that issue was still debated through the whole 19th century.
Other concepts that went along with it, such as the auxiliary field H, are nevertheless convenient mathematical tools and are therefore still used today for applications such as modeling the magnetic field of the Earth. The time-dependent behavior of magnetization becomes important when considering nanoscale and nanosecond-timescale magnetization dynamics. Technologically, this is one of the most important processes in magnetism, linked to the magnetic data storage process used in modern hard disk drives; magnetization can also be switched by other means, e.g. incident electromagnetic radiation that is circularly polarized. Demagnetization is the reduction or elimination of magnetization. One way is to heat the object above its Curie temperature; another way is to pull it out of an electric coil with alternating current running through it, giving rise to fields that oppose the magnetization. One application of demagnetization is to eliminate unwanted magnetic fields; for example, magnetic fields can interfere with electronic devices such as cell phones or computers, and with machining, by making cuttings cling to their parent.
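The moment formula from the section, m = (1/μ0) Br V, can be sketched as follows. The magnet dimensions and Br value are hypothetical illustrative numbers, and uniform magnetization of a hard magnet (H ≈ 0 inside, so M ≈ Br/μ0) is an idealization.

```python
import math

# Sketch of computing a ferromagnet's magnetization M and total moment
# m = M * V from the listed residual flux density B_r, using the
# idealization M = B_r / mu_0 for a uniformly magnetized hard magnet.

MU_0 = 4 * math.pi * 1e-7  # permeability of vacuum, H/m

def magnetization_from_Br(B_r: float) -> float:
    """Magnetization M in A/m from residual flux density B_r in teslas."""
    return B_r / MU_0

def moment(B_r: float, volume_m3: float) -> float:
    """Magnetic moment m = M * V in A*m^2 (uniform M assumed)."""
    return magnetization_from_Br(B_r) * volume_m3

# Hypothetical 1 cm^3 magnet with B_r = 1.2 T:
print(f"M = {magnetization_from_Br(1.2):.3e} A/m")
print(f"m = {moment(1.2, 1e-6):.3f} A*m^2")
```

This is the discrete counterpart of the relation m = ∭ M dV for a body with uniform M.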
10.
Magnetic field
–
A magnetic field is the magnetic effect of electric currents and magnetic materials. The magnetic field at any given point is specified by both a direction and a magnitude; as such it is represented by a vector field. The term is used for two distinct but closely related fields, denoted by the symbols B and H, where H is measured in units of amperes per meter and B is measured in teslas, equivalent to newtons per meter per ampere, in the SI. B is most commonly defined in terms of the Lorentz force it exerts on moving electric charges. Magnetic fields can be produced by moving electric charges and by the intrinsic magnetic moments of elementary particles associated with a fundamental quantum property, their spin. In quantum physics, the electromagnetic field is quantized and electromagnetic interactions result from the exchange of photons. Magnetic fields are used throughout modern technology, particularly in electrical engineering. The Earth produces its own magnetic field, which is important in navigation. Rotating magnetic fields are used in electric motors and generators. Magnetic forces give information about the charge carriers in a material through the Hall effect. The interaction of magnetic fields in electric devices such as transformers is studied in the discipline of magnetic circuits. In the thirteenth century, Petrus Peregrinus de Maricourt mapped out the magnetic field on the surface of a spherical magnet using iron needles. Noting that the resulting field lines crossed at two points, he named those points poles in analogy to Earth's poles. He also clearly articulated the principle that magnets always have both a north and south pole, no matter how finely one slices them. Almost three centuries later, William Gilbert of Colchester replicated Petrus Peregrinus's work and was the first to state explicitly that Earth is a magnet. Published in 1600, Gilbert's work, De Magnete, helped to establish magnetism as a science. In 1750, John Michell stated that magnetic poles attract and repel in accordance with an inverse square law.
Charles-Augustin de Coulomb experimentally verified this in 1785 and stated explicitly that the north and south poles cannot be separated. Building on this force between poles, Siméon Denis Poisson created the first successful model of the magnetic field, which he presented in 1824. In this model, a magnetic H-field is produced by magnetic poles, and magnetism is due to small pairs of north/south magnetic poles. Three discoveries challenged this foundation of magnetism, though. First, in 1819, Hans Christian Ørsted discovered that an electric current generates a magnetic field encircling it. Then in 1820, André-Marie Ampère showed that parallel wires having currents in the same direction attract one another. Finally, Jean-Baptiste Biot and Félix Savart discovered the Biot–Savart law in 1820. Extending these experiments, Ampère published his own successful model of magnetism in 1825, proposing that magnetism is due to perpetually flowing loops of current rather than dipoles of magnetic charge. This has the benefit of explaining why magnetic charge cannot be isolated. Also in this work, Ampère introduced the term electrodynamics to describe the relationship between electricity and magnetism. In 1831, Michael Faraday discovered electromagnetic induction when he found that a changing magnetic field generates an encircling electric field.
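The Lorentz-force definition of B mentioned above can be sketched with a hand-rolled cross product; the particle velocity and field values below are arbitrary illustrative numbers.

```python
# Sketch of the magnetic part of the Lorentz force, F = q v x B,
# which the text uses to define the field B.

def cross(a, b):
    """3D cross product of tuples a and b."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def lorentz_force(q, v, B):
    """Force in newtons on charge q (C) moving at velocity v (m/s) in B (T)."""
    fx, fy, fz = cross(v, B)
    return (q * fx, q * fy, q * fz)

# An electron moving along +x through a 1 T field along +z is
# deflected along +y (negative charge reverses v x B):
q_e = -1.602e-19  # elementary charge with electron sign, C
F = lorentz_force(q_e, (1e6, 0.0, 0.0), (0.0, 0.0, 1.0))
print(F)
```

Because F is always perpendicular to v, the magnetic force bends a charge's path without changing its speed.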
11.
Magnetic moment
–
The magnetic moment of a magnet is a quantity that determines the torque it will experience in an external magnetic field. A loop of electric current, a bar magnet, an electron, a molecule, and a planet all have magnetic moments. The magnetic moment may be considered to be a vector having a magnitude and direction; the direction of the magnetic moment points from the south to the north pole of the magnet. The magnetic field produced by the magnet is proportional to its magnetic moment. More precisely, the term magnetic moment normally refers to a system's magnetic dipole moment, which produces the first term in the multipole expansion of a general magnetic field. The dipole component of a magnetic field is symmetric about the direction of its magnetic dipole moment. The magnetic moment is defined as a vector relating the aligning torque on the object from an externally applied magnetic field to the field vector itself. The relationship is given by τ = μ × B, where τ is the torque acting on the dipole, μ is the magnetic moment, and B is the external magnetic field. This definition is based on how one could, in principle, measure the magnetic moment of an unknown sample. The unit for magnetic moment is not a base unit in the International System of Units; as the torque is measured in newton-meters and the field in teslas, the moment is measured in newton-meters per tesla. This has equivalents in other units: N·m/T = A·m2 = J/T, where A is amperes and J is joules. In the CGS system, there are several different sets of electromagnetism units, of which the main ones are ESU, Gaussian, and EMU; among these are two alternative, non-equivalent units of magnetic dipole moment. The ratio of these two non-equivalent CGS units is equal to the speed of light in free space, expressed in cm·s−1. All formulae in this article are correct in SI units; they may need to be changed for use in other unit systems. For example, in SI units, a loop of current with current I and area A has magnetic moment IA. The preferred classical explanation of a magnetic moment has changed over time. Before the 1930s, textbooks explained the moment using hypothetical magnetic point charges; since then, most have defined it in terms of Ampèrian currents.
The sources of magnetic moments in materials can be represented by poles, in analogy to electrostatics. Consider a bar magnet which has magnetic poles of equal magnitude but opposite polarity. Each pole is the source of a magnetic force which weakens with distance. Since magnetic poles always come in pairs, their forces partially cancel each other, because while one pole pulls, the other repels. This cancellation is greatest when the poles are close to each other, i.e. when the bar magnet is short.
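The two relations in the section, the current-loop moment m = IA and the torque τ = μ × B, can be sketched together; the loop size, current, and field below are arbitrary illustrative numbers.

```python
# Sketch of a planar current loop's magnetic moment, m = I * A directed
# along the loop normal, and the torque it feels in a uniform field,
# tau = m x B (both from the text).

def cross(a, b):
    """3D cross product of tuples a and b."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def loop_moment(I, area, normal):
    """Moment in A*m^2 for current I (A) circulating around area (m^2);
    'normal' is the unit vector along the loop axis (right-hand rule)."""
    return tuple(I * area * n for n in normal)

def torque(m, B):
    """Aligning torque tau = m x B, in newton-metres."""
    return cross(m, B)

# A 1 A loop of area 0.01 m^2 with its normal along +x, in a 0.5 T
# field along +z: the torque tends to rotate the moment toward B.
m = loop_moment(1.0, 0.01, (1.0, 0.0, 0.0))
print(torque(m, (0.0, 0.0, 0.5)))  # (0.0, -0.005, 0.0)
```

When the moment is parallel to B, the cross product, and hence the torque, vanishes: that orientation is the equilibrium the torque drives the dipole toward.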
12.
Electric susceptibility
–
In electricity, the electric susceptibility χe is a dimensionless proportionality constant that indicates the degree of polarization of a dielectric material in response to an applied electric field. The greater the susceptibility, the greater the ability of a material to polarize in response to the field. A similar parameter, the molecular polarizability α, exists to relate the magnitude of the induced dipole moment p of an individual molecule to the local electric field E_local that induced the dipole. We have P = N p = N ε0 α E_local, where P is the polarization per unit volume, and N is the number of molecules per unit volume contributing to the polarization. Thus, if the local electric field is parallel to the ambient electric field, we have χe E = N α E_local. Thus only if the local field equals the ambient field can we write χe = N α. Otherwise, one should find a relation between the local and the macroscopic field. In some materials, the Clausius–Mossotti relation holds and reads χe / (3 + χe) = N α / 3. The definition of the molecular polarizability depends on the author. In the above definition, p = ε0 α E_local, p and E are in SI units, and the molecular polarizability α has the dimension of a volume. Another definition would be to keep SI units and to integrate ε0 into α: p = α E_local. In this second definition, the polarizability would have the SI unit of C·m2/V. Yet another definition exists where p and E are expressed in the cgs system; using cgs units gives α the dimension of a volume, as in the first definition, but with a value that is 4π lower. In many materials the polarizability starts to saturate at high values of electric field, and this saturation can be modelled by a nonlinear susceptibility. These susceptibilities are important in nonlinear optics and lead to effects such as second harmonic generation. The first susceptibility term, χ(1), corresponds to the linear susceptibility described above; while this first term is dimensionless, the subsequent nonlinear susceptibilities χ(n) have units of (m/V)^(n−1).
The nonlinear susceptibilities can be generalized to anisotropic materials. In general, a material cannot polarize instantaneously in response to an applied field, and so the more general formulation as a function of time is P(t) = ε0 ∫−∞..t χe(t − t′) E(t′) dt′. That is, the polarization is a convolution of the electric field at previous times with the time-dependent susceptibility χe(Δt). The upper limit of this integral can be extended to infinity as well if one defines χe(Δt) = 0 for Δt < 0. An instantaneous response corresponds to a Dirac delta function susceptibility, χe(Δt) = χe δ(Δt). It is more convenient in a linear system to take the Fourier transform and write this relationship as a function of frequency. Due to the convolution theorem, the integral becomes a simple product, P(ω) = ε0 χe(ω) E(ω). This frequency dependence of the susceptibility leads to frequency dependence of the permittivity; the shape of the susceptibility with respect to frequency characterizes the dispersion properties of the material.
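The Clausius–Mossotti relation quoted earlier, χe / (3 + χe) = N α / 3, can be inverted to recover χe from the dimensionless product Nα (number density times polarizability in the volume convention); the sample values below are arbitrary illustrative numbers.

```python
# Sketch: solving the Clausius-Mossotti relation
#   chi_e / (3 + chi_e) = (N * alpha) / 3
# for chi_e. Algebra: 3*chi = N*alpha*(3 + chi)  =>
#   chi_e = 3 * N*alpha / (3 - N*alpha).

def chi_from_n_alpha(n_alpha: float) -> float:
    """Electric susceptibility from the dimensionless product N * alpha."""
    return 3 * n_alpha / (3 - n_alpha)

def dilute_limit(n_alpha: float) -> float:
    """In a dilute medium the local field ~ ambient field, so chi_e ~ N*alpha."""
    return n_alpha

for na in (0.01, 0.1, 1.0):
    print(f"N*alpha = {na}: chi_e = {chi_from_n_alpha(na):.4f}, "
          f"dilute approx = {dilute_limit(na):.4f}")
```

For small Nα the two agree, recovering the case where the local field equals the ambient field (χe = Nα); at larger densities the local-field correction makes χe noticeably bigger than Nα.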
13.
International System of Units
–
The International System of Units (SI) is the modern form of the metric system, and is the most widely used system of measurement. It comprises a coherent system of units of measurement built on seven base units; the system also establishes a set of twenty prefixes to the unit names and unit symbols that may be used when specifying multiples and fractions of the units. The system was published in 1960 as the result of an initiative that began in 1948. It is based on the metre–kilogram–second system of units rather than any variant of the centimetre–gram–second system. The motivation for the development of the SI was the diversity of units that had sprung up within the CGS systems. The International System of Units has been adopted by most developed countries; however, the adoption has not been universal in all English-speaking countries. The metric system was first implemented during the French Revolution with just the metre and kilogram as standards of length and mass. In the 1830s Carl Friedrich Gauss laid the foundations for a coherent system based on length, mass, and time. In the 1860s a group working under the auspices of the British Association for the Advancement of Science formulated the requirement for a coherent system of units with base units and derived units. Meanwhile, in 1875, the Treaty of the Metre passed responsibility for verification of the kilogram and metre from French to international custody. In 1921, the Treaty was extended to include all physical quantities, including electrical units originally defined in 1893. The units associated with these quantities were the metre, kilogram, second, ampere, kelvin and candela. In 1971, a seventh base quantity, amount of substance, represented by the mole, was added to the definition of SI. On 11 July 1792, the commission proposed the names metre, are, litre and grave for the units of length, area, capacity and mass, respectively.
The committee also proposed that multiples and submultiples of these units were to be denoted by decimal-based prefixes such as centi for a hundredth. On 10 December 1799, the law by which the metric system was to be definitively adopted in France was passed. Prior to Gauss's work, the strength of the magnetic field had only been described in relative terms. The technique used by Gauss was to equate the torque induced on a magnet of known mass by the earth's magnetic field with the torque induced on an equivalent system under gravity. The resultant calculations enabled him to assign dimensions based on mass, length and time to the magnetic field. A French-inspired initiative for international cooperation in metrology led to the signing in 1875 of the Metre Convention. Initially the convention only covered standards for the metre and the kilogram; one of each was selected at random to become the International prototype metre and International prototype kilogram that replaced the mètre des Archives and kilogramme des Archives respectively. Each member state was entitled to one of each of the prototypes to serve as the national prototype for that country. Initially its prime purpose was a periodic recalibration of national prototype metres. The official language of the Metre Convention is French, and the authoritative version of all official documents published by or on behalf of the CGPM is the French-language version
14.
Magnetic dipole moment
–
The magnetic moment of a magnet is a quantity that determines the torque it will experience in an external magnetic field. A loop of electric current, a bar magnet, an electron and a molecule all have magnetic moments. The magnetic moment may be considered to be a vector having a magnitude and direction; the direction of the magnetic moment points from the south to the north pole of the magnet. The magnetic field produced by the magnet is proportional to its magnetic moment. More precisely, the term magnetic moment normally refers to a system's magnetic dipole moment, which produces the first term in the multipole expansion of a general magnetic field. The dipole component of a magnetic field is symmetric about the direction of its magnetic dipole moment. The magnetic moment is defined as a vector relating the aligning torque on the object from an applied magnetic field to the field vector itself. The relationship is given by τ = μ × B, where τ is the torque acting on the dipole, μ is the magnetic moment and B is the external magnetic field. This definition is based on how one would, in principle, measure the magnetic moment. The unit for magnetic moment is not a base unit in the International System of Units; as the torque is measured in newton-metres and the field in teslas, the moment is measured in newton-metres per tesla. This has equivalents in other units: N·m/T = A·m² = J/T, where A is amperes. In the CGS system, there are different sets of electromagnetism units, of which the main ones are ESU, Gaussian and EMU. The ratio of the two non-equivalent CGS units (EMU/ESU) is equal to the speed of light in free space, expressed in cm·s−1. All formulae in this article are correct in SI units; they may need to be changed for use in other unit systems. For example, in SI units, a loop of current with current I and area A has magnetic moment IA. The preferred classical explanation of a magnetic moment has changed over time. Before the 1930s, textbooks explained the moment using hypothetical magnetic point charges; since then, most have defined it in terms of Ampèrian currents.
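The two relations above, μ = IA for a current loop and τ = μ × B for the aligning torque, can be sketched numerically (the current, area and field values are illustrative, not from the article):

```python
import numpy as np

# Magnetic moment of a planar current loop: mu = I * A * n_hat,
# where n_hat is the unit normal of the loop. Units: A*m^2.
I = 2.0                               # current in amperes (illustrative)
A = 0.01                              # loop area in m^2 (illustrative)
n_hat = np.array([0.0, 0.0, 1.0])     # loop normal along z
mu = I * A * n_hat                    # magnetic moment, A*m^2

# Aligning torque in an external field: tau = mu x B, in N*m (= J/T * T).
B = np.array([0.05, 0.0, 0.0])        # external field in teslas (illustrative)
tau = np.cross(mu, B)
print(tau)   # torque tries to rotate the moment toward the field direction
```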
The sources of magnetic moments in materials can be represented by poles in analogy to electrostatics. Consider a bar magnet which has magnetic poles of equal magnitude but opposite polarity. Each pole is the source of a magnetic force which weakens with distance. Since magnetic poles always come in pairs, their forces partially cancel each other, because while one pole pulls, the other pushes. This cancellation is greatest when the poles are close to each other, i.e. when the bar magnet is short
15.
Ampere
–
The ampere, often shortened to amp, is a unit of electric current. In the International System of Units the ampere is one of the seven SI base units. It is named after André-Marie Ampère, French mathematician and physicist, considered the father of electrodynamics. SI defines the ampere in terms of other base units by measuring the electromagnetic force between electrical conductors carrying electric current. The ampere has also been defined as one coulomb of charge per second; in SI, however, the unit of charge, the coulomb, is defined as the charge carried by one ampere during one second. In the future, the SI definition may shift back to charge as the base unit. Ampère's force law states that there is an attractive or repulsive force between two parallel wires carrying an electric current; this force is used in the formal definition of the ampere. The SI unit of charge, the coulomb, is the quantity of electricity carried in 1 second by a current of 1 ampere. Conversely, a current of one ampere is one coulomb of charge going past a given point per second: 1 A = 1 C/s. In general, charge Q is determined by steady current I flowing for a time t as Q = It. Constant, instantaneous and average current are expressed in amperes, and the charge accumulated, or passed through a circuit over a period of time, is expressed in coulombs. The relation of the ampere to the coulomb is the same as that of the watt to the joule. The ampere was originally defined as one tenth of the unit of electric current in the centimetre–gram–second system of units. That unit, now known as the abampere, was defined as the amount of current that generates a force of two dynes per centimetre of length between two wires one centimetre apart. The size of the unit was chosen so that the units derived from it in the MKSA system would be conveniently sized. The international ampere was an early realization of the ampere, defined as the current that would deposit 0.001118 grams of silver per second from a silver nitrate solution.
Later, more accurate measurements revealed that this current is 0.99985 A. At present, techniques to establish the realization of an ampere have a relative uncertainty of approximately a few parts in 10^7, and involve realizations of the watt, the ohm and the volt. Rather than a definition in terms of the force between two current-carrying wires, it has been proposed that the ampere should be defined in terms of the rate of flow of elementary charges, since a coulomb is equal to approximately 6.2415093×10^18 elementary charges. The proposed change would define 1 A as being the current in the direction of flow of a particular number of elementary charges per second. In 2005, the International Committee for Weights and Measures agreed to study the proposed change. The new definition was discussed at the 25th General Conference on Weights and Measures in 2014 but for the time being was not adopted. The current drawn by typical constant-voltage energy distribution systems is usually dictated by the power consumed by the system; for this reason the examples given below are grouped by voltage level
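Ampère's force law mentioned above can be sketched as a small calculation: the force per unit length between two long parallel wires is F/L = μ0 I1 I2 / (2π d). Under the pre-2019 SI, two wires one metre apart each carrying one ampere were defined to exert exactly 2×10⁻⁷ newtons per metre on each other.

```python
import math

# Force per unit length between two long parallel wires (Ampere's force law):
# F/L = mu0 * I1 * I2 / (2 * pi * d)
mu0 = 4 * math.pi * 1e-7   # vacuum permeability, N/A^2 (exact pre-2019 value)

def force_per_length(i1_amps, i2_amps, separation_m):
    """Attractive (parallel currents) force per metre of wire, in N/m."""
    return mu0 * i1_amps * i2_amps / (2 * math.pi * separation_m)

# Two wires carrying 1 A, one metre apart: the historical SI definition
# fixed this force at 2e-7 newtons per metre of length.
print(force_per_length(1.0, 1.0, 1.0))
```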
16.
Tesla (unit)
–
The tesla is a unit of measurement of the strength of a magnetic field. It is a derived unit of the International System of Units. One tesla is equal to one weber per square metre. The unit was announced during the General Conference on Weights and Measures in 1960 and is named in honour of Nikola Tesla, upon the proposal of the Slovenian electrical engineer France Avčin. The strongest fields encountered from permanent magnets are from Halbach spheres; the strongest field trapped in a laboratory superconductor as of June 2014 is 21 T. The difference between electric and magnetic fields may be appreciated by looking at the units for each: the unit of electric field in the MKS system of units is newtons per coulomb, N/C, while the magnetic field can be written as N/(C·m/s), i.e. N·s/(C·m). The dividing factor between the two types of field is metres per second, which is a velocity. In ferromagnets, the movement creating the magnetic field is the electron spin; in a current-carrying wire the movement is due to electrons moving through the wire. One tesla is equivalent to: 10,000 gauss (G), used in the CGS system, thus 10 kG = 1 T and 1 G = 10^−4 T; 1,000,000,000 gamma (γ), used in geophysics, thus 1 γ = 1 nT; and 42.6 MHz of the 1H nucleus frequency per tesla, thus the magnetic field associated with NMR at 1 GHz is 23.5 T. One tesla is equal to 1 V·s/m², and this can be shown by starting with the speed of light in vacuum, c = (ε0 μ0)^−1/2, and inserting the SI values and units for c, the vacuum permittivity ε0, and the vacuum permeability μ0; cancellation of numbers and units then produces this relation. For those concerned with low-frequency electromagnetic radiation in the home, the following conversions are needed most: 1000 nT = 1 µT = 10 mG; 1,000,000 µT = 1 T. For the relation to the units of the magnetizing field, see the article on permeability
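The unit equivalences quoted above are exact by definition and can be written as trivial conversion helpers:

```python
# Conversions quoted in the text (exact by definition):
# 1 T = 10,000 G (gauss, CGS) and 1 gamma = 1 nT (geophysics).

def tesla_to_gauss(b_tesla):
    return b_tesla * 1e4

def gamma_to_tesla(b_gamma):
    return b_gamma * 1e-9

print(tesla_to_gauss(1.0))       # one tesla expressed in gauss
print(gamma_to_tesla(1e9))       # one billion gamma is one tesla
print(tesla_to_gauss(50e-6))     # Earth's ~50 uT field in gauss (~0.5 G)
```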
17.
Mole (unit)
–
The mole is the unit of measurement in the International System of Units for amount of substance. One mole contains as many elementary entities as there are atoms in 12 grams of carbon-12; this number is expressed by the Avogadro constant, which has a value of 6.022140857×10^23 mol^−1. The mole is one of the base units of the SI, and has the unit symbol mol. The mole is used in chemistry as a convenient way to express amounts of reactants and products of chemical reactions. For example, the chemical equation 2 H2 + O2 → 2 H2O implies that 2 moles of dihydrogen and 1 mole of dioxygen react to form 2 moles of water. The mole may also be used to express the number of atoms, ions, or other elementary entities in a sample. The concentration of a solution is commonly expressed by its molarity, defined as the number of moles of the dissolved substance per litre of solution. For example, the relative molecular mass of natural water is about 18.015; therefore, one mole of water has a mass of about 18.015 grams. The term gram-molecule was formerly used for essentially the same concept; the term gram-atom has been used for a related but distinct concept, namely a quantity of a substance that contains Avogadro's number of atoms, whether isolated or combined in molecules. Thus, for example, 1 mole of MgBr2 is 1 gram-molecule of MgBr2 but 3 gram-atoms of MgBr2. In honor of the unit, some chemists celebrate October 23, which is a reference to the 10^23 scale of the Avogadro constant, as Mole Day. Some also do the same for February 6 and June 2, which reference the digits 6.02. Thus, by definition, one mole of pure 12C has a mass of exactly 12 g. It also follows from the definition that X moles of any substance will contain the same number of molecules as X moles of any other substance. The mass per mole of a substance is called its molar mass. The number of elementary entities in a sample of a substance is technically called its amount; therefore, the mole is a convenient unit for that physical quantity. One can determine the chemical amount of a known substance, in moles, by dividing the sample's mass by the substance's molar mass.
Other methods include the use of the molar volume or the measurement of electric charge. The mass of one mole of a substance depends not only on its molecular formula, but also on the proportion of the isotopes of each element present in it. Since the definition of the gram is not mathematically tied to that of the atomic mass unit, the number NA of molecules in a mole must be determined experimentally. The value adopted by CODATA in 2010 is NA = 6.02214129×10^23 ± 0.00000027×10^23; in 2011 the measurement was refined to 6.02214078×10^23 ± 0.00000018×10^23. The number of moles of a sample is the sample mass divided by the molar mass of the material. The history of the mole is intertwined with that of molecular mass, the atomic mass unit and Avogadro's number. The first table of atomic masses was published by John Dalton in 1805
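The rule stated above, moles = mass / molar mass, and the link to the number of entities via the Avogadro constant, can be sketched as follows (the code uses the 2019 exact value of the constant, slightly different from the CODATA figures quoted in the text):

```python
# Amount of substance n = m / M, and number of entities N = n * N_A.
N_A = 6.02214076e23    # Avogadro constant, 1/mol (exact since the 2019 SI)

def moles(mass_g, molar_mass_g_per_mol):
    """Chemical amount in moles: sample mass over molar mass."""
    return mass_g / molar_mass_g_per_mol

def entities(mass_g, molar_mass_g_per_mol):
    """Number of elementary entities in the sample."""
    return moles(mass_g, molar_mass_g_per_mol) * N_A

# 36.03 g of water at ~18.015 g/mol is about two moles
n = moles(36.03, 18.015)
print(n, entities(36.03, 18.015))
```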
18.
Density
–
The density, or more precisely, the volumetric mass density, of a substance is its mass per unit volume. The symbol most often used for density is ρ (the lower case Greek letter rho), although the Latin letter D can also be used. Mathematically, density is defined as mass divided by volume: ρ = m/V, where ρ is the density, m is the mass, and V is the volume. In some cases, density is loosely defined as its weight per unit volume. For a pure substance the density has the same numerical value as its mass concentration. Different materials usually have different densities, and density may be relevant to buoyancy, purity and packaging. Osmium and iridium are the densest known elements at standard conditions for temperature and pressure, but certain chemical compounds may be denser. To simplify comparisons across different systems of units, density is sometimes replaced by the dimensionless relative density, the ratio of the density of a material to that of a standard material, usually water. Thus a relative density less than one means that the substance floats in water. The density of a material varies with temperature and pressure, and this variation is typically small for solids and liquids but much greater for gases. Increasing the pressure on an object decreases the volume of the object and thus increases its density; increasing the temperature of a substance generally decreases its density by increasing its volume. In most materials, heating the bottom of a fluid results in convection of the heat from the bottom to the top, because the heating lowers the density of the fluid at the bottom, causing it to rise relative to more dense unheated material. The reciprocal of the density of a substance is occasionally called its specific volume, a term sometimes used in thermodynamics. Density is an intensive property, in that increasing the amount of a substance does not increase its density. Archimedes knew that the irregularly shaped wreath could be crushed into a cube whose volume could be calculated easily and compared with the mass; upon this discovery, he leapt from his bath and ran naked through the streets shouting "Eureka!". As a result, the term eureka entered common parlance and is used today to indicate a moment of enlightenment. The story first appeared in written form in Vitruvius' books of architecture, two centuries after it supposedly took place.
Some scholars have doubted the accuracy of this tale, saying among other things that the method would have required precise measurements that would have been difficult to make at the time. From the equation for density, mass density has units of mass divided by volume. As there are units of mass and volume covering many different magnitudes, there are a large number of units for mass density in use. The SI unit of kilogram per cubic metre and the cgs unit of gram per cubic centimetre are probably the most commonly used units for density; 1,000 kg/m³ equals 1 g/cm³. In industry, other larger or smaller units of mass and/or volume are often more practical; see below for a list of some of the most common units of density
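The definition ρ = m/V and the kg/m³ to g/cm³ conversion quoted above amount to a one-line calculation each:

```python
# rho = m / V, with the conversion 1,000 kg/m^3 = 1 g/cm^3 from the text.

def density(mass_kg, volume_m3):
    """Mass density in kg/m^3."""
    return mass_kg / volume_m3

def kg_per_m3_to_g_per_cm3(rho_kg_m3):
    return rho_kg_m3 / 1000.0

# One cubic metre of water has a mass of about 1000 kg
rho_water = density(1000.0, 1.0)
print(rho_water, kg_per_m3_to_g_per_cm3(rho_water))   # 1000.0 1.0
```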
19.
Ferromagnetism
–
Not to be confused with Ferrimagnetism; for an overview see Magnetism. Ferromagnetism is the basic mechanism by which certain materials form permanent magnets. In physics, several different types of magnetism are distinguished. An everyday example of ferromagnetism is a refrigerator magnet used to hold notes on a refrigerator door. The attraction between a magnet and ferromagnetic material is the quality of magnetism first apparent to the ancient world. Permanent magnets are either ferromagnetic or ferrimagnetic, as are the materials that are noticeably attracted to them. Only a few substances are ferromagnetic; the common ones are iron, nickel, cobalt and most of their alloys, some compounds of rare earth metals, and a few naturally occurring minerals, including some varieties of lodestone. Historically, the term ferromagnetism was used for any material that could exhibit spontaneous magnetization, and this general definition is still in common use. In particular, a material is ferromagnetic in the narrower sense only if all of its magnetic ions add a positive contribution to the net magnetization. If some of the magnetic ions subtract from the net magnetization (if they are partially anti-aligned), then the material is ferrimagnetic. If the moments of the aligned and anti-aligned ions balance completely so as to have zero net magnetization, despite the magnetic ordering, then it is an antiferromagnet. These alignment effects only occur at temperatures below a certain critical temperature. Among the first investigations of ferromagnetism are the works of Aleksandr Stoletov on measurement of the magnetic permeability of ferromagnetics. The table on the right lists a selection of ferromagnetic and ferrimagnetic compounds. Ferromagnetism is a property not just of the chemical make-up of a material, but of its crystalline structure and microstructure.
There are ferromagnetic metal alloys whose constituents are not themselves ferromagnetic, called Heusler alloys; conversely there are non-magnetic alloys, such as types of stainless steel, composed almost exclusively of ferromagnetic metals. Amorphous ferromagnetic metallic alloys can be made by rapid quenching of a liquid alloy. These have the advantage that their properties are nearly isotropic; this results in low coercivity, low hysteresis loss and high permeability. One such typical material is a transition metal–metalloid alloy, made from about 80% transition metal and a metalloid component. A relatively new class of exceptionally strong ferromagnetic materials are the rare-earth magnets. They contain lanthanide elements that are known for their ability to carry large magnetic moments in well-localized f-orbitals. A number of actinide compounds are ferromagnets at room temperature or exhibit ferromagnetism upon cooling. PuP is a paramagnet with cubic symmetry at room temperature; in its ferromagnetic state, PuP's easy axis is in the <100> direction. In NpFe2 the easy axis is <111>; above TC ≈ 500 K, NpFe2 is also paramagnetic and cubic
20.
Ferrimagnetism
–
In a ferrimagnetic material, populations of atoms have opposing magnetic moments, as in antiferromagnetism, but the opposing moments are unequal, so a spontaneous magnetization remains. This happens when the populations consist of different materials or ions. Ferrimagnetism is exhibited by ferrites and magnetic garnets. The oldest known magnetic substance, magnetite, is a ferrimagnet; it was originally classified as a ferromagnet before Néel's discovery of ferrimagnetism and antiferromagnetism in 1948. Ferrimagnetic materials are like ferromagnets in that they hold a spontaneous magnetization below the Curie temperature. However, there is sometimes a temperature below the Curie temperature at which the opposing moments are equal, giving zero net magnetization; this is called the magnetization compensation point, and it is observed easily in garnets and rare earth–transition metal alloys. Furthermore, ferrimagnets may also have an angular momentum compensation point, at which the net angular momentum vanishes; this compensation point is a crucial point for achieving high speed magnetization reversal in magnetic memory devices. Ferrimagnetic materials have high resistivity and have anisotropic properties. The anisotropy is actually induced by an external applied field. When this interaction is strong, the signal can pass through the material. This directional property is used in the construction of devices like isolators, circulators and gyrators. Ferrimagnetic materials are used to produce optical isolators and circulators. Ferrimagnetic minerals in various rock types are used to study ancient geomagnetic properties of Earth; that field of study is known as paleomagnetism. Ferrimagnetism can also occur in molecular magnets. A classic example is a dodecanuclear manganese molecule with an effective spin of S = 10 derived from antiferromagnetic interaction of Mn(IV) metal centres with Mn(III) metal centres
21.
Antiferromagnetic
–
Antiferromagnetism is, like ferromagnetism and ferrimagnetism, a manifestation of ordered magnetism; in an antiferromagnet, the magnetic moments of neighboring atoms or ions align in a regular pattern pointing in opposite directions. Generally, antiferromagnetic order may exist at sufficiently low temperatures, vanishing at and above a certain temperature, the Néel temperature. Above the Néel temperature, the material is typically paramagnetic. When no external field is applied, the antiferromagnetic structure corresponds to a vanishing total magnetization. Although the net magnetization should be zero at a temperature of absolute zero, the magnetic susceptibility of an antiferromagnetic material typically shows a maximum at the Néel temperature. In contrast, at the transition between the ferromagnetic and the paramagnetic phases the susceptibility will diverge; in the antiferromagnetic case, a divergence is observed in the staggered susceptibility. Various microscopic interactions between the magnetic moments or spins may lead to antiferromagnetic structures. In the simplest case, one may consider an Ising model on a bipartite lattice, e.g. the simple cubic lattice, with couplings between spins at nearest neighbor sites. Depending on the sign of that interaction, ferromagnetic or antiferromagnetic order will result. Geometrical frustration or competing ferro- and antiferromagnetic interactions may lead to different and, perhaps, more complicated magnetic structures. Antiferromagnetic materials occur commonly among transition metal compounds, especially oxides; examples include hematite, metals such as chromium, alloys such as iron manganese, and oxides such as nickel oxide. There are also numerous examples among high nuclearity metal clusters. Organic molecules can also exhibit antiferromagnetic coupling under rare circumstances, as seen in radicals such as 5-dehydro-m-xylylene. Unlike ferromagnetism, antiferromagnetic interactions can lead to multiple optimal states. In one dimension, the antiferromagnetic ground state is an alternating series of spins: up, down, up, down, etc.
Yet in two dimensions, multiple ground states can occur. Consider an equilateral triangle with three spins, one on each vertex. If each spin can take on two values, there are 2^3 = 8 possible states of the system, six of which are ground states. The two situations which are not ground states are when all three spins are up or all are down. In any of the six ground states, there will be two favorable interactions and one unfavorable one. This illustrates frustration: the inability of the system to find a single ground state. This type of magnetic behavior has been found in minerals that have a crystal stacking structure such as a Kagome lattice or hexagonal lattice. Synthetic antiferromagnets are artificial antiferromagnets consisting of two or more thin ferromagnetic layers separated by a nonmagnetic layer; dipole coupling of the ferromagnetic layers results in antiparallel alignment of the magnetization of the ferromagnets. Antiferromagnetism plays a crucial role in giant magnetoresistance, as discovered in 1988 by the Nobel prize winners Albert Fert and Peter Grünberg
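The triangle argument above can be checked by brute force: enumerate all 2³ spin states, compute the antiferromagnetic Ising energy over the three bonds, and count how many states tie for the minimum.

```python
from itertools import product

# Frustrated triangle: three Ising spins s_i = +/-1 on the vertices,
# antiferromagnetic coupling J > 0, energy E = J * sum of s_i * s_j
# over the three bonds (antiparallel neighbors lower the energy).
J = 1.0
states = list(product([+1, -1], repeat=3))
energies = [J * (s[0]*s[1] + s[1]*s[2] + s[0]*s[2]) for s in states]

e_min = min(energies)
ground_states = [s for s, e in zip(states, energies) if e == e_min]

# 8 states in total; the all-up and all-down states cost +3J, while the
# remaining six states (two favorable bonds, one unfavorable) tie at -J.
print(len(states), len(ground_states), e_min)
```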
22.
Gouy balance
–
The Gouy balance, invented by Louis Georges Gouy, is a device for measuring the magnetic susceptibility of a sample. Amongst a wide range of interests in optics and Brownian motion, Gouy derived in 1889 a mathematical expression showing that the force on a sample is proportional to its volume susceptibility for the interaction of material in a uniform magnetic field. From this derivation, Gouy proposed that balance measurements taken for tubes of material suspended in a field could evaluate his expression for volume susceptibility. Though Gouy never tested the suggestion himself, this simple and inexpensive method became the foundation for measuring magnetic susceptibility. The Gouy balance measures the apparent change in the mass of the sample as it is repelled or attracted by the region of high magnetic field between the poles. Some commercially available balances have a port at their base for this application. In use, a long, cylindrical sample to be tested is suspended from a balance, partially entering between the poles of a magnet. The sample can be in solid or liquid form, and is placed in a cylindrical container such as a test tube. Solid compounds are generally ground into a powder to allow for uniformity amongst the sample. The sample is suspended between the poles by an attached thread or string. The experimental procedure requires two separate readings to be performed: an initial balance reading is taken on the sample of interest without a magnetic field, and a subsequent balance reading is taken with an applied magnetic field. The difference between the two readings relates to the magnetic force on the sample; the apparent change in mass from the two readings is a result of the magnetic force on the sample. The magnetic force is applied across the gradient of a strong, inhomogeneous field. A sample with a paramagnetic compound will be pulled down towards the magnet, and provide a positive difference in apparent mass mb − ma.
Diamagnetic compounds can exhibit either no apparent change in weight or a negative change, as the sample is slightly repelled by the applied magnetic field. With a paramagnetic sample, the magnetic induction is stronger than the applied field; a diamagnetic sample has a magnetic induction weaker than the applied field. The sample can also be enclosed in a thermostat in order to make measurements at different temperatures. Since it requires a large and powerful electromagnet, the Gouy balance is a stationary instrument permanently set up on a bench. The apparatus is placed on a marble balance table in a non-ventilated room to minimize vibrations
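The two-reading procedure above can be turned into a number. A common form of Gouy's expression gives the force on the sample column as F = χᵥ A B² / (2μ0), where A is the tube cross-section and B the field at the lower end; equating F with g·Δm (the apparent mass change) gives χᵥ. The sample values below are purely illustrative, not from the article.

```python
import math

# Volume susceptibility from a Gouy balance reading:
# F = chi_v * A * B^2 / (2 * mu0) and F = g * delta_m, so
# chi_v = 2 * mu0 * g * delta_m / (A * B^2).
mu0 = 4 * math.pi * 1e-7   # vacuum permeability, N/A^2
g = 9.81                   # gravitational acceleration, m/s^2

def volume_susceptibility(delta_m_kg, area_m2, b_tesla):
    """Dimensionless volume susceptibility chi_v (SI convention)."""
    return 2 * mu0 * g * delta_m_kg / (area_m2 * b_tesla ** 2)

# Illustrative paramagnetic reading: +15 mg apparent gain (mb - ma),
# 5 mm^2 tube cross-section, 0.5 T field between the poles.
chi = volume_susceptibility(15e-6, 5e-6, 0.5)
print(chi)   # positive, as expected for a paramagnet
```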
23.
Superconductivity
–
Superconductivity was discovered by Dutch physicist Heike Kamerlingh Onnes on April 8, 1911, in Leiden. Like ferromagnetism and atomic spectral lines, superconductivity is a quantum mechanical phenomenon. It is characterized by the Meissner effect, the ejection of magnetic field lines from the interior of the superconductor as it transitions into the superconducting state. The occurrence of the Meissner effect indicates that superconductivity cannot be understood simply as the idealization of perfect conductivity in classical physics. The electrical resistance of a metallic conductor decreases gradually as temperature is lowered. In ordinary conductors, such as copper or silver, this decrease is limited by impurities and other defects; even near absolute zero, a sample of a normal conductor shows some resistance. In a superconductor, the resistance drops abruptly to zero when the material is cooled below its critical temperature, and an electric current flowing through a loop of superconducting wire can persist indefinitely with no power source. In 1986, it was discovered that some cuprate-perovskite ceramic materials have a critical temperature above 90 K. Such a high transition temperature is theoretically impossible for a conventional superconductor. There are many criteria by which superconductors are classified. By theory of operation, a superconductor is conventional if it can be explained by the BCS theory or its derivatives, and unconventional otherwise. By material, superconductor material classes include chemical elements, alloys and ceramics. On the other hand, there is a class of properties that are independent of the underlying material. For instance, all superconductors have exactly zero resistivity to low applied currents when there is no magnetic field present or if the applied field does not exceed a critical value. The resistance of the sample is given by Ohm's law as R = V/I; if the voltage is zero, this means that the resistance is zero.
Superconductors are also able to maintain a current with no applied voltage whatsoever; experiments have demonstrated that currents in superconducting coils can persist for years without any measurable degradation. Experimental evidence points to a current lifetime of at least 100,000 years, and theoretical estimates for the lifetime of a persistent current can exceed the estimated lifetime of the universe, depending on the wire geometry and the temperature. In a normal conductor, an electric current may be visualized as a fluid of electrons moving across an ionic lattice. The electrons constantly collide with the ions in the lattice, and as a result the energy carried by the current is constantly being dissipated; this is the phenomenon of electrical resistance and Joule heating. The situation is different in a superconductor: in a conventional superconductor, the electronic fluid cannot be resolved into individual electrons
24.
Nuclear magnetic resonance
–
Nuclear magnetic resonance (NMR) is a physical phenomenon in which nuclei in a magnetic field absorb and re-emit electromagnetic radiation. NMR allows the observation of specific quantum mechanical magnetic properties of the atomic nucleus. Many scientific techniques exploit NMR phenomena to study molecular physics, crystals and non-crystalline materials. NMR is also used in advanced medical imaging techniques, such as in magnetic resonance imaging. The most commonly studied nuclei are 1H and 13C, although nuclei from isotopes of other elements have been studied by high-field NMR spectroscopy as well. A key feature of NMR is that the resonance frequency of a particular substance is directly proportional to the strength of the applied magnetic field. Since the resolution of the technique depends on the magnitude of the magnetic field gradient, many efforts are made to develop increased field strength. The effectiveness of NMR can also be improved using hyperpolarization, and/or using two-dimensional, three-dimensional and higher-dimensional multi-frequency techniques. The principle of NMR usually involves two sequential steps: the alignment of the magnetic nuclear spins in an applied, constant magnetic field B0, and the perturbation of this alignment of the nuclear spins by employing an electromagnetic, usually radio frequency, pulse. The required perturbing frequency is dependent upon the static magnetic field and the nuclei of observation. The two fields are chosen to be perpendicular to each other as this maximizes the NMR signal strength. The resulting response by the magnetization of the nuclear spins is the phenomenon that is exploited in NMR spectroscopy. NMR phenomena are also utilized in low-field NMR, NMR spectroscopy and MRI in the Earth's magnetic field. In 1946, Felix Bloch and Edward Mills Purcell expanded the technique for use on liquids and solids, for which they shared the Nobel Prize in Physics in 1952. Yevgeny Zavoisky likely observed nuclear magnetic resonance in 1941, well before Felix Bloch and Edward Mills Purcell. Russell H.
Varian filed "Method and means for correlating nuclear properties of atoms and magnetic fields", U.S. Patent 2,561,490, on July 24, 1951. Varian Associates developed the first NMR unit, called NMR HR-30, in 1952. Purcell had worked on the development of radar during World War II at the Massachusetts Institute of Technology's Radiation Laboratory. His work during that project on the production and detection of radio frequency power, and on the absorption of such RF power by matter, laid the foundation for his discovery of NMR in bulk matter. Nuclei absorb electromagnetic radiation at a frequency characteristic of the isotope; when this absorption occurs, the nucleus is described as being in resonance. Different atomic nuclei within a molecule resonate at different frequencies for the same magnetic field strength. The observation of the magnetic resonance frequencies of the nuclei present in a molecule allows any trained user to discover essential chemical and structural information about the molecule
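The proportionality described above, resonance frequency scaling linearly with field strength, can be sketched for the proton using the roughly 42.6 MHz-per-tesla figure commonly quoted for 1H:

```python
# Larmor-type relation: resonance frequency f = gamma_bar * B, with
# gamma_bar ~ 42.58 MHz/T for the 1H nucleus (approximate literature value).
GAMMA_BAR_1H_MHZ_PER_T = 42.58

def proton_frequency_mhz(b_tesla):
    """1H resonance frequency in MHz for a static field B in teslas."""
    return GAMMA_BAR_1H_MHZ_PER_T * b_tesla

print(proton_frequency_mhz(1.0))    # ~42.6 MHz at 1 T
print(proton_frequency_mhz(23.5))   # ~1000 MHz (a "1 GHz" spectrometer)
```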
25.
Crystal
–
A crystal or crystalline solid is a solid material whose constituents (such as atoms, molecules or ions) are arranged in a highly ordered microscopic structure, forming a crystal lattice that extends in all directions. In addition, macroscopic single crystals are usually identifiable by their geometrical shape. The scientific study of crystals and crystal formation is known as crystallography. The process of crystal formation via mechanisms of crystal growth is called crystallization or solidification. The word crystal derives from the Ancient Greek word κρύσταλλος, meaning both ice and rock crystal, from κρύος, icy cold, frost. Examples of large crystals include snowflakes, diamonds, and table salt. Most inorganic solids are not crystals but polycrystals, i.e. many microscopic crystals fused together into a single solid. Examples of polycrystals include most metals, rocks and ceramics. A third category of solids is amorphous solids, where the atoms have no periodic structure whatsoever. Examples of amorphous solids include glass, wax, and many plastics. Crystals are often used in pseudoscientific practices such as crystal therapy, and, along with gemstones, are sometimes associated with spellwork in Wiccan beliefs and related religious movements. The scientific definition of a crystal is based on the microscopic arrangement of atoms inside it. A crystal is a solid where the atoms form a periodic arrangement. For example, when liquid water starts freezing, the phase change begins with small ice crystals that grow until they fuse, forming a polycrystalline structure. Most macroscopic inorganic solids are polycrystalline, including almost all metals, ceramics, ice and rocks. Solids that are neither crystalline nor polycrystalline, such as glass, are called amorphous solids, also called glassy, vitreous, or noncrystalline. These have no periodic order, even microscopically. There are distinct differences between crystalline solids and amorphous solids; most notably, the process of forming a glass does not release the latent heat of fusion, but forming a crystal does.
A crystal structure is characterized by its unit cell, a small imaginary box containing one or more atoms in a specific spatial arrangement. The unit cells are stacked in three-dimensional space to form the crystal. The symmetry of a crystal is constrained by the requirement that the unit cells stack perfectly with no gaps. There are 219 possible crystal symmetries, called space groups, which are grouped into 7 crystal systems, such as the cubic crystal system or the hexagonal crystal system. Crystals are commonly recognized by their shape, consisting of flat faces with sharp angles. Euhedral crystals are those with obvious, well-formed flat faces; anhedral crystals lack them, usually because the crystal is one grain in a polycrystalline solid. The flat faces of a crystal are oriented in a specific way relative to the underlying atomic arrangement of the crystal. This occurs because some surface orientations are more stable than others: as a crystal grows, new atoms attach easily to the rougher and less stable parts of the surface, but less easily to the flat, stable surfaces.
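The idea of stacking unit cells to build a lattice can be sketched numerically. The snippet below is a toy illustration (the `lattice_points` helper and the body-centred-cubic basis values are assumptions for the example, not data for a real material): it translates the cell's atom basis by integer multiples of the cell edges along three perpendicular axes.

```python
import itertools

def lattice_points(basis, cell_edges, n):
    """Generate atom positions by stacking a unit cell n times along each axis.

    basis: fractional atom positions inside one unit cell.
    cell_edges: edge lengths of the (orthorhombic) cell, one per axis.
    """
    points = []
    for i, j, k in itertools.product(range(n), repeat=3):
        for bx, by, bz in basis:
            points.append(((i + bx) * cell_edges[0],
                           (j + by) * cell_edges[1],
                           (k + bz) * cell_edges[2]))
    return points

# Body-centred cubic basis: a corner atom plus a centre atom (illustrative).
bcc_basis = [(0.0, 0.0, 0.0), (0.5, 0.5, 0.5)]
atoms = lattice_points(bcc_basis, (1.0, 1.0, 1.0), 3)
print(len(atoms))  # 3*3*3 cells x 2 atoms per cell = 54
```

Because the same basis repeats in every cell, the resulting point set is periodic by construction, which is exactly the defining property of a crystal.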
26.
Tensor
–
In mathematics, tensors are geometric objects that describe linear relations between geometric vectors, scalars, and other tensors. Elementary examples of such relations include the dot product and the cross product. Geometric vectors, often used in physics and engineering applications, are themselves tensors. Given a coordinate basis or fixed frame of reference, a tensor can be represented as an organized multidimensional array of numerical values. The order of a tensor is the dimensionality of the array needed to represent it, or equivalently, the number of indices needed to label a component of that array. For example, a linear map is represented by a matrix in a basis, and therefore is a 2nd-order tensor. A vector is represented as a 1-dimensional array in a basis, and scalars are single numbers and are thus 0th-order tensors. Because they express a relationship between vectors, tensors themselves must be independent of a choice of coordinate system; their components obey a transformation law when the basis changes. The precise form of the transformation law determines the type of the tensor. The tensor type is a pair of natural numbers (n, m), where n is the number of contravariant indices and m is the number of covariant indices; the total order of the tensor is the sum of these two numbers. The concept enabled an alternative formulation of the differential geometry of a manifold in the form of the Riemann curvature tensor. There are several approaches to defining tensors; although seemingly different, the approaches just describe the same geometric concept using different languages and at different levels of abstraction. For example, a linear operator is represented in a basis as a two-dimensional square n × n array. The numbers in the array are known as the scalar components of the tensor or simply its components. They are denoted by indices giving their position in the array, as subscripts and superscripts. For example, the components of an order-2 tensor T could be denoted Tij; whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below.
The total number of indices required to identify each component uniquely is equal to the dimension of the array, and is called the order or rank of the tensor. However, the term rank generally has another meaning in the context of matrices. Just as the components of a vector change when we change the basis of the vector space, the components of a tensor also change under such a transformation. Each tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis.
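The transformation law can be made concrete for a 2nd-order (1,1)-tensor, i.e. a linear map. In the sketch below (NumPy is assumed available; the matrices are arbitrary example values), the components change under a change of basis as T' = P⁻¹TP, while basis-independent quantities such as the trace and determinant do not change:

```python
import numpy as np

# A linear map is a (1,1)-tensor: its matrix of components depends on the
# chosen basis, transforming as T' = P^{-1} T P for a change-of-basis P.
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])        # components in the original basis
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # columns = new basis vectors in old coords
T_new = np.linalg.inv(P) @ T @ P  # components in the new basis

# The arrays of components differ, but basis-independent scalars agree:
print(round(np.trace(T), 6), round(np.trace(T_new), 6))
print(round(np.linalg.det(T), 6), round(np.linalg.det(T_new), 6))
```

This is the sense in which a tensor is "independent of a choice of coordinate system": the numerical components vary, but they vary in exactly the way the transformation law prescribes, so all invariant quantities built from them are unchanged.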
27.
Cartesian coordinates
–
Each reference line is called a coordinate axis or just axis of the system, and the point where they meet is its origin, usually at the ordered pair (0, 0). The coordinates can also be defined as the positions of the perpendicular projections of the point onto the two axes, expressed as signed distances from the origin. One can use the same principle to specify the position of any point in three-dimensional space by three Cartesian coordinates, its signed distances to three mutually perpendicular planes. In general, n Cartesian coordinates specify the point in an n-dimensional Euclidean space for any dimension n, and these coordinates are equal, up to sign, to distances from the point to n mutually perpendicular hyperplanes. The invention of Cartesian coordinates in the 17th century by René Descartes revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra. Using the Cartesian coordinate system, geometric shapes can be described by Cartesian equations: algebraic equations involving the coordinates of the points lying on the shape. For example, a circle of radius 2, centered at the origin of the plane, may be described as the set of all points whose coordinates x and y satisfy the equation x² + y² = 4. A familiar example is the concept of the graph of a function. Cartesian coordinates are also essential tools for most applied disciplines that deal with geometry, including astronomy, physics, and engineering. They are the most common coordinate system used in computer graphics and computer-aided geometric design. Nicole Oresme, a French cleric and friend of the Dauphin in the 14th century, used constructions similar to Cartesian coordinates well before the time of Descartes. The adjective Cartesian refers to the French mathematician and philosopher René Descartes, who published this idea in 1637; it was independently discovered by Pierre de Fermat, who also worked in three dimensions, although Fermat did not publish the discovery. Both authors used a single axis in their treatments and have a variable length measured in reference to this axis.
The concept of using a pair of axes was introduced later, after Descartes' La Géométrie was translated into Latin in 1649 by Frans van Schooten and his students; these commentators introduced several concepts while trying to clarify the ideas contained in Descartes' work. Many other coordinate systems have been developed since Descartes, such as the polar coordinates for the plane. The development of the Cartesian coordinate system would play a fundamental role in the development of the calculus by Isaac Newton. The two-coordinate description of the plane was later generalized into the concept of vector spaces. Choosing a Cartesian coordinate system for a one-dimensional space, that is, for a straight line, involves choosing a point O of the line, a unit of length, and an orientation for the line. An orientation chooses which of the two half-lines determined by O is the positive half, and which is negative; we then say that the line is oriented from the negative half towards the positive half.
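As a small worked example of a Cartesian equation, the membership test for the circle x² + y² = 4 can be written directly in code (the `on_circle` helper below is purely illustrative):

```python
import math

def on_circle(x, y, r=2.0, tol=1e-9):
    """A point lies on the circle of radius r centred at the origin
    exactly when its Cartesian coordinates satisfy x**2 + y**2 == r**2."""
    return abs(x * x + y * y - r * r) < tol

print(on_circle(2.0, 0.0))                        # True: on the x-axis
print(on_circle(math.sqrt(2.0), math.sqrt(2.0)))  # True: 2 + 2 = 4
print(on_circle(1.0, 1.0))                        # False: 1 + 1 != 4
```

The tolerance is needed only because floating-point coordinates are approximate; the algebraic condition itself is exact.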
28.
Partial derivative
–
In mathematics, the symmetry of second derivatives refers to the possibility, under certain conditions, of interchanging the order of taking partial derivatives of a function f of n variables. This is sometimes known as Schwarz's theorem or Young's theorem; in the context of partial differential equations it is called the Schwarz integrability condition. The matrix of second partial derivatives of f is called the Hessian matrix of f, and the entries in it off the diagonal are the mixed derivatives. In most real-life circumstances the Hessian matrix is symmetric, although there are a number of functions that do not have this property. Mathematical analysis reveals that symmetry requires a hypothesis on f that goes further than simply stating the existence of the derivatives at a particular point; Schwarz's theorem gives a sufficient condition on f for this to occur. In symbols, the symmetry says that, for example, ∂/∂x (∂f/∂y) = ∂/∂y (∂f/∂x). This equality can also be written as ∂xy f = ∂yx f. Alternatively, the symmetry can be written as an algebraic statement involving the differential operator Di, which takes the partial derivative with respect to xi: Di Dj = Dj Di. From this relation it follows that the ring of differential operators with constant coefficients generated by the Di is commutative. But one should naturally specify some domain for these operators, and it is easy to check the symmetry as applied to monomials, so that one can take polynomials in the xi as a domain. In fact, smooth functions are another valid domain: if the second partial derivatives of f are continuous at a point, the partial differentiations of f are commutative at that point. One easy way to establish this theorem is by applying Green's theorem to the gradient of f. A weaker condition than the continuity of second partial derivatives, which nevertheless suffices to ensure symmetry, is that all first partial derivatives are themselves differentiable.
The theory of distributions eliminates analytic problems with the symmetry: the derivative of an integrable function can always be defined as a distribution, and symmetry of mixed partial derivatives always holds as an equality of distributions. The use of integration by parts to define differentiation of distributions puts the symmetry question back onto the test functions, which are smooth. In more detail, for a distribution f and test function φ, ⟨∂x∂y f, φ⟩ = −⟨∂y f, ∂x φ⟩ = ⟨f, ∂y∂x φ⟩ = ⟨f, ∂x∂y φ⟩ = −⟨∂x f, ∂y φ⟩ = ⟨∂y∂x f, φ⟩. Another approach, which defines the Fourier transform of a function, is to note that on such transforms partial derivatives become multiplication operators that commute much more obviously. The symmetry may be broken if the function fails to have differentiable partial derivatives. A classic example of non-symmetry is the function f(x, y) = xy(x² − y²)/(x² + y²), with f(0, 0) = 0. This function is everywhere continuous; however, the second partial derivatives are not continuous at (0, 0), and the symmetry fails there. In fact, along the x-axis the y-derivative is ∂y f|(x, 0) = x, and vice versa, along the y-axis the x-derivative is ∂x f|(0, y) = −y, so that ∂x∂y f|(0, 0) = 1 while ∂y∂x f|(0, 0) = −1.
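The failure of symmetry for this kind of function can be checked numerically with finite differences. The sketch below estimates both mixed partials of the standard counterexample at the origin (the step sizes are arbitrary choices):

```python
def f(x, y):
    # Standard counterexample: continuous everywhere, but its mixed second
    # partials at the origin disagree because they are not continuous there.
    if x == 0.0 and y == 0.0:
        return 0.0
    return x * y * (x * x - y * y) / (x * x + y * y)

h, k = 1e-4, 1e-7   # outer and inner finite-difference steps (arbitrary)

def fy(x):  # central-difference estimate of df/dy at (x, 0)
    return (f(x, k) - f(x, -k)) / (2 * k)

def fx(y):  # central-difference estimate of df/dx at (0, y)
    return (f(k, y) - f(-k, y)) / (2 * k)

dxdy = (fy(h) - fy(-h)) / (2 * h)  # d/dx (df/dy) at origin -> about +1
dydx = (fx(h) - fx(-h)) / (2 * h)  # d/dy (df/dx) at origin -> about -1
print(round(dxdy, 3), round(dydx, 3))
```

The two estimates converge to +1 and −1 respectively, confirming that the hypotheses of Schwarz's theorem really are needed.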
29.
Coercivity
–
An analogous property, electric coercivity, is the ability of a ferroelectric material to withstand an external electric field without becoming depolarized. Thus coercivity measures the resistance of a material to becoming demagnetized. Coercivity is usually measured in oersted or ampere/meter units and is denoted HC; it can be measured using a B-H analyzer or magnetometer. Ferromagnetic materials with high coercivity are called magnetically hard materials, and are used to make permanent magnets. Materials with low coercivity are said to be magnetically soft; the latter are used in transformer and inductor cores, recording heads, microwave devices, and magnetic shielding. Typically the coercivity of a material is determined by measurement of the magnetic hysteresis loop, also called the magnetization curve. The apparatus used to acquire the data is typically a vibrating-sample or alternating-gradient magnetometer; the applied field where the data line crosses zero is the coercivity. If an antiferromagnet is present in the sample, the coercivities measured in increasing and decreasing fields may be unequal as a result of the exchange bias effect. The coercivity of a material also depends on the time scale over which a magnetization curve is measured: the magnetization of a material measured at an applied reversed field which is smaller than the coercivity may, over a long time scale, slowly relax to zero. Relaxation occurs when reversal of magnetization by domain wall motion is thermally activated and is dominated by magnetic viscosity. At the coercive field, the vector component of the magnetization of a ferromagnet measured along the applied field direction is zero. There are two primary modes of magnetization reversal: single-domain rotation and domain wall motion. When the magnetization of a material reverses by rotation, the magnetization component along the applied field is zero because the vector points in a direction orthogonal to the applied field.
When the magnetization reverses by domain wall motion, the net magnetization is small in every vector direction because the moments of all the individual domains sum to zero. Magnetization curves dominated by rotation and magnetocrystalline anisotropy are found in relatively perfect magnetic materials used in fundamental research. The role of domain walls in determining coercivity is complicated, since defects may pin domain walls in addition to nucleating them. The dynamics of domain walls in ferromagnets is similar to that of grain boundaries in metallurgy. Common dissipative processes in magnetic materials include magnetostriction and domain wall motion. The coercivity is a measure of the degree of magnetic hysteresis; the squareness of the hysteresis loop and the coercivity are figures of merit for hard magnets, although the energy product is most commonly quoted. The 1980s saw the development of rare-earth magnets with high energy products, and since the 1990s new exchange-spring hard magnets with high coercivities have been developed.
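Extracting a coercivity from measured loop data amounts to locating the zero crossing of the magnetization on the demagnetizing branch. A minimal sketch, using made-up (H, M) samples rather than real material data:

```python
# Estimate coercivity from sampled (H, M) points on one branch of a
# hysteresis loop: the applied field at which magnetization crosses zero.
# The data below are made-up illustrative values, not a real material.
H = [-40.0, -30.0, -20.0, -10.0, 0.0, 10.0]        # applied field, kA/m
M = [-950.0, -900.0, -600.0, 100.0, 700.0, 900.0]  # magnetization, kA/m

def coercive_field(H, M):
    for i in range(len(M) - 1):
        if M[i] <= 0.0 <= M[i + 1]:  # sign change: interpolate the crossing
            t = -M[i] / (M[i + 1] - M[i])
            return H[i] + t * (H[i + 1] - H[i])
    raise ValueError("no zero crossing in data")

Hc = coercive_field(H, M)
print(round(Hc, 3))  # -11.429, i.e. a coercivity of about 11.4 kA/m
```

Real instruments sample the loop far more densely, but the principle is the same: the coercivity is the magnitude of the applied field at the zero crossing.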
30.
Saturation (magnetic)
–
Saturation is a characteristic of ferromagnetic and ferrimagnetic materials, such as iron, nickel, cobalt, and their alloys. Saturation is most clearly seen in the magnetization curve of a substance: as the H field increases, the B field approaches a maximum value asymptotically. Technically, above saturation the B field continues increasing, but at the paramagnetic rate, which is several orders of magnitude smaller than the ferromagnetic rate seen below saturation. The permeability of ferromagnetic materials is not constant, but depends on H: in saturable materials the relative permeability increases with H to a maximum, then as the material approaches saturation it inverts and decreases toward one. Different materials have different saturation levels. For example, high-permeability iron alloys used in transformers reach magnetic saturation at 1.6–2.2 teslas, whereas ferrites saturate at 0.2–0.5 T. Some amorphous alloys saturate at 1.2–1.3 T, and mu-metal saturates at around 0.8 T. Ferromagnetic materials are composed of microscopic regions called magnetic domains that act like tiny permanent magnets and can change their direction of magnetization. The stronger the magnetic field H, the more the domains align. When nearly all the domains are aligned, further increases in the applied field cannot cause further alignment; the magnetization remains nearly constant, and the material is said to have saturated. The domain structure at saturation depends on the temperature. Saturation puts a limit of around 2 T on the maximum magnetic fields achievable in ferromagnetic-core electromagnets and transformers, which is one reason why high-power motors, generators, and utility transformers are physically large. In electronic circuits, transformers and inductors with ferromagnetic cores operate nonlinearly when the current through them is large enough to drive their core materials into saturation. This means that their inductance and other properties vary with changes in drive current; in linear circuits this is usually considered an unwanted departure from ideal behavior.
When AC signals are applied, this nonlinearity can cause the generation of harmonics. To prevent this, the level of signals applied to iron-core inductors must be limited so they don't saturate. To lower its effects, an air gap is created in some kinds of transformer cores. The saturation current, the current through the winding required to saturate the magnetic core, is given by manufacturers in the specifications for many inductors and transformers. On the other hand, saturation is exploited in some electronic devices. Saturation is employed to limit current in saturable-core transformers used in arc welding: when the primary current exceeds a certain value, the core is pushed into its saturation region, limiting further increases in secondary current. In a more sophisticated application, saturable-core inductors and magnetic amplifiers use a DC current through a separate winding to control an inductor's impedance; varying the current in the control winding moves the operating point up and down in the saturation curve. These are used in variable fluorescent light ballasts and power control systems. Magnetic saturation is also exploited in fluxgate magnetometers and fluxgate compasses.
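The collapse of permeability at saturation can be sketched with a toy B(H) model. The arctan shape and the parameters below are illustrative assumptions, not a fit to any real core material:

```python
import math

# Toy saturating magnetization curve: B(H) = Bs * (2/pi) * atan(H / H0).
# Bs and H0 are illustrative parameters chosen for the sketch.
Bs, H0 = 2.0, 100.0  # saturation flux density (T), scale field (A/m)

def B(H):
    return Bs * (2.0 / math.pi) * math.atan(H / H0)

def differential_permeability(H, dH=1e-3):
    """Numerical dB/dH, the slope of the magnetization curve."""
    return (B(H + dH) - B(H - dH)) / (2.0 * dH)

# dB/dH collapses as the core is driven toward saturation:
for H in (0.0, 100.0, 1000.0):
    print(H, differential_permeability(H))
```

At H = 0 the slope is at its maximum; at H = 10·H0 it has fallen by roughly two orders of magnitude, which is the nonlinearity that generates harmonics when an AC signal drives the core into this region.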
31.
Domain wall (magnetism)
–
A domain wall is a term used in physics which can have similar meanings in magnetism, optics, or string theory. These phenomena can all be described as topological solitons which occur whenever a discrete symmetry is spontaneously broken. In magnetism, a domain wall is an interface separating magnetic domains. It is a transition between different magnetic moments and usually undergoes an angular displacement of 90° or 180°. A domain wall is a gradual reorientation of individual moments across a finite distance. The domain wall thickness depends on the anisotropy of the material, but on average spans around 100–150 atoms. The energy of a domain wall is simply the difference between the magnetic moments before and after the domain wall was created. This value is usually expressed as energy per unit wall area. The anisotropy energy is lowest when the individual magnetic moments are aligned with the crystal lattice axes, which reduces the width of the domain wall. Conversely, the exchange energy is reduced when the magnetic moments are aligned parallel to each other, which makes the wall thicker. In the end an equilibrium is reached between the two, and the wall's width is set accordingly. An ideal domain wall would be fully independent of position, but real structures are not ideal and so get stuck on inclusion sites within the medium. These include missing or different atoms, oxides, insulators, and even stresses within the crystal. This prevents the formation of domain walls and also inhibits their propagation through the medium, so a greater applied magnetic field is required to overcome these pinning sites. Note that magnetic domain walls are exact solutions to classical nonlinear equations of magnets. Since domain walls can be considered as thin layers, their symmetry is described by one of the 528 magnetic layer groups; to determine the layer's physical properties, a continuum approximation is used which leads to point-like layer groups.
If the continuous translation operation is considered as identity, these groups transform to magnetic point groups; it was shown that there are 125 such groups. It was found that if a magnetic point group is pyroelectric and/or pyromagnetic, then the domain wall carries polarization and/or magnetization respectively. These criteria were derived from the conditions of the appearance of uniform polarization and/or magnetization; after their application to any inhomogeneous region, they predict the existence of even parts in the functions of the distribution of order parameters. Identification of the odd parts of these functions was formulated based on symmetry transformations that interrelate the domains.
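The equilibrium between anisotropy and exchange described above is commonly summarized by the Bloch-wall width estimate δ ≈ π√(A/K), where A is the exchange stiffness (which favours wide walls) and K the anisotropy constant (which favours narrow ones). A quick numerical sketch with rough, iron-like values; the numbers are purely illustrative, not measured data:

```python
import math

# Bloch-wall width estimate: delta = pi * sqrt(A / K).
# A and K below are rough iron-like values used only for illustration.
A = 2.1e-11  # exchange stiffness, J/m
K = 4.8e4    # magnetocrystalline anisotropy constant, J/m^3

delta = math.pi * math.sqrt(A / K)  # wall width in metres
print(round(delta * 1e9, 1), "nm")  # a few tens of nanometres
```

At a typical interatomic spacing of a few tenths of a nanometre, a width of this order corresponds to a wall spanning on the order of a hundred atoms, consistent with the 100–150 atom figure quoted above.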
32.
Alternating current
–
Alternating current is an electric current which periodically reverses direction, whereas direct current flows only in one direction. A common source of DC power is a battery cell in a flashlight. The abbreviations AC and DC are often used to mean simply alternating and direct. The usual waveform of alternating current in most electric power circuits is a sine wave; in certain applications, different waveforms are used, such as triangular or square waves. Audio and radio signals carried on electrical wires are also examples of alternating current. These types of alternating current carry information encoded onto the AC signal, and they typically alternate at higher frequencies than those used in power transmission. Electrical energy is distributed as alternating current because AC voltage may be increased or decreased with a transformer, and use of a higher voltage leads to significantly more efficient transmission of power. The power losses in a conductor are proportional to the square of the current; this means that when transmitting a fixed power on a given wire, if the current is halved, the power loss will be four times less. Power is often transmitted at hundreds of kilovolts, and transformed down to 100–240 volts for domestic use. High voltages have disadvantages, such as the increased insulation required, and generally increased difficulty in their safe handling. In a power plant, energy is generated at a convenient voltage for the design of a generator; near the loads, the transmission voltage is stepped down to the voltages used by equipment. Consumer voltages vary somewhat depending on the country and size of load, but the voltage delivered to equipment such as lighting and motor loads is standardized, with an allowable range of voltage over which equipment is expected to operate.
Standard power utilization voltages and percentage tolerance vary in the different mains power systems found in the world. High-voltage direct-current (HVDC) electric power transmission systems have become more viable as technology has provided efficient means of changing the voltage of DC power. HVDC systems, however, tend to be more expensive and less efficient over shorter distances than transformers. Three-phase electrical generation is very common. The simplest way is to use three separate coils in the generator stator, physically offset by an angle of 120° to each other. Three current waveforms are produced that are equal in magnitude and 120° out of phase to each other. If coils are added opposite to these, they generate the same phases with reverse polarity and so can be simply wired together. In practice, higher pole orders are commonly used; for example, a 12-pole machine would have 36 coils. The advantage is that lower rotational speeds can be used to generate the same frequency: a 2-pole machine running at 3600 rpm and a 12-pole machine running at 600 rpm produce the same frequency, and the lower speed is preferable for larger machines. If the load on a three-phase system is balanced equally among the phases, no current flows through the neutral point. Even in the worst-case unbalanced load, the neutral current will not exceed the highest of the phase currents.
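The square-law loss argument above is easy to verify numerically. The line resistance and transmitted power below are arbitrary illustrative figures, not data for a real line:

```python
# Transmitting fixed power P at voltage V draws line current I = P / V;
# resistive line loss is I**2 * R, so raising V by a factor k cuts the
# loss by k**2. P and R below are illustrative values only.
def line_loss(power_w, volts, line_resistance_ohm):
    current = power_w / volts
    return current ** 2 * line_resistance_ohm

P, R = 1_000_000.0, 10.0            # 1 MW delivered over a 10-ohm line
low = line_loss(P, 10_000.0, R)     # at 10 kV
high = line_loss(P, 100_000.0, R)   # at 100 kV: 10x the voltage
print(low, high, low / high)        # 100000.0 1000.0 100.0
```

Ten times the voltage means one hundredth of the resistive loss, which is exactly why transformers, and hence AC, made long-distance distribution practical.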
33.
Complex number
–
A complex number is a number that can be expressed in the form a + bi, where a and b are real numbers and i is the imaginary unit, satisfying the equation i² = −1. In this expression, a is the real part and b is the imaginary part of the complex number. If z = a + bi, then ℜz = a and ℑz = b. Complex numbers extend the concept of the one-dimensional number line to the two-dimensional complex plane by using the horizontal axis for the real part and the vertical axis for the imaginary part. The complex number a + bi can be identified with the point (a, b) in the complex plane. A complex number whose real part is zero is said to be purely imaginary, whereas a complex number whose imaginary part is zero is a real number. In this way, the complex numbers are a field extension of the ordinary real numbers. As well as their use within mathematics, complex numbers have practical applications in many fields, including physics, chemistry, biology, economics, and electrical engineering. The Italian mathematician Gerolamo Cardano is the first known to have introduced complex numbers; he called them fictitious during his attempts to find solutions to cubic equations in the 16th century. Complex numbers allow solutions to certain equations that have no solutions in real numbers. For example, the equation (x + 1)² = −9 has no real solution, since the square of a real number cannot be negative. Complex numbers provide a solution to this problem. The idea is to extend the real numbers with the imaginary unit i, where i² = −1. According to the fundamental theorem of algebra, all polynomial equations with real or complex coefficients in a single variable have a solution in complex numbers. A complex number is a number of the form a + bi; for example, −3.5 + 2i is a complex number. The real number a is called the real part of the complex number a + bi, and the real number b is called the imaginary part. By this convention the imaginary part does not include the imaginary unit: b, not bi, is the imaginary part. The real part of a complex number z is denoted by Re(z) or ℜ(z); for example, Re(−3.5 + 2i) = −3.5 and Im(−3.5 + 2i) = 2. Hence, in terms of its real and imaginary parts, a complex number z is equal to Re(z) + Im(z)·i.
This expression is known as the Cartesian form of z. A real number a can be regarded as a complex number a + 0i whose imaginary part is 0.
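Python's built-in complex type makes these definitions concrete (in Python the suffix j plays the role of the imaginary unit i):

```python
# A complex number in Cartesian form, with its real and imaginary parts:
z = -3.5 + 2j
print(z.real, z.imag)  # -3.5 2.0

# The defining property of the imaginary unit, i**2 == -1:
print((1j) ** 2)       # (-1+0j)

# (x + 1)**2 = -9 has no real solution, but x = -1 + 3j works:
x = -1 + 3j
print((x + 1) ** 2)    # (-9+0j)
```

Note that, per the convention above, the imaginary part of −3.5 + 2i is the real number 2, not 2i, which is exactly what `z.imag` returns.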
34.
Direct current
–
Direct current is a flow of electrical charge carriers that always takes place in the same direction. The current need not always have the same magnitude, but if it is to be defined as dc, the direction of flow must never reverse. This contrasts with alternating current, which periodically reverses its direction of flow. Sources of direct current include power supplies, electrochemical cells and batteries, and photovoltaic cells and panels. The intensity, or amplitude, of a direct current might fluctuate with time, and in some such cases the dc has an ac component superimposed on it. An example of this is the output of a photovoltaic cell that receives a modulated light communications signal. A source of dc is sometimes called a dc generator. Batteries and various other sources of dc produce a constant voltage. This is called pure dc and can be represented by a straight, horizontal line on a graph of voltage versus time. The peak and effective values are the same, and the peak-to-peak value is zero because the instantaneous amplitude never changes. In some instances the value of a dc voltage pulsates or oscillates rapidly with time, in a manner similar to the changes in an ac wave; the unfiltered output of a half-wave or full-wave rectifier is an example of such pulsating dc. In 1820, Hans Christian Orsted discovered that electrical current creates a magnetic field, a discovery that led scientists to relate magnetism to electric phenomena. In 1879, Thomas Edison invented the light bulb, improving a 50-year-old idea using lower-current electricity, a vacuum inside the globe, and a small carbonized filament. At that time, the idea of electric lighting was not new. Edison not only invented an incandescent electric light, but an electric lighting system that contained all the necessary elements to make the incandescent light safe and economical. Prior to 1879, direct current electricity had been used in lighting for the outdoors. It was in the 1880s that the modern electric utility industry began, as an evolution from gas and street lighting systems. The first central generating station was located in Lower Manhattan, on Pearl Street.
This station provided light and electricity to customers in a one-square-mile range; it was called Thomas Edison's Pearl Street Electricity Generating Station. This station introduced four key elements of an electric utility system: reliable central generation, efficient distribution, a successful end use, and a competitive price.
35.
Eddy current
–
Eddy currents are loops of electrical current induced within conductors by a changing magnetic field in the conductor, due to Faraday's law of induction. Eddy currents flow in closed loops within conductors, in planes perpendicular to the magnetic field. By Lenz's law, an eddy current creates a magnetic field that opposes the magnetic field that created it. For example, a nearby conductive surface will exert a drag force on a moving magnet that opposes its motion. This effect is employed in eddy current brakes, which are used to stop rotating power tools quickly when they are turned off. The current flowing through the resistance of the conductor also dissipates energy as heat in the material, and eddy currents are therefore used to heat objects in induction heating furnaces and equipment. The term eddy current comes from analogous currents seen in water in fluid dynamics. Somewhat analogously, eddy currents can take time to build up and can persist for very short times in conductors due to their inductance. The first person to observe eddy currents was François Arago, the 25th Prime Minister of France. In 1824 he observed what has been called rotatory magnetism, and that most conductive bodies could be magnetized; these discoveries were completed and explained by Michael Faraday. Eddy currents produce a secondary field that cancels a part of the external field. French physicist Léon Foucault is credited with having discovered eddy currents. The first use of eddy currents for non-destructive testing occurred in 1879, when David E. Hughes used the principles to conduct metallurgical sorting tests. A magnet induces circular electric currents in a metal sheet moving past it. Consider a metal sheet moving to the right under a stationary magnet, with the magnetic field of the north pole N passing down through the sheet. Since the metal is moving, the flux through any given part of the sheet is changing: at the part of the sheet under the leading edge of the magnet, the magnetic field through the sheet is increasing as it gets nearer the magnet.
From Faraday's law of induction, this changing flux creates an electric field in the sheet in a counterclockwise direction around the magnetic field lines. This field induces a flow of electric current, an eddy current, in the sheet. At the trailing edge of the magnet the magnetic field through the sheet is decreasing, dB/dt < 0, inducing a second eddy current in the opposite direction. The mobile charge carriers in the metal, the electrons, actually have a negative charge, so their motion is opposite in direction to the conventional current shown. Both of these forces oppose the motion of the sheet; the kinetic energy consumed overcoming this drag force is dissipated as heat by the currents flowing through the resistance of the metal, so the metal gets warm under the magnet.
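The induction step underlying all of this can be sketched with Faraday's law, EMF = −dΦ/dt; the field ramp rate and loop area below are arbitrary illustrative values:

```python
# Faraday's law: the EMF driving an eddy current loop is -dPhi/dt, the
# rate of change of magnetic flux Phi = B * A through the loop.
# The ramp rate and loop area are illustrative values only.
def induced_emf(dB_dt, area_m2):
    return -dB_dt * area_m2  # volts; the minus sign encodes Lenz's law

# Flux increasing at 0.5 T/s through a 0.01 m^2 loop of conductor:
emf = induced_emf(0.5, 0.01)
print(emf)  # -0.005 V: the induced current opposes the increasing flux
```

The negative sign is Lenz's law in miniature: the induced current circulates so as to oppose the change in flux, which is the origin of the drag force on the moving sheet.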
36.
Temperature
–
A temperature is an objective comparative measurement of hot or cold, measured by a thermometer. Several scales and units exist for measuring temperature, the most common being Celsius, Fahrenheit, and, especially in science, Kelvin. Absolute zero is denoted as 0 K on the Kelvin scale and −273.15 °C on the Celsius scale. The kinetic theory offers a valuable but limited account of the behavior of the materials of macroscopic bodies, especially of fluids. Temperature is important in all fields of science, including physics, geology, chemistry, atmospheric sciences, and medicine. The Celsius scale is used for common temperature measurements in most of the world; because of its 100-degree interval, it is called a centigrade scale. The United States commonly uses the Fahrenheit scale, on which water freezes at 32 °F and boils at 212 °F at sea-level atmospheric pressure. Many scientific measurements use the Kelvin temperature scale, named in honor of the Scottish physicist who first defined it; it is a thermodynamic or absolute temperature scale. Its zero point, 0 K, is defined to coincide with the coldest physically possible temperature, and its degrees are defined through thermodynamics. Absolute zero occurs at 0 K = −273.15 °C. For historical reasons, the triple point temperature of water is fixed at 273.16 units of the measurement increment. Temperature is one of the principal quantities in the study of thermodynamics. There is a variety of kinds of temperature scale, and it may be convenient to classify them as empirically and theoretically based. Empirical temperature scales are historically older, while theoretically based scales arose in the middle of the nineteenth century. Empirically based temperature scales rely directly on measurements of simple physical properties of materials. For example, the length of a column of mercury, confined in a capillary tube, is dependent largely on temperature. Such scales are valid only within convenient ranges of temperature.
For example, above the boiling point of mercury, a mercury-in-glass thermometer is impracticable. A material is of no use as a thermometer near one of its phase-change temperatures. In spite of these restrictions, most generally used practical thermometers are of the empirically based kind. In particular, empirical thermometry was used for calorimetry, which contributed greatly to the discovery of thermodynamics. Nevertheless, empirical thermometry has serious drawbacks when judged as a basis for theoretical physics. Theoretically based temperature scales are based directly on theoretical arguments, especially those of thermodynamics and kinetic theory, and they rely on theoretical properties of idealized devices and materials.
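The fixed relations between the scales discussed above can be captured in a couple of one-line conversion helpers:

```python
# Fixed relations between the temperature scales discussed above:
#   K = C + 273.15
#   F = C * 9/5 + 32
def c_to_k(c): return c + 273.15
def c_to_f(c): return c * 9.0 / 5.0 + 32.0

print(c_to_k(-273.15))          # 0.0   (absolute zero)
print(c_to_f(0.0))              # 32.0  (freezing point of water)
print(c_to_f(100.0))            # 212.0 (boiling point at sea level)
print(round(c_to_k(0.01), 2))   # 273.16 (triple point of water)
```

These are exact unit relations, not empirical fits: a Kelvin and a Celsius degree are the same size, and a Fahrenheit degree is 5/9 of either.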
37.
Pressure
–
Pressure is the force applied perpendicular to the surface of an object per unit area over which that force is distributed. Gauge pressure is the pressure relative to the ambient pressure. Various units are used to express pressure. Pressure may also be expressed in terms of standard atmospheric pressure: the atmosphere is equal to this pressure, and the torr is defined as 1⁄760 of this. Manometric units, such as the centimetre of water and the millimetre of mercury, express pressure as the height of a column of a particular fluid. Pressure is the amount of force acting per unit area. The symbol for it is p or P. The IUPAC recommendation for pressure is a lower-case p; however, upper-case P is widely used. The usage of P vs p depends upon the field in which one is working and on the nearby presence of other symbols for quantities such as power and momentum. Mathematically, p = F/A, where p is the pressure, F is the magnitude of the normal force, and A is the area of the surface. This relates the vector surface element with the normal force acting on it. It is incorrect to say the pressure is directed in such or such direction: the pressure, as a scalar, has no direction, while the force given by the relationship does have a direction. If we change the orientation of the surface element, the direction of the normal force changes accordingly, but the pressure remains the same. Pressure is transmitted to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. It is a fundamental parameter in thermodynamics, and it is conjugate to volume. The SI unit for pressure is the pascal, equal to one newton per square metre. This name for the unit was added in 1971; before that, pressure in SI was expressed simply in newtons per square metre. Other units of pressure, such as pounds per square inch, are also in use. The CGS unit of pressure is the barye, equal to 1 dyn·cm−2 or 0.1 Pa. Pressure is sometimes expressed in grams-force or kilograms-force per square centimetre, but using the names kilogram, gram, kilogram-force, or gram-force as units of force is expressly forbidden in SI.
The technical atmosphere is 1 kgf/cm2. Since a system under pressure has the potential to perform work on its surroundings, pressure is a measure of potential energy stored per unit volume. It is therefore related to energy density and may be expressed in units such as joules per cubic metre. In meteorology, atmospheric pressures are often given in hectopascals; similar pressures are given in kilopascals in most other fields, where the hecto- prefix is rarely used
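The equivalence between pressure and energy per unit volume (1 Pa = 1 N/m2 = 1 J/m3) can be illustrated by the work done by a system expanding against a constant pressure; a small sketch with invented values:

```python
# 1 Pa = 1 N/m^2 = 1 J/m^3, so pressure is an energy stored per unit volume.
# Work done by a system expanding by dV against a constant external pressure:
#   W = p * dV
p_pa = 101325.0   # one standard atmosphere, in pascals
dv_m3 = 0.001     # an expansion of one litre, in cubic metres
work_j = p_pa * dv_m3
print(work_j)     # 101.325 J of work done on the surroundings
```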
38.
Celsius
–
Celsius, also known as centigrade, is a metric scale and unit of measurement for temperature. As an SI derived unit, it is used by most countries in the world, and it is named after the Swedish astronomer Anders Celsius, who developed a similar temperature scale. The degree Celsius can refer to a specific temperature on the Celsius scale as well as to a unit indicating a temperature interval. Before being renamed to honour Anders Celsius in 1948, the unit was called centigrade, from the Latin centum, which means 100, and gradus, which means steps. The scale is based on 0 °C for the freezing point of water and 100 °C for the boiling point; this scale is widely taught in schools today. By international agreement, the unit degree Celsius and the Celsius scale are currently defined by two different temperatures: absolute zero, and the triple point of VSMOW (Vienna Standard Mean Ocean Water). This definition also precisely relates the Celsius scale to the Kelvin scale. Absolute zero, the lowest temperature possible, is defined as being precisely 0 K and −273.15 °C. The temperature of the triple point of water is defined as precisely 273.16 K at 611.657 pascals pressure. This definition fixes the magnitude of both the degree Celsius and the kelvin as precisely 1 part in 273.16 of the difference between absolute zero and the triple point of water. Thus, it sets the magnitude of one degree Celsius and that of one kelvin as exactly the same; additionally, it establishes the difference between the two scales' null points as being precisely 273.15 degrees. In his paper Observations of two persistent degrees on a thermometer, Celsius recounted his experiments showing that the melting point of ice is essentially unaffected by pressure. He also determined with precision how the boiling point of water varied as a function of atmospheric pressure. He proposed that the zero point of his temperature scale, being the boiling point, would be calibrated at the mean barometric pressure at mean sea level. 
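The fixed points described above pin down the conversion between the two scales as a pure offset of 273.15; a short Python sketch:

```python
OFFSET = 273.15  # difference between the Kelvin and Celsius null points

def celsius_to_kelvin(t_c: float) -> float:
    """One degree Celsius and one kelvin have the same magnitude,
    so conversion is a pure offset, not a rescaling."""
    return t_c + OFFSET

def kelvin_to_celsius(t_k: float) -> float:
    return t_k - OFFSET

print(celsius_to_kelvin(-273.15))  # 0.0, absolute zero
print(celsius_to_kelvin(0.01))     # the triple point of water, 273.16 K
print(kelvin_to_celsius(273.15))   # 0.0 degrees Celsius
```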
This pressure is known as one standard atmosphere; the BIPM's 10th General Conference on Weights and Measures later defined one standard atmosphere to equal precisely 1,013,250 dynes per square centimetre (101.325 kPa). On 19 May 1743 the Lyonnais physicist Jean-Pierre Christin published the design of a mercury thermometer with a scale on which 0 represented the freezing point of water and 100 its boiling point. In 1744, coincident with the death of Anders Celsius, the Swedish botanist Carolus Linnaeus reversed Celsius's scale. In an account of this reversed scale, Linnaeus recounted the temperatures inside the orangery at the University of Uppsala Botanical Garden. Since the 19th century, the scientific and thermometry communities worldwide have referred to this scale as the centigrade scale. Temperatures on the scale were often reported simply as degrees or, when greater specificity was desired, as degrees centigrade; the unit was later renamed partly because centigrade also denoted a unit of angular measurement, one hundredth of a gradian, in French and Spanish. For scientific use, Celsius is the term usually used, with centigrade otherwise continuing to be in common but decreasing use, especially in informal contexts in English-speaking countries