International System of Units

The International System of Units (SI) is the modern form of the metric system and the most widely used system of measurement. It comprises a coherent system of units of measurement built on seven base units, which are the second, metre, kilogram, ampere, kelvin, mole and candela, together with a set of twenty prefixes to the unit names and unit symbols that may be used when specifying multiples and fractions of the units. The system also specifies names for 22 derived units, such as the lumen and the watt, for other common physical quantities. The base units are defined in terms of invariant constants of nature, such as the speed of light in vacuum and the triple point of water, which can be observed and measured with great accuracy, and one physical artefact. The artefact is the international prototype kilogram, certified in 1889: a cylinder of platinum–iridium which nominally has the same mass as one litre of water at the freezing point. Its stability has been a matter of significant concern, culminating in a revision of the definitions of the base units in terms of constants of nature, scheduled to take effect on 20 May 2019.

Derived units may be defined in terms of other derived units. They are adopted to facilitate measurement of diverse quantities, and the SI is intended to be an evolving system. The most recent derived unit, the katal, was defined in 1999. The reliability of the SI depends not only on the precise measurement of standards for the base units in terms of various physical constants of nature, but also on the precise definition of those constants. The set of underlying constants is modified as more stable constants are found, or as they can be more precisely measured. For example, in 1983 the metre was redefined as the distance that light propagates in vacuum in a given fraction of a second, thus making the value of the speed of light exact in terms of the defined units. The motivation for the development of the SI was the diversity of units that had sprung up within the centimetre–gram–second (CGS) systems and the lack of coordination between the various disciplines that used them. The General Conference on Weights and Measures, established by the Metre Convention of 1875, brought together many international organisations to establish the definitions and standards of a new system and to standardise the rules for writing and presenting measurements.
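The 1983-style redefinition described above can be sketched numerically: because the metre is defined as the distance light travels in 1/299792458 of a second, fixing c makes the round trip exact by construction. A minimal sketch in plain Python:

```python
# Sketch: the 1983 metre definition makes c exact by construction.
# In one metre's worth of time (1/299792458 s), light travels one metre.
c = 299_792_458            # speed of light in vacuum, m/s (exact by definition)
time_interval = 1 / c      # the fraction of a second used in the definition
distance = c * time_interval
print(distance)            # ≈ 1.0 (metre, up to floating-point rounding)
```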

The system was published in 1960 as a result of an initiative that began in 1948. It is based on the metre–kilogram–second (MKS) system of units rather than any variant of the CGS. Since then, the SI has been adopted by all countries except the United States and Myanmar. The International System of Units consists of a set of base units, derived units, and a set of decimal-based multipliers that are used as prefixes. The units, excluding prefixed units, form a coherent system of units, based on a system of quantities in such a way that the equations between the numerical values expressed in coherent units have the same form, including numerical factors, as the corresponding equations between the quantities. For example, 1 N = 1 kg × 1 m/s² says that one newton is the force required to accelerate a mass of one kilogram at one metre per second squared, as related through the principle of coherence to the equation relating the corresponding quantities: F = m × a. Derived units apply to derived quantities, which may by definition be expressed in terms of base quantities and are thus not independent.
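The coherence example above (1 N = 1 kg × 1 m/s²) can be sketched in code; because the units are coherent, F = m × a needs no numerical conversion factor. The function name and sample values below are illustrative, not from the source.

```python
# Sketch of coherence in SI: force in newtons is just mass (kg) times
# acceleration (m/s^2), with no conversion factor.
def force_newtons(mass_kg: float, accel_m_s2: float) -> float:
    """Newton's second law, F = m * a, in coherent SI units."""
    return mass_kg * accel_m_s2

print(force_newtons(1.0, 1.0))  # 1.0 N: one newton accelerates 1 kg at 1 m/s^2
print(force_newtons(2.0, 3.0))  # 6.0 N
```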

Other useful derived quantities, such as acceleration (expressed in SI units as m/s²), can be specified in terms of the SI base and derived units but have no named units of their own in the SI system. The SI base units are the building blocks of the system and all the other units are derived from them. When Maxwell first introduced the concept of a coherent system, he identified three quantities that could be used as base units: length, mass and time. Giorgi later identified the need for an electrical base unit, for which the unit of electric current was chosen for SI. Another three base units were added later. The early metric systems defined a unit of weight as a base unit, while the SI defines an analogous unit of mass. In everyday use, these are interchangeable, but in scientific contexts the difference matters. Mass, strictly the inertial mass, represents a quantity of matter; it relates the acceleration of a body to the applied force via Newton's law, F = m × a: force equals mass times acceleration. A force of 1 N applied to a mass of 1 kg will accelerate it at 1 m/s².

This is true whether the object is floating in space or in a gravity field, e.g. at the Earth's surface. Weight is the force exerted on a body by a gravitational field, and hence a body's weight depends on the strength of that field. The weight of a 1 kg mass at the Earth's surface is m × g. Since the acceleration due to gravity is local and varies by location and altitude on the Earth, weight is unsuitable for precision measurement.
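The mass/weight distinction above can be sketched as follows: the mass stays fixed while W = m × g varies with the local gravitational acceleration. The g values below are approximate illustrative figures, not from the source.

```python
# Sketch: mass is invariant; weight W = m * g depends on local gravity.
def weight_newtons(mass_kg: float, g_m_s2: float) -> float:
    """Weight (force, in newtons) of a mass in a local gravitational field."""
    return mass_kg * g_m_s2

mass = 1.0  # kg -- the same everywhere
for place, g in [("equator, sea level", 9.780),
                 ("poles, sea level", 9.832),
                 ("Moon's surface", 1.62)]:
    print(f"{place}: {weight_newtons(mass, g):.3f} N")
```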

Electric field

An electric field surrounds an electric charge and exerts force on other charges in the field, attracting or repelling them. The electric field is sometimes abbreviated as E-field. Mathematically the electric field is a vector field that associates to each point in space the force per unit of charge exerted on an infinitesimal positive test charge at rest at that point. The SI unit for electric field strength is the volt per meter (V/m); the newton per coulomb (N/C) is also used as a unit of electric field strength. Electric fields are created by electric charges and by time-varying magnetic fields. Electric fields are important in many areas of physics and are exploited in electrical technology. On an atomic scale, the electric field is responsible for the attractive force between the atomic nucleus and electrons that holds atoms together, and for the forces between atoms that cause chemical bonding. Electric fields and magnetic fields are both manifestations of the electromagnetic force, one of the four fundamental forces of nature. From Coulomb's law, a particle with electric charge q₁ at position x₁ exerts a force on a particle with charge q₀ at position x₀ of

F = (1/4πε₀) (q₁q₀/r²) r̂₁,₀

where r is the distance between x₁ and x₀, r̂₁,₀ is the unit vector in the direction from point x₁ to point x₀, and ε₀ is the electric constant, in C² N⁻¹ m⁻². When the charges q₀ and q₁ have the same sign this force is positive, directed away from the other charge, indicating the particles repel each other.

When the charges have unlike signs the force is negative, indicating the particles attract. To make it easy to calculate the Coulomb force on any charge at position x₀, this expression can be divided by q₀, leaving an expression that depends only on the other charge:

E = F/q₀ = (1/4πε₀) (q₁/r²) r̂₁,₀

This is the electric field at point x₀ due to the point charge q₁. Since this formula gives the electric field magnitude and direction at any point x₀ in space, it defines a vector field. From the above formula it can be seen that the electric field due to a point charge is everywhere directed away from the charge if it is positive and toward the charge if it is negative, and that its magnitude decreases with the inverse square of the distance from the charge. If there are multiple charges, the resultant Coulomb force on a charge can be found by summing the vectors of the forces due to each charge. This shows the electric field obeys the superposition principle: the total electric field at a point due to a collection of charges is equal to the vector sum of the electric fields at that point due to the individual charges.

E = E₁ + E₂ + E₃ + ⋯ = (1/4πε₀) (q₁/r₁²) r̂₁ + (1/4πε₀) (q₂/r₂²) r̂₂ + (1/4πε₀) (q₃/r₃²) r̂₃ + ⋯

where rᵢ is the distance from charge qᵢ to the point of evaluation and r̂ᵢ is the unit vector pointing from that charge toward the point.
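The superposition sum above can be sketched numerically by adding each charge's Coulomb field vectorially. The charge values and positions below are made up for illustration; only the standard library is used.

```python
# Sketch of the superposition principle: the field of several point charges
# is the vector sum of each charge's Coulomb field, E = (1/4πε0) Σ qi/ri^2 r̂i.
import math

EPS0 = 8.8541878128e-12          # vacuum permittivity, C^2 N^-1 m^-2
K = 1 / (4 * math.pi * EPS0)     # Coulomb constant

def e_field(charges, point):
    """charges: list of (q, (x, y)) in SI units; returns (Ex, Ey) at point."""
    ex = ey = 0.0
    px, py = point
    for q, (cx, cy) in charges:
        dx, dy = px - cx, py - cy
        r = math.hypot(dx, dy)
        mag = K * q / r**3       # q/r^2 times the unit vector (dx, dy)/r
        ex += mag * dx
        ey += mag * dy
    return ex, ey

# Illustrative dipole: +1 nC at the origin, -1 nC at (1, 0); evaluate midway.
print(e_field([(1e-9, (0.0, 0.0)), (-1e-9, (1.0, 0.0))], (0.5, 0.0)))
```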

Magnetic field

A magnetic field is a vector field that describes the magnetic influence of electric charges in relative motion and of magnetized materials. Magnetic fields are observed at scales from subatomic particles to galaxies. In everyday life, the effects of magnetic fields are seen in permanent magnets, which pull on magnetic materials and attract or repel other magnets. Magnetic fields surround and are created by magnetized material and by moving electric charges such as those used in electromagnets. Magnetic fields exert forces on nearby moving electric charges and torques on nearby magnets. In addition, a magnetic field that varies with location exerts a force on magnetic materials. Both the strength and direction of a magnetic field vary with location; as such, it is an example of a vector field. The term "magnetic field" is used for two distinct but related fields denoted by the symbols B and H. In the International System of Units, H, magnetic field strength, is measured in the SI base units of ampere per meter. B, magnetic flux density, is measured in tesla, equivalent to newton per meter per ampere.

H and B differ in how they account for magnetization. In a vacuum, B and H are the same aside from units. Magnetic fields are produced by moving electric charges and by the intrinsic magnetic moments of elementary particles associated with a fundamental quantum property, their spin. Magnetic fields and electric fields are interrelated and are both components of the electromagnetic force, one of the four fundamental forces of nature. Magnetic fields are used throughout modern technology, particularly in electrical engineering and electromechanics. Rotating magnetic fields are used in both electric motors and generators; the interaction of magnetic fields in electric devices such as transformers is studied in the discipline of magnetic circuits. Magnetic forces give information about the charge carriers in a material through the Hall effect. The Earth produces its own magnetic field, which shields the Earth's ozone layer from the solar wind and is important in navigation using a compass. Although magnets and magnetism were studied much earlier, the study of magnetic fields began in 1269 when French scholar Petrus Peregrinus de Maricourt mapped out the magnetic field on the surface of a spherical magnet using iron needles.
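The statement that B and H coincide in vacuum apart from units can be sketched as B = μ₀ × H, where μ₀ is the vacuum permeability. The H value below is an arbitrary illustrative figure.

```python
# Sketch: in vacuum, B (tesla) and H (A/m) differ only by the constant μ0.
MU0 = 1.25663706212e-6  # vacuum permeability, N A^-2

def b_from_h_vacuum(h_a_per_m: float) -> float:
    """Magnetic flux density B (tesla) from field strength H (A/m) in vacuum."""
    return MU0 * h_a_per_m

print(b_from_h_vacuum(1000.0))  # ~1.26e-3 T for an H of 1000 A/m
```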

Noting that the resulting field lines crossed at two points, he named those points "poles" in analogy to Earth's poles. He clearly articulated the principle that magnets always have both a north and a south pole, no matter how finely one slices them. Three centuries later, William Gilbert of Colchester replicated Petrus Peregrinus' work and was the first to state explicitly that Earth is a magnet. Published in 1600, Gilbert's work, De Magnete, helped to establish magnetism as a science. In 1750, John Michell stated that magnetic poles attract and repel in accordance with an inverse square law. Charles-Augustin de Coulomb experimentally verified this in 1785 and stated explicitly that the north and south poles cannot be separated. Building on this force between poles, Siméon Denis Poisson created the first successful model of the magnetic field, which he presented in 1824. In this model, a magnetic H-field is produced by "magnetic poles" and magnetism is due to small pairs of north/south magnetic poles. Three discoveries in 1820 challenged this foundation of magnetism, though.

Hans Christian Ørsted demonstrated that a current-carrying wire is surrounded by a circular magnetic field. André-Marie Ampère showed that parallel wires with currents attract one another if the currents are in the same direction and repel if they are in opposite directions. Jean-Baptiste Biot and Félix Savart announced empirical results about the forces that a current-carrying long, straight wire exerted on a small magnet, determining that the forces were inversely proportional to the perpendicular distance from the wire to the magnet. Laplace deduced, but did not publish, a law of force based on the differential action of a differential section of the wire, which became known as the Biot–Savart law. Extending these experiments, Ampère published his own successful model of magnetism in 1825. In it, he showed the equivalence of electrical currents to magnets and proposed that magnetism is due to perpetually flowing loops of current instead of the dipoles of magnetic charge in Poisson's model.

This has the additional benefit of explaining why magnetic charge cannot be isolated. Further, Ampère derived both Ampère's force law, describing the force between two currents, and Ampère's law, which, like the Biot–Savart law, described the magnetic field generated by a steady current. In this work, Ampère introduced the term electrodynamics to describe the relationship between electricity and magnetism. In 1831, Michael Faraday discovered electromagnetic induction when he found that a changing magnetic field generates an encircling electric field; he described this phenomenon in what is now known as Faraday's law of induction. Later, Franz Ernst Neumann proved that, for a moving conductor in a magnetic field, induction is a consequence of Ampère's force law. In the process, he introduced the magnetic vector potential, later shown to be equivalent to the underlying mechanism proposed by Faraday. In 1850, Lord Kelvin, then known as William Thomson, distinguished between two magnetic fields now denoted H and B; the former applied to Poisson's model, the latter to Ampère's model and induction. Further, he derived how H and B relate to each other.

Electromagnetism

Electromagnetism is a branch of physics involving the study of the electromagnetic force, a type of physical interaction that occurs between electrically charged particles. The electromagnetic force is carried by electromagnetic fields, composed of electric fields and magnetic fields, and manifests itself in phenomena such as light; it is one of the four fundamental interactions in nature. The other three fundamental interactions are the strong interaction, the weak interaction and gravitation. At high energy the weak force and electromagnetic force are unified as a single electroweak force. Electromagnetic phenomena are defined in terms of the electromagnetic force, sometimes called the Lorentz force, which includes both electricity and magnetism as different manifestations of the same phenomenon. The electromagnetic force plays a major role in determining the internal properties of most objects encountered in daily life. Ordinary matter takes its form as a result of intermolecular forces between individual atoms and molecules in matter, which are a manifestation of the electromagnetic force.

Electrons are bound by the electromagnetic force to atomic nuclei, and their orbital shapes and their influence on nearby atoms with their electrons are described by quantum mechanics. The electromagnetic force governs all chemical processes, which arise from interactions between the electrons of neighboring atoms. There are numerous mathematical descriptions of the electromagnetic field. In classical electrodynamics, electric fields are described in terms of electric potential and electric current. In Faraday's law, magnetic fields are associated with electromagnetic induction and magnetism, and Maxwell's equations describe how electric and magnetic fields are generated and altered by each other and by charges and currents. The theoretical implications of electromagnetism, in particular the establishment of the speed of light based on properties of the "medium" of propagation, led to the development of special relativity by Albert Einstein in 1905. Electricity and magnetism were originally considered to be two separate forces; this view changed with the publication of James Clerk Maxwell's 1873 A Treatise on Electricity and Magnetism, in which the interactions of positive and negative charges were shown to be mediated by one force.

There are four main effects resulting from these interactions, all of which have been demonstrated by experiments: Electric charges attract or repel one another with a force inversely proportional to the square of the distance between them: unlike charges attract, like ones repel. Magnetic poles attract or repel one another in a manner similar to positive and negative charges and always exist as pairs: every north pole is yoked to a south pole. An electric current inside a wire creates a corresponding circumferential magnetic field outside the wire, its direction depends on the direction of the current in the wire. A current is induced in a loop of wire when it is moved toward or away from a magnetic field, or a magnet is moved towards or away from it. While preparing for an evening lecture on 21 April 1820, Hans Christian Ørsted made a surprising observation; as he was setting up his materials, he noticed a compass needle deflected away from magnetic north when the electric current from the battery he was using was switched on and off.

This deflection convinced him that magnetic fields radiate from all sides of a wire carrying an electric current, just as light and heat do, and that it confirmed a direct relationship between electricity and magnetism. At the time of discovery, Ørsted did not suggest any satisfactory explanation of the phenomenon, nor did he try to represent the phenomenon in a mathematical framework. However, three months later he began more intensive investigations. Soon thereafter he published his findings, proving that an electric current produces a magnetic field as it flows through a wire. The CGS unit of magnetic induction is named in honor of his contributions to the field of electromagnetism. His findings resulted in intensive research throughout the scientific community in electrodynamics, and they influenced French physicist André-Marie Ampère's development of a single mathematical form to represent the magnetic forces between current-carrying conductors. Ørsted's discovery represented a major step toward a unified concept of energy.

This unification, observed by Michael Faraday, extended by James Clerk Maxwell, and reformulated by Oliver Heaviside and Heinrich Hertz, is one of the key accomplishments of 19th-century mathematical physics. It has had far-reaching consequences, one of which was the understanding of the nature of light. Unlike what was proposed by the electromagnetic theory of that time, light and other electromagnetic waves are at present seen as taking the form of quantized, self-propagating oscillatory electromagnetic field disturbances called photons. Different frequencies of oscillation give rise to the different forms of electromagnetic radiation, from radio waves at the lowest frequencies, to visible light at intermediate frequencies, to gamma rays at the highest frequencies. Ørsted was not the only person to examine the relationship between electricity and magnetism. In 1802, Gian Domenico Romagnosi, an Italian legal scholar, deflected a magnetic needle using a Voltaic pile; the factual setup of the experiment is not clear, nor is it clear whether current flowed across the needle or not.

An account of the discovery was published in 1802 in an Italian newspaper, but it was overlooked by the contemporary scientific community, because Romagnosi did not belong to this community. An earlier, neglected, connec

Speed of light

The speed of light in vacuum, commonly denoted c, is a universal physical constant important in many areas of physics. Its exact value is 299,792,458 metres per second; it is exact because by international agreement a metre is defined as the length of the path travelled by light in vacuum during a time interval of 1/299792458 second. According to special relativity, c is the maximum speed at which all conventional matter and hence all known forms of information in the universe can travel. Though this speed is most commonly associated with light, it is in fact the speed at which all massless particles and changes of the associated fields travel in vacuum; such particles and waves travel at c regardless of the motion of the source or the inertial reference frame of the observer. In the special and general theories of relativity, c interrelates space and time, and appears in the famous equation of mass–energy equivalence E = mc². The speed at which light propagates through transparent materials, such as glass or air, is less than c.

The ratio between c and the speed v at which light travels in a material is called the refractive index n of the material (n = c/v). For example, for visible light the refractive index of glass is around 1.5, meaning that light in glass travels at c / 1.5 ≈ 200,000 km/s. For many practical purposes, light and other electromagnetic waves will appear to propagate instantaneously, but for long distances and sensitive measurements, their finite speed has noticeable effects. In communicating with distant space probes, it can take minutes to hours for a message to get from Earth to the spacecraft, or vice versa. The light seen from stars left them many years ago, allowing the study of the history of the universe by looking at distant objects. The finite speed of light also limits the theoretical maximum speed of computers, since information must be sent within the computer from chip to chip. The speed of light can be used with time-of-flight measurements to measure large distances to high precision. Ole Rømer first demonstrated in 1676 that light travels at a finite speed by studying the apparent motion of Jupiter's moon Io.
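The travel-time and refractive-index examples above can be sketched as follows. The Earth–Moon distance used here is an approximate illustrative figure; everything else follows from c and n = c/v.

```python
# Sketch: finite light speed -> noticeable delays over long distances,
# and speed in a medium v = c / n.
C = 299_792_458  # m/s, exact by definition

def travel_time_s(distance_m: float) -> float:
    """One-way light travel time in vacuum."""
    return distance_m / C

def speed_in_medium(n: float) -> float:
    """Speed of light in a medium of refractive index n."""
    return C / n

print(travel_time_s(384_400e3))     # Earth-Moon (approx. distance): ~1.28 s
print(speed_in_medium(1.5) / 1000)  # glass (n ~ 1.5): ~200,000 km/s
```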

In 1865, James Clerk Maxwell proposed that light was an electromagnetic wave and therefore travelled at the speed c appearing in his theory of electromagnetism. In 1905, Albert Einstein postulated that the speed of light c with respect to any inertial frame is a constant and is independent of the motion of the light source; he explored the consequences of that postulate by deriving the theory of relativity, and in doing so showed that the parameter c had relevance outside of the context of light and electromagnetism. After centuries of increasingly precise measurements, in 1975 the speed of light was known to be 299792458 m/s with a measurement uncertainty of 4 parts per billion. In 1983, the metre was redefined in the International System of Units as the distance travelled by light in vacuum in 1/299792458 of a second. The speed of light in vacuum is denoted by a lowercase c, for "constant" or the Latin celeritas. In 1856, Wilhelm Eduard Weber and Rudolf Kohlrausch had used c for a different constant, later shown to equal √2 times the speed of light in vacuum.

The symbol V was used as an alternative symbol for the speed of light, introduced by James Clerk Maxwell in 1865. In 1894, Paul Drude redefined c with its modern meaning. Einstein used V in his original German-language papers on special relativity in 1905, but in 1907 he switched to c, which by then had become the standard symbol for the speed of light. Sometimes c is used for the speed of waves in any material medium, and c0 for the speed of light in vacuum. This subscripted notation, endorsed in official SI literature, has the same form as other related constants: namely, μ0 for the vacuum permeability or magnetic constant, ε0 for the vacuum permittivity or electric constant, and Z0 for the impedance of free space. This article uses c for the speed of light in vacuum. Since 1983, the metre has been defined in the International System of Units as the distance light travels in vacuum in 1⁄299792458 of a second; this definition fixes the speed of light in vacuum at exactly 299,792,458 m/s. As a dimensional physical constant, the numerical value of c is different for different unit systems.

In branches of physics in which c appears often, such as relativity, it is common to use systems of natural units of measurement or the geometrized unit system, where c = 1. Using these units, c does not appear explicitly because multiplication or division by 1 does not affect the result. The speed at which light waves propagate in vacuum is independent both of the motion of the wave source and of the inertial frame of reference of the observer. This invariance of the speed of light was postulated by Einstein in 1905, motivated by Maxwell's theory of electromagnetism and the lack of evidence for the luminiferous aether. It is only possible to verify experimentally that the two-way speed of light is frame-independent, because it is impossible to measure the one-way speed of light without some convention as to how clocks at the source and at the detector should be synchronized.

System of measurement

A system of measurement is a collection of units of measurement and rules relating them to each other. Systems of measurement have historically been important, regulated and defined for the purposes of science and commerce. Systems of measurement in use include the International System of Units (the modern form of the metric system), the imperial system, and United States customary units. The French Revolution gave rise to the metric system, and this has spread around the world, replacing most customary units of measure. In most systems, length, mass and time are base quantities. Later developments in science showed that either electric charge or electric current could be added to extend the set of base quantities by which many other metrological units could be defined. Other quantities, such as power and speed, are derived from the base set: for example, speed is distance per unit time. Historically, a wide range of units was used for the same type of quantity: in different contexts, length was measured in inches, yards, rods, furlongs, nautical miles and leagues, with conversion factors which were not powers of ten.

Such arrangements were satisfactory in their own contexts. The preference for a more universal and consistent system only spread gradually with the growth of science. Changing a measurement system has substantial financial and cultural costs which must be offset against the advantages to be obtained from using a more rational system; however, pressure built up, including from scientists and engineers, for conversion to a more rational, internationally consistent basis of measurement. In antiquity, systems of measurement were defined locally: the different units might be defined independently according to the length of a king's thumb or the size of his foot, the length of a stride, the length of an arm, or perhaps the weight of water in a keg of specific size, itself defined in hands and knuckles. The unifying characteristic is that there was some definition based on some standard. Eventually cubits and strides gave way to "customary units" to meet the needs of merchants and scientists. In the metric system and other recent systems, a single basic unit is used for each base quantity.

Secondary units are derived from the basic units by multiplying by powers of ten, i.e. by moving the decimal point. Thus the basic metric unit of length is the metre. Metrication is complete or nearly complete in almost all countries. US customary units are used in the United States and to some degree in Liberia. Traditional Burmese units of measurement are used in Burma. U.S. units are used in limited contexts in Canada due to the large volume of trade. A number of other jurisdictions have laws mandating or permitting other systems of measurement in some or all contexts, such as the United Kingdom – whose road signage legislation, for instance, only allows distance signs displaying imperial units – or Hong Kong. In the United States, metric units are used universally in science, widely in the military, and partially in industry, but customary units predominate in household use. At retail stores, the liter is a commonly used unit for volume on bottles of beverages, and milligrams, rather than grains, are used for medications. Some other standard non-SI units are still in international use, such as nautical miles and knots in aviation and shipping.

Metric systems of units have evolved since the adoption of the first well-defined system in France in 1795. During this evolution the use of these systems has spread throughout the world, first to non-English-speaking countries, and later to English-speaking countries. Multiples and submultiples of metric units are related by powers of ten and their names are formed with prefixes; this relationship is compatible with the decimal system of numbers and it contributes to the convenience of metric units. In the early metric system there were two base units: the metre for length and the gram for mass. The other units of length and mass, all units of area, and derived units such as density were derived from these two base units. Mesures usuelles were a system of measurement introduced as a compromise between the metric system and traditional measurements; it was used in France from 1812 to 1839. A number of variations on the metric system have been in use; these include gravitational systems, the centimetre–gram–second (CGS) systems useful in science, the metre–tonne–second (MTS) system once used in the USSR, and the metre–kilogram–second (MKS) system.
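The prefix mechanism described above can be sketched with a small lookup table: because prefixes are powers of ten, conversion to the base unit is a single multiplication (i.e. moving the decimal point). Only a handful of the twenty SI prefixes are shown, and the function name is illustrative.

```python
# Sketch: SI prefixes are powers of ten, so conversion is one multiplication.
PREFIXES = {
    "M": 1e6,    # mega
    "k": 1e3,    # kilo
    "c": 1e-2,   # centi
    "m": 1e-3,   # milli
    "µ": 1e-6,   # micro
}

def to_base_unit(value: float, prefix: str) -> float:
    """Convert a prefixed value (e.g. 2.5 km) to the base unit (m)."""
    return value * PREFIXES[prefix]

print(to_base_unit(2.5, "k"))  # 2.5 km  -> 2500.0 m
print(to_base_unit(750, "m"))  # 750 mm -> 0.75 m
```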

The current international standard metric system is the International System of Units (SI). It is an MKS system based on the metre, kilogram and second as well as the kelvin, ampere, candela and mole. The SI includes two classes of units which are agreed internationally. The first of these classes comprises the seven SI base units for length, mass, time, electric current, temperature, luminous intensity and amount of substance. The second class consists of the SI derived units; these derived units are defined in terms of the seven base units. All other quantities are expressed in terms of SI derived units. Both imperial units and US customary units derive from earlier English units. Imperial units were used in the former British Empire and the British Commonwealth.

Voltage

Voltage, also called electric potential difference, electric pressure or electric tension, is the difference in electric potential between two points. The difference in electric potential between two points in a static electric field is defined as the work needed per unit of charge to move a test charge between the two points. In the International System of Units, the derived unit for voltage is the volt. In SI units, work per unit charge is expressed as joules per coulomb, where 1 volt = 1 joule per 1 coulomb; the official SI definition for the volt instead uses current, where 1 volt = 1 watt per 1 ampere. This definition is equivalent to the more commonly used 'joules per coulomb'. Voltage or electric potential difference is denoted symbolically by ∆V, but more often simply as V, for instance in the context of Ohm's or Kirchhoff's circuit laws. Electric potential differences between points can be caused by electric charge, by electric current through a magnetic field, by time-varying magnetic fields, or some combination of these three.
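The equivalence of the two volt definitions above can be checked with illustrative numbers: since W = J/s and A = C/s, W/A = (J/s)/(C/s) = J/C, so both routes give the same voltage. All values below are made up for illustration.

```python
# Sketch: 1 V = 1 J/C and 1 V = 1 W/A are the same definition in disguise.
energy_j = 12.0  # joules of work done moving the charge (illustrative)
charge_c = 3.0   # coulombs moved (illustrative)
volts_from_work = energy_j / charge_c   # J/C

time_s = 2.0                            # over some interval (illustrative)
power_w = energy_j / time_s             # W = J/s
current_a = charge_c / time_s           # A = C/s
volts_from_power = power_w / current_a  # W/A

print(volts_from_work, volts_from_power)  # 4.0 4.0 -- both give the same volts
```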

A voltmeter can be used to measure the voltage between two points in a system. A voltage may represent either a source of energy or lost, used, or stored energy. There are multiple useful ways to define voltage, including the standard definition mentioned at the start of this page; there are also other useful definitions of work per charge. Loosely speaking, voltage is defined so that negatively charged objects are pulled towards higher voltages, while positively charged objects are pulled towards lower voltages. Therefore, the conventional current in a wire or resistor always flows from higher voltage to lower voltage. Historically, voltage has been referred to using terms like "tension" and "pressure". Today, the term "tension" is still used, for example within the phrase "high tension", used in thermionic valve based electronics. The voltage increase from some point x_A to some point x_B is given by

ΔV_AB = V(x_B) − V(x_A) = (−∫ from r₀ to x_B of E · dl) − (−∫ from r₀ to x_A of E · dl) = −∫ from x_A to x_B of E · dl

In this case, the voltage increase from point A to point B is equal to the work which would have to be done per unit charge, against the electric field, to move the charge from A to B without causing any acceleration.

Mathematically, this is expressed as the line integral of the electric field along that path. Under this definition, the voltage difference between two points is not uniquely defined when there are time-varying magnetic fields, since the electric force is not a conservative force in such cases. If this definition of voltage is used, any circuit where there are time-varying magnetic fields, such as circuits containing inductors, will not have a well-defined voltage between nodes in the circuit. However, if magnetic fields are suitably contained to each component, the electric field is conservative in the region exterior to the components and voltages are well-defined in that region. In this case, the voltage across an inductor, viewed externally, turns out to be

ΔV = −L dI/dt

despite the fact that, internally, the electric field in the coil is zero. Using the above definition, the electric potential is not defined whenever magnetic fields change with time. In physics, it is sometimes useful to generalize the electric potential by considering only the conservative part of the electric field.
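The externally viewed inductor relation ΔV = −L dI/dt can be sketched with a finite-difference approximation of the derivative; the component values below are illustrative, not from the source.

```python
# Sketch: voltage across an ideal inductor, ΔV = -L * dI/dt,
# approximated here with a finite difference dI/dt ≈ ΔI/Δt.
def inductor_voltage(L_henry: float, di_amps: float, dt_s: float) -> float:
    """Externally viewed voltage across an ideal inductor."""
    return -L_henry * di_amps / dt_s

# 10 mH coil with current ramping up by 1 A over 1 ms (illustrative values):
print(inductor_voltage(0.010, 1.0, 0.001))  # roughly -10 V
```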

This is done by the following decomposition used in electrodynamics:

E = −∇V − ∂A/∂t

where A is the magnetic vector potential. The above decomposition is justified by Helmholtz's theorem. In this case, the voltage increase from x_A to x_B is given by the line integral of the conservative part, −∇V, along the path.