Polytetrafluoroethylene (PTFE) is a synthetic fluoropolymer of tetrafluoroethylene that has numerous applications. The best-known brand name of PTFE-based formulas is Teflon by Chemours, a spin-off of DuPont, which discovered the compound in 1938. Another popular brand name of PTFE is Syncolon® by Synco Chemical Corporation. PTFE is a fluorocarbon solid, as it is a high-molecular-weight compound consisting wholly of carbon and fluorine. PTFE is hydrophobic: neither water nor water-containing substances wet PTFE, because fluorocarbons exhibit reduced London dispersion forces due to the high electronegativity of fluorine. PTFE has one of the lowest coefficients of friction of any solid. PTFE is used as a non-stick coating for pans and other cookware. It is nonreactive because of the strength of its carbon–fluorine bonds, so it is also used in containers and pipework for reactive and corrosive chemicals. Where used as a lubricant, PTFE reduces the friction and energy consumption of machinery. It is also used as a graft material in surgical interventions.
It is frequently employed as a coating on catheters. PTFE was accidentally discovered in 1938 by Roy J. Plunkett while he was working in New Jersey for DuPont. As Plunkett attempted to make a new chlorofluorocarbon refrigerant, the tetrafluoroethylene gas in its pressure bottle stopped flowing before the bottle's weight had dropped to the point signaling "empty". Since Plunkett was measuring the amount of gas used by weighing the bottle, he became curious about the source of the remaining weight and resorted to sawing the bottle apart. He found the bottle's interior coated with a waxy white material that was oddly slippery. Analysis showed that it was polymerized perfluoroethylene, with the iron from the inside of the container having acted as a catalyst at high pressure. Kinetic Chemicals patented the new fluorinated plastic in 1941 and registered the Teflon trademark in 1945. By 1948, DuPont, which had founded Kinetic Chemicals in partnership with General Motors, was producing over two million pounds of Teflon-brand PTFE per year in Parkersburg, West Virginia.
An early use was in the Manhattan Project, as a material to coat valves and seals in the pipes holding reactive uranium hexafluoride at the vast K-25 uranium enrichment plant in Oak Ridge, Tennessee. In 1954, Colette Grégoire, the wife of French engineer Marc Grégoire, urged him to try the material he had been using on fishing tackle on her cooking pans; he subsequently created the first non-stick pans under the brand name Tefal. In the United States, Marion A. Trozzolo, who had been using the substance on scientific utensils, marketed the first US-made PTFE-coated pan, "The Happy Pan", in 1961. However, Tefal was not the only company to use PTFE in nonstick cookware coatings. In subsequent years, many cookware manufacturers developed proprietary PTFE-based formulas, including Swiss Diamond International, which uses a diamond-reinforced PTFE formula. Other cookware companies, such as Meyer Corporation's Anolon, use Teflon nonstick coatings purchased from Chemours, a 2015 corporate spin-off of DuPont.
In the 1990s, it was found that PTFE could be radiation cross-linked above its melting point in an oxygen-free environment. Electron beam processing is one example of radiation processing. Cross-linked PTFE has improved radiation stability. This was significant because, for many years, irradiation at ambient conditions had been used to break down PTFE for recycling; this radiation-induced chain scission allows it to be more easily reground and reused. PTFE is produced by free-radical polymerization of tetrafluoroethylene; the net equation is n CF2=CF2 → −(CF2−CF2)n−. Because tetrafluoroethylene can explosively decompose to tetrafluoromethane and carbon, special apparatus is required for the polymerization to prevent hot spots that might initiate this dangerous side reaction. The process is initiated with persulfate, which homolyzes to generate sulfate radicals: [S2O8]2− ⇌ 2 SO4•−. The resulting polymer is terminated with sulfate ester groups, which can be hydrolyzed to give OH end-groups. Because PTFE is poorly soluble in almost all solvents, the polymerization is conducted as an emulsion in water.
This process gives a suspension of polymer particles. Alternatively, the polymerization is conducted using a surfactant such as PFOS. PTFE is a thermoplastic polymer, a white solid at room temperature, with a density of about 2200 kg/m3. According to Chemours, its melting point is 600 K (327 °C). It maintains high strength and self-lubrication at low temperatures down to 5 K, and good flexibility at temperatures above 194 K. PTFE gains its properties from the aggregate effect of its carbon–fluorine bonds, as do all fluorocarbons. The only chemicals known to affect these carbon–fluorine bonds are reactive metals such as the alkali metals; at higher temperatures, metals such as aluminium and magnesium; and fluorinating agents such as xenon difluoride and cobalt fluoride. The coefficient of friction of plastics is usually measured against polished steel. PTFE's coefficient of friction is 0.05 to 0.10, the third-lowest of any known solid material. PTFE's resistance to van der Waals forces means that it is the only known surface to which a gecko cannot stick.
Anisotropy is the property of being directionally dependent, which implies different properties in different directions, as opposed to isotropy. It can be defined as a difference, when measured along different axes, in a material's physical or mechanical properties. An example of anisotropy is light coming through a polarizer. Another is wood, which is easier to split along its grain than across it. In the field of computer graphics, an anisotropic surface changes in appearance as it rotates about its geometric normal, as is the case with velvet. Anisotropic filtering is a method of enhancing the image quality of textures on surfaces that are far away and steeply angled with respect to the point of view. Older techniques, such as bilinear and trilinear filtering, do not take into account the angle a surface is viewed from, which can result in aliasing or blurring of textures. By reducing detail in one direction more than another, these effects can be reduced. A chemical anisotropic filter, as used to filter particles, is a filter with smaller interstitial spaces in the direction of filtration, so that the proximal regions filter out larger particles and distal regions remove smaller particles, resulting in greater flow-through and more efficient filtration.
In NMR spectroscopy, the orientation of nuclei with respect to the applied magnetic field determines their chemical shift. In this context, anisotropic systems refer to the electron distribution of molecules with abnormally high electron density, like the pi system of benzene; this abnormal electron density affects the applied magnetic field and causes the observed chemical shift to change. In fluorescence spectroscopy, the fluorescence anisotropy, calculated from the polarization properties of fluorescence from samples excited with plane-polarized light, is used, e.g., to determine the shape of a macromolecule. Anisotropy measurements reveal the average angular displacement of the fluorophore that occurs between absorption and subsequent emission of a photon. Images of a gravity-bound or man-made environment are anisotropic in the orientation domain, with more image structure located at orientations parallel with or orthogonal to the direction of gravity. Physicists from the University of California, Berkeley reported their detection of the cosine anisotropy in the cosmic microwave background radiation in 1977.
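The fluorescence anisotropy mentioned above is computed from the two polarized emission intensities with the standard steady-state formula r = (I∥ − I⊥) / (I∥ + 2·I⊥). A minimal sketch in Python (the function name and the sample intensities are illustrative):

```python
def fluorescence_anisotropy(i_par: float, i_perp: float) -> float:
    """Steady-state fluorescence anisotropy r = (I_par - I_perp) / (I_par + 2*I_perp).

    i_par:  emission intensity polarized parallel to the excitation plane
    i_perp: emission intensity polarized perpendicular to it
    """
    return (i_par - i_perp) / (i_par + 2 * i_perp)

# A small, freely rotating fluorophore depolarizes emission almost completely:
print(fluorescence_anisotropy(100.0, 95.0))   # near 0 -> fast rotation
# A large, slowly tumbling macromolecule retains more of the polarization:
print(fluorescence_anisotropy(100.0, 40.0))   # closer to the 0.4 theoretical limit
```

Values near zero indicate a fluorophore that rotates much faster than the fluorescence lifetime; values approaching the 0.4 single-photon limit indicate a slowly tumbling macromolecule, which is why the measurement reports on molecular shape and size.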
Their experiment demonstrated the Doppler shift caused by the movement of the Earth with respect to the early-Universe matter that is the source of the radiation. Cosmic anisotropy has also been seen in the alignment of galaxies' rotation axes and the polarisation angles of quasars. Physicists use the term anisotropy to describe direction-dependent properties of materials. Magnetic anisotropy, for example, may occur in a plasma, so that its magnetic field is oriented in a preferred direction. Plasmas may also show "filamentation", which is directional. An anisotropic liquid has the fluidity of a normal liquid but has an average structural order of its molecules along the molecular axis, unlike water or chloroform, which contain no structural ordering of their molecules. Liquid crystals are examples of anisotropic liquids. Some materials conduct heat in an isotropic way, independent of spatial orientation around the heat source; heat conduction is more commonly anisotropic, which implies that detailed geometric modeling of the diverse materials being thermally managed is required.
The materials used to transfer and reject heat from the heat source in electronics are often anisotropic. Many crystals are anisotropic to light and exhibit properties such as birefringence. Crystal optics describes light propagation in these media. An "axis of anisotropy" is defined as the axis along which isotropy is broken; some materials can have multiple such optical axes. Seismic anisotropy is the variation of seismic wavespeed with direction. Seismic anisotropy is an indicator of long-range order in a material, where features smaller than the seismic wavelength have a dominant alignment; this alignment leads to a directional variation of elastic wavespeed. Measuring the effects of anisotropy in seismic data can provide important information about processes and mineralogy in the Earth. Geological formations with distinct layers of sedimentary material can exhibit electrical anisotropy; this property is used in the gas and oil exploration industry to identify hydrocarbon-bearing sands in sequences of sand and shale. Hydrocarbon-bearing sands have high resistivity.
Formation evaluation instruments measure this conductivity/resistivity, and the results are used to help find oil and gas in wells. The hydraulic conductivity of aquifers is anisotropic for the same reason. When calculating groundwater flow to drains or to wells, the difference between horizontal and vertical permeability must be taken into account; otherwise the results may be subject to error. Most common rock-forming minerals are anisotropic, including feldspar. Anisotropy in minerals is most reliably seen in their optical properties. An example of an isotropic mineral is garnet. Anisotropy is also a well-known property in medical ultrasound imaging, describing the different resulting echogenicity of soft tissues, such as tendons, when the angle of the transducer is changed.
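The effect of layering on hydraulic conductivity can be made concrete with the standard equivalent-conductivity formulas for a stack of horizontal layers: flow parallel to the layers sees a thickness-weighted arithmetic mean, while flow across them sees a thickness-weighted harmonic mean. A sketch in Python (the layer thicknesses and conductivities are made-up illustrative values):

```python
def equivalent_conductivities(layers):
    """Equivalent hydraulic conductivity of a horizontally layered aquifer.

    layers: list of (thickness_m, conductivity_m_per_day) tuples.
    Returns (K_horizontal, K_vertical):
      K_h = thickness-weighted arithmetic mean (flow parallel to layers)
      K_v = thickness-weighted harmonic mean   (flow across the layers)
    """
    total_b = sum(b for b, _ in layers)
    k_h = sum(b * k for b, k in layers) / total_b
    k_v = total_b / sum(b / k for b, k in layers)
    return k_h, k_v

# Alternating sand (high K) and shale-like (low K) layers:
kh, kv = equivalent_conductivities([(2.0, 10.0), (1.0, 0.01), (2.0, 10.0)])
print(kh, kv)  # K_h is dominated by the sand; K_v is throttled by the low-K layer
```

Even a single thin low-conductivity layer drives the vertical value far below the horizontal one, which is exactly the horizontal/vertical contrast that groundwater-flow calculations must account for.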
Diatoms are a major group of algae, specifically microalgae, found in the oceans and soils of the world. Living diatoms number in the trillions: they generate about 20 percent of the oxygen produced on the planet each year, take in over 6.7 billion metric tons of silicon each year from the waters in which they live, and contribute nearly half of the organic material found in the oceans. The shells of dead diatoms can reach as much as a half mile deep on the ocean floor, and the entire Amazon basin is fertilized annually by 27 million tons of diatom shell dust transported by east-to-west transatlantic winds from the bed of a dried-up lake that once covered much of the African Sahara. Diatoms are unicellular: they occur either as solitary cells or in colonies, which can take the shape of ribbons, zigzags, or stars. Individual cells range in size from 2 to 200 micrometers. In the presence of adequate nutrients and sunlight, an assemblage of living diatoms doubles approximately every 24 hours by asexual multiple fission. Diatoms have two distinct shapes: a few are radially symmetric, while most are broadly bilaterally symmetric.
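The 24-hour doubling noted above is ordinary exponential growth, N = N₀ · 2^d after d days. A one-line sketch in Python (the starting population is illustrative, and the model assumes nutrients and sunlight remain adequate throughout):

```python
def diatom_population(n0: float, days: float) -> float:
    """Cells after `days` of doubling every 24 hours: N = n0 * 2**days."""
    return n0 * 2 ** days

# Starting from 1,000 cells, ten days of uninterrupted doubling:
print(diatom_population(1_000, 10))  # 1024000
```

The thousand-fold increase in ten days shows why diatom blooms can develop, and collapse when nutrients run out, so quickly.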
A unique feature of diatom anatomy is that they are surrounded by a cell wall made of silica, called a frustule. These frustules have structural coloration due to their photonic nanostructure, prompting them to be described as "jewels of the sea" and "living opals". Movement in diatoms occurs passively, as a result of both water currents and wind-induced water turbulence. Like plants, diatoms convert light energy to chemical energy by photosynthesis, although this shared autotrophy evolved independently in the two lineages. Unusually for autotrophic organisms, diatoms possess a urea cycle, a feature that they share with animals, although this cycle is used to different metabolic ends in diatoms. The study of diatoms is a branch of phycology. Diatoms are classified as eukaryotes, organisms with a membrane-bound cell nucleus, which separates them from the prokaryotes, archaea and bacteria. Diatoms are a type of plankton called phytoplankton, the most common of the plankton types. Diatoms also grow attached to benthic substrates, floating debris, and macrophytes.
They comprise an integral component of the periphyton community. Another classification divides plankton into eight types based on size; in this scheme, diatoms are classed as microalgae. Several systems for classifying the individual diatom species exist. Fossil evidence suggests that diatoms originated during or before the early Jurassic period, about 150 to 200 million years ago. Diatoms are used to monitor past and present environmental conditions and are commonly used in studies of water quality. Diatomaceous earth is a collection of diatom shells found in the earth's crust. These are soft, silica-containing sedimentary rocks that are easily crumbled into a fine powder with a particle size of 10 to 200 μm. Diatomaceous earth is used for a variety of purposes, including water filtration, as a mild abrasive, in cat litter, and as a dynamite stabilizer. Diatoms are 2 to 200 micrometers in length. Their yellowish-brown chloroplasts, the sites of photosynthesis, are typical of heterokonts, having four membranes and containing pigments such as the carotenoid fucoxanthin.
Individuals lack flagella, but flagella are present in the male gametes of the centric diatoms and have the usual heterokont structure, except that they lack the hairs characteristic of other groups. Diatoms are often referred to as "jewels of the sea" or "living opals" due to their photonic crystal properties. The biological function of this structural coloration is not clear, but it is speculated that it may be related to communication, thermal exchange and/or UV protection. Diatoms build intricate hard but porous cell walls called frustules, composed of silica. This siliceous wall can be patterned with a variety of pores, minute spines, marginal ridges and elevations. The cell itself consists of two halves, each containing an essentially flat plate, or valve, and a marginal connecting band, or girdle band. One half, the hypotheca, is slightly smaller than the other half, the epitheca. Diatom morphology varies. Although the shape of the cell is typically circular, some cells may be triangular, square, or elliptical. Their distinguishing feature is a hard mineral frustule composed of opal.
Most diatoms are nonmotile, as their dense cell walls cause them to sink readily. Planktonic forms in open water rely on turbulent mixing of the upper layers of the oceanic waters by the wind to keep them suspended in sunlit surface waters; their only mechanism for regulating buoyancy is an ionic pump. Cells are solitary or united into colonies of various kinds, which may be linked by siliceous structures. Diatoms are photosynthetic. Diatom cells are contained within a unique silica cell wall known as a frustule, made up of two valves, called thecae, that overlap one another. The biogenic silica composing the cell wall is synthesised intracellularly by the polymerisation of silicic acid monomers.
Hardness is a measure of the resistance to localized plastic deformation induced by either mechanical indentation or abrasion. Some materials are harder than others. Macroscopic hardness is generally characterized by strong intermolecular bonds, but the behavior of solid materials under force is complex. Hardness is dependent on ductility, elastic stiffness, strain, toughness and viscosity. Common examples of hard matter are ceramics, certain metals, and superhard materials, which can be contrasted with soft matter. There are three main types of hardness measurements: scratch, indentation, and rebound. Within each of these classes of measurement there are individual measurement scales. For practical reasons, conversion tables are used to convert between one scale and another. Scratch hardness is the measure of how resistant a sample is to fracture or permanent plastic deformation due to friction from a sharp object. The principle is that an object made of a harder material will scratch an object made of a softer material. When testing coatings, scratch hardness refers to the force necessary to cut through the film to the substrate.
The most common scratch test is the Mohs scale, used in mineralogy. One tool to make this measurement is the sclerometer. Another tool used to make these tests is the pocket hardness tester. This tool consists of a scale arm with graduated markings attached to a four-wheeled carriage. A scratch tool with a sharp rim is mounted at a predetermined angle to the testing surface. In order to use it, a weight of known mass is added to the scale arm at one of the graduated markings, then the tool is drawn across the test surface. The use of the weight and markings allows a known pressure to be applied without the need for complicated machinery. Indentation hardness measures the resistance of a sample to material deformation due to a constant compression load from a sharp object. Tests for indentation hardness are used in the engineering and metallurgy fields. The tests work on the basic premise of measuring the critical dimensions of an indentation left by a dimensioned and loaded indenter. Common indentation hardness scales are Rockwell, Vickers and Brinell, amongst others.
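The pocket tester works because pressure is simply force over contact area: a modest known mass pressed through a sharp rim yields a very large, repeatable pressure. A sketch in Python (the mass and contact-area values are illustrative, not taken from any particular instrument):

```python
def scratch_pressure(mass_kg: float, contact_area_m2: float, g: float = 9.81) -> float:
    """Nominal pressure under the scratch tool: p = m*g / A, in pascals."""
    return mass_kg * g / contact_area_m2

# A 100 g weight acting through a sharp rim with ~0.01 mm^2 of contact
# already produces a pressure on the order of 10^8 Pa:
print(scratch_pressure(0.1, 1e-8))
```

Doubling the mass doubles the applied pressure, which is why positioning a known weight on the graduated arm is enough to apply a known pressure without complicated machinery.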
Rebound hardness, also known as dynamic hardness, measures the height of the "bounce" of a diamond-tipped hammer dropped from a fixed height onto a material. This type of hardness is related to elasticity; the device used to take this measurement is known as a scleroscope. Two scales that measure rebound hardness are the Leeb rebound hardness test and the Bennett hardness scale. There are five hardening processes: Hall–Petch strengthening, work hardening, solid solution strengthening, precipitation hardening and martensitic transformation. In solid mechanics, solids have three responses to force, depending on the amount of force and the type of material. They exhibit elasticity—the ability to temporarily change shape, but return to the original shape when the pressure is removed. "Hardness" in the elastic range—a small temporary change in shape for a given force—is known as stiffness in the case of a given object, or a high elastic modulus in the case of a material. They exhibit plasticity—the ability to permanently change shape in response to the force, but remain in one piece.
The yield strength is the point at which elastic deformation gives way to plastic deformation. Deformation in the plastic range is non-linear and is described by the stress–strain curve. This response produces the observed properties of scratch and indentation hardness, as described and measured in materials science. Some materials exhibit both elasticity and viscosity when undergoing plastic deformation. Finally, they fracture—split into two or more pieces. Strength is a measure of the extent of a material's elastic range, or elastic and plastic ranges together. This is quantified as compressive strength, shear strength or tensile strength, depending on the direction of the forces involved. Ultimate strength is an engineering measure of the maximum load a part of a specific material and geometry can withstand. Brittleness, in technical usage, is the tendency of a material to fracture with little or no detectable plastic deformation beforehand; thus in technical terms, a material can be both brittle and strong. In everyday usage, "brittleness" usually refers to the tendency to fracture under a small amount of force, which exhibits both brittleness and a lack of strength.
For brittle materials, yield strength and ultimate strength are the same, because they do not experience detectable plastic deformation. The opposite of brittleness is ductility. The toughness of a material is the maximum amount of energy it can absorb before fracturing, which is different from the amount of force that can be applied. Toughness tends to be small for brittle materials, because it is elastic and plastic deformation that allows materials to absorb large amounts of energy. Hardness increases with decreasing particle size; this is known as the Hall–Petch relationship. However, below a critical grain size, hardness decreases with decreasing grain size; this is known as the inverse Hall–Petch effect. The hardness of a material to deformation is dependent on its microdurability or small-scale shear modulus in any direction, not on any rigidity or stiffness properties such as its bulk modulus or Young's modulus. Stiffness is often confused with hardness; some materials are stiffer than diamond but are not harder, and are prone to spalling and flaking in squamose or acicular habits.
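The Hall–Petch relationship above is usually written σ_y = σ₀ + k·d^(−1/2), where d is the average grain diameter. A sketch in Python (the σ₀ and k values are illustrative, not for any particular alloy, and the inverse Hall–Petch regime below the critical grain size is not modelled):

```python
def hall_petch_strength(sigma0_mpa: float, k_mpa_sqrt_um: float, grain_um: float) -> float:
    """Hall-Petch relation: sigma_y = sigma0 + k / sqrt(d).

    sigma0_mpa:    friction stress (lattice resistance to dislocation motion), MPa
    k_mpa_sqrt_um: strengthening coefficient, MPa*um^0.5
    grain_um:      average grain diameter, um
    """
    return sigma0_mpa + k_mpa_sqrt_um / grain_um ** 0.5

# Refining the grain size raises the yield strength:
print(hall_petch_strength(25.0, 600.0, 16.0))  # coarse grains -> 175.0 MPa
print(hall_petch_strength(25.0, 600.0, 4.0))   # finer grains  -> 325.0 MPa
```

Quartering the grain size doubles the grain-boundary contribution to strength, which is the quantitative content of "hardness increases with decreasing particle size".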
The key to understanding the mechanism behind hardness is understanding the metallic microstructure, that is, the structure and arrangement of the atoms at the atomic level.
Reinforced concrete is a composite material in which concrete's relatively low tensile strength and ductility are counteracted by the inclusion of reinforcement having higher tensile strength or ductility. The reinforcement is usually, though not necessarily, steel reinforcing bars (rebar) and is usually embedded passively in the concrete before the concrete sets. Reinforcing schemes are designed to resist tensile stresses in particular regions of the concrete that might cause unacceptable cracking and/or structural failure. Modern reinforced concrete can contain varied reinforcing materials made of steel, polymers or alternate composite material, in conjunction with rebar or not. Reinforced concrete may also be permanently stressed, so as to improve the behaviour of the final structure under working loads. In the United States, the most common methods of doing this are known as pre-tensioning and post-tensioning. For a strong and durable construction, the reinforcement needs to have at least the following properties: high relative strength; high toleration of tensile strain; a good bond to the concrete, irrespective of pH and similar factors; and thermal compatibility, not causing unacceptable stresses in response to changing temperatures.
Durability in the concrete environment, irrespective of corrosion or sustained stress for example, is also required. François Coignet was the first to use iron-reinforced concrete as a technique for constructing building structures. In 1853, Coignet built the first iron-reinforced concrete structure, a four-story house at 72 rue Charles Michels in the suburbs of Paris. Coignet's descriptions of reinforcing concrete suggest that he did not do it as a means of adding strength to the concrete but for keeping walls in monolithic construction from overturning. In 1854, English builder William B. Wilkinson reinforced the concrete roof and floors in the two-storey house he was constructing; his positioning of the reinforcement demonstrated that, unlike his predecessors, he had knowledge of tensile stresses. Joseph Monier, a French gardener of the nineteenth century, was a pioneer in the development of structural and reinforced concrete. Dissatisfied with the existing materials available for making durable flowerpots, he was granted a patent for reinforced flowerpots made by mixing a wire mesh into a mortar shell.
In 1877, Monier was granted another patent for a more advanced technique of reinforcing concrete columns and girders with iron rods placed in a grid pattern. Though Monier undoubtedly knew that reinforcing concrete would improve its inner cohesion, it is less clear whether he knew how much reinforcing improved concrete's tensile strength. Before 1877, the use of concrete construction, though dating back to the Roman Empire and reintroduced in the early 1800s, was not yet a proven scientific technology. The American New Yorker Thaddeus Hyatt published a report titled An Account of Some Experiments with Portland-Cement-Concrete Combined with Iron as a Building Material, with Reference to Economy of Metal in Construction and for Security against Fire in the Making of Roofs and Walking Surfaces, in which he reported his experiments on the behavior of reinforced concrete. His work played a major role in the evolution of concrete construction as a proven and studied science. Without Hyatt's work, more dangerous trial-and-error methods would have been relied on for advancement in the technology.
Ernest L. Ransome was an English-born engineer and an early innovator of reinforced concrete techniques at the end of the 19th century. Drawing on the knowledge of reinforced concrete developed during the previous 50 years, Ransome improved on nearly all the styles and techniques of the earlier inventors of reinforced concrete. Ransome's key innovation was to twist the reinforcing steel bar, improving its bonding with the concrete. Gaining increasing fame from his concrete-constructed buildings, Ransome was able to build two of the first reinforced concrete bridges in North America. One of the first concrete buildings constructed in the United States was a private home designed by William Ward in 1871; he designed the home to be fireproof for his wife. G. A. Wayss was a pioneer of iron and steel concrete construction. In 1879, Wayss bought the German rights to Monier's patents and in 1884 he started the first commercial use of reinforced concrete in his firm Wayss & Freytag. Up until the 1890s, Wayss and his firm contributed to the advancement of Monier's system of reinforcing and established it as a well-developed scientific technology.
One of the first skyscrapers made with reinforced concrete was the 16-story Ingalls Building in Cincinnati, constructed in 1904. The first reinforced concrete building in Southern California was the Laughlin Annex in Downtown Los Angeles, constructed in 1905. In 1906, 16 building permits were issued for reinforced concrete buildings in the City of Los Angeles, including the Temple Auditorium and the 8-story Hayward Hotel. On April 18, 1906, a magnitude 7.8 earthquake struck San Francisco. The strong ground shaking and subsequent fire killed thousands. The use of reinforced concrete after the earthquake was promoted within the U.S. construction industry due to its non-combustibility and perceived superior seismic performance relative to masonry. Also in 1906, a partial collapse of the Bixby Hotel in Long Beach killed 10 workers during construction when shoring was removed prematurely; this event spurred renewed scrutiny of concrete erection practices and building inspections. The structure was constructed of reinforced concrete frames with hollow clay tile ribbed flooring and hollow clay tile infill walls.
Soil is a mixture of minerals, organic matter, gases, liquids and organisms that together support life. Earth's body of soil, called the pedosphere, has four important functions: as a medium for plant growth; as a means of water storage and purification; as a modifier of Earth's atmosphere; and as a habitat for organisms. All of these functions, in their turn, modify the soil. The pedosphere interfaces with the lithosphere, the hydrosphere, the atmosphere and the biosphere. The term pedolith, used to refer to the soil, translates to ground stone in the sense "fundamental stone". Soil consists of a solid phase of minerals and organic matter, as well as a porous phase that holds gases and water. Accordingly, soil scientists can envisage soils as a three-state system of solids, liquids and gases. Soil is a product of several factors: the influence of climate, relief and the soil's parent materials interacting over time. It continually undergoes development by way of numerous physical and biological processes, which include weathering with associated erosion.
Given its complexity and strong internal connectedness, soil ecologists regard soil as an ecosystem. Most soils have a dry bulk density between 1.1 and 1.6 g/cm3, while the soil particle density is much higher, in the range of 2.6 to 2.7 g/cm3. Little of the soil of planet Earth is older than the Pleistocene and none is older than the Cenozoic, although fossilized soils are preserved from as far back as the Archean. Soil science has two basic branches of study: edaphology and pedology. Edaphology studies the influence of soils on living things; pedology focuses on the formation and classification of soils in their natural environment. In engineering terms, soil is included in the broader concept of regolith, which also includes other loose material that lies above the bedrock, as can be found on the Moon and on other celestial objects as well. Soil is commonly referred to as earth or dirt. Soil is a major component of the Earth's ecosystem; the world's ecosystems are impacted in far-reaching ways by the processes carried out in the soil, from ozone depletion and global warming to rainforest destruction and water pollution.
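The bulk-density and particle-density figures quoted above determine a soil's total porosity via φ = 1 − ρ_b/ρ_p, since any volume not occupied by solid particles is pore space. A quick check in Python:

```python
def porosity(bulk_density: float, particle_density: float) -> float:
    """Total porosity from dry bulk density and particle density: phi = 1 - rho_b/rho_p."""
    return 1.0 - bulk_density / particle_density

# Using the density ranges quoted above (g/cm^3):
print(porosity(1.1, 2.65))  # loose soil, roughly 0.58 (58% pore space)
print(porosity(1.6, 2.65))  # denser soil, roughly 0.40 (40% pore space)
```

The calculation shows why compaction is harmful: raising bulk density from 1.1 to 1.6 g/cm³ eliminates nearly a third of the pore space available for air and water.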
With respect to Earth's carbon cycle, soil is an important carbon reservoir, and it is one of the most reactive to human disturbance and climate change. As the planet warms, it has been predicted that soils will add carbon dioxide to the atmosphere due to increased biological activity at higher temperatures, a positive feedback. This prediction has, however, been questioned in light of more recent knowledge of soil carbon turnover. Soil acts as an engineering medium, a habitat for soil organisms, a recycling system for nutrients and organic wastes, a regulator of water quality, a modifier of atmospheric composition and a medium for plant growth, making it a critically important provider of ecosystem services. Since soil has a tremendous range of available niches and habitats, it contains most of the Earth's genetic diversity. A gram of soil can contain billions of organisms, belonging to thousands of species, mostly microbial and in the main still unexplored. Soil has a mean prokaryotic density of roughly 10⁸ organisms per gram, whereas the ocean has no more than 10⁷ prokaryotic organisms per milliliter of seawater.
Organic carbon held in soil is eventually returned to the atmosphere through the process of respiration carried out by heterotrophic organisms, but a substantial part is retained in the soil in the form of soil organic matter. Since plant roots need oxygen, ventilation is an important characteristic of soil. This ventilation can be accomplished via networks of interconnected soil pores, which also absorb and hold rainwater, making it available for uptake by plants. Since plants require a nearly continuous supply of water, but most regions receive sporadic rainfall, the water-holding capacity of soils is vital for plant survival. Soils can remove impurities, kill disease agents and degrade contaminants, this latter property being called natural attenuation. Soils maintain a net absorption of oxygen and methane and undergo a net release of carbon dioxide and nitrous oxide. Soils offer plants physical support, water, temperature moderation and protection from toxins. Soils provide available nutrients to plants and animals by converting dead organic matter into various nutrient forms.
A typical soil is about 50% solids and 50% voids, of which half is occupied by water and half by gas. The percent soil mineral and organic content can be treated as a constant, while the percent soil water and gas content is considered variable, whereby a rise in one is balanced by a reduction in the other. The pore space allows for the infiltration and movement of air and water, both of which are critical for life existing in soil. Compaction, a common problem with soils, reduces this space, preventing air and water from reaching plant roots and soil organisms. Given sufficient time, an undifferentiated soil will evolve a soil profile which consists of two or more layers, referred to as soil horizons, that differ in one or more properties such as their texture, density, consistency, temperature and reactivity. The horizons differ in thickness and generally lack sharp boundaries.
The International System of Units (SI) is the modern form of the metric system and is the most widely used system of measurement. It comprises a coherent system of units of measurement built on seven base units, which are the ampere, kelvin, second, metre, kilogram, candela and mole, together with a set of twenty prefixes to the unit names and unit symbols that may be used when specifying multiples and fractions of the units. The system also specifies names for 22 derived units, such as the lumen and the watt, for other common physical quantities. The base units are derived from invariant constants of nature, such as the speed of light in vacuum and the triple point of water, which can be observed and measured with great accuracy, and from one physical artefact. The artefact is the international prototype kilogram, certified in 1889, consisting of a cylinder of platinum–iridium, which nominally has the same mass as one litre of water at the freezing point. Its stability has been a matter of significant concern, culminating in a revision of the definition of the base units in terms of constants of nature, scheduled to be put into effect on 20 May 2019.
Derived units may be defined in terms of other derived units. They are adopted to facilitate measurement of diverse quantities; the SI is intended to be an evolving system. The most recent derived unit, the katal, was defined in 1999. The reliability of the SI depends not only on the precise measurement of standards for the base units in terms of various physical constants of nature, but also on the precise definition of those constants. The set of underlying constants is modified as more stable constants are found, or as existing ones can be measured more precisely. For example, in 1983 the metre was redefined as the distance that light propagates in vacuum in a given fraction of a second, thus making the value of the speed of light exact in terms of the defined units. The motivation for the development of the SI was the diversity of units that had sprung up within the centimetre–gram–second (CGS) systems and the lack of coordination between the various disciplines that used them. The General Conference on Weights and Measures, established by the Metre Convention of 1875, brought together many international organisations to establish the definitions and standards of a new system and to standardise the rules for writing and presenting measurements.
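The logic of the 1983 metre redefinition can be checked numerically. The speed of light was fixed at exactly 299 792 458 m/s, and the metre became the distance light travels in 1/299 792 458 of a second; a sketch using exact rational arithmetic:

```python
from fractions import Fraction

# Speed of light in vacuum, exact by definition since 1983 (m/s).
C = 299_792_458

# The defining time interval: 1/299 792 458 of a second.
travel_time = Fraction(1, C)

# distance = speed x time: the metre is recovered exactly, with no
# measurement uncertainty, because c is a defined constant.
one_metre = C * travel_time
print(one_metre)  # 1
```

Using `Fraction` rather than floating point mirrors the point of the redefinition: the relationship is exact by construction, not a measured approximation.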
The system was published in 1960 as a result of an initiative that began in 1948. It is based on the metre–kilogram–second system of units rather than any variant of the CGS. Since then, the SI has been adopted by all countries except the United States and Myanmar. The International System of Units consists of a set of base units, derived units, and a set of decimal-based multipliers that are used as prefixes. The units, excluding prefixed units, form a coherent system of units, based on a system of quantities in such a way that the equations between the numerical values expressed in coherent units have the same form, including numerical factors, as the corresponding equations between the quantities. For example, 1 N = 1 kg × 1 m/s² says that one newton is the force required to accelerate a mass of one kilogram at one metre per second squared, as related through the principle of coherence to the equation relating the corresponding quantities: F = m × a. Derived units apply to derived quantities, which may by definition be expressed in terms of base quantities and are thus not independent.
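Coherence means that when quantities are expressed in coherent SI units, no conversion factors appear in the equations between their numerical values; a minimal sketch (function and variable names are illustrative):

```python
def force_newtons(mass_kg, accel_m_per_s2):
    """F = m x a: in coherent SI units the numerical equation needs
    no extra factor, unlike e.g. mixing pounds and metres."""
    return mass_kg * accel_m_per_s2

print(force_newtons(1, 1))    # 1 N, matching 1 N = 1 kg x 1 m/s^2
print(force_newtons(2, 9.8))  # 19.6 N
```

By contrast, computing the same force from a mass in grams would require a factor of 1/1000, which is exactly what coherence rules out for the unprefixed units.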
Other useful derived quantities can be specified in terms of the SI base and derived units but have no named units in the SI system, such as acceleration, defined in SI units as m/s². The SI base units are the building blocks of the system, and all the other units are derived from them. When Maxwell first introduced the concept of a coherent system, he identified three quantities that could be used as base units: mass, length and time. Giorgi later identified the need for an electrical base unit, for which the unit of electric current was chosen for SI. Another three base units were added later. The early metric systems defined a unit of weight as a base unit, while the SI defines an analogous unit of mass. In everyday use, these are interchangeable, but in scientific contexts the difference matters. Mass, specifically the inertial mass, represents a quantity of matter; it relates the acceleration of a body to the applied force via Newton's law, F = m × a: force equals mass times acceleration. A force of 1 N applied to a mass of 1 kg will accelerate it at 1 m/s².
This is true whether the object is floating in space or in a gravity field, e.g. at the Earth's surface. Weight is the force exerted on a body by a gravitational field, and hence its weight depends on the strength of the gravitational field. The weight of a 1 kg mass at the Earth's surface is m × g. Since the acceleration due to gravity is local and varies by location and altitude on the Earth, weight is unsuitable for precision
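The mass–weight distinction can be sketched numerically; the standard gravity value is exact by convention, while the lunar figure is an approximate value used here only for illustration:

```python
def weight_newtons(mass_kg, g_m_per_s2):
    """W = m x g: weight is a force, so it depends on the local field."""
    return mass_kg * g_m_per_s2

G_EARTH = 9.80665  # standard gravity, m/s^2 (exact by convention)
G_MOON = 1.62      # approximate lunar surface gravity, m/s^2

mass = 1.0  # kg; the mass itself does not change with location
print(weight_newtons(mass, G_EARTH))  # 9.80665 N on Earth
print(weight_newtons(mass, G_MOON))   # 1.62 N on the Moon
```

The same 1 kg mass weighs about six times less on the Moon, which is precisely why weight is unsuitable as a base quantity for precision measurement.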