Silicon is a chemical element with symbol Si and atomic number 14. It is a brittle crystalline solid with a blue-grey metallic lustre, and it is a member of group 14 in the periodic table, directly below carbon. It is relatively unreactive; because of its high chemical affinity for oxygen, it was not until 1823 that Jöns Jakob Berzelius was first able to prepare it and characterize it in pure form. Its melting and boiling points of 1414 °C and 3265 °C are the second-highest among all the metalloids and nonmetals, surpassed only by boron. Silicon is the eighth most common element in the universe by mass, but it rarely occurs as the pure element in the Earth's crust; it is most widely distributed in dusts, sands and planets as various forms of silicon dioxide or silicates. More than 90% of the Earth's crust is composed of silicate minerals, making silicon the second most abundant element in the crust after oxygen. Most silicon is used commercially without being separated, often with little processing of the natural minerals.
Such use includes industrial construction with clays, silica sand and stone. Silicates are used in Portland cement for mortar and stucco and, mixed with silica sand and gravel, to make concrete for walkways and roads; they are also used in whiteware ceramics such as porcelain, in traditional quartz-based soda-lime glass, and in many other specialty glasses. Silicon compounds such as silicon carbide are used as abrasives and as components of high-strength ceramics. Silicon is also the basis of the widely used synthetic polymers called silicones. Elemental silicon has a large impact on the modern world economy. Most free silicon is used in the steel-refining, aluminium-casting and fine chemical industries. More visibly, the small portion of highly purified elemental silicon used in semiconductor electronics is essential to integrated circuits; most computers, cell phones and other modern technology depend on it. Silicon is an essential element in biology, although only traces of it appear to be required by animals. However, various sea sponges and microorganisms, such as diatoms and radiolaria, secrete skeletal structures made of silica.
Silica is deposited in many plant tissues. In 1787 Antoine Lavoisier suspected that silica might be an oxide of a fundamental chemical element, but the chemical affinity of silicon for oxygen is high enough that he had no means to reduce the oxide and isolate the element. After an attempt to isolate silicon in 1808, Sir Humphry Davy proposed the name "silicium", from the Latin silex, silicis for flint, adding the "-ium" ending because he believed it to be a metal. Most other languages use transliterated forms of Davy's name, sometimes adapted to local phonology; a few others instead use a calque of the Latin root. Gay-Lussac and Thénard are thought to have prepared impure amorphous silicon in 1811 by heating recently isolated potassium metal with silicon tetrafluoride, but they did not purify and characterize the product, nor identify it as a new element. Silicon was given its present name in 1817 by Scottish chemist Thomas Thomson, who retained part of Davy's name but added "-on" because he believed that silicon was a nonmetal similar to boron and carbon.
In 1823, Jöns Jacob Berzelius prepared amorphous silicon using the same general method as Gay-Lussac, but purifying the product to a brown powder by washing it. As a result, he is usually given credit for the element's discovery; the same year, Berzelius became the first to prepare silicon tetrachloride. Silicon in its more common crystalline form was not prepared until 31 years later, by Deville: by electrolyzing a mixture of sodium chloride and aluminium chloride containing about 10% silicon, he was able to obtain an impure allotrope of silicon in 1854. More cost-effective methods have since been developed to isolate several allotropic forms, the most recent being silicene in 2010. Meanwhile, research on the chemistry of silicon continued; the first organosilicon compound was synthesised by Charles Friedel and James Crafts in 1863, but detailed characterisation of organosilicon chemistry was only done in the early 20th century by Frederic Kipping. Starting in the 1920s, the work of William Lawrence Bragg on X-ray crystallography, together with Linus Pauling's development of crystal chemistry and Victor Goldschmidt's development of geochemistry, elucidated the compositions of the silicates, which had been known from analytical chemistry but were not yet understood.
The middle of the 20th century saw the development of the chemistry and industrial use of siloxanes and the growing use of silicone polymers and resins. In the late 20th century, the complexity of the crystal chemistry of silicides was mapped, along with the solid-state chemistry of doped semiconductors, reflecting silicon's importance in high-technology semiconductor devices.
Microscopy is the technical field of using microscopes to view objects and areas of objects that cannot be seen with the naked eye. There are three well-known branches of microscopy: optical, electron, and scanning probe microscopy, along with the emerging field of X-ray microscopy. Optical microscopy and electron microscopy involve the diffraction, reflection, or refraction of electromagnetic radiation or electron beams interacting with the specimen, followed by the collection of the scattered radiation or another signal in order to create an image; this process may be carried out by wide-field irradiation of the sample or by scanning a fine beam over the sample. Scanning probe microscopy involves the interaction of a scanning probe with the surface of the object of interest. The development of microscopy revolutionized biology, gave rise to the field of histology, and remains an essential technique in the life and physical sciences. X-ray microscopy is three-dimensional and non-destructive, allowing for repeated imaging of the same sample for in situ or 4D studies and providing the ability to "see inside" the sample being studied before sacrificing it to higher-resolution techniques.
A 3D X-ray microscope uses the technique of computed tomography (CT), rotating the sample 360 degrees and reconstructing the images. Unlike conventional CT, which is carried out with a flat panel detector, a 3D X-ray microscope employs a range of objectives, e.g. from 4X to 40X, and can also include a flat panel detector. The field of microscopy dates back to at least the 17th century. Earlier microscopes, single-lens magnifying glasses with limited magnification, date at least as far back as the widespread use of lenses in eyeglasses in the 13th century, but more advanced compound microscopes first appeared in Europe around 1620. The earliest practitioners of microscopy include Galileo Galilei, who found in 1610 that he could close-focus his telescope to view small objects close up, and Cornelis Drebbel, who may have invented the compound microscope around 1620. Antonie van Leeuwenhoek developed a high-magnification simple microscope in the 1670s and is considered to be the first acknowledged microscopist and microbiologist. Optical or light microscopy involves passing visible light transmitted through or reflected from the sample through a single lens or multiple lenses to allow a magnified view of the sample.
The resulting image can be detected directly by the eye, imaged on a photographic plate, or captured digitally. The single lens with its attachments, or the system of lenses and imaging equipment, along with the appropriate lighting equipment, sample stage and support, makes up the basic light microscope. The most recent development is the digital microscope, which uses a CCD camera to focus on the specimen of interest; the image is shown on a computer screen, so eyepieces are unnecessary. Limitations of standard optical microscopy lie in three areas. Diffraction limits resolution to about 0.2 micrometres, which limits the practical magnification to roughly 1500x. Out-of-focus light from points outside the focal plane reduces image clarity. Live cells in particular generally lack sufficient contrast to be studied, since the internal structures of the cell are colorless and transparent; the most common way to increase contrast is to stain the different structures with selective dyes, but this involves killing and fixing the sample.
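The 0.2-micrometre diffraction limit mentioned above can be estimated with the Abbe formula d = λ/(2·NA), where λ is the wavelength of light and NA is the numerical aperture of the objective. A minimal sketch, using assumed illustrative values (green light and a high-NA oil-immersion objective) that are not from the text:

```python
# Abbe diffraction limit: the smallest resolvable feature for a light
# microscope is d = wavelength / (2 * numerical_aperture).

def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Return the lateral resolution limit in nanometres."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Assumed example values: 550 nm green light, NA 1.4 oil-immersion objective.
d = abbe_limit_nm(550.0, 1.4)
print(f"Resolution limit: {d:.1f} nm")  # ~196 nm, i.e. about 0.2 micrometres
```

Since no practical objective exceeds an NA of about 1.4–1.5 with visible light, this bound cannot be escaped by adding more magnification, which is why magnification beyond ~1500x is called "empty".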
Staining may introduce artifacts: apparent structural details that are caused by the processing of the specimen and are thus not legitimate features of the specimen. In order to improve specimen contrast or highlight certain structures in a sample, special techniques must be used, and a huge selection of microscopy techniques is available to label a sample. In general, these techniques make use of differences in the refractive index of cell structures. Bright field microscopy is comparable to looking through a glass window: one sees not the glass but the dirt on the glass. There is a difference, though: glass is a denser material, and this creates a difference in phase of the light passing through it. The human eye is not sensitive to this difference in phase, but clever optical solutions have been devised to change it into a difference in amplitude. [Figure: four examples of transillumination techniques used to generate contrast in a sample of tissue paper; 1.559 μm/pixel.] Bright field microscopy is the simplest of all the light microscopy techniques.
Sample illumination is via transmitted white light, i.e. the sample is illuminated from below and observed from above. Limitations include low contrast of most biological samples and low apparent resolution due to the blur of out-of-focus material; the simplicity of the technique and the minimal sample preparation required are significant advantages. The use of oblique illumination gives the image a three-dimensional appearance and can highlight otherwise invisible features. A more recent technique based on this method is Hoffmann's modulation contrast, a system found on inverted microscopes for use in cell culture. Oblique illumination suffers from the same limitations as bright field microscopy. Dark field microscopy is a technique for improving the contrast of transparent specimens. Dark field illumination uses a carefully aligned light source to minimize the quantity of directly transmitted (unscattered) light entering the image plane.
Standardization or standardisation is the process of implementing and developing technical standards based on the consensus of different parties, including firms, interest groups, standards organizations and governments. Standardization can help to maximize compatibility, safety, repeatability, or quality. It can also facilitate the commoditization of custom processes. In the social sciences, including economics, the idea of standardization is close to the solution for a coordination problem: a situation in which all parties can realize mutual gains, but only by making mutually consistent decisions. This view includes the case of "spontaneous standardization processes" that produce de facto standards. Standard weights and measures were developed by the Indus Valley Civilization; the centralized weight and measure system served the commercial interest of Indus merchants, as smaller weight measures were used to measure luxury goods while larger weights were employed for buying bulkier items, such as food grains.
Weights existed in standard categories. Technical standardisation enabled gauging devices to be used in angular measurement and in measurement for construction. Uniform units of length were used in the planning of towns such as Lothal, Kalibangan, Dholavira and Mohenjo-daro; the weights and measures of the Indus civilization also reached Persia and Central Asia, where they were further modified. Shigeo Iwata describes the excavated weights unearthed from the Indus civilization: a total of 558 weights were excavated from Mohenjo-daro and Chanhu-daro, not including defective weights. No statistically significant differences were found between weights excavated from five different layers, each measuring about 1.5 m in depth, which was itself evidence of standardization; the 13.7-g weight seems to be one of the units used in the Indus valley. The notation was based on decimal systems. 83% of the weights excavated from these cities were cubic, and 68% were made of chert. The implementation of standards in industry and commerce became important with the onset of the Industrial Revolution and the need for high-precision machine tools and interchangeable parts.
Henry Maudslay developed the first industrially practical screw-cutting lathe in 1800. This allowed the standardisation of screw thread sizes for the first time and paved the way for the practical application of interchangeability to nuts and bolts. Before this, screw threads were made by chipping and filing, and nuts were rare; metal bolts passing through wood framing to a metal fastening on the other side were fastened in non-threaded ways. Maudslay standardized the screw threads used in his workshop and produced sets of taps and dies that would make nuts and bolts to those standards, so that any bolt of the appropriate size would fit any nut of the same size; this was a major advance in workshop technology. Maudslay's work, as well as the contributions of other engineers, accomplished a modest amount of industry standardization. Joseph Whitworth's screw thread measurements were adopted as the first national standard by companies around the country in 1841; it came to be known as the British Standard Whitworth and was adopted in other countries.
This new standard specified a 55° thread angle, a thread depth of 0.640327p, and a radius of 0.137329p, where p is the pitch. The thread pitch increased with diameter in steps specified on a chart. An example of the use of the Whitworth thread is the Royal Navy's Crimean War gunboats; these were the first instance of "mass-production" techniques being applied to marine engineering. With the adoption of BSW by British railway lines, many of which had previously used their own standards both for threads and for bolt head and nut profiles, and with improving manufacturing techniques, it came to dominate British manufacturing. American Unified Coarse was based on the same imperial fractions; the Unified thread has a 60° angle and flattened crests. Thread pitch is the same in both systems except that the thread pitch for the 1⁄2 in bolt is 12 threads per inch in BSW versus 13 tpi in UNC. By the end of the 19th century, differences in standards between companies were making trade increasingly difficult and strained. For instance, an iron and steel dealer recorded his displeasure in The Times: "Architects and engineers specify such unnecessarily diverse types of sectional material for given work that anything like economical and continuous manufacture becomes impossible.
In this country no two professional men are agreed upon the size and weight of a girder to employ for given work." The Engineering Standards Committee was established in London in 1901 as the world's first national standards body. It subsequently extended its standardization work and became the British Engineering Standards Association in 1918, adopting the name British Standards Institution in 1931 after receiving its Royal Charter in 1929. The national standards were adopted universally throughout the country and enabled the markets to act more rationally and efficiently, with an increased level of cooperation. After the First World War, similar national bodies were established in other countries; the Deutsches Institut für Normung was set up in Germany in 1917, followed by its counterparts, the American National Standards Institute and the French Commission Permanente de Standardisation.
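The Whitworth thread proportions quoted earlier (depth 0.640327p and radius 0.137329p, where p is the pitch) can be computed for any thread count. A minimal sketch, using the ½ in BSW bolt at 12 threads per inch from the text as the example:

```python
# British Standard Whitworth thread geometry: depth and crest/root radius
# are fixed fractions of the pitch p (the standard's 55-degree thread form).

def whitworth_dimensions(threads_per_inch: float) -> dict:
    p = 1.0 / threads_per_inch          # pitch in inches
    return {
        "pitch_in": p,
        "depth_in": 0.640327 * p,       # thread depth = 0.640327 p
        "radius_in": 0.137329 * p,      # radius = 0.137329 p
    }

# Example: the 1/2 in BSW bolt uses 12 threads per inch.
dims = whitworth_dimensions(12)
print(f"depth = {dims['depth_in']:.5f} in, radius = {dims['radius_in']:.5f} in")
```

This fixed-ratio definition is exactly what made the standard reproducible: any workshop with a pitch chart could cut interchangeable threads.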
Deutsches Institut für Normung
Deutsches Institut für Normung e. V. (DIN) is the German ISO member body. DIN is a German registered association headquartered in Berlin. There are around thirty thousand DIN Standards, covering nearly every field of technology. Founded in 1917 as the Normenausschuß der deutschen Industrie (NADI), it was renamed Deutscher Normenausschuß in 1926 to reflect that the organization now dealt with standardization issues in many fields. In 1975 it was renamed again to Deutsches Institut für Normung, or 'DIN', and is recognized by the German government as the official national standards body, representing German interests at the international and European levels. The acronym 'DIN' is often incorrectly expanded as Deutsche Industrienorm. This is due to the historic origin of the DIN as the NADI, which indeed published its standards as DI-Norm; for example, the first published standard was 'DI-Norm 1' in 1918. Many people still mistakenly associate DIN with the old DI-Norm naming convention. One of the earliest and best-known standards is DIN 476, the standard that introduced the A-series paper sizes in 1922, adopted in 1975 as International Standard ISO 216.
Common examples in modern technology include DIN and mini-DIN connectors for electronics and the DIN rail. The designation of a DIN standard shows its origin: DIN # is used for German standards with domestic significance or designed as a first step toward international status. E DIN # is a draft standard and DIN V # is a preliminary standard. DIN EN # is used for the German edition of European standards. DIN ISO # is used for the German edition of ISO standards. DIN EN ISO # is used if the standard has also been adopted as a European standard. Examples include:
DIN 476: international paper sizes
DIN 1451: typeface used by German railways and on traffic signs
DIN 31635: transliteration of the Arabic language
DIN 72552: electric terminal numbers in automobiles
Related bodies and topics: the Austrian Standards Institute; the Swiss Association for Standardization; Die Brücke, an earlier German institute aiming to set standard paper sizes; DIN film speed; the DIN connector; DQS (Deutsche Gesellschaft zur Zertifizierung von Managementsystemen), a subsidiary of DIN; and DGQ (Deutsche Gesellschaft für Qualität), which founded DQS in 1985 together with DIN.
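The A-series defined by DIN 476 / ISO 216 follows a simple rule: each size is obtained by halving the longer side of the previous one (truncating to whole millimetres), starting from A0 at 841 × 1189 mm, so every size keeps roughly the same 1:√2 aspect ratio. A short sketch:

```python
# ISO 216 / DIN 476 A-series: successive sizes are produced by halving the
# long side of the previous size, truncating to whole millimetres.

def a_series(n: int) -> tuple[int, int]:
    """Return (width_mm, height_mm) of paper size A<n>."""
    width, height = 841, 1189          # A0, approximately one square metre
    for _ in range(n):
        width, height = height // 2, width   # halve the long side, rotate
    return width, height

for i in range(5):
    w, h = a_series(i)
    print(f"A{i}: {w} x {h} mm")
# A4 comes out as the familiar 210 x 297 mm
```

The √2 ratio is what makes the series self-similar: cutting any sheet in half across its long side yields two sheets of the next size with the same proportions.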
In electronics, a wafer is a thin slice of semiconductor, such as crystalline silicon, used for the fabrication of integrated circuits and, in photovoltaics, to manufacture solar cells. The wafer serves as the substrate for microelectronic devices built in and upon the wafer; it undergoes many microfabrication processes, such as doping, ion implantation, thin-film deposition of various materials, and photolithographic patterning. Finally, the individual microcircuits are separated by wafer dicing and packaged as integrated circuits. By 1960, silicon wafers were being manufactured in the U.S. by companies such as MEMC/SunEdison. In 1965, American engineers Eric O. Ernst, Donald J. Hurd and Gerard Seeley, while working under IBM, filed Patent US3423629A for the first high-capacity epitaxial apparatus. Wafers are formed of highly pure, nearly defect-free single-crystalline material. One process for forming crystalline wafers is known as Czochralski growth, invented by the Polish chemist Jan Czochralski. In this process, a cylindrical ingot of high-purity monocrystalline semiconductor, such as silicon or germanium, called a boule, is formed by pulling a seed crystal from a 'melt'.
Dopant impurity atoms, such as boron or phosphorus in the case of silicon, can be added to the molten intrinsic material in precise amounts in order to dope the crystal, thus changing it into p-type (boron) or n-type (phosphorus) extrinsic semiconductor. The boule is then sliced with a wafer saw and polished to form wafers. The size of wafers for photovoltaics is 100–200 mm square and the thickness is 200–300 μm; in the future, 160 μm is expected to become the standard. Electronics use wafer sizes from 100 mm to 450 mm in diameter; the largest wafers made are not yet in general use. Wafers are cleaned with weak acids to remove unwanted particles or to repair damage caused during the sawing process. When used for solar cells, the wafers are textured to create a rough surface to increase their efficiency, and the phosphosilicate glass (PSG) generated during processing is removed from the edge of the wafer in an etching step. Silicon wafers are available in a variety of diameters from 25.4 mm (1 inch) to 300 mm. Semiconductor fabrication plants, colloquially known as fabs, are defined by the diameter of wafers that they are tooled to produce.
The diameter has increased over time to improve throughput and reduce cost, with the current state-of-the-art fab using 300 mm and a proposal to adopt 450 mm. Intel, TSMC and Samsung have separately conducted research toward the advent of 450 mm "prototype" fabs, though serious hurdles remain. Typical diameters and thicknesses are:
1-inch
2-inch, thickness 275 µm
3-inch, thickness 375 µm
4-inch, thickness 525 µm
125 mm (4.9 inch), thickness 625 µm
150 mm, thickness 675 µm
200 mm, thickness 725 µm
300 mm, thickness 775 µm
450 mm, thickness 925 µm
675 mm, thickness unknown
Wafers grown using materials other than silicon will have different thicknesses than a silicon wafer of the same diameter; wafer thickness is determined by the mechanical strength of the material used. A unit wafer fabrication step, such as an etch step, can produce more chips in proportion to the increase in wafer area, while the cost of the unit fabrication step goes up more slowly than the wafer area; this was the cost basis for increasing wafer size.
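The area argument above can be made concrete with a common gross die-per-wafer approximation, DPW ≈ π·d²/(4S) − π·d/√(2S), whose second term corrects for partial dies lost around the wafer edge. The 100 mm² die size below is an assumed illustrative value, not a figure from the text:

```python
import math

# Approximate gross dies per wafer: the first term is wafer area / die area,
# the second subtracts the partial dies lost around the circular edge.

def dies_per_wafer(diameter_mm: float, die_area_mm2: float) -> int:
    d, s = diameter_mm, die_area_mm2
    return math.floor(math.pi * d * d / (4 * s) - math.pi * d / math.sqrt(2 * s))

# Assumed 100 mm^2 die: a 300 mm wafer yields well over twice as many dies
# as a 200 mm wafer, even though its area is only 2.25x larger, because
# relative edge loss shrinks as the wafer grows.
print(dies_per_wafer(200, 100))  # 269
print(dies_per_wafer(300, 100))  # 640
```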
Conversion to 300 mm wafers from 200 mm wafers began in earnest in 2000 and reduced the price per die by about 30–40%. There is considerable resistance to the 450 mm transition despite the possible productivity improvement, because of concern about insufficient return on investment: higher-cost semiconductor fabrication equipment for larger wafers increases the cost of 450 mm fabs. Lithographer Chris Mack claimed in 2012 that the overall price per die for 450 mm wafers would be reduced by only 10–20% compared to 300 mm wafers, because over 50% of total wafer processing costs are lithography-related. Converting to larger 450 mm wafers would reduce price per die only for process operations, such as etch, where cost is related to wafer count rather than wafer area; cost for processes such as lithography is proportional to wafer area, so larger wafers would not reduce the lithography contribution to die cost. Nikon planned to deliver 450 mm lithography equipment in 2015, with volume production in 2017. In November 2013 ASML paused development of 450 mm lithography equipment, citing uncertain timing of chipmaker demand.
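Mack's argument can be sketched numerically: if a fraction f of wafer cost scales with area (giving no per-die saving on a bigger wafer) while the remainder scales with wafer count (per-die cost divided by the area ratio), the per-die saving shrinks quickly as f grows. The 50% lithography share comes from the text; everything else here is a toy model, not Mack's actual calculation:

```python
# Toy model of per-die cost savings from moving to a larger wafer.
# area_ratio: wafer area increase, e.g. (450 mm / 300 mm)^2 = 2.25.
# f_area:     fraction of wafer cost that scales with area (lithography-like);
#             that portion gives no per-die saving on a larger wafer.

def die_cost_reduction(f_area: float, area_ratio: float = 2.25) -> float:
    new_relative_cost = f_area + (1.0 - f_area) / area_ratio
    return 1.0 - new_relative_cost

print(f"{die_cost_reduction(0.0):.0%}")  # all costs per-wafer: ~56% saving
print(f"{die_cost_reduction(0.5):.0%}")  # half of cost is area-scaled: ~28%
```

Even this crude model shows why a lithography-heavy cost structure blunts the benefit of larger wafers.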
The timeline for 450 mm has not been fixed. Mark Durcan, CEO of Micron Technology, said in February 2014 that he expects 450 mm adoption to be delayed indefinitely or discontinued: “I am not convinced that 450mm will happen but, to the extent that it does, it’s a long way out in the future. There is not a lot of necessity for Micron, at least over the next five years, to be spending a lot of money on 450mm. There is a lot of investment, and the value at the end of the day – so that customers would buy that equipment – I think is dubious.” As of March 2014, Intel Corporation expected 450 mm deployment by 2020. Mark LaPedus of semiengineering.com reported in mid-2014 that chipmakers had delayed adoption of 450 mm “for the foreseeable future.” According to this report, some observers expected 2018 to 2020, while G. Dan Hutcheson, chief executive of VLSI Research, didn’t see 450 mm fabs moving into production until 2020 to 2025.
Measurement is the assignment of a number to a characteristic of an object or event, which can be compared with other objects or events. The scope and application of measurement are dependent on the discipline. In the natural sciences and engineering, measurements do not apply to nominal properties of objects or events, consistent with the guidelines of the International Vocabulary of Metrology published by the International Bureau of Weights and Measures. However, in other fields, such as statistics and the social and behavioral sciences, measurements can have multiple levels, including nominal, ordinal, interval and ratio scales. Measurement is a cornerstone of trade, science and quantitative research in many disciplines. Historically, many measurement systems existed for the varied fields of human existence to facilitate comparisons in these fields; these were achieved by local agreements between trading partners or collaborators. Since the 18th century, developments have progressed towards unifying, widely accepted standards that resulted in the modern International System of Units (SI).
This system reduces all physical measurements to a mathematical combination of seven base units. The science of measurement is pursued in the field of metrology. The measurement of a property may be categorized by the following criteria: type, magnitude, unit and uncertainty; these enable unambiguous comparisons between measurements. The level of measurement is a taxonomy for the methodological character of a comparison: for example, two states of a property may be compared by difference or by ordinal preference. The type is often not explicitly expressed, but implicit in the definition of a measurement procedure. The magnitude is the numerical value of the characterization, obtained with a suitably chosen measuring instrument. A unit assigns a mathematical weighting factor to the magnitude, derived as a ratio to the property of an artifact used as a standard or to a natural physical quantity. An uncertainty represents the random and systemic errors of the measurement procedure; errors are evaluated by methodically repeating measurements and by considering the accuracy and precision of the measuring instrument.
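The evaluation of errors by repeated measurement can be illustrated with a minimal sketch: report the mean of the readings as the magnitude and the standard error of the mean as the uncertainty. The sample readings below are invented for illustration:

```python
import math
import statistics

# Repeated readings of the same quantity (invented example data, in metres).
readings = [10.1, 9.9, 10.0, 10.2, 9.8]

magnitude = statistics.mean(readings)            # best estimate of the value
spread = statistics.stdev(readings)              # spread of individual readings
uncertainty = spread / math.sqrt(len(readings))  # standard error of the mean

print(f"{magnitude:.2f} ± {uncertainty:.2f} m")  # 10.00 ± 0.07 m
```

Note that this captures only the random component of the error; systematic errors of the instrument must be assessed separately, e.g. by calibration against a standard.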
Measurements most commonly use the International System of Units (SI) as a comparison framework. The system defines seven base units: the metre, kilogram, second, ampere, kelvin, mole and candela. Six of these units are defined without reference to a particular physical object serving as a standard, while the kilogram is still embodied in an artifact which rests at the headquarters of the International Bureau of Weights and Measures in Sèvres near Paris. Artifact-free definitions fix measurements at an exact value related to a physical constant or other invariable phenomenon in nature, in contrast to standard artifacts, which are subject to deterioration or destruction; instead, the measurement unit can only change through increased accuracy in determining the value of the constant it is tied to. The first proposal to tie an SI base unit to an experimental standard independent of fiat was by Charles Sanders Peirce, who proposed to define the metre in terms of the wavelength of a spectral line. This directly influenced the Michelson–Morley experiment.
With the exception of a few fundamental quantum constants, units of measurement are derived from historical agreements. Nothing inherent in nature dictates that an inch has to be a certain length, nor that a mile is a better measure of distance than a kilometre. Over the course of human history, however, first for convenience and then for necessity, standards of measurement evolved so that communities would have certain common benchmarks. Laws regulating measurement were originally developed to prevent fraud in commerce. Units of measurement are now generally defined on a scientific basis, overseen by governmental or independent agencies, and established in international treaties, pre-eminent of which is the General Conference on Weights and Measures (CGPM), established in 1875 by the Metre Convention, overseeing the International System of Units and having custody of the International Prototype Kilogram. The metre, for example, was redefined in 1983 by the CGPM in terms of the speed of light, while in 1960 the international yard was defined by the governments of the United States, United Kingdom and South Africa as being exactly 0.9144 metres.
In the United States, the National Institute of Standards and Technology (NIST), a division of the United States Department of Commerce, regulates commercial measurements. In the United Kingdom, the role is performed by the National Physical Laboratory; in Australia, by the National Measurement Institute; in South Africa, by the Council for Scientific and Industrial Research; and in India, by the National Physical Laboratory of India. Before SI units were widely adopted around the world, the British systems of English units and later imperial units were used in Britain, the Commonwealth and the United States. The system came to be known as U.S. customary units in the United States and is still in use there and in a few Caribbean countries. These various systems of measurement have at times been called foot-pound-second systems, after the Imperial units for length, weight and time, though the tons, hundredweights and nautical miles, for example, are different for the U.S. units. Many Imperial units remain in use in Britain, which has officially switched to the SI system, with a few exceptions such as road signs, which are still in miles.
Draught beer and cider must be sold by the imperial pint, and milk in returnable bottles can be sold by the imperial pint.
Zirconium is a chemical element with symbol Zr and atomic number 40. The name zirconium is taken from the name of the mineral zircon, the most important source of zirconium. It is a lustrous, grey-white, strong transition metal that resembles hafnium and, to a lesser extent, titanium. Zirconium is mainly used as a refractory and opacifier, although small amounts are used as an alloying agent for its strong resistance to corrosion. Zirconium forms a variety of inorganic and organometallic compounds such as zirconium dioxide and zirconocene dichloride, respectively. Five isotopes occur naturally, four of which are stable. Zirconium compounds have no known biological role. Zirconium is a lustrous, greyish-white, ductile, malleable metal, solid at room temperature, though it is hard and brittle at lesser purities. In powder form, zirconium is highly flammable, but the solid form is much less prone to ignition. Zirconium is highly resistant to corrosion by alkalis, salt water and other agents. However, it will dissolve in hydrochloric and sulfuric acid when fluorine is present.
Zirconium alloys with zinc are magnetic below 35 K. The melting point of zirconium is 1855 °C and the boiling point is 4371 °C. Zirconium has an electronegativity of 1.33 on the Pauling scale; of the elements within the d-block with known electronegativities, zirconium has the fifth lowest electronegativity, after hafnium, yttrium and actinium. At room temperature zirconium exhibits a hexagonally close-packed crystal structure, α-Zr, which changes to β-Zr, a body-centered cubic crystal structure, at 863 °C; zirconium exists in the β-phase up to the melting point. Naturally occurring zirconium is composed of five isotopes. 90Zr, 91Zr, 92Zr and 94Zr are stable, although 94Zr is predicted to undergo double beta decay with a half-life of more than 1.10×10¹⁷ years. 96Zr has a half-life of 2.4×10¹⁹ years and is the longest-lived radioisotope of zirconium. Of these natural isotopes, 90Zr is the most common, while 96Zr is the least common, comprising only 2.80% of zirconium. Twenty-eight artificial isotopes of zirconium have been synthesized, ranging in atomic mass from 78 to 110.
93Zr is the longest-lived artificial isotope, with a half-life of 1.53×10⁶ years. 110Zr, the heaviest isotope of zirconium, is the most radioactive, with an estimated half-life of 30 milliseconds. Radioactive isotopes at or above mass number 93 decay by electron emission, whereas those at or below 89 decay by positron emission; the only exception is 88Zr, which decays by electron capture. Five isotopes of zirconium also exist as metastable isomers: 83mZr, 85mZr, 89mZr, 90m1Zr, 90m2Zr and 91mZr. Of these, 90m2Zr has the shortest half-life, at 131 nanoseconds, while 89mZr is the longest-lived, with a half-life of 4.161 minutes. Zirconium has a concentration of about 130 mg/kg within the Earth's crust and about 0.026 μg/L in sea water. It is not found in nature as a native metal, reflecting its intrinsic instability with respect to water. The principal commercial source of zirconium is zircon, a silicate mineral found in Australia, India, South Africa and the United States, as well as in smaller deposits around the world. As of 2013, two-thirds of zircon mining occurs in Australia and South Africa.
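The half-lives listed above translate into remaining fractions of material via the standard decay law N(t)/N₀ = 2^(−t/T½). A quick sketch, using the 93Zr half-life from the text:

```python
# Radioactive decay: fraction of an isotope remaining after time t,
# given its half-life (both in the same units).

def fraction_remaining(t: float, half_life: float) -> float:
    return 2.0 ** (-t / half_life)

# 93Zr has a half-life of 1.53e6 years: exactly half remains after one
# half-life, and roughly 64% remains after one million years.
print(fraction_remaining(1.53e6, 1.53e6))  # 0.5
print(fraction_remaining(1.0e6, 1.53e6))
```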
Zircon resources exceed 60 million tonnes worldwide and annual worldwide zirconium production is approximately 900,000 tonnes. Zirconium occurs in more than 140 other minerals, including the commercially useful ores baddeleyite and kosnarite. Zirconium is relatively abundant in S-type stars, and it has been detected in the sun and in meteorites. Lunar rock samples brought back from several Apollo missions to the moon have a high zirconium oxide content relative to terrestrial rocks. Zirconium is also a by-product of the mining and processing of the titanium minerals ilmenite and rutile, as well as of tin mining. From 2003 to 2007, while prices for the mineral zircon increased from $360 to $840 per tonne, the price for unwrought zirconium metal decreased from $39,900 to $22,700 per ton. Zirconium metal is much more expensive than zircon. Collected from coastal waters, zircon-bearing sand is purified by spiral concentrators to remove lighter materials, which are then returned to the water because they are natural components of beach sand.
Using magnetic separation, the titanium ores ilmenite and rutile are removed. Most zircon is used directly in commercial applications, but a small percentage is converted to the metal. Most Zr metal is produced by the reduction of zirconium(IV) chloride with magnesium metal in the Kroll process; the resulting metal is sintered until sufficiently ductile for metalworking. Commercial zirconium metal contains 1–3% hafnium, which is usually not problematic because the chemical properties of hafnium and zirconium are very similar. However, their neutron-absorbing properties differ, necessitating the separation of hafnium from zirconium for nuclear reactors. Several separation schemes are in use. The liquid-liquid extraction of the thiocyanate-oxide derivatives exploits the fact that the hafnium derivative is more soluble in methyl isobutyl ketone than in water; this method is used mainly in the United States. Zr and Hf can also be separated by fractional crystallization of potassium hexafluorozirconate (K2ZrF6), which is less soluble in water than the analogous hafnium derivative.
Fractional distillation of the tetrachlorides, called extractive distillation, is used in Europe. The product of a quadruple VAM process, combined with hot extruding and different rolling applications