Bionics, or biologically inspired engineering, is the application of biological methods and systems found in nature to the study and design of engineering systems and modern technology. The word bionic was coined by Jack E. Steele in August 1958 as a portmanteau of biology and electronics. It was popularized by the 1970s U.S. television series The Six Million Dollar Man and The Bionic Woman, both based upon the novel Cyborg by Martin Caidin, itself influenced by Steele's work; all feature humans given superhuman powers by electromechanical implants. According to proponents of bionic technology, the transfer of technology between lifeforms and manufactured objects is desirable because evolutionary pressure forces living organisms, fauna and flora alike, to become optimized and efficient. A classic example is the development of dirt- and water-repellent paint from the observation that nothing sticks to the surface of the lotus plant. In many contexts the term "biomimetic" is preferred.
In that domain, biomimetic chemistry refers to reactions that, in nature, involve biological macromolecules whose chemistry can be replicated in vitro using much smaller molecules. Examples of bionics in engineering include boat hulls that imitate the thick skin of dolphins. In computer science, the study of bionics has produced artificial neurons, artificial neural networks, and swarm intelligence. Evolutionary computation was motivated by bionics ideas but took them further, simulating evolution in silico and producing well-optimized solutions that had never appeared in nature. Julian Vincent, professor of biomimetics at the University of Bath's Department of Mechanical Engineering, estimates that "at present there is only a 12% overlap between biology and technology in terms of the mechanisms used". The name biomimetics was coined by Otto Schmitt in the 1950s; the term bionics was coined by Jack E. Steele in August 1958 while working at the Aeronautics Division House at Wright-Patterson Air Force Base in Dayton, Ohio.
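The evolutionary computation mentioned above can be illustrated with a minimal sketch. This is not any particular published algorithm, just a toy (1+1)-style evolutionary loop on an assumed one-dimensional fitness function: a candidate is mutated, and the mutant is kept only if it is at least as fit.

```python
import random

def evolve(fitness, genome, mutate, generations=200):
    """Minimal evolutionary loop: keep a mutant only if it is at least as fit."""
    best = genome
    for _ in range(generations):
        candidate = mutate(best)
        if fitness(candidate) >= fitness(best):
            best = candidate
    return best

# Toy problem (illustrative): maximize f(x) = -(x - 3)^2, optimum at x = 3.
random.seed(0)
result = evolve(
    fitness=lambda x: -(x - 3.0) ** 2,
    genome=0.0,
    mutate=lambda x: x + random.gauss(0, 0.5),
)
print(round(result, 2))
```

Real evolutionary computation uses populations, crossover, and selection pressure rather than a single lineage, but the accept-if-better loop captures the core idea of optimization by variation and selection.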
However, terms like biomimicry or biomimetics are preferred in the technology world, in an effort to avoid confusion with the medical-device sense of bionics. Coincidentally, Martin Caidin used the word for his 1972 novel Cyborg, which inspired the series The Six Million Dollar Man; Caidin was a long-time aviation industry writer before turning to fiction full-time. The study of bionics emphasizes implementing a function found in nature rather than imitating biological structures. For example, in computer science, cybernetics tries to model the feedback and control mechanisms that are inherent in intelligent behavior, while artificial intelligence tries to model the intelligent function regardless of the particular way it is achieved. The conscious copying of examples and mechanisms from natural organisms and ecologies is a form of applied case-based reasoning, treating nature itself as a database of solutions that work. Proponents argue that the selective pressure placed on all natural life forms minimizes and removes failures.
Although all engineering could be said to be a form of biomimicry, the modern origins of this field are attributed to Buckminster Fuller, and its codification as a field of study to Janine Benyus. There are three biological levels in fauna or flora on which technology can be modeled: mimicking natural methods of manufacture; imitating mechanisms found in nature; and studying organizational principles from the social behaviour of organisms, such as the flocking behaviour of birds, the foraging optimization of ants and bees, and the swarm-intelligence-based behaviour of a school of fish. In robotics, bionics and biomimetics are used to apply the way animals move to the design of robots; BionicKangaroo, for example, was based on the movements and physiology of kangaroos. Velcro is the most famous example of biomimetics: in 1948, the Swiss engineer George de Mestral was cleaning his dog of burrs picked up on a walk when he realized how the hooks of the burrs clung to the fur. The horn-shaped, saw-tooth design for lumberjack blades, used at the turn of the 19th century to fell trees when it was still done by hand, was modeled after observations of a wood-burrowing beetle.
It revolutionized the industry. Cat's eye reflectors were invented by Percy Shaw in 1935 after studying the mechanism of cat eyes; he had found that cats have a system of reflecting cells, known as the tapetum lucidum, capable of reflecting the tiniest bit of light. Leonardo da Vinci's flying machines and ships are early examples of drawing from nature in engineering. Resilin is a replacement for rubber, created by studying the material found in arthropods. Julian Vincent drew from the study of pinecones when, in 2004, he developed "smart" clothing that adapts to changing temperatures. "I wanted a nonliving system which would respond to changes in moisture by changing shape", he said. "There are several such systems in plants, but most are small — the pinecone is the largest and therefore the easiest to work on". Pinecones respond to higher humidity by opening their scales; the "smart" fabric does the same thing, opening up when the wearer is warm and sweating, and shutting tight when cold. "Morphing aircraft wings" that change shape according to the speed and duration of flight were designed in 2004 by biomimetic scientists from Penn State University.
The morphing wings were inspired by different bird species that have differently shaped wings according to the speed at which they fly.
Silicones, also known as polysiloxanes, are polymers that include any synthetic compound made up of repeating units of siloxane, a chain of alternating silicon and oxygen atoms, combined with carbon, hydrogen, and sometimes other elements. They are typically heat-resistant and either liquid or rubber-like, and are used in sealants, lubricants, cooking utensils, and thermal and electrical insulation. Some common forms include silicone oil, silicone grease, silicone rubber, silicone resin, and silicone caulk. More precisely called polymerized siloxanes or polysiloxanes, silicones consist of an inorganic silicon–oxygen backbone chain with organic side groups attached to the silicon atoms; these silicon atoms are tetravalent. Silicones are thus polymers constructed from inorganic–organic monomers. Silicones in general have the chemical formula [R2SiO]n, where R is an organic group such as an alkyl or phenyl group. In some cases, organic side groups can be used to link two or more of these -Si-O- backbones together. By varying the -Si-O- chain lengths, side groups, and crosslinking, silicones can be synthesized with a wide variety of properties and compositions.
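The repeat-unit formula [R2SiO]n makes chain mass easy to estimate. The sketch below computes the approximate molar mass of a polydimethylsiloxane (PDMS) chain, where R is a methyl group; the atomic masses are standard values, and end groups are deliberately ignored as a simplifying assumption.

```python
# Approximate molar mass of a linear PDMS chain [Si(CH3)2O]n,
# ignoring end groups; atomic masses in g/mol are standard values.
ATOMIC_MASS = {"Si": 28.086, "O": 15.999, "C": 12.011, "H": 1.008}

def pdms_molar_mass(n):
    """Mass (g/mol) of n repeat units Si(CH3)2O."""
    repeat = (ATOMIC_MASS["Si"] + ATOMIC_MASS["O"]
              + 2 * (ATOMIC_MASS["C"] + 3 * ATOMIC_MASS["H"]))
    return n * repeat

print(round(pdms_molar_mass(1), 2))     # one repeat unit, roughly 74 g/mol
print(round(pdms_molar_mass(1000), 0))  # a long silicone-oil chain
```

Varying n in this way is the "chain length" knob mentioned in the text; side groups and crosslinks are the other two.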
They can vary in consistency from liquid to gel to rubber to hard plastic. The most common siloxane is a silicone oil; the second largest group of silicone materials is based on silicone resins, which are formed by branched and cage-like oligosiloxanes. F. S. Kipping coined the word silicone in 1901 to describe polydiphenylsiloxane by analogy of its formula, Ph2SiO, with the formula of the ketone benzophenone, Ph2CO. Kipping was well aware that polydiphenylsiloxane is polymeric whereas benzophenone is monomeric, and noted that Ph2SiO and Ph2CO have quite different chemistry. The discovery of the structural differences between Kipping's molecules and the ketones means that silicone is no longer the correct term, and that siloxane is correct according to the nomenclature of modern chemistry. Silicone is often confused with silicon, but they are distinct substances. Silicon is a chemical element, a hard dark-grey semiconducting metalloid which in its crystalline form is used to make integrated circuits and solar cells.
Silicones are compounds that contain silicon along with hydrogen and other kinds of atoms as well, and have very different physical and chemical properties. Compounds containing silicon–oxygen double bonds, now called silanones but which could deserve the name "silicone", have long been identified as intermediates in gas-phase processes such as chemical vapor deposition in microelectronics production, and in the formation of ceramics by combustion; however, they have a strong tendency to polymerize into siloxanes. The first stable silanone was obtained in 2014. Most common are materials based on polydimethylsiloxane, which is derived by hydrolysis of dimethyldichlorosilane. This dichloride reacts with water as follows:

n Si(CH3)2Cl2 + n H2O → [Si(CH3)2O]n + 2n HCl

The polymerization produces linear chains capped with Si-Cl or Si-OH groups; under different conditions the polymer is cyclic rather than a chain. For consumer applications such as caulks, silyl acetates are used instead of silyl chlorides; the hydrolysis of the acetates produces the less dangerous acetic acid as the reaction product of a much slower curing process.
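The hydrolysis stoichiometry above can be turned into a quick mass balance showing why the chloride route is harsh: a substantial fraction of the input mass leaves as hydrogen chloride. The numbers below are just standard atomic masses; the calculation is illustrative, not process data.

```python
# Mass balance for n Si(CH3)2Cl2 + n H2O -> [Si(CH3)2O]n + 2n HCl:
# how much HCl is released per kilogram of dimethyldichlorosilane.
M = {"Si": 28.086, "C": 12.011, "H": 1.008, "O": 15.999, "Cl": 35.453}

m_monomer = M["Si"] + 2 * (M["C"] + 3 * M["H"]) + 2 * M["Cl"]  # Si(CH3)2Cl2
m_hcl = M["H"] + M["Cl"]

# Each mole of monomer liberates two moles of HCl on full hydrolysis.
hcl_per_kg = 2 * m_hcl / m_monomer
print(f"{hcl_per_kg:.3f} kg HCl per kg Si(CH3)2Cl2")  # roughly 0.57 kg
```

Roughly half a kilogram of corrosive HCl per kilogram of monomer helps explain why the slower acetate cure, releasing acetic acid instead, is preferred for consumer caulks.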
This chemistry is used in many consumer applications, such as adhesives. Branches or cross-links in the polymer chain can be introduced by using organosilicone precursors with fewer alkyl groups, such as methyltrichlorosilane and methyltrimethoxysilane. Ideally, each molecule of such a compound becomes a branch point; this process can be used to produce hard silicone resins. Precursors with three methyl groups can be used to limit molecular weight, since each such molecule has only one reactive site and so forms the end of a siloxane chain. When silicone is burned in air or oxygen, it forms solid silica as a white powder, along with various gases; the dispersed powder is sometimes called silica fume. Silicones exhibit many useful characteristics, including: low thermal conductivity; low chemical reactivity; low toxicity; thermal stability; the ability to repel water and form watertight seals; poor adhesion to many substrates but good adhesion to others, e.g. glass; no support of microbiological growth; and resistance to oxygen and ultraviolet light, a property that has led to widespread use of silicones in the construction and automotive industries. Because silicone can be formulated to be electrically insulative or conductive, it is suitable for a wide range of electrical applications. Silicone rubber also has high gas permeability: at room temperature, its permeability for gases such as oxygen is roughly 400 times that of butyl rubber, making silicone useful for medical applications in which increased aeration is desired; conversely, silicone rubbers cannot be used where gas-tight seals are required. Silicone can be developed into rubber sheeting, where it has other properties, such as being FDA compliant; this extends the uses of silicone sheeting to industries that demand hygiene, for example food and beverage and pharmaceutical. Silicones are used in many products; Ullmann's Encyclopedia of Industrial Chemistry lists major categories of application including electrical and electronics uses.
Fluorocarbons, sometimes referred to as perfluorocarbons or PFCs, are, strictly speaking, organofluorine compounds with the formula CxFy, i.e. they contain only carbon and fluorine, though the terminology is not strictly followed. Compounds with the prefix perfluoro- are hydrocarbons, including those with heteroatoms, wherein all C-H bonds have been replaced by C-F bonds. Fluorocarbons can be divided into perfluoroalkanes, perfluoroalkenes and perfluoroalkynes, and perfluoroaromatic compounds. Fluorocarbons and their derivatives are used as fluoropolymers, refrigerants and anesthetics. Perfluoroalkanes are very stable because of the strength of the carbon–fluorine bond, one of the strongest in organic chemistry.
Its strength is a result of the electronegativity of fluorine imparting partial ionic character through partial charges on the carbon and fluorine atoms, which shorten and strengthen the bond through favorable covalent interactions. Additionally, multiple carbon–fluorine bonds increase the strength and stability of other nearby carbon–fluorine bonds on the same geminal carbon, as the carbon gains a higher positive partial charge. Furthermore, multiple carbon–fluorine bonds strengthen the "skeletal" carbon–carbon bonds through the inductive effect. Saturated fluorocarbons are therefore more chemically and thermally stable than their corresponding hydrocarbon counterparts, and indeed than almost any other organic compound, though they remain susceptible to attack by strong reductants, e.g. in the Birch reduction, and by specialized organometallic complexes. Fluorocarbons have high density, up to over twice that of water, and are miscible with some hydrocarbons. They have low solubility in water, water has low solubility in them, and they have low refractive indices.
As the high electronegativity of fluorine reduces the polarizability of the atom, fluorocarbons are only weakly susceptible to the fleeting dipoles that form the basis of the London dispersion force. As a result, fluorocarbons have low intermolecular attractive forces and are lipophobic in addition to being hydrophobic and non-polar. Reflecting these weak intermolecular forces, the compounds exhibit low viscosities compared to liquids of similar boiling points, as well as low surface tension and low heats of vaporization. The low attractive forces in fluorocarbon liquids also make them compressible and able to dissolve gases well. Smaller fluorocarbons are very volatile. There are five perfluoroalkane gases: tetrafluoromethane, hexafluoroethane, octafluoropropane, perfluoro-n-butane and perfluoro-iso-butane; nearly all other fluoroalkanes are liquids. Fluorocarbons have low surface energies and high dielectric strengths. In the 1960s there was considerable interest in perfluoroalkanes as anesthetics. The research did not come to anything, but much effort was expended on the vital issue of flammability, and it showed that the tested fluorocarbons were not flammable in air in any proportion, though most are flammable in neat oxygen and neat nitrous oxide.
In 1993, 3M considered fluorocarbons as fire extinguishants to replace CFCs. The extinguishing effect has been attributed to their high heat capacity, which takes heat away from the fire, and it has been suggested that an atmosphere containing a significant percentage of perfluorocarbons on a space station or similar would prevent fires altogether. When combustion does occur, however, toxic fumes result, including carbonyl fluoride, carbon monoxide, and hydrogen fluoride. Perfluorocarbons dissolve high volumes of gases; the high solubility of gases is attributed to the weak intermolecular interactions in these fluorocarbon fluids. The table shows values for the mole fraction, x1, of nitrogen dissolved, calculated from the Ostwald coefficient, at 298.15 K and 0.101325 MPa. The development of the fluorocarbon industry coincided with World War II. Prior to that, fluorocarbons were prepared by reaction of fluorine with the hydrocarbon, i.e. direct fluorination. Because C-C bonds are cleaved by fluorine, direct fluorination mainly affords smaller perfluorocarbons, such as tetrafluoromethane and octafluoropropane.
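The conversion from an Ostwald coefficient to the mole fraction mentioned above can be sketched as follows. The Ostwald coefficient L is the ratio of dissolved gas volume to liquid volume; treating the gas as ideal gives a dissolved concentration c = L·p/(RT), and multiplying by the liquid's molar volume gives the mole fraction in the dilute limit. The coefficient and molar volume below are illustrative assumptions, not values from the table referenced in the text.

```python
# Estimate the mole fraction x1 of nitrogen dissolved in a perfluorocarbon
# from an Ostwald coefficient L (dimensionless, V_gas / V_liquid).
R = 8.314          # J/(mol*K)
T = 298.15         # K
p = 101_325.0      # Pa (0.101325 MPa, as in the text)
L = 0.4            # assumed Ostwald coefficient for N2 (illustrative)
V_m = 2.0e-4       # assumed molar volume of the liquid, m^3/mol (illustrative)

c = L * p / (R * T)           # dissolved gas concentration, mol/m^3
x1 = c * V_m / (1 + c * V_m)  # mole fraction, dilute-solution limit
print(f"x1 = {x1:.4f}")
```

The result, a few parts per thousand, is high for a gas in a liquid and reflects the weak intermolecular interactions the text describes.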
A major breakthrough that allowed the large-scale manufacture of fluorocarbons was the Fowler process, in which cobalt trifluoride is used as the source of fluorine. Illustrative is the synthesis of perfluorohexane:

C6H14 + 28 CoF3 → C6F14 + 14 HF + 28 CoF2

The resulting cobalt difluoride is then regenerated, sometimes in a separate reactor:

2 CoF2 + F2 → 2 CoF3

Industrially, both steps are combined, for example in the manufacture of the Flutec range of fluorocarbons by F2 Chemicals Ltd, using a vertical stirred-bed reactor with hydrocarbon introduced at the bottom and fluorine introduced halfway up the reactor; the fluorocarbon vapor is recovered from the top. Electrochemical fluorination (ECF) involves electrolysis of a substrate dissolved in hydrogen fluoride; as fluorine is itself manufactured by the electrolysis of hydrogen fluoride, ECF is a rather more direct route to fluorocarbons.
Dew is water in the form of droplets that appears on thin, exposed objects in the morning or evening due to condensation. As the exposed surface cools by radiating its heat, atmospheric moisture condenses at a rate greater than that at which it can evaporate, resulting in the formation of water droplets. When temperatures are low enough, dew takes the form of ice. Because dew is related to the temperature of surfaces, in late summer it forms most easily on surfaces that are not warmed by conducted heat from deep ground, such as grass, railings, car roofs, and bridges. Dew should not be confused with guttation, the process by which plants release excess water from the tips of their leaves. Water vapour will condense into droplets depending on the temperature; the temperature at which droplets form is called the dew point. When the surface temperature drops, eventually reaching the dew point, atmospheric water vapor condenses to form small droplets on the surface. This process distinguishes dew from hydrometeors such as fog or clouds, which form directly in air that has cooled to its dew point.
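The dew point defined above can be estimated numerically. A common approximation is the Magnus formula; the constants used here are one widely used parameterization, and the accuracy claim is approximate.

```python
import math

def dew_point_celsius(temp_c, rel_humidity_pct):
    """Approximate dew point via the Magnus formula.

    The constants a, b are a common parameterization, giving roughly
    a few tenths of a degree of accuracy over ordinary ambient conditions.
    """
    a, b = 17.62, 243.12
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# On a 20 C evening at 50% relative humidity, any surface cooling below
# about 9.3 C will start to collect dew.
print(round(dew_point_celsius(20.0, 50.0), 1))  # 9.3
```

At 100% relative humidity the formula returns the air temperature itself, as it should: the air is already saturated.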
The thermodynamic principles of formation, however, are the same. Dew is usually formed at night. Adequate cooling of the surface takes place when it loses more energy by infrared radiation than it receives as solar radiation from the sun, which is the case on clear nights. Poor thermal conductivity restricts the replacement of such losses from deeper ground layers, which are typically warmer at night. Preferred objects of dew formation are thus poor conductors of heat or well isolated from the ground, and non-metallic, since shiny metal-coated surfaces are poor infrared radiators. Preferred weather conditions include the absence of clouds and little water vapor in the higher atmosphere to minimize greenhouse effects, together with sufficient humidity of the air near the ground. Typical dew nights are classically considered calm, because wind transports warmer air from higher levels down to the cold surface. However, if the atmosphere is the major source of moisture, a certain amount of ventilation is needed to replace the vapor that has already condensed; the highest optimum wind speeds are found on arid islands.
If the wet soil beneath is the major source of vapor, however, wind always seems adverse. The processes of dew formation are not restricted to the outdoors; they are also at work when eyeglasses get steamy in a warm, wet room, and in industrial processes. However, the term condensation is preferred in these cases. A classical device for dew measurement is the drosometer: a small artificial condenser surface is suspended from an arm attached to a pointer or a pen that records the weight changes of the condenser on a drum. Besides being wind sensitive, this, like all artificial-surface devices, only provides a measure of the meteorological potential for dew formation; the actual amount of dew in a specific place depends strongly on surface properties. To measure it, leaves or whole soil columns are placed on a balance with their surface at the same height and in the same surroundings as they would naturally occur, thus providing a small lysimeter. Further methods include estimation by comparing the droplets to standardized photographs, or volumetric measurement of the amount of water wiped from the surface.
Some of these methods also include guttation. Due to its dependence on the radiation balance, dew amounts can reach a theoretical maximum of about 0.8 mm per night. In most climates of the world, the annual average is too small to compete with rain. In regions with considerable dry seasons, adapted plants like lichens or pine seedlings benefit from dew. Large-scale natural irrigation without rainfall, such as in the Atacama and Namib deserts, is, however, mostly attributed to fog water. In the Negev Desert in Israel, dew has been found to account for half of the water found in three dominant desert species: Salsola inermis, Artemisia sieberi and Haloxylon scoparium. Another effect of dew is its hydration of fungal substrates and the mycelia of species such as pleated inkcaps on lawns, and of Phytophthora infestans, which causes blight on potato plants. The book De Mundo described it thus: "Dew is moisture minute in composition falling from a clear sky." In Greek mythology, Ersa is the goddess of dew. Dew, known in Hebrew as טל, is significant in the Jewish religion for agricultural and theological purposes.
On the first day of Passover, the chazan, dressed in a white kittel, leads a service in which he prays for dew between that point and Sukkot. During the rainy season between December and Passover, additions are made in the Amidah for blessed dew to come together with rain. There are many midrashim on the subject. In the Biblical Old Testament, dew is used symbolically in Deuteronomy 32:2: "My doctrine shall drop as the rain, my speech shall distill as the dew, as the small rain upon the tender herb, as the showers upon the grass." Several man-made devices, such as antique large stone piles in Ukraine, medieval "dew ponds" in southern England, and volcanic stone covers on the fields of Lanzarote, have been thought to be dew-catching devices, but could be shown to work on other principles. At present, the International Organisation for Dew Utilization is working on effe
An oil is any nonpolar chemical substance that is a viscous liquid at ambient temperatures and is both hydrophobic and lipophilic. Oils have a high carbon and hydrogen content and are flammable and surface active. The general definition of oil includes classes of chemical compounds that may be otherwise unrelated in structure and uses. Oils may be animal, vegetable, or petrochemical in origin, and may be volatile or non-volatile. They are used for food, medical purposes, and the manufacture of many types of paints and other materials. Specially prepared oils are used in some religious rituals as purifying agents. First attested in English in 1176, the word oil comes from Old French oile, from Latin oleum, which in turn comes from the Greek ἔλαιον, "olive oil, oil", and that from ἐλαία, "olive tree", "olive fruit". The earliest attested forms of the word are the Mycenaean Greek e-ra-wo and e-rai-wo, written in the Linear B syllabic script. Organic oils are produced in remarkable diversity by plants and other organisms through natural metabolic processes.
Lipid is the scientific term for the fatty acids and similar chemicals found in the oils produced by living things, while oil refers to an overall mixture of chemicals. Organic oils may contain chemicals other than lipids, including proteins and alkaloids. Lipids can be classified by the way that they are made by an organism, by their chemical structure, and by their limited solubility in water compared to oils; they have a high carbon and hydrogen content and are lacking in oxygen compared to other organic compounds and minerals. Crude oil, or petroleum, and its refined components, collectively termed petrochemicals, are crucial resources in the modern economy. Crude oil originates from ancient fossilized organic materials, such as zooplankton and algae, which geochemical processes convert into oil. The name "mineral oil" is a misnomer, in that minerals are not the source of the oil; ancient plants and animals are. Mineral oil is organic; however, it is classified as "mineral oil" instead of as "organic oil" because its organic origin is remote, and because it is obtained in the vicinity of rocks, underground traps, and sands.
Mineral oil also refers to several specific distillates of crude oil. Several edible vegetable and animal oils, and also fats, are used for various purposes in cooking and food preparation. In particular, many foods are fried in oil much hotter than boiling water. Oils are also used for flavoring and for modifying the texture of foods. Cooking oils are derived either from animal fat, as butter and other types, or from plant oils from the olive, maize and many other species. Oils are applied to hair to give it a lustrous look, to prevent tangles and roughness, and to stabilize the hair to promote growth; see hair conditioner. Oil has been used throughout history as a religious medium; it is considered a spiritually purifying agent and is used for anointing purposes. As a particular example, holy anointing oil has been an important ritual liquid for Judaism and Christianity. Color pigments are easily suspended in oil, making it suitable as a supporting medium for paints; the oldest known extant oil paintings date from 650 AD. Oils are also used as electrical insulators, for instance in electric transformers.
Heat-transfer oils are used both as coolants and for heating, as well as in other applications of heat transfer. Given that they are non-polar, oils do not easily adhere to other substances; this makes them useful as lubricants for various engineering purposes. Mineral oils are more commonly used as machine lubricants than biological oils are. Whale oil was preferred for lubricating clocks because it does not evaporate and leave dust, although its use was banned in the USA in 1980. It is a long-running myth that spermaceti from whales is still used in NASA projects such as the Hubble Telescope and the Voyager probe because of its low freezing temperature; spermaceti is not strictly an oil but a mixture of wax esters, and there is no evidence that NASA has used whale oil. Some oils burn in liquid or aerosol form, generating light and heat which can be used directly or converted into other forms of energy such as electricity or mechanical work. To obtain many fuel oils, crude oil is pumped from the ground and is shipped via oil tanker or a pipeline to an oil refinery.
There, it is converted from crude oil into diesel fuel, fuel oils, jet fuel, kerosene, and liquefied petroleum gas. A 42-US-gallon barrel of crude oil produces roughly 10 US gallons of diesel, 4 US gallons of jet fuel, 19 US gallons of gasoline, 7 US gallons of other products, 3 US gallons split between heavy fuel oil and liquefied petroleum gases, and 2 US gallons of heating oil; refining a barrel of crude into these various products results in an increase in total volume, to 45 US gallons. Not all oils used as fuels are mineral oils; see biodiesel and vegetable oil fuel. In the 18th and 19th cent
In statistical mechanics, entropy is an extensive property of a thermodynamic system. It is related to the number Ω of microscopic configurations (microstates) that are consistent with the macroscopic quantities that characterize the system. Under the assumption that each microstate is equally probable, the entropy S is the natural logarithm of the number of microstates, multiplied by the Boltzmann constant kB. Formally,

S = kB ln Ω

Macroscopic systems typically have a very large number Ω of possible microscopic configurations. For example, the entropy of an ideal gas is proportional to the number of gas molecules N; twenty liters of gas at room temperature and atmospheric pressure has N ≈ 6×10^23. At equilibrium, each of the Ω ≈ e^N configurations can be regarded as random and equally likely. The second law of thermodynamics states that the entropy of an isolated system never decreases; such systems spontaneously evolve towards the state with maximum entropy. Non-isolated systems may lose entropy, provided their environment's entropy increases by at least that amount, so that the total entropy increases.
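The formula S = kB ln Ω can be evaluated directly for the example in the text. Since Ω ≈ e^N is astronomically large, the trick is to work with ln Ω rather than Ω itself, which would overflow any floating-point representation.

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact by SI definition)

def boltzmann_entropy(ln_omega):
    """S = k_B * ln(Omega), taking ln(Omega) directly since Omega overflows."""
    return K_B * ln_omega

# For the gas in the text: Omega ~ e^N with N ~ 6e23, so ln(Omega) ~ N.
N = 6e23
S = boltzmann_entropy(N)
print(f"S = {S:.2f} J/K")  # about 8.28 J/K
```

The tiny constant kB multiplied by an enormous ln Ω yields an everyday-scale entropy of a few joules per kelvin, which is why entropy appears as an ordinary macroscopic quantity despite its microscopic origin.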
Entropy is a function of the state of the system, so the change in entropy of a system is determined by its initial and final states. In the idealization that a process is reversible, the entropy does not change, while irreversible processes always increase the total entropy. Because it is determined by the number of random microstates, entropy is related to the amount of additional information needed to specify the exact physical state of a system, given its macroscopic specification. For this reason, it is often said that entropy is an expression of the disorder, or randomness, of a system, or of the lack of information about it; the concept of entropy plays a central role in information theory. Boltzmann's constant, and therefore entropy, have dimensions of energy divided by temperature, with the unit joules per kelvin (J/K) in the International System of Units. The entropy of a substance is usually given as an intensive property: either entropy per unit mass or entropy per unit amount of substance. The French mathematician Lazare Carnot proposed in his 1803 paper Fundamental Principles of Equilibrium and Movement that in any machine the accelerations and shocks of the moving parts represent losses of moment of activity.
In other words, in any natural process there exists an inherent tendency towards the dissipation of useful energy. Building on this work, in 1824 Lazare's son Sadi Carnot published Reflections on the Motive Power of Fire, which posited that in all heat-engines, whenever "caloric" falls through a temperature difference, work or motive power can be produced from the actions of its fall from a hot to a cold body; he made the analogy with the way water falls in a waterwheel. This was an early insight into the second law of thermodynamics. Carnot based his views of heat on the early 18th-century "Newtonian hypothesis" that both heat and light were types of indestructible forms of matter, which are attracted and repelled by other matter, and on the contemporary views of Count Rumford, who showed that heat could be created by friction, as when cannon bores are machined. Carnot reasoned that if the body of the working substance, such as a body of steam, is returned to its original state at the end of a complete engine cycle, then "no change occurs in the condition of the working body".
The first law of thermodynamics, deduced from the heat-friction experiments of James Joule in 1843, expresses the concept of energy and its conservation in all processes. In the 1850s and 1860s, the German physicist Rudolf Clausius objected to the supposition that no change occurs in the working body, and gave this "change" a mathematical interpretation by questioning the nature of the inherent loss of usable heat when work is done, e.g. heat produced by friction. Clausius described entropy as the transformation-content, i.e. dissipative energy use, of a thermodynamic system or working body of chemical species during a change of state. This was in contrast to earlier views, based on the theories of Isaac Newton, that heat was an indestructible particle that had mass. Later, scientists such as Ludwig Boltzmann, Josiah Willard Gibbs, and James Clerk Maxwell gave entropy a statistical basis. In 1877, Boltzmann visualized a probabilistic way to measure the entropy of an ensemble of ideal gas particles, in which he defined entropy to be proportional to the natural logarithm of the number of microstates such a gas could occupy.
Henceforth, the essential problem in statistical thermodynamics has been, according to Erwin Schrödinger, to determine the distribution of a given amount of energy E over N identical systems. Carathéodory linked entropy with a mathematical definition of irreversibility, in terms of trajectories and integrability. There are two related definitions of entropy: the thermodynamic definition and the statistical-mechanics definition; the classical thermodynamic definition was developed first. In the classical thermodynamics viewpoint, the system is composed of very large numbers of constituents, and the state of the system is described by the average thermodynamic properties of those constituents.
A goniometer is an instrument that either measures an angle or allows an object to be rotated to a precise angular position. The term goniometry derives from two Greek words: gōnia, meaning angle, and metron, meaning measure. The first description of a goniometer, derived from the astrolabe, was given in 1538 by Gemma Frisius. Prior to the invention of the theodolite, the goniometer was used in surveying; the application of triangulation to geodesy was described in the second edition of Cosmograficus liber by Petrus Apianus, in a 16-page appendix by Frisius entitled Libellus de locorum describendorum ratione. The Bellini–Tosi direction finder was a type of radio direction finder used from World War I to World War II. It used the signals from two crossed antennas, or four individual antennas simulating two crossed ones, to re-create the radio signal in a small area between two loops of wire; the operator could measure the angle to the target radio source by performing direction finding within this small area.
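The geometric core of crossed-loop direction finding can be sketched simply. A north-south loop responds roughly as the cosine of the bearing and an east-west loop as the sine, so combining the two induced voltages recovers the angle; this is an idealized model, not a description of the actual Bellini–Tosi hardware.

```python
import math

def bearing_from_loops(v_ns, v_ew):
    """Bearing (degrees, 0-360) of a radio source from the induced voltages
    of two crossed loop antennas: the N-S loop responds as cos(theta),
    the E-W loop as sin(theta). Simple loops leave a 180-degree ambiguity,
    ignored here for clarity."""
    return math.degrees(math.atan2(v_ew, v_ns)) % 360.0

# A source at a 30-degree bearing induces voltages proportional to
# cos(30 deg) and sin(30 deg) in the two loops.
theta = math.radians(30.0)
print(round(bearing_from_loops(math.cos(theta), math.sin(theta)), 1))  # 30.0
```

In the real instrument the same combination was performed physically: the two loop signals re-created the field inside the radiogoniometer, and an operator rotated a small search coil to find the null.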
The advantage of the Bellini–Tosi system is that the antennas do not move, allowing them to be built at any required size. The basic technique remains in use. Goniometers are used for military and civil purposes; for example, the French warship Dupuy de Lôme uses multiple goniometers to intercept satellite and naval communications. In crystallography, goniometers are used for measuring angles between crystal faces; they are also used in X-ray diffraction to rotate the samples. The groundbreaking investigations of the physicist Max von Laue and colleagues into the atomic structure of crystals in 1912 involved a goniometer. Goniophotometers measure the spatial distribution of light visible to the human eye at a specific angular position. A goniometer is used to document initial and subsequent range of motion at visits for occupational injuries, and by disability evaluators to determine a permanent disability; this is done to evaluate progress and for medico-legal purposes. It is also a tool to evaluate Waddell's signs. In physical therapy, occupational therapy, and athletic training, a goniometer is an instrument that measures the range of motion of joint angles of the body.
This measurement instrument is a helpful clinical tool that allows for objective measurements to track progress in a rehabilitation program. When a patient has a decreased range of motion, a therapist will assess the joint before performing an intervention and will continue to use the tool to make sure that progress is being made. These range-of-motion measurements can be taken at any joint, and they require some knowledge of the body's bony anatomical landmarks. For example, when measuring the knee joint, the axis is placed on the lateral epicondyle of the femur, while the stationary arm is lined up with the greater trochanter of the femur; the moveable arm of the goniometer is lined up with the lateral malleolus of the fibula, and a measurement is taken using the degree scale on the circular portion of the tool. The main limitation of goniometers is the accuracy of the reading: issues with intra-measure and inter-tester reliability seem to increase as the experience of the examiner decreases.
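The landmark-based procedure above amounts to measuring the angle between two lines that meet at the joint axis. A minimal sketch, assuming 2-D landmark coordinates (e.g. digitized from a photograph; the function name is ours):

```python
import math

def joint_angle_deg(axis, proximal, distal):
    """Angle in degrees at `axis` between the lines toward the proximal
    and distal landmarks, mimicking a goniometer's two arms.
    Each argument is an (x, y) coordinate pair."""
    v1 = (proximal[0] - axis[0], proximal[1] - axis[1])
    v2 = (distal[0] - axis[0], distal[1] - axis[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_theta = dot / (math.hypot(*v1) * math.hypot(*v2))
    # Clamp to guard against floating-point values just outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))
```

For the knee example, with the axis at the lateral epicondyle, a fully straightened leg places the greater trochanter and lateral malleolus landmarks roughly collinear with the axis, giving an angle near 180 degrees.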
Some studies suggest that these errors can be anywhere between 5 and 10 degrees when completing repeated measures. Goniometers come in different forms that some would argue increase the reliability of the tool. The universal standard goniometer is a metal tool with 1-degree increments; its arms are no longer than 12 inches, so it can be hard to pinpoint the exact landmark needed for measurement. A more reliable alternative is the telescopic-armed goniometer, which has a plastic circular axis like a classic goniometer but whose arms extend out to as long as two feet in either direction. More recently, in the twenty-first century, smartphone application developers have created mobile applications intended to perform like a goniometer. These applications use the accelerometers in phones to calculate the angles of the joints measured, and a substantial body of research supports them as reliable and valid tools with accuracy comparable to a universal goniometer.
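The accelerometer approach these apps rely on can be sketched as follows, under the assumption that the phone is held still so the sensor reads the gravity vector alone (the function name is ours):

```python
import math

def tilt_angle_deg(ax: float, ay: float, az: float) -> float:
    """Inclination of the device's z-axis relative to gravity, in degrees,
    computed from a static accelerometer reading (any consistent units).
    Valid only when the device is not accelerating, so that the sensor
    measures the gravity vector alone."""
    return math.degrees(math.atan2(math.hypot(ax, ay), az))
```

Held flat against a limb segment, two such readings taken before and after a movement give the change in segment inclination, from which a joint's range of motion can be derived.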
Modern rehabilitative therapy motion capture systems perform goniometry, at minimum measuring active range of motion. While in some cases their accuracy may be inferior to that of a goniometer, measuring angles with a motion capture system is superior in providing measurement during dynamic, as opposed to static, situations. Furthermore, in the clinical context, performing manual measurements with a traditional goniometer takes valuable time and may not be practical. In surface science, an instrument called a contact angle goniometer or tensiometer is used to measure the static contact angle, advancing and receding contact angles, and in some cases surface tension. The first contact angle goniometer was designed by Dr. William Zisman of the United States Naval Research Laboratory in Washington, D.C., and manufactured by ramé-hart, New Jersey, USA. The original manual contact angle goniometer used a microscope eyepiece; today's contact angle goniometers use a camera and software to capture and analyze the drop shape, and are better suited for dynamic and advanced studies.
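For small sessile drops, the simplest drop-shape model treats the drop as a spherical cap, and the classic half-angle relation then gives the contact angle from just the drop height and contact-line radius. This is a sketch of that one approximation, not the full profile-fitting analysis commercial software performs; it is valid only when the drop is small enough that gravity does not flatten it.

```python
import math

def contact_angle_deg(height: float, base_radius: float) -> float:
    """Static contact angle (degrees) of a sessile drop from its apex
    height and contact-line radius, assuming a spherical-cap profile:
    theta = 2 * atan(h / r)."""
    return 2.0 * math.degrees(math.atan(height / base_radius))
```

A hemispherical drop, whose height equals its base radius, gives exactly 90 degrees; flatter drops give smaller, more wetting, angles.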
Contact angle goniometers can determine the surface tension for any liquid in gas or the interfacial tension between any two liquids. If the difference in densities between the two fluid