1.
Density
–
The density, or more precisely the volumetric mass density, of a substance is its mass per unit volume. The symbol most often used for density is ρ (the lower case Greek letter rho), although the Latin letter D can also be used. Mathematically, density is defined as mass divided by volume: ρ = m/V, where ρ is the density, m is the mass, and V is the volume. In some cases, density is loosely defined as weight per unit volume. For a pure substance the density has the same numerical value as its mass concentration. Different materials usually have different densities, and density may be relevant to buoyancy and purity; osmium and iridium are the densest known elements at standard conditions for temperature and pressure, but certain chemical compounds may be denser. A relative density less than one thus means that the substance floats in water.

The density of a material varies with temperature and pressure. This variation is typically small for solids and liquids but much greater for gases. Increasing the pressure on an object decreases its volume and thus increases its density; increasing the temperature of a substance generally decreases its density by increasing its volume. In most materials, heating the bottom of a fluid results in convection of the heat from the bottom to the top, because the heated material expands and becomes less dense, causing it to rise relative to denser unheated material. The reciprocal of the density of a substance is occasionally called its specific volume, a term sometimes used in thermodynamics. Density is an intensive property, in that increasing the amount of a substance does not increase its density.

Archimedes knew that an irregularly shaped wreath could, in principle, be crushed into a cube whose volume could be calculated easily and compared with its mass. Upon this discovery, he reportedly leapt from his bath and ran naked through the streets shouting "Eureka!" As a result, the term "eureka" entered common parlance and is used today to indicate a moment of enlightenment. The story first appeared in written form in Vitruvius's books of architecture, two centuries after it supposedly took place. Some scholars have doubted the accuracy of this tale, saying among other things that the method would have required precise measurements that would have been difficult to make at the time.

From the equation for density, mass density has units of mass divided by volume. As there are units of mass and volume covering many different magnitudes, there are a large number of units for mass density in use. The SI unit of kilogram per cubic metre (kg/m3) and the cgs unit of gram per cubic centimetre (g/cm3) are probably the most commonly used units for density; 1,000 kg/m3 equals 1 g/cm3. In industry, other larger or smaller units of mass and/or volume are often more practical.
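A minimal numeric sketch (not part of the original article) of the defining relation ρ = m/V and the kg/m3 to g/cm3 equivalence; the mass and volume values are hypothetical, chosen for illustration:

    # Density sketch: rho = m / V, plus a unit-conversion check.
    def density(mass_kg, volume_m3):
        """Volumetric mass density in kg/m3."""
        return mass_kg / volume_m3

    rho = density(mass_kg=1.0, volume_m3=0.001)  # e.g. 1 kg of water in 1 litre
    print(rho)           # 1000.0 kg/m3
    print(rho / 1000.0)  # 1.0 g/cm3, since 1,000 kg/m3 = 1 g/cm3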
2.
Liquid
–
A liquid is a nearly incompressible fluid that conforms to the shape of its container but retains a nearly constant volume independent of pressure. As such, it is one of the four fundamental states of matter, the others being solid, gas and plasma. A liquid is made up of tiny vibrating particles of matter, such as atoms or molecules. Water is, by far, the most common liquid on Earth. Like a gas, a liquid is able to flow and take the shape of a container; most liquids resist compression, although others can be compressed. Unlike a gas, a liquid does not disperse to fill every space of a container. A distinctive property of the liquid state is surface tension, leading to wetting phenomena. The density of a liquid is usually close to that of a solid; therefore, liquid and solid are both termed condensed matter. On the other hand, as liquids and gases share the ability to flow, they are both called fluids. Although liquid water is abundant on Earth, this state of matter is actually the least common in the known universe, because liquids require a relatively narrow temperature/pressure range to exist. Most known matter in the universe is in gaseous form (as interstellar clouds) or in plasma form (within stars).

Unlike in a solid, the molecules in a liquid have a much greater freedom to move. The forces that bind the molecules together in a solid are only temporary in a liquid, allowing a liquid to flow while a solid remains rigid. A liquid, like a gas, displays the properties of a fluid: it can flow and assume the shape of a container, and, if placed in a sealed bag, it can be squeezed into any shape. These properties make a liquid suitable for applications such as hydraulics. Liquid particles are bound firmly but not rigidly; they are able to move around one another freely, resulting in a limited degree of particle mobility. As the temperature increases, the increased vibrations of the molecules cause the distances between the molecules to increase. When a liquid reaches its boiling point, the cohesive forces that bind the molecules closely together break. If the temperature is decreased, the distances between the molecules become smaller. Only two elements are liquid at standard conditions for temperature and pressure: mercury and bromine. Four more elements have melting points slightly above room temperature: francium, caesium, gallium and rubidium. Metal alloys that are liquid at room temperature include NaK, a sodium-potassium alloy; galinstan, a fusible liquid alloy; and some amalgams.
3.
Gas
–
Gas is one of the four fundamental states of matter. A pure gas may be made up of individual atoms (e.g., a noble gas), elemental molecules made from one type of atom, or compound molecules made from a variety of atoms. A gas mixture would contain a variety of pure gases, much like the air. What distinguishes a gas from liquids and solids is the vast separation of the individual gas particles. This separation usually makes a colorless gas invisible to the human observer. The interaction of gas particles in the presence of electric and gravitational fields is considered negligible, as indicated by the constant velocity vectors in the image.

One type of commonly known gas is steam. The gaseous state of matter is found between the liquid and plasma states, the latter of which provides the upper temperature boundary for gases. Bounding the lower end of the temperature scale lie degenerate quantum gases, which are gaining increasing attention; high-density atomic gases supercooled to incredibly low temperatures are classified by their statistical behavior as either a Bose gas or a Fermi gas. For a comprehensive listing of these states of matter see the list of states of matter. The only chemical elements that are stable multi-atom homonuclear molecules at standard temperature and pressure are hydrogen, nitrogen and oxygen. These gases, when grouped together with the monatomic noble gases, are called "elemental gases". Alternatively, they are known as molecular gases, to distinguish them from molecules that are also chemical compounds.

The word gas is a neologism first used by the early 17th-century Flemish chemist J. B. van Helmont; according to Paracelsus's terminology, chaos meant something like ultra-rarefied water. An alternative story is that Van Helmont's word is corrupted from gahst (or geist), signifying a ghost or spirit. The characteristic properties of a gas (pressure, volume, number of particles and temperature) were repeatedly observed by scientists such as Robert Boyle, Jacques Charles, John Dalton, Joseph Gay-Lussac and Amedeo Avogadro for a variety of gases in various settings. Their detailed studies ultimately led to a relationship among these properties expressed by the ideal gas law.

Gas particles are widely separated from one another, and consequently have weaker intermolecular bonds than liquids or solids. These intermolecular forces result from electrostatic interactions between gas particles. Like-charged areas of different gas particles repel, while oppositely charged regions of different gas particles attract one another; transient, randomly induced charges exist across non-polar covalent bonds of molecules, and the electrostatic interactions caused by them are referred to as Van der Waals forces. The interaction of these forces varies within a substance, which determines many of the physical properties unique to each gas. A comparison of boiling points for compounds formed by ionic and covalent bonds leads us to this conclusion. The drifting smoke particles in the image provide some insight into low-pressure gas behavior.
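Since the section ends at the ideal gas law, here is a hedged one-formula sketch of that law, pV = nRT; the quantities chosen (one mole at 273.15 K in 22.4 litres) are a standard textbook example, not from the source:

    # Ideal gas law sketch: p = n*R*T / V.
    R = 8.314  # J/(mol*K), molar gas constant

    def pressure_pa(n_mol, temp_k, volume_m3):
        return n_mol * R * temp_k / volume_m3

    # One mole at 273.15 K in 22.4 litres gives roughly atmospheric pressure.
    print(pressure_pa(1.0, 273.15, 0.0224))  # ~101,000 Pa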
4.
Frequency
–
Frequency is the number of occurrences of a repeating event per unit time. It is also referred to as temporal frequency, which emphasizes the contrast to spatial frequency. The period is the duration of time of one cycle in a repeating event; for example, if a newborn baby's heart beats at a frequency of 120 times a minute, its period (the time interval between beats) is half a second. Frequency is an important parameter used in science and engineering to specify the rate of oscillatory and vibratory phenomena, such as mechanical vibrations, audio signals, and radio waves.

For cyclical processes, such as rotation, oscillations, or waves, in physics and engineering disciplines such as optics, acoustics, and radio, frequency is usually denoted by a Latin letter f or by the Greek letter ν (nu). For a simple harmonic motion, the relation between the frequency and the period T is given by f = 1/T. The SI unit of frequency is the hertz (Hz), named after the German physicist Heinrich Hertz; a previous name for this unit was cycles per second. The SI unit for period is the second. A traditional unit of measure used with rotating mechanical devices is revolutions per minute, abbreviated r/min or rpm. As a matter of convenience, longer and slower waves, such as ocean surface waves, tend to be described by period, while short and fast waves, like audio and radio, are usually described by their frequency instead of period.

Spatial frequency is analogous to temporal frequency, but the time axis is replaced by one or more spatial displacement axes: for a wave y = sin(θ) = sin(kx), the derivative dθ/dx = k is the wavenumber; in the case of more than one spatial dimension, wavenumber is a vector quantity. For periodic waves in nondispersive media, frequency has an inverse relationship to the wavelength. Even in dispersive media, the frequency f of a wave is equal to the phase velocity v of the wave divided by the wavelength λ of the wave. In the special case of electromagnetic waves moving through a vacuum, v = c, where c is the speed of light in a vacuum, and this expression becomes f = c/λ. When waves from a monochromatic source travel from one medium to another, their frequency remains the same; only their wavelength and speed change.

Frequency can be measured by counting events over a timing interval: for example, if 71 events occur within 15 seconds, the frequency is 71/15 ≈ 4.73 Hz. This method introduces a random error into the count of between zero and one count, so on average half a count. This is called gating error and causes an average error in the calculated frequency of Δf = 1/(2Tm), or a fractional error of Δf/f = 1/(2fTm), where Tm is the timing interval and f is the measured frequency. This error decreases with frequency, so it is a problem at low frequencies where the number of counts N is small. An older method of measuring the frequency of rotating or vibrating objects is to use a stroboscope.
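A short sketch of the counting method and gating error just described, using the 71-events-in-15-seconds example from the text (the arithmetic, not the code, is from the source):

    # Frequency sketch: f = events / interval, with gating error Delta_f = 1/(2*Tm).
    events, t_m = 71, 15.0
    f = events / t_m         # ~4.73 Hz
    df = 1.0 / (2.0 * t_m)   # absolute gating error, Hz
    print(f, df, df / f)     # fractional error = 1/(2*f*Tm)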
5.
Eigenvalues and eigenvectors
–
In linear algebra, an eigenvector or characteristic vector of a linear transformation is a non-zero vector whose direction does not change when that linear transformation is applied to it. This condition can be written as the equation T(v) = λv. There is a correspondence between n-by-n square matrices and linear transformations from an n-dimensional vector space to itself; for this reason, it is equivalent to define eigenvalues and eigenvectors using either the language of matrices or the language of linear transformations. Geometrically, an eigenvector corresponding to a real eigenvalue points in a direction that is stretched by the transformation. If the eigenvalue is negative, the direction is reversed. Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German word eigen for "proper", "inherent", "own", "individual", "special", "specific", "peculiar", or "characteristic".

In essence, an eigenvector v of a linear transformation T is a non-zero vector that does not change direction when T is applied to it. Applying T to the eigenvector only scales the eigenvector by the scalar value λ. This condition can be written as the equation T(v) = λv, referred to as the eigenvalue equation or eigenequation. In general, λ may be any scalar; for example, λ may be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero or complex.

The Mona Lisa example pictured at right provides a simple illustration: each point on the painting can be represented as a vector pointing from the center of the painting to that point. The linear transformation in this example is called a shear mapping; the vectors pointing to each point in the original image are tilted right or left and made longer or shorter by the transformation. Notice that points along the horizontal axis do not move at all when this transformation is applied. Therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction. Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either.

Linear transformations can take different forms, mapping vectors in a variety of vector spaces; alternatively, the transformation could take the form of an n-by-n matrix. For a matrix, eigenvalues and eigenvectors can be used to decompose the matrix. The set of all eigenvectors of T corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace or characteristic space of T. If the set of eigenvectors of T forms a basis of the domain of T, then T is diagonalizable. Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms; in the 18th century Euler studied the rotational motion of a rigid body and discovered the importance of the principal axes.
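A hedged sketch of the eigenvalue equation T(v) = λv using the shear mapping from the Mona Lisa example above; the half-unit shear factor is an arbitrary illustration value, and numpy is assumed to be available:

    # Shear-mapping sketch of T(v) = lambda*v.
    import numpy as np

    S = np.array([[1.0, 0.5],
                  [0.0, 1.0]])           # horizontal shear matrix
    eigenvalues, eigenvectors = np.linalg.eig(S)
    print(eigenvalues)                   # [1. 1.]: the repeated eigenvalue 1
    v = np.array([1.0, 0.0])             # a purely horizontal vector
    print(S @ v)                         # [1. 0.]: direction and length unchanged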
6.
Mass
–
In physics, mass is a property of a physical body. It is the measure of an object's resistance to acceleration when a net force is applied. It also determines the strength of its gravitational attraction to other bodies. The basic SI unit of mass is the kilogram (kg). Mass is not the same as weight, even though mass is often determined by measuring the object's weight using a spring scale, rather than comparing it directly with known masses. An object on the Moon would weigh less than it does on Earth because of the lower gravity; this is because weight is a force, while mass is the property that determines the strength of this force.

In Newtonian physics, mass can be generalized as the amount of matter in an object. However, at very high speeds, special relativity postulates that energy is an additional source of mass; thus, any body having mass has an equivalent amount of energy. In addition, matter is a loosely defined term in science.

There are several distinct phenomena which can be used to measure mass. Active gravitational mass measures the gravitational force exerted by an object. Passive gravitational mass measures the gravitational force exerted on an object in a known gravitational field. The inertial mass of an object determines its acceleration in the presence of an applied force: according to Newton's second law of motion, if a body of fixed mass m is subjected to a single force F, its acceleration a is given by a = F/m. A body's mass also determines the degree to which it generates or is affected by a gravitational field, and this is sometimes referred to as gravitational mass.

The standard International System of Units (SI) unit of mass is the kilogram. The kilogram is 1000 grams, first defined in 1795 as the mass of one cubic decimetre of water at the melting point of ice. Then in 1889, the kilogram was redefined as the mass of the international prototype kilogram. As of January 2013, there are proposals for redefining the kilogram yet again. In particle physics, mass often has units of eV/c2; the electronvolt and its multiples, such as the MeV, are commonly used there. The atomic mass unit is 1/12 of the mass of a carbon-12 atom and is convenient for expressing the masses of atoms and molecules. Outside the SI system, other units of mass include the slug, an Imperial unit of mass, and the pound, a unit of both mass and force, used mainly in the United States.
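An illustrative sketch (not from the source) of a = F/m and of the mass/weight distinction noted above; the 10 kg mass and 50 N force are hypothetical, while 9.81 and 1.62 m/s^2 are the standard surface gravities of Earth and the Moon:

    # Inertial mass vs. weight: a = F/m, weight = m*g.
    m = 10.0                    # kg, the same on Earth and on the Moon
    print(50.0 / m)             # a = F/m = 5.0 m/s^2
    print(m * 9.81, m * 1.62)   # weight in newtons on Earth vs. on the Moon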
7.
Volume
–
Volume is the quantity of three-dimensional space enclosed by a closed surface; for example, the space that a substance or shape occupies or contains. Volume is often quantified numerically using the SI derived unit, the cubic metre. Three-dimensional mathematical shapes are also assigned volumes. Volumes of some simple shapes, such as regular, straight-edged ones, can be easily calculated using arithmetic formulas. The volume of a complicated shape can be calculated by integral calculus if a formula exists for the shape's boundary. Where a variance in shape and volume occurs, such as those that exist between different human beings, these can be calculated using techniques such as the Body Volume Index. One-dimensional figures and two-dimensional shapes are assigned zero volume in three-dimensional space.

The volume of a solid can be determined by fluid displacement. Displacement of liquid can also be used to determine the volume of a gas. The combined volume of two substances is usually greater than the volume of one of the substances; however, sometimes one substance dissolves in the other and the combined volume is not additive. In differential geometry, volume is expressed by means of the volume form. In thermodynamics, volume is a fundamental parameter, and is a conjugate variable to pressure.

Any unit of length gives a corresponding unit of volume: the volume of a cube whose sides have the given length. For example, a cubic centimetre (cm3) is the volume of a cube whose sides are one centimetre in length. In the International System of Units (SI), the standard unit of volume is the cubic metre (m3). The metric system also includes the litre as a unit of volume: 1 litre = (10 cm)3 = 1000 cubic centimetres = 0.001 cubic metres, so 1 cubic metre = 1000 litres. Small amounts of liquid are often measured in millilitres, where 1 millilitre = 0.001 litres = 1 cubic centimetre.

Capacity is defined by the Oxford English Dictionary as "the measure applied to the content of a vessel, and to liquids, grain, or the like". Capacity is not identical in meaning to volume, though closely related. Units of capacity include the SI litre and its derived units, and Imperial units such as gill, pint, gallon, and others. Units of volume are the cubes of units of length. In SI the units of volume and capacity are closely related: one litre is exactly 1 cubic decimetre, the capacity of a cube with a 10 cm side. In other systems the conversion is not trivial; the capacity of a fuel tank is rarely stated in cubic feet, for example. The density of an object is defined as the ratio of the mass to the volume; the inverse of density is specific volume, which is defined as volume divided by mass. Specific volume is an important concept in thermodynamics, where the volume of a working fluid is often an important parameter of a system being studied.
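A minimal sketch (not from the source) of the litre conversions stated above, starting from the cube of a unit of length:

    # Unit sketch: a cube's volume is side**3, and 1 litre = (10 cm)**3.
    side_cm = 10.0
    volume_cm3 = side_cm ** 3    # 1000 cm3
    print(volume_cm3 / 1000.0)   # 1.0 litre (1000 cm3 = 1 dm3)
    print(volume_cm3 * 1e-6)     # 0.001 m3, so 1 m3 = 1000 litres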
8.
Oscillation
–
Oscillation is the repetitive variation, typically in time, of some measure about a central value or between two or more different states. The term vibration is used to describe mechanical oscillation. Familiar examples of oscillation include a swinging pendulum and alternating current power.

The simplest mechanical oscillating system is a weight attached to a linear spring, subject to only weight and tension. Such a system may be approximated on an air table or ice surface. The system is in an equilibrium state when the spring is static. If the system is displaced from the equilibrium, there is a net restoring force on the mass, tending to bring it back to equilibrium. However, in moving the mass back to the equilibrium position, it has acquired momentum which keeps it moving beyond that position. If a constant force such as gravity is added to the system, the point of equilibrium is shifted. The time taken for an oscillation to occur is often referred to as the oscillatory period.

All real-world oscillator systems are thermodynamically irreversible; this means there are dissipative processes, such as friction or electrical resistance, which continually convert some of the energy stored in the oscillator into heat in the environment. Thus, oscillations tend to decay with time unless there is some net source of energy into the system. The simplest description of this decay process can be illustrated by the oscillation decay of the harmonic oscillator. In addition, a system may be subject to some external force; in this case the oscillation is said to be driven. Some systems can be excited by energy transfer from the environment. This transfer typically occurs where systems are embedded in some fluid flow; for example, in aerodynamic flutter, at sufficiently large displacements the stiffness of the wing dominates to provide the restoring force that enables an oscillation.

The harmonic oscillator and the systems it models have a single degree of freedom. More complicated systems have more degrees of freedom, for example two masses and three springs. In such cases, the behavior of each variable influences that of the others, and this leads to a coupling of the oscillations of the individual degrees of freedom. For example, two pendulum clocks mounted on a common wall will tend to synchronise. This phenomenon was first observed by Christiaan Huygens in 1665. More special cases are the coupled oscillators, where energy alternates between two forms of oscillation.
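A hedged sketch of the decay just described, using the standard damped-oscillation envelope x(t) = A·exp(-γt)·cos(ωt); the amplitude, damping rate and angular frequency values are hypothetical illustration choices:

    # Damped oscillation: the envelope A*exp(-gamma*t) shrinks toward equilibrium.
    import math

    A, gamma, omega = 1.0, 0.1, 2.0 * math.pi

    def x(t):
        return A * math.exp(-gamma * t) * math.cos(omega * t)

    for t in (0.0, 1.0, 5.0, 10.0):
        print(t, round(x(t), 4))   # displacement decays with time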
9.
Voltage
–
Voltage, electric potential difference, electric pressure or electric tension is the difference in electric potential energy between two points per unit electric charge. The voltage between two points is equal to the work done per unit of charge against an electric field to move a test charge between the two points. It is measured in units of volts. Voltage can be caused by static electric fields, by electric current through a magnetic field, by time-varying magnetic fields, or some combination of these three. A voltmeter can be used to measure the voltage between two points in a system; often a reference potential such as the ground of the system is used as one of the points. A voltage may represent either a source of energy or lost, used, or stored energy.

Given two points in space, xA and xB, voltage is the difference in electric potential between those two points. Electric potential must be distinguished from electric potential energy by noting that the potential is a per-unit-charge quantity. Like mechanical potential energy, the zero of electric potential can be chosen at any point, so the difference in potential, i.e. the voltage, is the quantity which is physically meaningful. The voltage from point A to point B is equal to the work which would have to be done, per unit charge, against or by the electric field to move the charge from A to B. The voltage between the two ends of a path is the energy required to move a small electric charge along that path, divided by the magnitude of the charge. Mathematically this is expressed as the line integral of the electric field along that path. In the general case, both a static electric field and a dynamic electromagnetic field must be included in determining the voltage between two points.

Historically this quantity has also been called "tension" and "pressure". Pressure is now obsolete, but tension is still used, for example within the phrase "high tension", which is commonly used in thermionic valve based electronics. Voltage is defined so that negatively charged objects are pulled towards higher voltages; therefore, the conventional current in a wire or resistor always flows from higher voltage to lower voltage. Current can flow from lower voltage to higher voltage, but only when a source of energy is present to push it against the electric field. This is the case within any electric power source: for example, inside a battery, chemical reactions provide the energy needed for ion current to flow from the negative to the positive terminal.

The electric field is not the only factor determining charge flow in a material, and the electric potential of a material is not even a well-defined quantity, since it varies on the subatomic scale. A more convenient definition of voltage can be found instead in the concept of Fermi level; in this case the voltage between two bodies is the thermodynamic work required to move a unit of charge between them.
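A minimal sketch (not from the source) of the per-unit-charge definition V = W/q; the joule and coulomb values are hypothetical:

    # Voltage sketch: V = W/q, work done per unit charge.
    def volts(work_j, charge_c):
        return work_j / charge_c

    print(volts(9.0, 1.0))       # 9.0 V: 9 J moves 1 C between the points
    print(volts(0.009, 0.001))   # 9.0 V again: a per-unit-charge quantity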
10.
Calibrating
–
Calibration, in measurement technology and metrology, is the comparison of measurement values delivered by a device under test with those of a calibration standard of known accuracy. Such a standard could be another measurement device of known accuracy. Strictly, the term calibration means just the act of comparison, and does not include any subsequent adjustment. The calibration standard is normally traceable to a national standard held by a National Metrology Institute. This definition states that the calibration process is purely a comparison, but introduces the concept of measurement uncertainty in relating the accuracies of the device under test and the standard.

The increasing need for known accuracy and uncertainty, and the need to have consistent and comparable standards internationally, has led to the establishment of national metrology institutes. In many countries a National Metrology Institute (NMI) will exist which will maintain primary standards of measurement which will be used to provide traceability to customers' instruments by calibration. The NMI supports the metrological infrastructure in that country by establishing an unbroken chain of calibrations from the top-level standards. Examples of National Metrology Institutes are NPL in the UK, NIST in the United States, PTB in Germany and many others. Calibration may be done by national standards laboratories operated by the government or by private firms offering metrology services. Quality management systems call for an effective metrology system which includes formal, periodic, and documented calibration of all measuring instruments; the ISO 9000 and ISO 17025 standards require that these actions be performed to a high level. To communicate the quality of a calibration, the calibration value is often accompanied by a traceable uncertainty statement to a stated confidence level. This is evaluated through careful uncertainty analysis. Sometimes a DFS (departure from specification) is required to operate machinery in a degraded state; whenever this does happen, it must be in writing and authorized by a manager with the assistance of a calibration technician.

Measuring devices and instruments are categorized according to the physical quantities they are designed to measure. These categorizations vary internationally, e.g. NIST 150-2G in the U.S., and the standard instrument for each test device varies accordingly, e.g. a dead weight tester for pressure gauge calibration and a dry block temperature tester for temperature gauge calibration.

It is a common perception of the instrument's end-user that calibration adjusts the instrument to read exactly true; however, very few instruments can be adjusted to exactly match the standards they are compared to. For the vast majority of calibrations, the process is actually the comparison of an unknown to a known. The calibration process begins with the design of the instrument that needs to be calibrated. The design has to be able to hold a calibration through its calibration interval; in other words, the design has to be capable of measurements that are within engineering tolerance when used within the stated environmental conditions over some reasonable period of time. Having a design with these characteristics increases the likelihood of the measuring instruments performing as expected.
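As a hedged illustration of "comparison of an unknown to a known", here is a sketch of a two-point comparison against a reference standard that yields a linear correction; all readings and reference values are hypothetical, and real calibration procedures also quantify uncertainty:

    # Two-point calibration sketch: map raw readings onto reference values.
    def two_point_cal(ref_lo, ref_hi, meas_lo, meas_hi):
        """Return (slope, offset) mapping measured values to reference values."""
        slope = (ref_hi - ref_lo) / (meas_hi - meas_lo)
        offset = ref_lo - slope * meas_lo
        return slope, offset

    slope, offset = two_point_cal(0.0, 100.0, 0.4, 99.2)
    print(slope * 50.0 + offset)   # corrected value for a raw reading of 50.0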
11.
Atmosphere of Earth
–
The atmosphere of Earth is the layer of gases, commonly known as air, that surrounds the planet Earth and is retained by Earth's gravity. The atmosphere of Earth protects life on Earth by absorbing solar radiation and warming the surface through heat retention. By volume, dry air contains 78.09% nitrogen, 20.95% oxygen, 0.93% argon, 0.04% carbon dioxide, and small amounts of other gases. Air also contains a variable amount of water vapor, on average around 1% at sea level. The atmosphere has a mass of about 5.15×10^18 kg. The atmosphere becomes thinner and thinner with increasing altitude, with no definite boundary between the atmosphere and outer space. The Kármán line, at 100 km, or 1.57% of Earth's radius, is used as the border between the atmosphere and outer space. Atmospheric effects become noticeable during atmospheric reentry of spacecraft at an altitude of around 120 km. Several layers can be distinguished in the atmosphere, based on characteristics such as temperature and composition. The study of Earth's atmosphere and its processes is called atmospheric science; early pioneers in the field include Léon Teisserenc de Bort and Richard Assmann.

The three major constituents of air, and therefore of Earth's atmosphere, are nitrogen, oxygen, and argon. Water vapor accounts for roughly 0.25% of the atmosphere by mass. The remaining gases are often referred to as trace gases, among which are the greenhouse gases, principally carbon dioxide, methane and nitrous oxide. Filtered air includes trace amounts of other chemical compounds. Various industrial pollutants also may be present as gases or aerosols, such as chlorine and fluorine compounds; sulfur compounds such as hydrogen sulfide and sulfur dioxide may be derived from natural sources or from industrial air pollution.

In general, air pressure and density decrease with altitude in the atmosphere. However, temperature has a more complicated profile with altitude, and may remain relatively constant or even increase with altitude in some regions. In this way, Earth's atmosphere can be divided into five main layers; excluding the exosphere, Earth has four primary layers, which are the troposphere, stratosphere, mesosphere, and thermosphere. The exosphere extends from the exobase, which is located at the top of the thermosphere at an altitude of about 700 km above sea level, to about 10,000 km, where it merges into the solar wind. This layer is composed of extremely low densities of hydrogen, helium and several heavier molecules including nitrogen and oxygen. The atoms and molecules are so far apart that they can travel hundreds of kilometers without colliding with one another; thus, the exosphere no longer behaves like a gas, and the particles constantly escape into space. These free-moving particles follow ballistic trajectories and may migrate in and out of the magnetosphere or the solar wind. The exosphere is located too far above Earth for any meteorological phenomena to be possible.
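A hedged sketch of the pressure fall-off with altitude using the isothermal barometric formula p = p0·exp(-Mgh/RT); the real atmosphere is not isothermal, so this simplified model and its 288.15 K temperature are assumptions for illustration only:

    # Barometric formula sketch: pressure decreases roughly exponentially with altitude.
    import math

    p0, M, g, R, T = 101325.0, 0.02896, 9.81, 8.314, 288.15  # SI units

    def pressure_pa(h_m):
        return p0 * math.exp(-M * g * h_m / (R * T))

    for h in (0, 5000, 11000, 100000):   # 100 km is roughly the Karman line
        print(h, round(pressure_pa(h)))  # pressure in Pa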
12.
Water
–
Water is a transparent and nearly colorless chemical substance that is the main constituent of Earth's streams, lakes, and oceans, and the fluids of most living organisms. Its chemical formula is H2O, meaning that its molecule contains one oxygen and two hydrogen atoms. Water strictly refers to the liquid state of that substance, which prevails at standard ambient temperature and pressure, but it often refers also to its solid state (ice) or its gaseous state (water vapor). It also occurs in nature as snow, glaciers, ice packs and icebergs, clouds, fog, dew, and aquifers.

Water covers 71% of the Earth's surface. It is vital for all known forms of life. Only 2.5% of this water is freshwater, and 98.8% of that water is in ice and groundwater. Less than 0.3% of all freshwater is in rivers, lakes, and the atmosphere; a greater quantity of water is found in the earth's interior. Water on Earth moves continually through the cycle of evaporation and transpiration, condensation, and precipitation. Evaporation and transpiration contribute to the precipitation over land. Large amounts of water are also chemically combined or adsorbed in hydrated minerals.

Safe drinking water is essential to humans and other lifeforms, even though it provides no calories or organic nutrients. There is a correlation between access to safe water and gross domestic product per capita. However, some observers have estimated that by 2025 more than half of the world population will be facing water-based vulnerability. A report, issued in November 2009, suggests that by 2030, in some developing regions of the world, water demand will exceed supply.

Water plays an important role in the world economy. Approximately 70% of the freshwater used by humans goes to agriculture. Fishing in salt and fresh water bodies is a major source of food for many parts of the world. Much of long-distance trade of commodities and manufactured products is transported by boats through seas, rivers, and lakes. Large quantities of water, ice, and steam are used for cooling and heating, in industry and homes. Water is an excellent solvent for a variety of chemical substances; as such it is widely used in industrial processes. Water is also central to many sports and other forms of entertainment, such as swimming, pleasure boating, boat racing, surfing, and sport fishing. Water is a liquid at the temperatures and pressures that are most adequate for life: specifically, at an atmospheric pressure of 1 bar, water is a liquid between the temperatures of 273.15 K and 373.15 K.
13.
Viscosity
–
The viscosity of a fluid is a measure of its resistance to gradual deformation by shear stress or tensile stress. For liquids, it corresponds to the informal concept of "thickness"; for example, honey has a higher viscosity than water. Viscosity is the property of a fluid which opposes the relative motion between two surfaces of the fluid that are moving at different velocities. For a given velocity pattern, the stress required is proportional to the fluid's viscosity. A fluid that has no resistance to shear stress is known as an ideal or inviscid fluid. Zero viscosity is observed only at very low temperatures in superfluids; otherwise, all fluids have positive viscosity and are said to be viscous or viscid. A fluid with a relatively high viscosity, such as pitch, may appear to be a solid. The word viscosity is derived from the Latin viscum, meaning mistletoe.

The dynamic viscosity of a fluid expresses its resistance to shearing flows, where adjacent layers move parallel to each other with different speeds. It can be defined through the idealized situation known as a Couette flow, in which a fluid is trapped between a fixed bottom plate and a top plate moving at constant speed u; the fluid has to be homogeneous in the layer and at different shear stresses. If the speed of the top plate is small enough, the fluid particles will move parallel to it, and their speed will vary linearly from zero at the bottom to u at the top. Each layer of fluid will move faster than the one just below it. In particular, the fluid will apply on the top plate a force in the direction opposite to its motion, and an equal but opposite one to the bottom plate. An external force is therefore required in order to keep the top plate moving at constant speed. The magnitude F of this force is found to be proportional to the speed u and the area A of each plate, and inversely proportional to their separation y: F = μA·u/y. The proportionality factor μ in this formula is the viscosity of the fluid. The ratio u/y is called the rate of shear deformation or shear velocity, and is the derivative of the fluid speed in the direction perpendicular to the plates. Isaac Newton expressed the viscous forces by the differential equation τ = μ ∂u/∂y, where τ = F/A. This formula assumes that the flow is moving along parallel lines; the equation can also be used where the velocity does not vary linearly with y, such as in fluid flowing through a pipe. Use of the Greek letter mu (μ) for the dynamic viscosity is common among mechanical and chemical engineers; however, the Greek letter eta (η) is also used by chemists and physicists.
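A minimal numeric sketch (not from the source) of the Couette-flow relation just given, F = μA·u/y, with a linear velocity profile; the plate geometry is hypothetical, and μ is roughly that of water at 20 °C:

    # Couette flow: shear stress tau = mu*u/y, plate force F = tau*A.
    mu = 1.0e-3   # Pa*s, dynamic viscosity
    A = 0.5       # m^2, plate area
    u = 0.2       # m/s, top-plate speed
    y = 0.01      # m, plate separation

    tau = mu * u / y     # 0.02 Pa, shear stress
    print(tau, tau * A)  # force F = 0.01 N needed to keep the plate moving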
14.
International Organization for Standardization
–
The International Organization for Standardization (ISO) is an international standard-setting body composed of representatives from various national standards organizations. Founded on 23 February 1947, the organization promotes worldwide proprietary, industrial and commercial standards. It is headquartered in Geneva, Switzerland, and as of March 2017 works in 162 countries. It was one of the first organizations granted general consultative status with the United Nations Economic and Social Council.

ISO, the International Organization for Standardization, is an independent, non-governmental organization, the members of which are the standards organizations of the 162 member countries. It is the world's largest developer of international standards and facilitates world trade by providing common standards between nations. Nearly twenty thousand standards have been set, covering everything from manufactured products and technology to food safety, agriculture and healthcare. Use of the standards aids in the creation of products and services that are safe, reliable and of good quality. The standards help businesses increase productivity while minimizing errors and waste. By enabling products from different markets to be directly compared, they facilitate companies in entering new markets and assist in the development of global trade on a fair basis. The standards also serve to safeguard consumers and the end-users of products and services.

The three official languages of the ISO are English, French, and Russian. The name of the organization in French is Organisation internationale de normalisation. According to the ISO, as its name in different languages would have different abbreviations, the organization adopted ISO as its abbreviated name in reference to the Greek word isos ("equal"). However, during the founding meetings of the new organization, this Greek word was not invoked. Both the name ISO and the logo are registered trademarks. The organization today known as ISO began in 1926 as the International Federation of the National Standardizing Associations.

ISO is a voluntary organization whose members are recognized authorities on standards. Members meet annually at a General Assembly to discuss ISO's strategic objectives. The organization is coordinated by a Central Secretariat based in Geneva. A Council with a rotating membership of 20 member bodies provides guidance and governance. The Technical Management Board is responsible for over 250 technical committees. ISO has formed joint committees with the International Electrotechnical Commission (IEC) to develop standards and terminology in the areas of electrical and electronic related technologies; ISO/IEC Joint Technical Committee 1, covering information technology, was created in 1987 to develop, maintain and promote standards in that field.

ISO has three membership categories. Member bodies are national bodies considered the most representative standards body in each country; these are the members of ISO that have voting rights. Correspondent members are countries that do not have their own standards organization; these members are informed about ISO's work, but do not participate in standards promulgation. Subscriber members are countries with small economies; they pay reduced membership fees, but can follow the development of standards.
15.
Sensor
–
A sensor is a device that detects events or changes in its environment and sends the information to other electronics. A sensor is always used with other electronics, whether as simple as a light or as complex as a computer. Analog sensors such as potentiometers and force-sensing resistors are still widely used. Applications include manufacturing and machinery, airplanes and aerospace, cars, medicine, robotics and many other aspects of our day-to-day life.

A sensor's sensitivity indicates how much the sensor's output changes when the input quantity being measured changes. For instance, if the mercury in a thermometer moves 1 cm when the temperature changes by 1 °C, the sensitivity is 1 cm/°C. Some sensors can also affect what they measure; for instance, a room-temperature thermometer inserted into a hot cup of liquid cools the liquid while the liquid heats the thermometer. Sensors are usually designed to have a small effect on what is measured; making the sensor smaller often improves this and may introduce other advantages. Technological progress allows more and more sensors to be manufactured on a microscopic scale as microsensors using MEMS technology. In most cases, a microsensor reaches a significantly higher speed and sensitivity compared with macroscopic approaches.

Most sensors have a linear transfer function. The sensitivity is then defined as the ratio between the output signal and the measured property. For example, if a sensor measures temperature and has a voltage output, the sensitivity is the slope of the transfer function. Converting the sensor's electrical output to the measured units requires dividing the output by the slope. In addition, an offset is often added or subtracted: for example, -40 must be added to the output if 0 V output corresponds to a -40 °C input. For an analog sensor signal to be processed, or used in digital equipment, it needs to be converted to a digital signal, using an analog-to-digital converter.

The full scale range defines the maximum and minimum values of the measured property. The sensitivity may in practice differ from the value specified; this is called a sensitivity error, and it is an error in the slope of a linear transfer function. If the output differs from the correct value by a constant, the sensor has an offset error or bias; this is an error in the y-intercept of a linear transfer function. Nonlinearity is deviation of a sensor's transfer function from a straight-line transfer function.
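A minimal sketch of inverting the linear transfer function just described, using the "-40 at 0 V" example from the text; the 0.01 V/°C sensitivity is a hypothetical slope, not from the source:

    # Linear transfer function inversion: reading = output/slope + offset.
    sensitivity = 0.01   # V per degC, hypothetical slope
    offset_c = -40.0     # degC corresponding to 0 V output

    def to_celsius(output_v):
        return output_v / sensitivity + offset_c

    print(to_celsius(0.0))    # -40.0 degC
    print(to_celsius(0.65))   # 25.0 degC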
16.
Geophone
–
A geophone is a device that converts ground movement into voltage, which may be recorded at a recording station. The deviation of this voltage from the base line is called the seismic response and is analyzed for the structure of the earth. The term geophone derives from the Greek word γῆ (gê) meaning "earth" and φωνή (phōnḗ) meaning "sound". Geophones have historically been passive analog devices and typically comprise a spring-mounted magnetic mass moving within a wire coil to generate an electrical signal. The response of such a geophone is proportional to ground velocity. MEMS-based sensors have a higher noise level than geophones and can only be used in strong-motion or active seismic applications.

The frequency response of a geophone is that of a damped harmonic oscillator, fully determined by its corner frequency and damping. Since the corner frequency is inversely proportional to the square root of the moving mass, geophones with low corner frequencies require large moving masses. It is possible to lower the corner frequency electronically, at the price of higher noise.

Although waves passing through the earth have a three-dimensional nature, geophones are normally constrained to respond to a single dimension, usually the vertical. However, some applications require the full wave to be used; in analog devices, three moving-coil elements are then mounted in an orthogonal arrangement within a single case.

The majority of geophones are used in reflection seismology to record the energy waves reflected by the subsurface geology. In this case the primary interest is in the vertical motion of the Earth's surface. However, not all the waves are upwards travelling: a strong, horizontally transmitted wave known as ground-roll also generates vertical motion that can obliterate the weaker vertical signals. By using large areal arrays tuned to the wavelength of the ground-roll, these dominant noise signals can be attenuated. Analog geophones are very sensitive devices which can respond to very distant tremors. These small signals can be drowned by larger signals from local sources, but it is possible to recover the small signals caused by large but distant events by correlating signals from several geophones deployed in an array. Signals which are registered at only one or a few geophones can be attributed to unwanted, local events, while it can be assumed that small signals that register uniformly at all geophones in an array can be attributed to a distant and therefore significant event. The sensitivity of passive geophones is typically around 30 volts per metre per second, so they are in general not a replacement for broadband seismometers. Conversely, some applications of geophones are interested only in very local events.
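A hedged sketch of the damped-harmonic-oscillator response mentioned above; the 10 Hz corner frequency and 0.707 damping are assumed typical values, not figures from the source:

    # Geophone velocity response relative to its flat pass band.
    import math

    f0, zeta = 10.0, 0.707   # corner frequency (Hz) and damping ratio

    def response(f):
        r = f / f0
        return r * r / math.sqrt((1 - r * r) ** 2 + (2 * zeta * r) ** 2)

    for f in (1, 5, 10, 30, 100):
        print(f, round(response(f), 3))   # rolls off below f0, flat above it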
17.
Hydrophone
–
A hydrophone is a microphone designed to be used underwater for recording or listening to underwater sound. Most hydrophones are based on a piezoelectric transducer that generates electricity when subjected to a pressure change. Such piezoelectric materials, or transducers, can convert a sound signal into an electrical signal, since sound is a pressure wave. Some transducers can also serve as a projector, but not all have this capability. A hydrophone can listen to sound in air but will be less sensitive due to its design, which gives it a good acoustic impedance match to water, a denser fluid than air.

The earliest widely used design was the Fessenden oscillator, an electrodynamically driven clamped-edge circular plate transducer operating at 500, 1000, or 3000 Hz. Ernest Rutherford, in England, led pioneering research in hydrophones using piezoelectric devices, and his only patent was for a hydrophone device. The acoustic impedance of piezoelectric materials facilitated their use as underwater transducers. The piezoelectric hydrophone was used late in World War I by convoy escorts detecting U-boats, greatly impacting the effectiveness of submarines. From late in World War I until the introduction of active sonar, hydrophones were the method for submarines to detect targets while submerged.

A small single cylindrical ceramic transducer can achieve near perfect omnidirectional reception. A directional hydrophone can be produced from a low-cost omnidirectional type by mounting it in a reflector, but it must then be used while stationary, as the reflector impedes its movement through water. A newer way to obtain directionality is to use a shaped body around the hydrophone, or to combine multiple hydrophones into an array, which may be steered using a beamformer. Most commonly, hydrophones are arranged in a line array, but they may be in two- or three-dimensional arrangements. Such arrays are capable of clearly recording extremely low frequency infrasound, including many unexplained ocean sounds.
18.
Microphone
–
A microphone, colloquially nicknamed mic or mike, is a transducer that converts sound into an electrical signal. Several different types of microphone are in use, which employ different methods to convert the air pressure variations of a sound wave to an electrical signal. Microphones typically need to be connected to a preamplifier before the signal can be recorded or reproduced.

In order to speak to larger groups of people, a need arose to increase the volume of the human voice. The earliest devices used to achieve this were acoustic megaphones; some of the first examples, from fifth-century-BC Greece, were theater masks with horn-shaped mouth openings that acoustically amplified the voice of actors in amphitheatres. In 1665, the English physicist Robert Hooke was the first to experiment with a medium other than air with the invention of the "lovers' telephone", made of stretched wire with a cup attached at each end. The German inventor Johann Philipp Reis designed an early sound transmitter that used a metallic strip attached to a vibrating membrane that would produce intermittent current. Better results were achieved with the liquid transmitter design in Scottish-American Alexander Graham Bell's telephone of 1876, in which the diaphragm was attached to a conductive rod in an acid solution. These systems, however, gave a poor sound quality.

The first microphone that enabled proper voice telephony was the carbon microphone. This was independently developed by David Edward Hughes in England and Emile Berliner and Thomas Edison in the US. Although Edison was awarded the first patent in mid-1877, Hughes had demonstrated his working device in front of many witnesses some years earlier. The carbon microphone is the direct prototype of today's microphones and was critical in the development of telephony, broadcasting and the recording industries. Thomas Edison refined the carbon microphone into his carbon-button transmitter of 1886; this microphone was employed at the first ever radio broadcast, a performance at the New York Metropolitan Opera House in 1910.

In 1916, E. C. Wente of Western Electric developed the next breakthrough with the first condenser microphone. In 1923, the first practical moving-coil microphone was built: the Marconi-Sykes magnetophone, developed by Captain H. J. Round, was the standard for BBC studios in London. This was improved in 1930 by Alan Blumlein and Herbert Holman, who released the HB1A, the best standard of the day. Also in 1923, the ribbon microphone was introduced, another electromagnetic type, believed to have been developed by Harry F. Olson. Over the years these microphones were developed by several companies, most notably RCA, which made large advancements in pattern control. With television and film technology booming, there was demand for high-fidelity microphones; Electro-Voice responded with their Academy Award-winning shotgun microphone in 1963. During the second half of the 20th century development advanced quickly, with the Shure Brothers bringing out the SM58. Digital microphone technology was pioneered by Milab in 1999 with the DM-1001. The latest research developments include the use of optics, lasers and interferometers.
19.
Seismometer
–
Seismometers are instruments that measure motion of the ground, including that of seismic waves generated by earthquakes, volcanic eruptions, and other seismic sources. Records of seismic waves allow seismologists to map the interior of the Earth. The seismometer was invented by the Chinese polymath Zhang Heng in AD 132, during the Han dynasty. The first Western description of such a device comes from the French physicist and priest Jean de Hautefeuille in 1703. The modern seismometer was developed in the 19th century by John Milne, James Alfred Ewing and Thomas Gray. Seismograph is another Greek term, from seismós and γράφω (gráphō), "to draw". The concerning technical discipline is called seismometry, a branch of seismology.

A simple seismometer that is sensitive to up-down motions of the earth can be understood by visualizing a weight hanging on a spring. The spring and weight are suspended from a frame that moves along with the earth's surface. As the earth moves, the relative motion between the weight and the earth provides a measure of the vertical ground motion: any movement of the ground moves the frame, while the mass tends not to move because of its inertia. Early seismometers used optical levers or mechanical linkages to amplify the small motions involved, recording on soot-covered paper or photographic paper. In some systems, the mass is held nearly motionless relative to the frame by a negative feedback loop: the motion of the mass relative to the frame is measured, and a force is applied to cancel it. The voltage needed to produce this force is the output of the seismometer. In other systems the weight is allowed to move, and its motion produces a voltage in a coil attached to the mass and moving through the magnetic field of a magnet attached to the frame. This design is used in the geophones used in seismic surveys for oil exploration.

Professional seismic observatories usually have instruments measuring three axes: north-south, east-west, and the vertical. If only one axis is measured, this is usually the vertical, because it is less noisy and gives better records of some seismic waves. The foundation of a seismic station is critical. A professional station is mounted on bedrock. The best mountings may be in deep boreholes, which avoid thermal effects, ground noise and tilting from weather. Other instruments are often mounted in insulated enclosures on small buried piers of unreinforced concrete; reinforcing rods and aggregates would distort the pier as the temperature changes. A site is always surveyed for ground noise with a temporary installation before pouring the pier and laying conduit. Originally, European seismographs were placed in a particular area after a destructive earthquake.
20.
Blind spot monitor
–
The blind spot monitor is a vehicle-based sensor device that detects other vehicles located to the driver's side and rear. Warnings can be visual, audible, vibrating, or tactile. Blind spot monitors are an option that may do more than monitor the sides and rear of the vehicle: they may also include Cross Traffic Alert, which alerts drivers backing out of a parking space when traffic is approaching from the sides. If side view mirrors are properly adjusted on a car, there is no blind spot on the sides; Platzer received a patent for his blind spot monitor, and it has been incorporated into various products associated with Ford Motor Company. The blind zone mirror has been touted as an elegant and relatively inexpensive solution to this recognized problem.

Volvo: BLIS is an acronym for Blind Spot Information System, a system of protection developed by Volvo. Volvo's previous parent, Ford Motor Company, has adapted the system to its Ford, Lincoln and Mercury models.

Mazda: Mazda was the first Japanese automaker to offer a blind spot monitor. It was initially introduced on the 2008 Mazda CX-9 Grand Touring and remained limited to only that highest trim level through the 2012 model year. For 2013, BSM was standard on both the CX-9 Touring and Grand Touring models. Mazda also added BSM to the redesigned 2009 Mazda 6: blind spot monitoring was standard equipment on the 6i and 6s Grand Touring trim levels, and was an available option on some lower trim levels. Mazda has since expanded the availability of BSM, having added it to the feature list of the Mazda3, CX-5 and MX-5 Miata.

Ford: Ford uses the acronym BLIS for its blind spot detection. The system is active in both drive and neutral transmission gears, and is turned off in the reverse and park gears. On Ford products, the system was first introduced in the spring of 2009, on the 2010 Ford Fusion and Fusion Hybrid and the 2010 Mercury Milan and Milan Hybrid.

Mitsubishi: Mitsubishi offers a Blind Spot Warning on the Pajero Sport launched in 2016.

Toyota: Toyota Motor's Safety Sense package, which includes Blind Spot Monitoring among other features, is standard equipment in multiple models sold in the U.S. In 2010, the Nissan Fuga/Infiniti M for the first time offered counter-steering capabilities to keep the vehicle from colliding. Those capabilities have since been adopted by competitors, such as Toyota.
21.
Crankshaft position sensor
–
A crank sensor is an electronic device used in an internal combustion engine, both petrol and diesel, to monitor the position or rotational speed of the crankshaft. This information is used by engine management systems to control the fuel injection or the ignition system timing. Before electronic crank sensors were available, the distributor would have to be manually adjusted to a timing mark on petrol engines.

The crank sensor is also used to synchronise a four-stroke engine upon starting, allowing the management system to know when to inject the fuel. It is also used as the primary source for the measurement of engine speed in revolutions per minute. Common mounting locations include the main crank pulley and the flywheel. This sensor is the second most important sensor in modern-day engines after the camshaft position sensor; when it fails, there is a high probability the engine will not start.

There are three main types of sensor commonly in use: the Hall effect sensor, the optical sensor and the inductive sensor. Some engines, such as GM's Premium V family, use crank position sensors which read a reluctor ring integral to the harmonic balancer; this is a more accurate method of determining the position of the crankshaft. The functional objective of the crankshaft position sensor is to determine the position and/or rotational speed of the crank. Engine control units use the information transmitted by the sensor to control parameters such as ignition timing; in a diesel, the sensor controls the fuel injection. The sensor output may also be related to other sensor data, including the cam position, to derive the current combustion cycle; this is very important for the starting of a four-stroke engine.

Sometimes the sensor may become burnt or worn out, or just die of old age at high mileage. One likely cause of crankshaft position sensor failure is exposure to extreme heat; others are vibration causing a wire to fracture, or corrosion on the pins of harness connectors. Many modern crankshaft sensors are sealed units and therefore will not be damaged by water or other fluids. When the sensor goes wrong, it stops transmitting the signal that contains the vital data for the ignition. A bad crank position sensor can worsen the way the engine idles; if the engine is revved up with a bad or faulty sensor, it may cause misfiring, motor vibration or backfires. Acceleration might be hesitant, and abnormal shaking during engine idle might occur; in the worst case the car may not start. The first sign of crankshaft sensor failure, usually, is the refusal of the engine to start when hot. Another type of crank sensor is used on bicycles to monitor the position of the crankset.
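A hedged sketch of how engine speed might be derived from crank sensor pulses off a toothed reluctor ring; the 60-tooth wheel and pulse counts are hypothetical illustration values, not figures from the source:

    # RPM from crank sensor pulses: pulses -> revolutions -> revs per minute.
    def rpm_from_pulses(pulses, interval_s, teeth_per_rev=60):
        revolutions = pulses / teeth_per_rev
        return revolutions / interval_s * 60.0

    print(rpm_from_pulses(1500, 1.0))   # 1500 pulses in 1 s -> 1500.0 rpm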
22.
Curb feeler
–
Curb feelers or curb finders are springs or wires installed on a vehicle which act as "whiskers" to alert drivers when they are at the right distance from the curb while parking. The devices are fitted low on the body, close to the wheels. As the vehicle approaches the curb, the protruding feelers scrape against the curb, making a noise and alerting the driver in time to avoid damaging the wheels or hubcaps. The feelers are manufactured to be flexible and do not easily break. Curb feelers are still used on some hot rods when a 1950s look is wanted. They are especially popular for cars with whitewall tires, which easily lose their white coating when scraped against the curb. Sometimes curb feelers are found only on the curb side of the car; sometimes they are added only next to the front wheels. Some curb feelers have a single wire or spring, while others have two to increase the area that can be protected. Any particular car may have just one curb feeler installed, or more if attached near the front and rear, as well as on both sides of the vehicle. Buses are sometimes fitted with curb feelers, which can assist the driver in ensuring that the bus is close enough to the curb to allow passengers to step to and from the curb easily.

Feelers made of conveyor belting have been used on underground mining vehicles: they give a warning nudge to anyone in the danger area, and the belting is stiff enough to hold its shape but flexible enough to give if it runs into a miner, or vice versa. The flexibility also allows this curb feeler to drag against the rib or be smacked by a car with little or no damage. A little spray from a can of paint will make the belt a visual warning device as well; one or two on each corner will help, or as many as wanted can be fitted.

Curb feelers based on optical technology are designed to function the same way but work in the proximity of an obstruction rather than having to come into physical contact with it. As described by one United States patent, "An electronic curb feeler system uses two pairs of optical sensor units to detect an object located near the front end of a vehicle during parking." Devices such as this, and simpler electronic devices similar to the original wire curb feelers used on cars, are used in the design of various mobile robotic devices; one robotics company that does work for the United States Department of Defense uses laser-assisted curb feeler technology.
23.
Defect detector
–
A defect detector is a device used on railroads to detect axle and signal problems in passing trains. The detectors are normally integrated into the tracks and often include sensors to detect different kinds of problems that could occur. The use of defect detectors has since spread to overseas railroads as well.

Before automatic detection, trackside personnel inspected passing trains by eye: to detect hotboxes, i.e. overheating bearings, they would look for oil smoke during the day or a red glow at night. As early as the 1940s, automatic defect detectors were installed to improve upon the manual process. The detectors would transmit their data via wired links to remote read-outs in stations, offices or interlocking towers, where a stylus-and-cylinder gauge would record a reading for every axle, flagging any journal that ran too hot or any other defect that was detected. Early line-side defect detectors were typically housed in concrete bungalows, spaced roughly every 10–20 miles, and the crew stationed at the rear of the train would watch for an indication that a defect had been found.

The first computerized detectors used fixed-display boards. These had a numeric display and a number of indicator lights relating to the nature of defects. A number would be displayed in lights on the board after the train had cleared the detector: the number 000 meant there were no defects, while any other number warned of a defect at the corresponding axle. If several defects were detected, small white lights on the top and bottom could also be displayed to inform the crews of multiple problems. Nevertheless, the conductor was still required to go into the bungalow to confirm the reading.

Seaboard Air Line was the first railroad to install talking defect detectors; beginning in the 1960s, their crews could hear the results of hotbox and dragging-equipment checks spoken over their radios in the engine cab. Over the years, the use of this technology accelerated. For example, computerized talking detectors allowed crews to interact with the detector using a touch-tone function on their radios to recall the latest defect report, which eliminated any need for crews to walk back to the detector location to confirm the radio reading. Sometimes the location's ambient temperature and train speed are also announced by the mechanical voice, and crews can use their touch-tone hand radios to get the detector to repeat error messages. Defect detectors equipped with such a voice are often called talking detectors by railfans.

To this day some rail lines, mostly passenger routes with a high traffic density, maintain centralized readouts instead; this is due to the large and confusing volume of radio traffic a talking detector would generate. When an error signal is received, a dispatcher or operator contacts the train via radio, manually transmitting the error message. Today defect detectors are often incorporated in monitoring platforms that are primarily used by railroads to more closely monitor the status of their trains.
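The 000-or-axle-number convention is simple enough to express directly. Below is a toy sketch of how a readout might interpret a three-digit board code as described above; the function name and output formatting are hypothetical.

```python
def interpret_display(code: str) -> str:
    """Interpret a three-digit defect-detector board readout.

    '000' means no defects; any other number identifies the axle,
    counted from the head of the train, where a defect was sensed.
    """
    if code == "000":
        return "no defects detected"
    return f"defect detected at axle {int(code)}"

print(interpret_display("000"))  # no defects detected
print(interpret_display("047"))  # defect detected at axle 47
```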
24.
Hall effect sensor
–
A Hall effect sensor is a transducer that varies its output voltage in response to a magnetic field. Hall effect sensors are used for proximity switching, positioning and speed detection. In its simplest form, the sensor operates as an analog transducer, directly returning a voltage. With a known magnetic field, its distance from the Hall plate can be determined, and using groups of sensors, the relative position of the magnet can be deduced. Frequently, a Hall sensor is combined with threshold detection so that it acts as, and is called, a switch. Hall sensors can also be used in computer keyboards, an application that requires ultra-high reliability.

Hall sensors are used to time the speed of wheels and shafts, such as for internal combustion engine ignition timing and tachometers. They are used in brushless DC electric motors to detect the position of the permanent magnet. In a wheel carrying two equally spaced magnets, the voltage from the sensor peaks twice for each revolution; this arrangement is used to regulate the speed of disk drives.

A Hall probe contains a compound semiconductor crystal such as indium antimonide, mounted on an aluminum backing plate. The plane of the crystal is perpendicular to the probe handle, and connecting leads from the crystal are brought down through the handle to the circuit box. The Hall probe is held so that the field lines pass at right angles through the sensor of the probe. A current is passed through the crystal which, when placed in a magnetic field, has a Hall effect voltage developed across it. The Hall effect is seen when a conductor is passed through a magnetic field: the drift of the charge carriers causes the magnetic field to apply a Lorentz force to them. The result is a charge separation, with a buildup of either positive or negative charge on the bottom or on the top of the plate. The crystal measures 5 mm square, and the probe handle, being made of a non-ferrous material, has no disturbing effect on the field.

A Hall probe should be calibrated against a known value of magnetic field strength; for a solenoid, the Hall probe is placed in the center. When a beam of charged particles passes through a magnetic field, forces act on the particles and the beam is deflected from a straight path. The flow of electrons through a conductor forms such a beam of charge carriers: when a conductor is placed in a magnetic field perpendicular to the direction of the electrons, they are deflected from a straight path.
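The Hall voltage itself follows a simple relation, V_H = IB/(net), where I is the drive current, B the perpendicular field, n the charge-carrier density, and t the plate thickness. Here is a small worked sketch; the numbers are illustrative, chosen only to show why low-carrier-density semiconductors, rather than metals, give a usable voltage.

```python
E_CHARGE = 1.602e-19  # elementary charge, coulombs

def hall_voltage(current_a: float, b_field_t: float,
                 carrier_density_m3: float, thickness_m: float) -> float:
    """Hall voltage across a plate: V_H = I * B / (n * e * t)."""
    return current_a * b_field_t / (carrier_density_m3 * E_CHARGE * thickness_m)

# Illustrative numbers: a 0.1 mm semiconductor plate, 5 mA drive current,
# 0.1 T field and a carrier density of 1e22 per cubic metre -> roughly 3 mV.
print(hall_voltage(0.005, 0.1, 1e22, 1e-4))
```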
25.
Mass flow sensor
–
A mass flow sensor is used to find out the mass flow rate of air entering a fuel-injected internal combustion engine. The air mass information is necessary for the engine control unit (ECU) to balance and deliver the correct fuel mass to the engine: air changes its density as it expands and contracts with temperature and pressure, so a volume measurement alone is not enough.

There are two common types of mass airflow sensor in use on automotive engines: the vane meter and the hot wire. Neither design measures air mass directly; however, with additional sensors and inputs, an engine's ECU can determine the mass flow rate of intake air. Both approaches are used almost exclusively on electronic fuel injection engines, and both sensor designs output a signal in the 0–5 volt range to the ECU. Vehicles prior to 1996 could have a MAF sensor without an IAT (intake air temperature) sensor; an example is the 1994 Infiniti Q45. When a MAF sensor is used in conjunction with an oxygen sensor, the engine's air–fuel ratio can be controlled very accurately.

The VAF (vane air flow) sensor measures the air flow into the engine with a spring-loaded air vane attached to a variable resistor; the vane moves in proportion to the airflow. Many VAF sensors have an air-fuel adjustment screw, which opens or closes a small air passage on the side of the sensor. This screw controls the air-fuel mixture by letting a metered amount of air flow past the air flap: turning the screw clockwise enriches the mixture, counterclockwise leans it. The vane moves because of the drag force of the air flow against it; it does not measure volume or mass directly. The drag force depends on air density, air velocity and the shape of the vane, so some VAF sensors include an additional intake air temperature sensor to allow the engine's ECU to calculate the density of the air and adjust fuel delivery accordingly. In some manufacturers' designs, fuel pump control was part of the VAF sensor's internal wiring.

A hot wire mass airflow sensor determines the mass of air flowing into the air intake system. Its theory of operation is similar to that of the hot wire anemometer: a wire suspended in the air stream, like a toaster wire, is heated electrically. The wire's electrical resistance increases as its temperature increases, which varies the electrical current flowing through the circuit. When air flows past the wire, the wire cools, decreasing its resistance; as more current then flows, the wire's temperature increases until the resistance reaches equilibrium again.
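In practice the ECU recovers mass flow by inverting a flow-bench calibration of the hot-wire circuit. A common empirical form is a King's-law-style fit, V² = A + B·ṁ^0.5. The sketch below assumes that form with made-up constants; real sensors use lookup tables calibrated per part.

```python
# Hypothetical King's-law calibration: V^2 = A + B * sqrt(m_dot)
A = 1.2   # bridge voltage squared at zero flow (V^2), placeholder
B = 0.9   # calibration slope, placeholder

def mass_flow_gps(bridge_voltage: float) -> float:
    """Invert the calibration to estimate mass flow in g/s from bridge voltage."""
    return max(0.0, (bridge_voltage ** 2 - A) / B) ** 2

print(round(mass_flow_gps(2.5), 1))  # ~31.5 g/s under these placeholder constants
```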
26.
Omniview technology
–
Omniview technology is a vehicle parking assistant technology that has been available in vehicle electronic products since around 2005. It is designed to help drivers park a vehicle in a small space. Early vehicle parking assistant products used radar or a single rear-view camera to get information about surrounding obstacles, and provided drivers with a sound alarm or rear-view video. Such products have drawbacks: a sound alarm is not intuitive, and a single rear camera covers only part of the surroundings. Omniview technology overcomes these problems and has seen increasing application since about 2005.

In a common system there are four wide-field cameras: one at the front of the vehicle, one at the back, and one in each exterior rear-view mirror. Together the four cameras cover the whole area around the vehicle. The system synthesizes a bird's-eye view image of the vehicle and its surroundings by distortion correction and projection transformation of the four camera inputs.

Omniview suppliers include Mobileye (Israel), Fujitsu (Japan) and Percherry (China).
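As a sketch of the geometry involved, each camera frame is first undistorted with that camera's intrinsic calibration and then warped onto the ground plane with a homography. The snippet below shows that per-camera step using OpenCV; the calibration data (K, dist, H), the function naming and the final blending of the four warped images are assumptions for illustration, not details from any particular product.

```python
import cv2

def to_birdseye(frame, K, dist, H, out_size=(800, 800)):
    """Undistort one wide-angle frame, then warp it onto the ground plane.

    K and dist are the camera's intrinsic matrix and distortion coefficients;
    H is a precomputed image-to-ground homography for that camera.
    """
    undistorted = cv2.undistort(frame, K, dist)
    return cv2.warpPerspective(undistorted, H, out_size)

# A full system applies this to the front, rear, left and right cameras and
# blends the four warped images into one top-down composite around the car.
```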
27.
Oxygen sensor
–
An oxygen sensor is an electronic device that measures the proportion of oxygen in the gas or liquid being analysed. It was developed by Robert Bosch GmbH during the late 1960s under the supervision of Dr. Günter Bauman. The original sensing element is made with a thimble-shaped zirconia ceramic coated on both the exhaust and reference sides with a thin layer of platinum, and comes in both heated and unheated forms. The planar-style sensor entered the market in 1990; it significantly reduced the mass of the sensing element and incorporated the heater within the ceramic structure, resulting in a sensor that started sooner and responded faster.

Divers use a similar device to measure the partial pressure of oxygen in their breathing gas, and scientists use oxygen sensors to measure respiration or production of oxygen. Oxygen sensors are used in oxygen analyzers, which find much use in medical applications such as anesthesia monitors, respirators and oxygen concentrators, and in hypoxic air fire prevention systems to continuously monitor the oxygen concentration inside the protected volumes. There are many different ways of measuring oxygen, including zirconia, electrochemical, infrared and ultrasonic technologies; each method has its own advantages and disadvantages.

Automotive oxygen sensors, colloquially known as O2 sensors, make modern electronic fuel injection and emission control possible. They help determine, in real time, whether the air–fuel ratio of a combustion engine is rich or lean. Closed-loop feedback-controlled fuel injection varies the fuel injector output according to sensor data rather than operating from a predetermined fuel map. In addition to enabling electronic fuel injection to work efficiently, this emissions control technique can reduce the amounts of both unburnt fuel and oxides of nitrogen entering the atmosphere. Volvo was the first automobile manufacturer to employ this technology, in the late 1970s.

The sensor does not actually measure oxygen concentration, but rather the difference between the amount of oxygen in the exhaust gas and the amount of oxygen in air. A rich mixture causes an oxygen demand, and this demand causes a voltage to build up due to transportation of oxygen ions through the sensor layer; a lean mixture causes a low voltage, since there is an oxygen excess.

Modern spark-ignited combustion engines use oxygen sensors and catalytic converters in order to reduce exhaust emissions. The ECU attempts to maintain, on average, a certain air–fuel ratio by interpreting the information it gains from the oxygen sensor. The primary goal is a compromise between power, fuel economy and emissions, in most cases achieved by an air–fuel ratio close to stoichiometric. For spark-ignition engines, the three types of emissions modern systems are concerned with are hydrocarbons, carbon monoxide and NOx. Tampering with or modifying the signal that the sensor sends to the engine computer can be detrimental to emissions control.
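Below is a toy sketch of the closed-loop behaviour described above, assuming a narrowband zirconia sensor that reads high when rich and low when lean. The 0.45 V switch point and the trim step size are illustrative placeholders, not values from any real ECU.

```python
SWITCH_POINT_V = 0.45  # illustrative rich/lean switch point
STEP = 0.01            # fractional fuel-trim step per update, placeholder

def update_fuel_trim(trim: float, o2_voltage: float) -> float:
    """Nudge the fuel trim opposite to what the sensor reports."""
    if o2_voltage > SWITCH_POINT_V:  # rich: oxygen demand -> high voltage
        return trim - STEP           # lean the mixture out
    return trim + STEP               # lean: oxygen excess -> enrich

trim = 0.0
for v in (0.8, 0.7, 0.2, 0.1, 0.6):  # simulated sensor readings
    trim = update_fuel_trim(trim, v)
print(round(trim, 2))  # the mixture oscillates around stoichiometry
```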
28.
Parking sensor
–
Parking sensors are proximity sensors for road vehicles designed to alert the driver to obstacles while parking. These systems use either electromagnetic or ultrasonic sensors. The ultrasonic sensors emit acoustic pulses, with a control unit measuring the return interval of each reflected signal and calculating object distances. Systems may also include visual aids, such as LED or LCD readouts to indicate object distance; a vehicle may show a pictogram on the car's infotainment screen, with a representation of the nearby objects as coloured blocks. Rear sensors may be activated when reverse gear is selected and deactivated as soon as any other gear is selected; front sensors may be activated manually and deactivated automatically when the vehicle reaches a pre-determined speed, to avoid subsequent nuisance warnings. Objects with flat surfaces angled from the vertical may deflect return sound waves away from the sensors.

The parking sensor, originally called ReverseAid, was a spin-off from the Sonic Pathfinder, an electronic guidance device for the blind. Both devices were invented in the late 1970s by Tony Heyes while working at the Blind Mobility Research Unit at Nottingham University in the UK. After patenting the device in 1983, Heyes offered it to Jaguar Cars in Coventry; after test driving the prototype on Heyes's car, they very politely told him that real people would not want a thing like this. Heyes teamed up with a manufacturer and some 150 units were made and fitted to petrol tankers and trucks. Very few were fitted to private cars, since few people wanted to drill holes in their cars.

The electromagnetic parking sensor was re-invented and patented in 1992 by Mauro Del Signore. Electromagnetic sensors rely on the vehicle moving slowly and smoothly towards the object to be avoided. Once an obstacle is detected, if the vehicle momentarily stops on its approach, the sensor continues to signal the presence of the obstacle; if the vehicle then resumes its manoeuvre, the alarm becomes more and more insistent as the obstacle approaches. Such systems now also come equipped with a camera to go with the sensor, and by 2018 the US required backup cameras on all new cars. Blind spot monitors are an option that may include more than monitoring the sides of the vehicle; they can include Cross Traffic Alert, which warns drivers backing out of a parking space when traffic is approaching from the sides. As early as the 1970s, the German inventor Rainer Buchmann had developed parking sensors.
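The distance arithmetic behind an ultrasonic sensor is plain time-of-flight: the pulse travels to the obstacle and back, so the one-way distance is half the round trip. A minimal sketch, assuming sound travels at roughly 343 m/s in air at 20 °C:

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at about 20 degrees C

def echo_distance_m(round_trip_s: float) -> float:
    """Distance to an obstacle from an ultrasonic pulse's round-trip time.

    d = v * t / 2, because the pulse covers the distance twice.
    """
    return SPEED_OF_SOUND_M_S * round_trip_s / 2.0

# Example: an echo returning after 5.8 ms puts the obstacle about 1 m away.
print(round(echo_distance_m(0.0058), 2))  # 0.99
```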
29.
Radar gun
–
A radar speed gun is a device used to measure the speed of moving objects. It is a Doppler radar unit that may be hand-held, vehicle-mounted or static. The radar speed gun was invented by John L. Barker Sr. and Ben Midlock, who developed radar for the military while working at the Automatic Signal Company. Originally, Automatic Signal was approached by Grumman Aircraft Corporation to solve the specific problem of terrestrial landing gear damage on the now-legendary PBY Catalina amphibious aircraft. Barker and Midlock cobbled together a Doppler radar unit from coffee cans soldered shut to make microwave resonators; the unit was installed at the end of the runway and aimed directly upward to measure the sink rate of landing PBYs. After the war, Barker and Midlock tested radar on the Merritt Parkway. In 1947, the system was tested by the Connecticut State Police in Glastonbury, Connecticut, initially for traffic surveys and issuing warnings to drivers for excessive speed. Starting in February 1949, the police began to issue speeding tickets based on the speed recorded by the radar device. In 1948, radar was also used in Garden City, New York.

Speed guns use Doppler radar to perform speed measurements. Radar speed guns, like other types of radar, consist of a radio transmitter and receiver. They send out a radio signal in a narrow beam, then receive the same signal back after it bounces off the target object. Due to the Doppler effect, if the object is moving toward or away from the gun, the frequency of the reflected signal differs from that of the transmitted signal, and from that difference the radar speed gun can calculate the speed of the object from which the waves have been bounced. After the returning waves are received, a signal whose frequency equals this difference is created by mixing the received radio signal with a little of the transmitted signal. Since this type of speed gun measures the difference in speed between a target and the gun itself, the gun must be stationary in order to give a correct reading; moving-mode units instead compare the frequency of the signal reflected from the target with that of the signal reflected from the road surface, and the frequency difference between these two signals gives the true speed of the target vehicle.

Modern radar speed guns normally operate at X, K or Ka band frequencies. Radar guns that operate in the X band are becoming less common because they produce a strong and easily detectable beam; also, most automatic doors utilize radio waves in the X band range. As a result, K band and Ka band are most commonly used by police agencies. For these reasons, hand-held radar typically includes an on-off trigger, so the beam is transmitted only while the gun is aimed at a target. Radar detectors are illegal in some areas. Traffic radar comes in many models: hand-held units are mostly battery powered and for the most part are used as stationary speed enforcement tools, while stationary radar can be mounted in vehicles and may have one or two antennae.
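The mixing step yields the Doppler shift directly, and for a reflected signal the shift is twice the one-way value: Δf = 2·v·f0/c, so v = Δf·c/(2·f0). A worked sketch, using a typical K-band carrier of 24.15 GHz (the exact frequency varies by gun):

```python
C = 299_792_458.0  # speed of light, m/s

def target_speed_m_s(doppler_shift_hz: float, tx_freq_hz: float) -> float:
    """Speed of a target moving along the beam, from the measured Doppler shift.

    The reflection doubles the one-way shift: delta_f = 2 * v * f0 / c.
    """
    return doppler_shift_hz * C / (2.0 * tx_freq_hz)

# Example: a 4.83 kHz shift at a 24.15 GHz carrier is about 30 m/s (~108 km/h).
print(round(target_speed_m_s(4830.0, 24.15e9), 2))
```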
30.
Speedometer
–
A speedometer or speed meter is a gauge that measures and displays the instantaneous speed of a vehicle. Now universally fitted to motor vehicles, they started to become available as options in the early 1900s. Speedometers for other vehicles have specific names and use other means of sensing speed: for a boat, this is a pit log; for an aircraft, an airspeed indicator. Charles Babbage is credited with creating an early type of speedometer, usually fitted to locomotives. The electric speedometer was invented by the Croatian Josip Belušić in 1888.

The common eddy-current speedometer, originally patented by Otto Schultze on October 7, 1902, uses a rotating flexible cable usually driven by gearing linked to the output of the vehicle's transmission. The early Volkswagen Beetle and many motorcycles, however, use a cable driven from a front wheel. When the car or motorcycle is in motion, a speedometer gear assembly turns a speedometer cable, which then turns the speedometer mechanism itself. A small permanent magnet affixed to the speedometer cable interacts with an aluminum cup attached to the shaft of the pointer on the analogue speedometer instrument. As the magnet rotates near the cup, the changing magnetic field produces eddy currents in the cup; the effect is that the magnet exerts a torque on the cup, dragging it, and the pointer with it, in the direction of rotation. The pointer shaft is held toward zero by a fine torsion spring. The torque on the cup increases with the speed of rotation of the magnet, so an increase in the speed of the car will twist the cup and speedometer pointer against the spring. The cup and pointer turn until the torque of the eddy currents on the cup is balanced by the opposing torque of the spring; at a given speed the pointer then remains motionless, pointing to the appropriate number on the speedometer's dial. The return spring is calibrated such that a given rotational speed of the cable corresponds to a specific speed indication on the speedometer.

In electronic speedometers, the sensor is typically a set of one or more magnets mounted on the output shaft or differential crownwheel, or a toothed disk positioned near a magnetic field sensor. As the part in question turns, the magnets or teeth pass beneath the sensor, each pass producing a pulse. Alternatively, in more recent designs, some manufacturers rely on pulses coming from the ABS wheel sensors. A computer converts the pulses to a speed and displays this speed on an electronically controlled display. Most modern electronic speedometers also have the additional ability, over the eddy-current type, to show the vehicle speed when moving in reverse gear.

Another early form of electronic speedometer relies upon the interaction between a precision watch mechanism and a mechanical pulsator driven by the car's wheel or transmission. The watch mechanism endeavors to push the speedometer pointer toward zero, while the pulsator pushes it up-scale; the position of the speedometer pointer reflects the relative magnitudes of the outputs of the two mechanisms.
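The pulse-to-speed conversion in an electronic speedometer is plain arithmetic once the pulses per revolution and the effective tire circumference are known. A minimal sketch with made-up calibration values, assuming for simplicity one sensed shaft revolution per wheel revolution:

```python
import math

PULSES_PER_REV = 8       # hypothetical teeth on the sensed disk
TIRE_DIAMETER_M = 0.63   # hypothetical rolling diameter

def speed_kmh(pulse_freq_hz: float) -> float:
    """Convert sensor pulse frequency to road speed in km/h."""
    rev_per_s = pulse_freq_hz / PULSES_PER_REV
    circumference_m = math.pi * TIRE_DIAMETER_M
    return rev_per_s * circumference_m * 3.6  # m/s -> km/h

print(round(speed_kmh(112.0), 1))  # ~99.7 km/h with these placeholder values
```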
31.
Throttle position sensor
–
A throttle position sensor (TPS) is a sensor used to monitor the throttle position of a vehicle. The sensor is located on the butterfly spindle/shaft so that it can directly monitor the position of the throttle. More advanced forms of the sensor are also used; for example, an extra closed throttle position sensor may be employed to indicate that the throttle is completely closed. Related to the TPS are accelerator pedal sensors, which include a wide open throttle (WOT) sensor.

Modern-day sensors are of the non-contact type; these include Hall effect sensors, inductive sensors, magnetoresistive sensors and others. When the magnet or inductive loop mounted on the spindle is rotated from the lower mechanical stop to WOT, the change in the magnetic field is sensed by the sensor, and the voltage generated is given as the input to the ECU. Normally a two-pole rare earth magnet is used for the TPS because of the high Curie temperature required in the vehicle environment. The magnet may be of ring, rectangular or segment type, and is specified to have a magnetic field that does not vary significantly with time or temperature.

In case of TPS failure, the CHECK ENGINE light remains illuminated even if there is no other problem or error in the ECU, and it cannot be corrected by clearing ECU errors with diagnostic software; to rectify the malfunction, the TPS needs to be replaced with a new one.
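Conceptually, the ECU maps the sensor voltage to a throttle opening between the closed stop and WOT. A toy sketch, assuming a linear sensor with illustrative endpoint voltages (real sensors are calibrated per part, and redundant dual-track sensors are common):

```python
V_CLOSED = 0.5  # output at the closed throttle stop, placeholder
V_WOT = 4.5     # output at wide open throttle, placeholder

def throttle_percent(voltage: float) -> float:
    """Map TPS output voltage to a 0-100% throttle opening."""
    pct = (voltage - V_CLOSED) / (V_WOT - V_CLOSED) * 100.0
    return min(100.0, max(0.0, pct))  # clamp readings outside the valid span

print(throttle_percent(2.5))  # 50.0
```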
32.
Tire-pressure monitoring system
–
A tire-pressure monitoring system (TPMS) is an electronic system designed to monitor the air pressure inside the pneumatic tires on various types of vehicles. A TPMS reports real-time tire-pressure information to the driver of the vehicle, either via a gauge, a pictogram display or a low-pressure warning light. TPMS can be divided into two different types – direct and indirect – and are provided both at an OEM level and as an aftermarket solution. The target of a TPMS is avoiding traffic accidents, poor fuel economy and increased tire wear due to under-inflated tires, through early recognition of a hazardous state of the tires. Some claim the efficiency gains are negligible.

The first passenger vehicle to adopt TPM was the Porsche 959 in 1986, using a hollow-spoke wheel system developed by PSK. In 1996 Renault used the Michelin PAX system for the Scénic, and Renault later launched the Laguna II, the first high-volume mid-size passenger vehicle in the world to be equipped with TPM as a standard feature. In the United States, TPM was introduced by General Motors for the 1991 model year for the Corvette, in conjunction with Goodyear run-flat tires. The system uses sensors in the wheels and a driver display which can show tire pressure at any wheel; it has been standard on Corvettes ever since.

The Firestone recall in the late 1990s pushed the United States Congress to legislate the TREAD Act. The Act mandated the use of a suitable TPMS technology in all light motor vehicles, affecting vehicles sold after September 1, 2007. Phase-in started in October 2005 at 20% and reached 100% for models produced after September 2007. In the United States as of 2008, and in the European Union as of November 1, 2012, all new passenger car models released must be equipped with a TPMS; from November 1, 2014, all new passenger cars sold in the European Union must be equipped with one. For N1 vehicles, TPMS is not mandatory, but if a TPMS is fitted it must comply with the regulation. Mandates took effect on January 1, 2013 for new models and on June 30, 2014 for existing models; Japan is expected to adopt European Union legislation approximately one year after European Union implementation. Further countries to make TPMS mandatory include Russia, Indonesia, the Philippines, Israel, Malaysia and Turkey.

After the TREAD Act was passed, many companies responded to the opportunity by releasing TPMS products using battery-powered radio-transmitter wheel modules. The introduction of run-flat tires and emergency spare tires by several tire and vehicle manufacturers pushed the technology further: with run-flat tires, the driver will most likely not notice that a tire is running flat, hence the so-called run-flat warning systems were introduced. These are most often first-generation, purely roll-radius-based iTPMS. The iTPMS market has progressed as well: indirect TPMS are now able to detect under-inflation through combined use of roll radius and spectrum analysis, and with this breakthrough, meeting the legal requirements is possible also with iTPMS. Indirect TPMS do not use physical pressure sensors but measure air pressures by monitoring individual wheel rotational speeds; first-generation iTPMS systems are based on the principle that an under-inflated tire has a slightly smaller diameter than a correctly inflated one.
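The first-generation indirect principle lends itself to a simple sketch: compare each wheel's rotational speed (from the ABS sensors) with the others during steady straight-line driving, since a softer tire has a smaller rolling radius and so spins slightly faster. The threshold and speeds below are illustrative placeholders.

```python
THRESHOLD = 0.01  # flag a wheel spinning >1% faster than the mean of the others

def underinflated_wheels(wheel_speeds):
    """Return wheels whose rotational speed exceeds the mean of the other three."""
    flagged = []
    for name, speed in wheel_speeds.items():
        others = [s for n, s in wheel_speeds.items() if n != name]
        reference = sum(others) / len(others)
        if speed > reference * (1.0 + THRESHOLD):
            flagged.append(name)
    return flagged

speeds = {"FL": 7.02, "FR": 7.00, "RL": 7.15, "RR": 7.01}  # wheel rev/s, simulated
print(underinflated_wheels(speeds))  # ['RL']
```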
33.
Breathalyzer
–
A breathalyzer or breathalyser is a device for estimating blood alcohol content (BAC) from a breath sample. Breathalyzer is the brand name of the instrument developed by inventor Robert Frank Borkenstein; it was registered as a trademark on May 13, 1954.

Earlier, in 1927, a Chicago chemist, William Duncan McNally, invented a breathalyzer in which the breath moving through chemicals in water would change their color; one use for his invention was for housewives to test whether their husbands had been drinking. In late 1927, in a case in Marlborough, England, a Dr. Gorsky, a police surgeon, asked a suspect to inflate a football bladder with his breath; the two liters of the man's breath were found to contain 1.5 ml of ethanol. In 1931 the first practical roadside breath-testing device, the drunkometer, was developed by Rolla Neil Harger of the Indiana University School of Medicine. The drunkometer collected a motorist's breath sample directly into a balloon inside the machine; the breath sample was then pumped through an acidified potassium permanganate solution. If there was alcohol in the sample, the solution changed color: the greater the change, the more alcohol was present in the breath. The drunkometer was manufactured and sold by the Stephenson Corporation of Red Bank, New Jersey.

In 1954 Robert Frank Borkenstein, then a captain with the Indiana State Police and later a professor at Indiana University Bloomington, introduced his Breathalyzer, which used chemical oxidation and photometry to determine alcohol concentrations; subsequent breath analyzers have converted primarily to infrared spectroscopy. The invention of the Breathalyzer provided law enforcement with a non-invasive test giving immediate results to determine an individual's breath alcohol concentration at the time of testing.

In 1967 in Britain, William "Bill" Ducie and Tom Parry Jones developed and marketed the first electronic breathalyser, and they established Lion Laboratories in Cardiff. Bill Ducie was a chartered electrical engineer, and Tom Parry Jones was a lecturer at UWIST. Lion Laboratories won the Queen's Award for Technological Achievement for the product in 1980. The Alcolyser was superseded by the Lion Intoximeter 3000 in 1983, and later by the Lion Alcolmeter and Lion Intoxilyser; these later models used a fuel cell alcohol sensor rather than crystals, providing a more reliable curbside test. In 1991, Lion Laboratories was sold to the American company MPD, Inc.

In a fuel cell sensor, ethanol in the breath is oxidized: CH3CH2OH + O2 → CH3COOH + H2O. The electric current produced by this reaction is measured by a microprocessor. Breath analyzers do not directly measure blood alcohol content or concentration, which requires the analysis of a blood sample; instead, they estimate BAC indirectly by measuring the amount of alcohol in one's breath. Two breathalyzer technologies are most prevalent: desktop analyzers generally use infrared or electrochemical fuel cell technology, while hand-held field devices generally use fuel cells. The U.S. National Highway Traffic Safety Administration maintains a Conforming Products List of breath alcohol devices approved for evidentiary use, as well as for preliminary screening use.
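Fuel cell and infrared units report breath alcohol, which is converted to an estimated BAC using an assumed blood-to-breath partition ratio, conventionally 2100:1 in many U.S. jurisdictions. A toy conversion follows; the true ratio varies between individuals, which is one reason breath results are estimates rather than direct measurements.

```python
PARTITION_RATIO = 2100.0  # conventional blood:breath ratio; varies by person

def estimate_bac_percent(breath_mg_per_l: float) -> float:
    """Estimate BAC in g per 100 mL of blood (i.e. percent) from breath mg/L."""
    g_per_ml_breath = breath_mg_per_l / 1_000_000.0  # mg/L -> g/mL
    g_per_ml_blood = g_per_ml_breath * PARTITION_RATIO
    return g_per_ml_blood * 100.0  # express per 100 mL of blood

# Example: 0.38 mg of ethanol per litre of breath -> about 0.08% BAC.
print(round(estimate_bac_percent(0.38), 3))
```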
34.
Carbon monoxide detector
–
A carbon monoxide detector or CO detector is a device that detects the presence of carbon monoxide (CO) gas in order to prevent carbon monoxide poisoning. In the late 1990s Underwriters Laboratories changed the definition of a single-station CO detector with a sounder to "carbon monoxide alarm"; this applies to all CO safety alarms that meet UL 2034, while passive indicators and system devices are covered by UL 2075.

CO is a colorless, tasteless and odorless compound produced by incomplete combustion of carbon-containing materials. Elevated levels of CO can be dangerous to humans depending on the amount present and the length of exposure: smaller concentrations can be harmful over longer periods of time, while increasing concentrations require diminishing exposure times to be harmful. Some system-connected detectors also alert a monitoring service that can dispatch emergency services if necessary. While CO detectors do not serve as smoke detectors and vice versa, dual smoke/CO detectors are also sold. In the home, some common sources of CO include open flames, space heaters and water heaters.

The devices, which retail for $15–$60 USD, are widely available. Battery lifetimes have been increasing as the technology has developed, and certain battery-powered devices now advertise a battery lifetime of over 6 years; some detectors are equipped with a rechargeable battery backup that recharges when the detector is receiving AC power. All CO detectors have test buttons, like smoke detectors. CO detectors can be placed near the ceiling or near the floor, because CO is very close to the same density as air. Since CO is colorless, tasteless and odorless, detection in a home environment is impossible without such a warning device; it is a highly toxic inhalant and attaches to hemoglobin with an affinity about 200 times stronger than that of oxygen. In North America, detectors are required in new construction in a growing number of jurisdictions.

When carbon monoxide detectors were introduced into the market, they had a lifespan of 2 years; technology developments have increased this, and many now advertise up to 10 years. Newer models are designed to signal a need to be replaced after that time-span, although there are many instances of detectors operating far beyond this point. Placement requirements are specified in section 5.1.1 of the 2005 edition of the carbon monoxide guidelines, NFPA 720, published by the National Fire Protection Association, and in section R315.1 of the 2009 edition of the IRC, published by the International Code Council. Manufacturers' recommendations differ to a certain degree, based on research conducted with each one's specific detector; therefore, make sure to read the provided installation manual for each detector before installing it.

CO detectors are available as stand-alone models or as system-connected, monitored devices. System-connected detectors, which can be wired to either a security or fire panel, are monitored by a central station. The gas sensors in CO alarms have a limited and indeterminable life span, typically two to five years, and the test button on a CO alarm only tests the battery and circuitry, not the sensor; CO alarms should therefore be tested with an external source of calibrated test gas, as recommended by the latest version of NFPA 720.
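The concentration-time relationship above is what alarm logic encodes: the higher the CO level, the shorter the permitted exposure before alarming. An illustrative sketch follows; the (ppm, minutes) pairs are placeholders for demonstration, not figures from UL 2034 or NFPA 720.

```python
# (ppm threshold, minutes of sustained exposure before alarming) - placeholders
ALARM_CURVE = [(400, 10), (150, 30), (70, 120)]

def should_alarm(ppm: float, minutes_at_level: float) -> bool:
    """Alarm when a concentration has persisted past its allowed exposure time."""
    for threshold_ppm, max_minutes in ALARM_CURVE:
        if ppm >= threshold_ppm and minutes_at_level >= max_minutes:
            return True
    return False

print(should_alarm(80, 60))   # False: 70+ ppm but still inside its time window
print(should_alarm(80, 150))  # True: 70+ ppm sustained past 120 minutes
```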
35.
Catalytic bead sensor
–
A catalytic bead sensor is a type of sensor used for combustible gas detection, from the family of gas sensors known as pellistors. The catalytic bead sensor consists of two coils of fine platinum wire, each embedded in a bead of alumina and connected electrically in a Wheatstone bridge circuit. One of the pellistors is impregnated with a special catalyst which promotes oxidation, whilst the other is treated to inhibit oxidation. Current is passed through the coils so that they reach a temperature at which oxidation of a gas readily occurs at the catalysed bead. Passing combustible gas raises the temperature of that bead further, which increases the resistance of its coil and unbalances the bridge. This output change is linear, for most gases, up to and beyond 100% LEL, and response time is a few seconds to detect alarm levels.

The technology has known weaknesses. Catalyst poisoning: because of the direct contact of the gas with the catalytic surface, the catalyst may be deactivated in some circumstances. Sensor drift: decreased sensitivity may occur depending on operating and ambient conditions. Modes of failure, which include poisoning and sinter blockage, become apparent during routine maintenance checking.
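The Wheatstone bridge arrangement converts the resistance rise of the catalysed bead into a differential voltage. A minimal sketch with illustrative component values, not taken from any real pellistor datasheet:

```python
V_SUPPLY = 3.0   # bridge excitation voltage, placeholder
R_REF = 10.0     # inhibited (reference) bead resistance in ohms, placeholder
R_FIXED = 10.0   # the two fixed bridge resistors in ohms, placeholder

def bridge_output(r_active: float) -> float:
    """Differential voltage between the two midpoints of the bridge."""
    v_active = V_SUPPLY * r_active / (r_active + R_FIXED)
    v_ref = V_SUPPLY * R_REF / (R_REF + R_FIXED)
    return v_active - v_ref

print(round(bridge_output(10.0), 4))  # 0.0 in clean air: the bridge is balanced
print(round(bridge_output(10.4), 4))  # small positive offset when gas is present
```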
36.
Electronic nose
–
An electronic nose is a device intended to detect odors or flavors. Over the last decade, electronic sensing or e-sensing technologies have undergone important developments from a technical and commercial point of view. The expression electronic sensing refers to the capability of reproducing human senses using sensor arrays and pattern recognition systems. Since 1982, research has been conducted to develop technologies commonly referred to as electronic noses. The stages of the recognition process are similar to human olfaction and are performed for identification, comparison, quantification and other applications, including data storage and retrieval. Hedonic evaluation, however, is a specificity of the human nose, given that it is related to subjective opinions. These devices have undergone much development and are now used to fulfill industrial needs; in all industries, odor assessment is otherwise performed by human sensory analysis or by chemosensors.

Scientist Alexander Graham Bell popularized the notion that it was difficult to measure a smell: "Can you tell whether one smell is just twice as strong as another? Can you measure the difference between one kind of smell and another? It is very obvious that we have very many different kinds of smells, all the way from the odour of violets and roses up to asafetida. But until you can measure their likenesses and differences, you can have no science of odour. If you are ambitious to find a new science, measure a smell." In the decades since Bell made this observation, no science of odor materialised.

The electronic nose was developed in order to mimic human olfaction, which functions as a non-separative mechanism. Essentially, the instrument consists of headspace sampling, a sensor array and pattern recognition modules, which together generate signal patterns used for characterizing odors. Electronic noses include three major parts: a sample delivery system, a detection system and a computing system. The sample delivery system enables the generation of the headspace of a sample and then injects this headspace into the detection system of the electronic nose; it is essential for guaranteeing constant operating conditions. The detection system, which consists of a sensor set, is the reactive part of the instrument. When in contact with volatile compounds, the sensors react, meaning they experience a change of electrical properties. In most electronic noses, each sensor is sensitive to all volatile molecules, but each in its specific way; in bio-electronic noses, however, receptor proteins which respond to specific molecules are used. Most electronic noses use sensor arrays that react to volatile compounds on contact: the response is recorded by the electronic interface, which transforms the signal into a digital value. Recorded data are then computed based on statistical models.

Bio-electronic noses use olfactory receptors, proteins cloned from biological organisms, e.g. humans, that bind to specific odor molecules. One group has developed a bio-electronic nose that mimics the signaling systems used by the human nose to perceive odors at very high sensitivity.
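As a sketch of the pattern-recognition stage, suppose each odor sample yields a vector of responses from a hypothetical four-sensor array. A real system would use calibrated features and a trained statistical model; here a nearest-centroid rule, with made-up fingerprints, stands in for that step.

```python
import math

REFERENCE_PATTERNS = {  # made-up four-sensor fingerprints for two odors
    "coffee":  [0.9, 0.2, 0.7, 0.1],
    "vanilla": [0.3, 0.8, 0.2, 0.6],
}

def classify(sample):
    """Assign an odor label by Euclidean distance to the nearest fingerprint."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(REFERENCE_PATTERNS, key=lambda k: dist(sample, REFERENCE_PATTERNS[k]))

print(classify([0.85, 0.25, 0.65, 0.15]))  # coffee
```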