1.
System of measurement
–
A system of measurement is a collection of units of measurement and rules relating them to each other. Systems of measurement have historically been important, regulated and defined for the purposes of science and commerce. Systems of measurement in modern use include the metric system, the imperial system, and United States customary units. The French Revolution gave rise to the metric system, and this has spread around the world. In most systems, length, mass, and time are base quantities; later scientific developments showed that either electric charge or electric current could be added to extend the set of base quantities, from which many other metrological units can easily be defined. Other quantities, such as power and speed, are derived from the base set: speed, for example, is distance per unit time. Such arrangements were satisfactory in their own contexts, and the preference for a more universal and consistent system spread only gradually with the growth of science. Changing a measurement system has substantial financial and cultural costs, which must be offset against the advantages to be obtained from a more rational system. However, pressure built up, including from scientists and engineers, for conversion to a more rational system. The unifying characteristic is that there was some definition based on some standard; eventually cubits and strides gave way to customary units that met the needs of merchants and scientists. In the metric system and other recent systems, a single basic unit is used for each base quantity, and secondary units are often derived from the basic units by multiplying by powers of ten. Thus the basic unit of length is the metre, and a distance of 1.234 m is 1,234 millimetres. Metrication is complete or nearly complete in almost all countries, though US customary units are heavily used in the United States and to some degree in Liberia, and traditional Burmese units of measurement are used in Burma. U.S. units are used in limited contexts in Canada due to the large volume of trade, and there is also considerable use of imperial weights and measures there, despite de jure Canadian conversion to metric. In the United States, metric units are used almost universally in science, widely in the military, and partially in industry, but customary units predominate in household use. At retail stores, the litre is a commonly used unit for volume, especially on bottles of beverages. Some other standard non-SI units are still in use, such as nautical miles and knots in aviation. Metric systems of units have evolved since the adoption of the first well-defined system in France in 1795; during this evolution the use of these systems has spread throughout the world, first to non-English-speaking countries, and then to English-speaking countries.
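The power-of-ten structure of metric prefixes makes conversion a matter of shifting the decimal point. Here is a minimal sketch in Python; the prefix table is a small assumed subset, not an exhaustive list:

```python
# Metric prefixes are powers of ten, so converting between prefixed
# forms of the same unit is a single multiplication.
PREFIX_FACTORS = {"k": 1e3, "": 1.0, "c": 1e-2, "m": 1e-3}  # assumed subset

def convert(value, from_prefix, to_prefix):
    """Convert a value between two prefixed forms of the same unit."""
    return value * PREFIX_FACTORS[from_prefix] / PREFIX_FACTORS[to_prefix]

print(convert(1.234, "", "m"))  # 1.234 m  -> 1234.0 mm, as in the text
print(convert(1234, "m", ""))   # 1234 mm  -> 1.234 m
```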
2.
SI derived unit
–
The International System of Units specifies a set of seven base units from which all other SI units of measurement are derived. Each of these derived units is either dimensionless or can be expressed as a product of powers of one or more of the base units. For example, the SI derived unit of area is the square metre. The degree Celsius has an unclear status and is arguably an exception to this rule. The names of SI units are written in lowercase; the symbols for units named after persons, however, are always written with an uppercase initial letter. In addition to the two dimensionless derived units radian and steradian, 20 other derived units have special names. Some other units, such as the hour, litre, tonne, bar and electronvolt, are not SI units, but they are widely used in conjunction with SI units. Until 1995, the SI classified the radian and the steradian as supplementary units, but this designation was abandoned.
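Because every derived unit is a product of powers of the seven base units, a unit can be represented as a tuple of exponents, and multiplying units reduces to adding exponents. A minimal sketch; the dimension ordering is an illustrative assumption:

```python
# Represent a unit as exponents over the base units (m, kg, s, A, K, mol, cd).
# Multiplying two units adds their exponents component-wise.
def multiply(u, v):
    return tuple(a + b for a, b in zip(u, v))

metre = (1, 0, 0, 0, 0, 0, 0)
kilogram = (0, 1, 0, 0, 0, 0, 0)
per_second = (0, 0, -1, 0, 0, 0, 0)

square_metre = multiply(metre, metre)   # derived unit of area: m^2
print(square_metre)                     # (2, 0, 0, 0, 0, 0, 0)

# The newton (kg*m/s^2) as a product of powers of base units:
newton = multiply(multiply(kilogram, metre), multiply(per_second, per_second))
print(newton)                           # (1, 1, -2, 0, 0, 0, 0)
```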
3.
Energy
–
In physics, energy is the property that must be transferred to an object in order to perform work on, or to heat, the object; it can be converted in form, but not created or destroyed. The SI unit of energy is the joule, which is the energy transferred to an object by the mechanical work of moving it a distance of 1 metre against a force of 1 newton. Mass and energy are closely related: for example, with a sensitive enough scale, one could measure an increase in mass after heating an object. Living organisms require available energy to stay alive, such as the energy humans get from food. Civilisation gets the energy it needs from energy resources such as fuels and nuclear fuel. The processes of Earth's climate and ecosystem are driven by the radiant energy Earth receives from the sun. The total energy of a system can be subdivided and classified in various ways. It may be convenient to distinguish gravitational energy, thermal energy, electric energy, and several other types. Many of these classifications overlap; for instance, thermal energy usually consists partly of kinetic and partly of potential energy. Some types of energy are a mix of both potential and kinetic energy; an example is mechanical energy, which is the sum of kinetic and potential energy in a system. Whenever physical scientists discover that a phenomenon appears to violate the law of energy conservation, new forms of energy are typically identified that account for the discrepancy. Heat and work are special cases in that they are not properties of systems: in general we cannot measure how much heat or work is present in an object, but rather only how much energy is transferred among objects in certain ways during the occurrence of a given process. Heat and work are measured as positive or negative depending on which side of the transfer we view them from, and the distinctions between different kinds of energy are not always clear-cut. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness. The modern analog of vis viva, kinetic energy, differs from it only by a factor of two. In 1807, Thomas Young was possibly the first to use the term energy instead of vis viva in its modern sense. Gustave-Gaspard Coriolis described kinetic energy in 1829 in its modern sense, and the law of conservation of energy was first postulated in the early 19th century; it applies to any isolated system. It was argued for years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat, and these developments led to the theory of conservation of energy, formalized largely by William Thomson as the field of thermodynamics.
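The claim that heating an object measurably increases its mass follows from mass-energy equivalence, E = mc². A rough back-of-the-envelope sketch; the specific heat and temperature rise are assumed example numbers:

```python
# Heating 1 kg of water by 50 K adds energy E = m * c_water * dT;
# by E = mc^2, the mass increases by dm = E / c**2.
c = 299_792_458        # speed of light, m/s
c_water = 4186         # assumed specific heat of water, J/(kg*K)
m, dT = 1.0, 50.0      # example: 1 kg of water heated by 50 K

E = m * c_water * dT   # ~2.1e5 J of added thermal energy
dm = E / c**2          # ~2.3e-12 kg: far below what any ordinary scale resolves
print(E, dm)
```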
4.
James Prescott Joule
–
James Prescott Joule FRS HFRSE DCL LLD (/dʒuːl/) was an English physicist and brewer, born in Salford, Lancashire. Joule studied the nature of heat and discovered its relationship to mechanical work; this led to the law of conservation of energy, which in turn led to the development of the first law of thermodynamics. The SI derived unit of energy, the joule, is named after him. He worked with Lord Kelvin to develop the absolute scale of temperature, which came to be called the Kelvin scale. Joule also made observations of magnetostriction, and he found the relationship between the current through a resistor and the heat dissipated, which is now called Joule's first law. He was the son of Benjamin Joule, a brewer, and his wife. Joule was tutored as a young man by the famous scientist John Dalton and was strongly influenced by the chemist William Henry and the Manchester engineer Peter Ewart. He was fascinated by electricity, and he and his brother experimented by giving electric shocks to each other. As an adult, Joule managed the brewery; science was merely a serious hobby. Sometime around 1840, he started to investigate the feasibility of replacing the brewery's steam engines with the newly invented electric motor. His first scientific papers on the subject were contributed to William Sturgeon's Annals of Electricity; Joule was a member of the London Electrical Society, established by Sturgeon and others. Motivated in part by a desire to quantify the economics of the choice, he went on to realize that burning a pound of coal in a steam engine was more economical than consuming a costly pound of zinc in an electric battery. Joule captured the output of the two methods in terms of a common standard: the ability to raise a one-pound weight a height of one foot. However, Joule's interest diverted from the financial question to that of how much work could be extracted from a given source. This was a challenge to the caloric theory, which held that heat could neither be created nor destroyed. Caloric theory had dominated thinking in the science of heat since it was introduced by Antoine Lavoisier in 1783, and supporters of the caloric theory readily pointed to the symmetry of the Peltier–Seebeck effect to claim that heat and current were convertible in an, at least approximately, reversible process. He announced his results at a meeting of the chemical section of the British Association for the Advancement of Science in Cork in August 1843 and was met by silence. Joule was undaunted and started to seek a purely mechanical demonstration of the conversion of work into heat: by forcing water through a perforated cylinder, he could measure the slight viscous heating of the fluid. He obtained a mechanical equivalent of 770 ft·lbf/Btu. Joule now tried a third route.
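Joule's figure of 770 ft·lbf/Btu can be checked against modern unit definitions. A short sketch using standard conversion factors shows his perforated-cylinder result was only about 1% below today's accepted value:

```python
# Compare Joule's 770 ft*lbf/Btu with the modern mechanical equivalent
# of heat, using standard conversion factors.
FT_LBF_IN_J = 1.3558179   # joules per foot-pound force
BTU_IN_J = 1055.056       # joules per British thermal unit

modern = BTU_IN_J / FT_LBF_IN_J       # ~778.2 ft*lbf/Btu
joule_1843 = 770.0
print(modern, 100 * (modern - joule_1843) / modern)  # ~1.05% low
```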
5.
SI base unit
–
The International System of Units defines seven units of measure as a basic set from which all other SI units can be derived. The SI base units form a set of mutually independent dimensions, as required by the dimensional analysis commonly employed in science. Symbols of base units named after a person are written with an initial capital letter: thus, the kelvin, named after Lord Kelvin, has the symbol K, and the ampere, named after André-Marie Ampère, has the symbol A. Many other units, such as the litre, are not part of the SI. The definitions of the base units have been modified several times since the Metre Convention in 1875. Since the redefinition of the metre in 1960, the kilogram has been the only unit that is directly defined in terms of a physical artifact. However, the mole, the ampere, and the candela are linked through their definitions to the mass of the platinum–iridium cylinder stored in a vault near Paris. It has long been an objective in metrology to define the kilogram in terms of a fundamental constant; two possibilities have attracted particular attention: the Planck constant and the Avogadro constant. The 23rd CGPM decided to postpone any formal change until the next General Conference in 2011.
6.
Kilogram
–
The kilogram or kilogramme is the base unit of mass in the International System of Units and is defined as being equal to the mass of the International Prototype of the Kilogram (IPK). The avoirdupois pound, used in both the imperial and US customary systems, is defined as exactly 0.45359237 kg, making one kilogram approximately equal to 2.2046 avoirdupois pounds. Other traditional units of weight and mass around the world are also defined in terms of the kilogram. The gram, 1/1000 of a kilogram, was provisionally defined in 1795 as the mass of one cubic centimetre of water at the melting point of ice. The final kilogram, manufactured as a prototype in 1799 and from which the IPK was derived in 1875, had a mass equal to the mass of 1 dm³ of water at its maximum density. The kilogram is the only SI base unit with an SI prefix as part of its name, and it is also the only SI unit that is still directly defined by an artifact rather than a fundamental physical property that can be reproduced in different laboratories. Three other base units and 17 derived units in the SI system are defined relative to the kilogram; only 8 other units do not require the kilogram in their definition: those of temperature, time and frequency, length, and angle. At its 2011 meeting, the CGPM agreed in principle that the kilogram should be redefined in terms of the Planck constant; the decision was originally deferred until 2014, and in 2014 it was deferred again until the next meeting. There are currently several different proposals for the redefinition; these are described in the Proposed Future Definitions section below. The International Prototype Kilogram is rarely used or handled. In the decree of 1795, the term gramme thus replaced gravet, and the French spelling was adopted in the United Kingdom when the word was used for the first time in English in 1797, with the spelling kilogram being adopted in the United States. In the United Kingdom both spellings are used, with kilogram having become by far the more common. UK law regulating the units to be used when trading by weight or measure does not prevent the use of either spelling. In the 19th century the French word kilo, a shortening of kilogramme, was imported into the English language, where it has been used to mean both kilogram and kilometre. In 1935 the metre–kilogram–second system was adopted by the IEC as the Giorgi system, now known as the MKS system. In 1948 the CGPM commissioned the CIPM to make recommendations for a practical system of units of measurement. This led to the launch of SI in 1960 and the subsequent publication of the SI Brochure. The kilogram is a unit of mass, a property which corresponds to the common perception of how heavy an object is. Mass is an inertial property; that is, it is related to the tendency of an object at rest to remain at rest, or if in motion to remain in motion at a constant velocity. Accordingly, for astronauts in microgravity, no effort is required to hold objects off the cabin floor; they are weightless. However, since objects in microgravity still retain their mass and inertia, the ratio of the force of gravity on two objects, as measured by a scale, is equal to the ratio of their masses. On April 7, 1795, the gram was decreed in France to be the weight of a volume of pure water equal to the cube of the hundredth part of the metre.
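Because the avoirdupois pound is defined as exactly 0.45359237 kg, kilogram-pound conversion is exact in one direction and follows directly in the other. A minimal sketch:

```python
# The pound is exactly 0.45359237 kg by definition, so one kilogram
# is 1 / 0.45359237, about 2.2046 avoirdupois pounds.
LB_IN_KG = 0.45359237

def kg_to_lb(kg):
    return kg / LB_IN_KG

def lb_to_kg(lb):
    return lb * LB_IN_KG

print(kg_to_lb(1.0))  # 2.2046226218...
print(lb_to_kg(1.0))  # 0.45359237 exactly
```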
7.
Metre
–
The metre or meter is the base unit of length in the International System of Units. The metre is defined as the length of the path travelled by light in a vacuum in 1/299792458 of a second. The metre was originally defined in 1793 as one ten-millionth of the distance from the equator to the North Pole. In 1799, it was redefined in terms of a prototype metre bar. In 1960, the metre was redefined in terms of a certain number of wavelengths of a certain emission line of krypton-86. In 1983, the current definition was adopted. The imperial inch is defined as 0.0254 metres; one metre is about 3 3⁄8 inches longer than a yard. Metre is the standard spelling of the metric unit for length in nearly all English-speaking nations except the United States and the Philippines, which use meter. Measuring devices, however, are spelled -meter in all variants of English; the suffix -meter has the same Greek origin as the unit of length. This range of uses is found in Latin, French, and English, and the root thus connotes both measurement and moderation. In 1668 the English cleric and philosopher John Wilkins proposed in an essay a decimal-based unit of length. As a result of the French Revolution, the French Academy of Sciences charged a commission with determining a single scale for all measures. In 1668, Wilkins proposed using Christopher Wren's suggestion of defining the metre using a pendulum with a length which produced a half-period of one second; Christiaan Huygens had observed that length to be 38 Rijnland inches or 39.26 English inches, the equivalent of what is now known to be 997 mm. No official action was taken regarding this suggestion. In the 18th century, there were two approaches to the definition of the unit of length. One favoured Wilkins's approach: to define the metre in terms of the length of a pendulum which produced a half-period of one second. The other approach was to define the metre as one ten-millionth of the length of a quadrant along the Earth's meridian, that is, the distance from the Equator to the North Pole. This means that the quadrant would have been defined as exactly 10,000,000 metres at that time. To establish a universally accepted foundation for the definition of the metre, more measurements of this meridian were needed. This portion of the meridian, assumed to be the same length as the Paris meridian, was to serve as the basis for the length of the half meridian connecting the North Pole with the Equator.
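Under the 1983 definition the metre is fixed by the defined speed of light, so lengths follow directly from light travel times. A small sketch:

```python
# The metre is the distance light travels in 1/299792458 s,
# so length = c * t, with c exact by definition since 1983.
C = 299_792_458  # m/s, exact by definition

print(C * (1 / 299_792_458))  # 1.0 m: the definition itself
print(C * 1e-9)               # ~0.2998 m: light travels about 30 cm per nanosecond
```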
8.
Second
–
The second is the base unit of time in the International System of Units. It is qualitatively defined as the second division of the hour by sixty, the first division being the minute. The SI definition of the second is the duration of 9192631770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom. Seconds may be measured using a mechanical, electrical or atomic clock. SI prefixes are combined with the word second to denote subdivisions of the second, e.g. the millisecond, the microsecond, and the nanosecond, though SI prefixes may also be used to form multiples of the second, such as the kilosecond. The second is also the base unit of time in other systems of measurement: the centimetre–gram–second, metre–kilogram–second and metre–tonne–second systems. Absolute zero implies no movement, and therefore zero external radiation effects; the second thus defined is consistent with the ephemeris second, which was based on astronomical measurements. The realization of the second is described briefly in a special publication from the National Institute of Standards and Technology. One international second is equal to 1⁄60 minute, 1⁄3,600 hour, 1⁄86,400 day and 1⁄31,557,600 Julian year. The Hellenistic astronomers Hipparchus and Ptolemy subdivided the day into sixty parts. They also used the hour and simple fractions of an hour. No sexagesimal unit of the day was used as an independent unit of time. The modern second is subdivided using decimals, although the third (a sixtieth of a second) remains in some languages. The earliest clocks to display seconds appeared during the last half of the 16th century; the second became accurately measurable with the development of mechanical clocks keeping mean time, as opposed to the apparent time displayed by sundials. The earliest spring-driven timepiece with a hand which marked seconds is an unsigned clock depicting Orpheus in the Fremersdorf collection. During the third quarter of the 16th century, Taqi al-Din built a clock with marks every 1/5 minute, and in 1579, Jost Bürgi built a clock for William of Hesse that marked seconds. In 1581, Tycho Brahe redesigned clocks at his observatory that displayed minutes so they also displayed seconds; however, they were not yet accurate enough for seconds. In 1587, Tycho complained that his four clocks disagreed by plus or minus four seconds. In 1670, London clockmaker William Clement added the seconds pendulum to the original pendulum clock of Christiaan Huygens. From 1670 to 1680, Clement made many improvements to his clock; this clock used an anchor escapement mechanism with a seconds pendulum to display seconds in a small subdial.
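The listed fractions of larger time units are easy to verify with a quick sketch:

```python
# The second relates to larger units by fixed factors:
# 60 s per minute, 3600 s per hour, 86400 s per day,
# and 31,557,600 s per Julian year (365.25 days).
MINUTE = 60
HOUR = 60 * MINUTE
DAY = 24 * HOUR
JULIAN_YEAR = 365.25 * DAY

print(HOUR, DAY, JULIAN_YEAR)  # 3600 86400 31557600.0
```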
9.
Kilowatt hour
–
The kilowatt-hour is a derived unit of energy equal to 3.6 megajoules. If energy is being transmitted or used at a constant rate over a period of time, the total energy in kilowatt-hours is the power in kilowatts multiplied by the time in hours. The kilowatt-hour is commonly used as a billing unit for energy delivered to consumers by electric utilities. The kilowatt-hour is a composite unit of energy equivalent to one kilowatt of power sustained for one hour: 1 kW⋅h = 1,000 W × 3,600 s = 3,600 kJ = 3.6 MJ. One watt is equal to 1 J/s. One kilowatt-hour is 3.6 megajoules, which is the amount of energy converted if work is done at a rate of one thousand watts for one hour. The base unit of energy within the International System of Units is the joule; the hour is a unit of time outside the SI, making the kilowatt-hour a non-SI unit of energy. The kilowatt-hour is not listed among the non-SI units accepted by the BIPM for use with the SI, although the hour is. An electric heater rated at 1000 watts operating for one hour uses one kilowatt-hour of energy; a television rated at 100 watts operating for 10 hours continuously uses one kilowatt-hour; a 40-watt light bulb operating continuously for 25 hours uses one kilowatt-hour. Electrical energy is sold in kilowatt-hours, and the cost of running equipment is the product of power in kilowatts, running time in hours, and the price per kilowatt-hour. The unit price of electricity may depend upon the rate of consumption and the time of day, and industrial users may also have extra charges according to their peak usage. The symbol kWh is commonly used in commercial, educational, scientific and media publications, and is the usual practice in electrical power engineering. Other abbreviations and symbols may be encountered: kW h is less commonly used, and it is consistent with SI standards. This form is supported by a standard issued jointly by an international and a national organization. However, at least one major usage guide and the IEEE/ASTM standard allow kWh, while one guide published by NIST specifically recommends avoiding kWh to avoid possible confusion. kW·h is, like kW h, preferred by SI standards. The US official fuel-economy window sticker for electric vehicles uses the abbreviation kW-hrs. Variations in capitalization are sometimes seen: KWh, KWH, kwh. The notation kW/h, as a symbol for kilowatt-hour, is not correct. To convert a quantity measured in a unit in the left column to the units in the top row, multiply by the factor in the cell where the row and column intersect. All the SI prefixes are commonly applied to the watt-hour: a kilowatt-hour is 1,000 W·h (symbols kW·h, kWh or kW h), a megawatt-hour is 1 million W·h, a milliwatt-hour is 1/1000 W·h, and so on. Megawatt-hours, gigawatt-hours, and terawatt-hours are often used for metering larger amounts of energy to industrial customers.
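Since energy is power multiplied by time, a utility bill in kilowatt-hours is a straightforward product. A minimal sketch; the tariff below is an assumed example rate, not a real price:

```python
# Energy in kWh is power in kW times time in hours; 1 kWh = 3.6 MJ.
def energy_kwh(power_kw, hours):
    return power_kw * hours

heater = energy_kwh(1.0, 1)   # 1000 W heater for 1 h  -> 1 kWh
tv = energy_kwh(0.1, 10)      # 100 W TV for 10 h      -> 1 kWh

print(heater * 3.6e6)         # 3600000.0 J in one kilowatt-hour
print((heater + tv) * 0.15)   # cost at an assumed $0.15/kWh tariff
```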
10.
Thermochemistry
–
Thermochemistry is the study of the energy and heat associated with chemical reactions and/or physical transformations. A reaction may release or absorb energy, and a phase change may do the same. Thermochemistry focuses on these changes, particularly on the system's energy exchange with its surroundings. Thermochemistry is useful in predicting reactant and product quantities throughout the course of a given reaction; in combination with entropy determinations, it is also used to predict whether a reaction is spontaneous or non-spontaneous, favorable or unfavorable. Endothermic reactions absorb heat, while exothermic reactions release heat. Thermochemistry coalesces the concepts of thermodynamics with the concept of energy in the form of chemical bonds. The subject commonly includes calculations of such quantities as heat capacity, heat of combustion, heat of formation, enthalpy, entropy, free energy, and calories. Stated in modern terms, the founding laws are as follows. Lavoisier and Laplace's law: the energy change accompanying any transformation is equal and opposite to the energy change accompanying the reverse process. Hess's law: the energy change accompanying any transformation is the same whether the process occurs in one step or many. These statements preceded the first law of thermodynamics and helped in its formulation. Lavoisier, Laplace and Hess also investigated specific heat and latent heat, although it was Joseph Black who made the most important contributions to the development of latent energy changes. Gustav Kirchhoff showed in 1858 that the variation of the heat of reaction with temperature is given by the difference in heat capacity between products and reactants: dΔH/dT = ΔCp. Integration of this equation permits the evaluation of the heat of reaction at one temperature from measurements at another temperature. The measurement of heat changes is performed using calorimetry, usually an enclosed chamber within which the change to be examined occurs; the temperature of the chamber is monitored using a thermometer or thermocouple. Modern calorimeters are frequently supplied with automatic devices to provide a quick read-out of information. Several thermodynamic definitions are very useful in thermochemistry. A system is the portion of the universe that is being studied; everything outside the system is considered the surroundings or environment. A process relates to the change of state of a system. An isothermal process occurs when the temperature of the system remains constant; an isobaric process occurs when the pressure of the system remains constant; a process is adiabatic when no heat exchange occurs.
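Kirchhoff's relation dΔH/dT = ΔCp lets one shift a measured heat of reaction from one temperature to another; if ΔCp is treated as constant over the interval, the integral reduces to a linear correction. A sketch with made-up example numbers:

```python
# Integrating dDeltaH/dT = DeltaCp with DeltaCp assumed constant gives
# DeltaH(T2) = DeltaH(T1) + DeltaCp * (T2 - T1).
def delta_h_at(T2, T1, dH_T1, dCp):
    """Heat of reaction at T2 (J/mol) from a measurement at T1."""
    return dH_T1 + dCp * (T2 - T1)

# Hypothetical example: dH = -92,000 J/mol at 298 K, dCp = -45 J/(mol*K)
print(delta_h_at(400.0, 298.0, -92_000.0, -45.0))  # -96590.0 J/mol
```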
11.
Electronvolt
–
In physics, the electronvolt is a unit of energy equal to approximately 1.6×10⁻¹⁹ joules. By definition, it is the amount of energy gained by the charge of a single electron moving across an electric potential difference of one volt. Thus it is 1 volt multiplied by the elementary charge; therefore, one electronvolt is equal to 1.6021766208×10⁻¹⁹ J. The electronvolt is not an SI unit, and its definition is empirical: like the elementary charge on which it is based, it is not an independent quantity but is equal to (1 J/C)·√(2hα/μ0c0). It is a common unit of energy within physics, widely used in solid state, atomic, nuclear and particle physics, and it is commonly used with the metric prefixes milli-, kilo-, mega- and giga-. In some older documents, and in the name Bevatron, the symbol BeV is used, which stands for billion electronvolts; it is equivalent to the GeV. By mass–energy equivalence, the electronvolt is also a unit of mass, and it is common in particle physics, where units of mass and energy are often interchanged, to express mass in units of eV/c², where c is the speed of light in vacuum. It is also common to express mass simply in terms of eV as a unit of mass, effectively using a system of natural units with c set to 1. The mass equivalent of 1 eV/c² is 1 eV/c² = (1.6021766208×10⁻¹⁹ C × 1 V)/c² = 1.783×10⁻³⁶ kg. For example, an electron and a positron, each with a mass of 0.511 MeV/c², can annihilate to yield 1.022 MeV of energy; the proton has a mass of 0.938 GeV/c². In general, the masses of all hadrons are of the order of 1 GeV/c². The unified atomic mass unit, 1 gram divided by Avogadro's number, is almost the mass of a hydrogen atom, which is mostly the mass of the proton. To convert to megaelectronvolts, use the formula 1 u = 931.4941 MeV/c² = 0.9314941 GeV/c². In high-energy physics, the electronvolt is often used as a unit of momentum. A potential difference of 1 volt causes an electron to gain an amount of energy of 1 eV. This gives rise to the usage of eV as a unit of momentum, for the energy supplied results in acceleration of the particle. The dimensions of momentum units are LMT⁻¹; the dimensions of energy units are L²MT⁻². Dividing the units of energy by a fundamental constant that has units of velocity therefore gives units of momentum. In the field of particle physics, the fundamental velocity unit is the speed of light in vacuum c; thus, dividing energy in eV by the speed of light gives momentum in units of eV/c. The fundamental velocity constant c is often dropped from the units of momentum by way of defining units of length such that the value of c is unity.
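The eV-to-joule and eV/c²-to-kilogram conversions quoted above follow directly from the elementary charge and the speed of light:

```python
# 1 eV = e * 1 V in joules; 1 eV/c^2 = (e * 1 V) / c**2 in kilograms.
E_CHARGE = 1.6021766208e-19   # elementary charge, C
C = 299_792_458               # speed of light, m/s

ev_in_joules = E_CHARGE * 1.0           # 1.602...e-19 J
ev_per_c2_in_kg = ev_in_joules / C**2   # ~1.783e-36 kg
print(ev_in_joules, ev_per_c2_in_kg)

# The electron's 0.511 MeV/c^2 expressed in kilograms:
print(0.511e6 * ev_per_c2_in_kg)        # ~9.11e-31 kg
```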
12.
International System of Units
–
The International System of Units (SI) is the modern form of the metric system, and is the most widely used system of measurement. It comprises a coherent system of units of measurement built on seven base units; the system also establishes a set of twenty prefixes to the unit names and unit symbols that may be used when specifying multiples and fractions of the units. The system was published in 1960 as the result of an initiative that began in 1948. It is based on the metre–kilogram–second system of units rather than any variant of the centimetre–gram–second system. The motivation for the development of the SI was the diversity of units that had sprung up within the CGS systems. The International System of Units has been adopted by most developed countries; however, the adoption has not been universal in all English-speaking countries. The metric system was first implemented during the French Revolution, with just the metre and kilogram as standards of length and mass. In the 1830s Carl Friedrich Gauss laid the foundations for a coherent system based on length, mass, and time. In the 1860s a group working under the auspices of the British Association for the Advancement of Science formulated the requirement for a coherent system of units with base units and derived units. Meanwhile, in 1875, the Treaty of the Metre passed responsibility for verification of the kilogram and the metre to international bodies, and in 1921 the Treaty was extended to include all physical quantities, including electrical units originally defined in 1893. The units associated with these quantities were the metre, kilogram, second, ampere, kelvin and candela. In 1971, a seventh base quantity, amount of substance, represented by the mole, was added to the definition of SI. On 11 July 1792, the commission proposed the names metre, are, litre and grave for the units of length, area, capacity and mass, respectively. The committee also proposed that multiples and submultiples of these units were to be denoted by decimal-based prefixes, such as centi for a hundredth. On 10 December 1799, the law by which the metric system was to be definitively adopted in France was passed. Prior to Gauss's work, the strength of the magnetic field had only been described in relative terms. The technique used by Gauss was to equate the torque induced on a magnet of known mass by the earth's magnetic field with the torque induced on an equivalent system under gravity. The resultant calculations enabled him to assign dimensions to the magnetic field based on mass, length and time. A French-inspired initiative for international cooperation in metrology led to the signing in 1875 of the Metre Convention. Initially the convention only covered standards for the metre and the kilogram; one of each was selected at random to become the International prototype metre and International prototype kilogram that replaced the mètre des Archives and kilogramme des Archives respectively. Each member state was entitled to one of each of the prototypes to serve as the national prototype for that country. Initially the organisation's prime purpose was a periodic recalibration of national prototype metres. The official language of the Metre Convention is French, and the authoritative version of all official documents published by or on behalf of the CGPM is the French-language version.
13.
Work (physics)
–
In physics, a force is said to do work if, when acting, there is a displacement of the point of application in the direction of the force. For example, when a ball is held above the ground and then dropped, the work done on the ball as it falls is equal to the weight of the ball multiplied by the distance to the ground. The SI unit of work is the joule, which is defined as the work expended by a force of one newton through a distance of one metre. The dimensionally equivalent newton-metre is sometimes used as the unit for work, but this can be confused with the newton-metre as the unit of torque. Usage of N⋅m is discouraged by the SI authority, since it can lead to confusion as to whether the quantity expressed in newton metres is a torque measurement, or a measurement of energy. Non-SI units of work include the erg, the foot-pound, the foot-poundal, the kilowatt-hour and the litre-atmosphere. Due to work having the same physical dimension as heat, measurement units typically reserved for heat or energy content, such as the therm and the BTU, are occasionally used as well. The work done by a constant force of magnitude F on a point that moves a distance s in a straight line in the direction of the force is the product W = F s. For example, if a force of 10 newtons acts on a point that travels 2 metres, the work done is W = 10 N × 2 m = 20 J. This is approximately the work done lifting a 1 kg weight from ground level to over a person's head against the force of gravity. Notice that the work is doubled either by lifting twice the weight the same distance or by lifting the same weight twice the distance. Work is closely related to energy. The work-energy principle states that an increase in the kinetic energy of a rigid body is caused by an equal amount of positive work done on the body by the resultant force acting on that body; conversely, a decrease in kinetic energy is caused by an equal amount of negative work done by the resultant force. From Newton's second law, it can be shown that work on a free, rigid body is equal to the change in kinetic energy corresponding to the velocity and rotation of that body. The work of forces generated by a potential function is known as potential energy. These formulas demonstrate that work is the energy associated with the action of a force, so work subsequently possesses the physical dimensions, and units, of energy. The work/energy principles discussed here are identical to electric work/energy principles. Constraint forces determine the movement of components in a system, constraining the object within a boundary. Constraint forces ensure the velocity in the direction of the constraint is zero; this only applies for a single particle system. For example, in an Atwood machine, the rope does work on each body. There are, however, cases where this is not true.
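The constant-force formula W = F s, and the lifting example from the text, in a minimal sketch:

```python
# Work done by a constant force along the direction of motion: W = F * s.
def work(force_n, distance_m):
    return force_n * distance_m

print(work(10, 2))        # 20 J, the example in the text

# Lifting 1 kg through 2 m against gravity (g ~ 9.81 m/s^2):
print(work(1 * 9.81, 2))  # ~19.6 J, roughly the same 20 J
```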
14.
Force
–
In physics, a force is any interaction that, when unopposed, will change the motion of an object. In other words, a force can cause an object with mass to change its velocity; force can also be described intuitively as a push or a pull. A force has both magnitude and direction, making it a vector quantity. It is measured in the SI unit of newtons and represented by the symbol F. The original form of Newton's second law states that the net force acting upon an object is equal to the rate at which its momentum changes with time. In an extended body, each part usually applies forces on the adjacent parts; such internal mechanical stresses cause no acceleration of that body, as the forces balance one another. Pressure, the distribution of many small forces applied over an area of a body, is a simple type of stress that, if unbalanced, can cause the body to accelerate. Stress usually causes deformation of solid materials, or flow in fluids. Early thinkers held misconceptions about force, in part due to an incomplete understanding of the sometimes non-obvious force of friction; a fundamental error was the belief that a force is required to maintain motion, even at a constant velocity. Most of the previous misunderstandings about motion and force were eventually corrected by Galileo Galilei and Sir Isaac Newton. With his mathematical insight, Sir Isaac Newton formulated laws of motion that were not improved on for nearly three hundred years. The Standard Model predicts that exchanged particles called gauge bosons are the fundamental means by which forces are emitted and absorbed. Only four main interactions are known; in order of decreasing strength, they are: strong, electromagnetic, weak, and gravitational. High-energy particle physics observations made during the 1970s and 1980s confirmed that the weak and electromagnetic forces are expressions of a more fundamental electroweak interaction. Since antiquity the concept of force has been recognized as integral to the functioning of each of the simple machines. The mechanical advantage given by a simple machine allowed for less force to be used in exchange for that force acting over a greater distance for the same amount of work. Analysis of the characteristics of forces ultimately culminated in the work of Archimedes, who was famous for formulating a treatment of buoyant forces inherent in fluids. Aristotle provided a philosophical discussion of the concept of a force as an integral part of Aristotelian cosmology. In Aristotle's view, the terrestrial sphere contained four elements that come to rest at different natural places therein. Aristotle believed that motionless objects on Earth, those composed mostly of the elements earth and water, were in their natural place on the ground. He distinguished between the tendency of objects to find their natural place, which led to natural motion, and unnatural or forced motion.
15.
Newton (unit)
–
The newton is the International System of Units derived unit of force. It is named after Isaac Newton in recognition of his work on classical mechanics; see below for the conversion factors. One newton is the force needed to accelerate one kilogram of mass at the rate of one metre per second squared in the direction of the applied force. In 1948, the 9th CGPM resolution 7 adopted the name newton for this unit of force. The MKS system then became the blueprint for today's SI system of units, and the newton thus became the standard unit of force in le Système International d'Unités. This SI unit is named after Isaac Newton; as with every International System of Units unit named for a person, the first letter of its symbol is upper case. Note that degree Celsius conforms to this rule because the d is lowercase. — Based on The International System of Units, section 5.2. Newton's second law of motion states that F = ma, where F is the applied force, m is the mass of the object receiving the force, and a is the resulting acceleration. The newton is therefore 1 N = 1 kg⋅m/s², where the following symbols are used for the units: N for newton, kg for kilogram, m for metre, and s for second. In dimensional analysis, F = MLT⁻², where F is force, M is mass, L is length and T is time. At average gravity on earth, a kilogram mass exerts a force of about 9.8 newtons, and an average-sized apple exerts about one newton of force, which we measure as the apple's weight. For example, the tractive effort of a Class Y steam train and the thrust of an F100 fighter jet engine are both around 130 kN. One kilonewton, 1 kN, is 102.0 kgf: 1 kN = 102 kg × 9.81 m/s². So for example, a platform rated at 321 kilonewtons will safely support a 32,100-kilogram load. Specifications in kilonewtons are common in safety specifications for the holding values of fasteners and Earth anchors, working loads in tension and in shear, the thrust of rocket engines and launch vehicles, and the clamping forces of the various moulds in injection moulding machines used to manufacture plastic parts.
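Newton's second law and the weight examples above can be checked in a few lines:

```python
# F = m * a defines the newton: 1 N = 1 kg * 1 m/s^2.
def force(mass_kg, accel_ms2):
    return mass_kg * accel_ms2

print(force(1.0, 1.0))     # 1.0 N, the definition itself
print(force(1.0, 9.81))    # ~9.8 N: weight of a 1 kg mass at average gravity
print(force(0.102, 9.81))  # ~1.0 N: roughly an average-sized apple's weight
```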
16.
Current (electricity)
–
An electric current is a flow of electric charge. In electric circuits this charge is often carried by moving electrons in a wire. It can also be carried by ions in an electrolyte, or by both ions and electrons, such as in an ionised gas. The SI unit for measuring an electric current is the ampere, and electric current is measured using a device called an ammeter. Electric currents cause Joule heating, which creates light in incandescent light bulbs. They also create magnetic fields, which are used in motors, inductors and generators. The particles that carry the charge in an electric current are called charge carriers. In metals, one or more electrons from each atom are loosely bound to the atom and can move freely about within the metal; these conduction electrons are the charge carriers in metal conductors. The conventional symbol for current is I, which originates from the French phrase intensité de courant (current intensity); current intensity is often referred to simply as current. The I symbol was used by André-Marie Ampère, after whom the unit of current is named, in formulating the eponymous Ampère's force law. The notation travelled from France to Great Britain, where it became standard. In a conductive material, the moving charged particles which constitute the electric current are called charge carriers. In other materials, notably the semiconductors, the carriers can be positive or negative, and positive and negative charge carriers may even be present at the same time. A flow of positive charges gives the same electric current, and has the same effect in a circuit, as an equal flow of negative charges in the opposite direction. Since current can be the flow of either positive or negative charges, or both, a convention is needed that is independent of the type of charge carrier: the direction of current is arbitrarily defined as the same direction as positive charges flow. This is called the conventional direction of the current I. If the current actually flows in the opposite direction, the variable I has a negative value. When analyzing electrical circuits, the direction of current through a specific circuit element is usually unknown; consequently, the directions of currents are often assigned arbitrarily.
17.
Ampere
–
The ampere, often shortened to amp, is a unit of electric current. In the International System of Units the ampere is one of the seven SI base units. It is named after André-Marie Ampère, French mathematician and physicist, considered the father of electrodynamics. The SI defines the ampere in terms of other base units by measuring the electromagnetic force between electrical conductors carrying electric current. Historically, the ampere was defined as one coulomb of charge per second; in SI, the unit of charge, the coulomb, is now defined as the charge carried by one ampere during one second. In the future, the SI definition may shift back to charge as the base unit. Ampère's force law states that there is an attractive or repulsive force between two parallel wires carrying an electric current; this force is used in the formal definition of the ampere. The SI unit of charge, the coulomb, is the quantity of electricity carried in 1 second by a current of 1 ampere; conversely, a current of one ampere is one coulomb of charge going past a given point per second: 1 A = 1 C/s. In general, charge Q is determined by a steady current I flowing for a time t as Q = It. Constant, instantaneous and average current are expressed in amperes, and the charge accumulated, or passed through a circuit over a period of time, is expressed in coulombs. The relation of the ampere to the coulomb is the same as that of the watt to the joule. The ampere was originally defined as one tenth of the unit of electric current in the centimetre–gram–second system of units. That unit, now known as the abampere, was defined as the amount of current that generates a force of two dynes per centimetre of length between two wires one centimetre apart. The size of the unit was chosen so that the units derived from it in the MKSA system would be conveniently sized. The international ampere was an early realization of the ampere, defined as the current that would deposit 0.001118 grams of silver per second from a silver nitrate solution; later, more accurate measurements revealed that this current is 0.99985 A. At present, techniques to establish the realization of an ampere have a relative uncertainty of approximately a few parts in 10⁷, and involve realizations of the watt, the ohm and the volt. Rather than a definition in terms of the force between two current-carrying wires, it has been proposed that the ampere should be defined in terms of the rate of flow of elementary charges. Since a coulomb is approximately equal to 6.2415093×10¹⁸ elementary charges, the proposed change would define 1 A as being the current in the direction of flow of that number of elementary charges per second. In 2005, the International Committee for Weights and Measures agreed to study the proposed change. The new definition was discussed at the 25th General Conference on Weights and Measures in 2014 but for the time being was not adopted. The current drawn by typical constant-voltage energy distribution systems is usually dictated by the power consumed by the system, and for this reason the examples given below are grouped by voltage level.
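The relation Q = It ties the ampere and the coulomb together. A minimal sketch:

```python
# Charge from a steady current: Q = I * t (coulombs = amperes * seconds).
def charge(current_a, time_s):
    return current_a * time_s

print(charge(1.0, 1.0))   # 1.0 C: one ampere flowing for one second
print(charge(0.5, 60.0))  # 30.0 C: half an ampere for a minute
```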
18.
Electrical resistance and conductance
–
The electrical resistance of an electrical conductor is a measure of the difficulty of passing an electric current through that conductor. The inverse quantity is electrical conductance, the ease with which an electric current passes. Electrical resistance shares some conceptual parallels with the notion of mechanical friction. The SI unit of resistance is the ohm, while electrical conductance is measured in siemens. An object of uniform cross section has a resistance proportional to its resistivity and length and inversely proportional to its cross-sectional area. All materials show some resistance, except for superconductors, which have a resistance of zero. For many materials, the current is directly proportional to the voltage; this proportionality is called Ohm's law, and materials that satisfy it are called ohmic materials. In other cases, such as a diode or battery, V and I are not directly proportional. The ratio V/I is sometimes still useful, and is referred to as a chordal resistance or static resistance, since it corresponds to the inverse slope of a chord between the origin and an I–V curve. In other situations, the derivative dV/dI may be most useful. In the hydraulic analogy, current flowing through a wire is like water flowing through a pipe, and the voltage drop across the wire is like the pressure drop that pushes water through the pipe. Conductance is proportional to how much flow occurs for a given pressure. The voltage drop, not the voltage itself, provides the driving force pushing current through a resistor. In hydraulics, it is similar: the pressure difference between two sides of a pipe, not the pressure itself, determines the flow through it. For example, there may be a large water pressure above the pipe, which tries to push water down through the pipe, but there may be an equally large water pressure below the pipe; if these pressures are equal, no water flows. In the same way, a long, thin copper wire has higher resistance than a short, thick copper wire, and a pipe filled with hair restricts the flow of water more than a clean pipe of the same shape. The difference between copper, steel, and rubber is related to their microscopic structure and electron configuration. In addition to geometry and material, there are other factors that influence resistance and conductance, such as temperature. Substances in which electricity can flow are called conductors; a piece of conducting material of a particular resistance meant for use in a circuit is called a resistor. Conductors are made of high-conductivity materials such as metals, in particular copper. Resistors, on the other hand, are made of a wide variety of materials depending on factors such as the desired resistance, the amount of energy that it needs to dissipate, precision, and costs.
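The geometric dependence of resistance, R = ρL/A, and Ohm's law can be sketched together; the resistivity below is a standard textbook value for copper:

```python
# Resistance of a uniform conductor: R = rho * L / A.
# Ohm's law for an ohmic material: V = I * R.
RHO_COPPER = 1.68e-8  # ohm*m, textbook resistivity of copper

def resistance(rho, length_m, area_m2):
    return rho * length_m / area_m2

# A long thin wire versus a short thick one, as in the text:
long_thin = resistance(RHO_COPPER, 10.0, 1e-6)    # ~0.168 ohm
short_thick = resistance(RHO_COPPER, 1.0, 1e-4)   # ~0.000168 ohm
print(long_thin, short_thick)
print(1.5 / long_thin)  # Ohm's law: current drawn at 1.5 V, ~8.9 A
```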
19.
Ohm
–
The ohm is the SI derived unit of electrical resistance, named after German physicist Georg Simon Ohm. The definition of the ohm has been revised several times; today the definition of the ohm is expressed in terms of the quantum Hall effect. In many cases the resistance of a conductor in ohms is approximately constant within a certain range of voltages, temperatures and other parameters. In alternating current circuits, electrical impedance is also measured in ohms. The siemens is the SI derived unit of electric conductance and admittance; also known as the mho, it is the reciprocal of resistance in ohms. The power dissipated by a resistor may be calculated from its resistance and the voltage or current involved. Non-linear resistors have a value that may vary depending on the applied voltage. The rapid rise of electrotechnology in the last half of the 19th century created a demand for a rational, coherent, consistent and international system of units for electrical quantities; telegraphers and other early users of electricity in the 19th century needed a practical standard unit of measurement for resistance. Two different methods of establishing a system of units can be chosen. Various artifacts, such as a length of wire or a standard cell, could be specified as producing defined quantities for resistance, voltage and so on; alternatively, units can be defined coherently in terms of other units. This latter method ensures coherence with the units of energy: defining a unit for resistance that is coherent with units of energy and time in effect also requires defining units for potential and current. Some early definitions of a unit of resistance took the absolute approach: the absolute-units system related magnetic and electrostatic quantities to metric base units of mass, time, and length. These units had the advantage of simplifying the equations used in the solution of electromagnetic problems. However, the CGS units turned out to have impractical sizes for practical measurements. Various artifact standards were proposed as the definition of the unit of resistance. In 1860 Werner Siemens published a suggestion for a reproducible resistance standard in Poggendorff's Annalen der Physik und Chemie: he proposed a column of pure mercury, of one square millimetre cross section, one metre long, the Siemens mercury unit. However, this unit was not coherent with other units. One proposal was to devise a unit based on a mercury column that would be coherent, in effect adjusting the length to make the resistance one ohm. Not all users of units had the resources to carry out experiments to the required precision. The BAAS in 1861 appointed a committee including Maxwell and Thomson to report upon standards of electrical resistance. In the third report of the committee, in 1864, the resistance unit is referred to as "B.A. unit, or Ohmad"; by 1867 the unit is referred to simply as the ohm. The B.A. ohm was intended to be 10⁹ CGS units, but owing to an error in calculations the definition was 1.3% too small. The error was significant for the preparation of working standards. On September 21, 1881, the Congrès international d'électriciens defined a practical unit of ohm for resistance, based on CGS units, using a mercury column at 0 °C.
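The Siemens mercury-column unit can be estimated from R = ρL/A. Taking the resistivity of mercury near 0 °C as roughly 9.5×10⁻⁷ Ω·m (an assumed textbook figure), the one-metre, one-square-millimetre column comes out just under one ohm, which is consistent with the unit not being coherent with the later ohm:

```python
# Estimate the Siemens mercury-column unit: R = rho * L / A.
RHO_HG = 9.5e-7  # ohm*m, assumed resistivity of mercury near 0 C
L = 1.0          # column length, m
A = 1e-6         # cross section: 1 mm^2 expressed in m^2

print(RHO_HG * L / A)  # ~0.95 ohm, just under the modern ohm
```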
20.
Pascal (unit)
–
The pascal is the SI derived unit of pressure, used to quantify internal pressure, stress, Young's modulus and ultimate tensile strength. It is defined as one newton per square metre, and it is named after the French polymath Blaise Pascal. Common multiple units of the pascal are the hectopascal (1 hPa = 100 Pa), which is equal to one millibar, and the kilopascal (1 kPa = 1,000 Pa). The unit of measurement called standard atmosphere is defined as 101,325 Pa and approximates the average pressure at sea level at the latitude 45° N. Meteorological reports typically state atmospheric pressure in hectopascals. The unit is named after Blaise Pascal, noted for his contributions to hydrodynamics and hydrostatics, and experiments with a barometer. The name pascal was adopted for the SI unit newton per square metre by the 14th General Conference on Weights and Measures. One pascal is the pressure exerted by a force of magnitude one newton perpendicularly upon an area of one square metre. The unit of measurement called atmosphere or standard atmosphere is 101,325 Pa; this value is often used as a reference pressure and specified as such in some national and international standards, such as ISO 2787, ISO 2533 and ISO 5024. In contrast, IUPAC recommends the use of 100 kPa as a standard pressure when reporting the properties of substances. Geophysicists use the gigapascal in measuring or calculating tectonic stresses and pressures within the Earth. Medical elastography measures tissue stiffness non-invasively with ultrasound or magnetic resonance imaging. In materials science and engineering, the pascal measures the stiffness, tensile strength and compressive strength of materials; in engineering use, because the pascal represents a very small quantity, larger multiples are generally used. The pascal is also equivalent to the SI unit of energy density, the joule per cubic metre; this applies not only to the thermodynamics of pressurised gases, but also to the energy density of electric, magnetic, and gravitational fields. In measurements of sound pressure, or loudness of sound, a pressure of one pascal is equal to 94 decibels SPL; the quietest sound a human can hear, known as the threshold of hearing, is 0 dB SPL, or 20 µPa. The airtightness of buildings is measured at 50 Pa. The units of atmospheric pressure commonly used in meteorology were formerly the bar, which was close to the average air pressure on Earth, and the millibar. Since the introduction of SI units, meteorologists generally measure pressures in hectopascals; exceptions include Canada and Portugal, which use kilopascals. Many countries also use the millibar or hectopascal to give aviation altimeter settings. In many other fields of science, SI units are preferred, and in practically all fields the kilopascal is used instead.
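The statement that 1 Pa corresponds to 94 dB SPL follows from the sound-pressure-level formula with the 20 µPa reference:

```python
import math

# Sound pressure level: SPL = 20 * log10(p / p0), with p0 = 20 uPa.
P0 = 20e-6  # reference pressure, Pa: the threshold of hearing, 0 dB SPL

def spl_db(pressure_pa):
    return 20 * math.log10(pressure_pa / P0)

print(spl_db(1.0))    # ~93.98 dB SPL for a sound pressure of 1 Pa
print(spl_db(20e-6))  # 0.0 dB SPL at the threshold of hearing
```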
21.
Coulomb
–
The coulomb is the International System of Units unit of electric charge. One coulomb is equivalent to the charge of approximately 6.242×10¹⁸ protons, and −1 C is equivalent to the charge of approximately 6.242×10¹⁸ electrons. This SI unit is named after Charles-Augustin de Coulomb; as with every International System of Units unit named for a person, the first letter of its symbol is upper case. Note that degree Celsius conforms to this rule because the d is lowercase. — Based on The International System of Units. The SI system defines the coulomb in terms of the ampere and second: 1 C = 1 A × 1 s. The second is defined in terms of a frequency emitted by caesium atoms, and the ampere is defined using Ampère's force law; the definition relies in part on the mass of the prototype kilogram. In practice, the watt balance is used to measure amperes with the highest possible accuracy. One coulomb is the magnitude of the charge in 6.24150934×10¹⁸ protons or electrons; the inverse of this gives the elementary charge of 1.6021766208×10⁻¹⁹ C. The magnitude of the charge of one mole of elementary charges is known as a faraday unit of charge; in terms of Avogadro's number NA, one coulomb is equal to approximately 1.036×10⁻⁵ × NA elementary charges. One ampere-hour = 3,600 C; 1 mA⋅h = 3.6 C. One statcoulomb, the obsolete CGS electrostatic unit of charge, is approximately 3.3356×10⁻¹⁰ C, or about one-third of a nanocoulomb. The elementary charge, the charge of a proton, is approximately 1.6021766208×10⁻¹⁹ C. In SI, the value of the elementary charge in coulombs is an approximate, measured value; however, in other systems of units, the elementary charge has an exact value by definition, and SI itself may someday change its definitions in a similar way. For example, one possible proposed redefinition is that the ampere is such that the value of the elementary charge e is exactly 1.602176487×10⁻¹⁹ coulombs. This proposal is not yet accepted as part of the SI. The charges in static electricity from rubbing materials together are typically a few microcoulombs. The amount of charge that travels through a lightning bolt is typically around 15 C, and the amount of charge that travels through a typical alkaline AA battery from being fully charged to discharged is about 5 kC = 5,000 C ≈ 1,400 mA⋅h. The hydraulic analogy uses everyday terms to illustrate the movement of charge: the analogy equates charge to a volume of water, and voltage to pressure.
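The AA-battery figure quoted above follows from the ampere-hour conversion; a short check, which also counts the corresponding elementary charges:

```python
# 1 mA*h = 3.6 C, so a ~1400 mA*h AA battery holds about 5000 C.
E_CHARGE = 1.6021766208e-19  # coulombs per elementary charge

capacity_c = 1400 * 3.6           # 5040.0 C, about 5 kC
electrons = capacity_c / E_CHARGE
print(capacity_c, electrons)      # ~3.1e22 elementary charges
```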
22.
Volt
–
The volt is the derived unit for electric potential, electric potential difference, and electromotive force. One volt is defined as the difference in electric potential between two points of a conducting wire when an electric current of one ampere dissipates one watt of power between those points. It is also equal to the potential difference between two parallel, infinite planes spaced 1 metre apart that create an electric field of 1 newton per coulomb. Additionally, it is the potential difference between two points that will impart one joule of energy per coulomb of charge that passes through it. It can also be expressed as amperes times ohms, watts per ampere, or joules per coulomb. For the Josephson constant, KJ = 2e/h, the conventional value KJ-90 = 0.4835979 GHz/µV is used. This standard is typically realized using an array of several thousand or tens of thousands of Josephson junctions. Empirically, several experiments have shown that the method is independent of device design, material, measurement setup, etc. In the water-flow analogy sometimes used to explain electric circuits by comparing them with water-filled pipes, voltage is likened to difference in water pressure, while current is proportional to the diameter of the pipe or the amount of water flowing at that pressure. A resistor would be a reduced diameter somewhere in the piping. The relationship between voltage and current is defined by Ohm's law; Ohm's law is analogous to the Hagen–Poiseuille equation, as both are linear models relating flux and potential in their respective systems. The voltage produced by each electrochemical cell in a battery is determined by the chemistry of that cell, and cells can be combined in series for multiples of that voltage. Mechanical generators can usually be constructed to any voltage in a range of feasibility. Examples include high-voltage electric power lines (110 kV and up) and lightning (which varies greatly). Volta had determined that the most effective pair of metals to produce electricity was zinc and silver. In 1861, Latimer Clark and Sir Charles Bright coined the name volt for the unit of resistance; by 1873, the British Association for the Advancement of Science had defined the volt, ohm, and farad. In 1881, the International Electrical Congress, now the International Electrotechnical Commission, made the volt equal to 10⁸ cgs units of voltage, the cgs system at the time being the customary system of units in science. At that time, the volt was defined as the potential difference across a conductor when a current of one ampere dissipates one watt of power. The international volt was defined in 1893 as 1/1.434 of the emf of a Clark cell. This definition was abandoned in 1908 in favor of a definition based on the international ohm and international ampere, until the entire set of reproducible units was abandoned in 1948. Prior to the development of the Josephson junction voltage standard, the volt was maintained in laboratories using specially constructed batteries called standard cells.
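The equivalent expressions for the volt, watts per ampere and joules per coulomb, can be checked numerically:

```python
# One volt: 1 W of power dissipated per 1 A of current,
# or 1 J of energy per 1 C of charge.
def volts_from_power(power_w, current_a):
    return power_w / current_a

def volts_from_energy(energy_j, charge_c):
    return energy_j / charge_c

print(volts_from_power(1.0, 1.0))   # 1.0 V = 1 W / 1 A
print(volts_from_energy(1.0, 1.0))  # 1.0 V = 1 J / 1 C
print(volts_from_power(60.0, 5.0))  # 12.0 V, e.g. a 60 W load drawing 5 A
```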
23.
Electric charge
–
Electric charge is the physical property of matter that causes it to experience a force when placed in an electromagnetic field. There are two types of electric charge: positive and negative. Like charges repel and unlike charges attract; an absence of net charge is referred to as neutral. An object is negatively charged if it has an excess of electrons, and is otherwise positively charged or uncharged. The SI derived unit of electric charge is the coulomb, although in electrical engineering it is also common to use the ampere-hour. The symbol Q often denotes charge. Early knowledge of how charged substances interact is now called classical electrodynamics, and is still accurate for problems that don't require consideration of quantum effects. The electric charge is a conserved property of some subatomic particles. Electrically charged matter is influenced by, and produces, electromagnetic fields; the interaction between a moving charge and an electromagnetic field is the source of the electromagnetic force, which is one of the four fundamental forces. The elementary charge is approximately 1.602×10⁻¹⁹ coulombs; the proton has a charge of +e, and the electron has a charge of −e. The study of charged particles, and how their interactions are mediated by photons, is called quantum electrodynamics. Charge is the fundamental property of forms of matter that exhibit electrostatic attraction or repulsion in the presence of other matter. Electric charge is a characteristic property of many subatomic particles. The charges of free-standing particles are integer multiples of the elementary charge e. Michael Faraday, in his electrolysis experiments, was the first to note the discrete nature of electric charge; Robert Millikan's oil drop experiment demonstrated this fact directly, and measured the elementary charge. By convention, the charge of an electron is −1, while that of a proton is +1. Charged particles whose charges have the same sign repel one another, and particles whose charges have different signs attract. The charge of an antiparticle equals that of the corresponding particle, but with opposite sign. Quarks have fractional charges of either −1/3 or +2/3, but free-standing quarks have never been observed. The electric charge of an object is the sum of the electric charges of the particles that make it up. An ion is an atom that has lost one or more electrons, giving it a net positive charge, or that has gained one or more electrons, giving it a net negative charge.
24.
Voltage
–
Voltage, electric potential difference, electric pressure, or electric tension is the difference in electric potential energy between two points per unit electric charge. The voltage between two points is equal to the work done per unit of charge against an electric field to move the test charge between the two points, and is measured in volts. Voltage can be caused by static electric fields, by electric current through a magnetic field, by time-varying magnetic fields, or some combination of these three. A voltmeter can be used to measure the voltage between two points in a system; often a reference potential such as the ground of the system is used as one of the points. A voltage may represent either a source of energy or lost, used, or stored energy.

Given two points in space, x_A and x_B, voltage is the difference in electric potential between those two points. Electric potential must be distinguished from electric potential energy by noting that the potential is a per-unit-charge quantity. Like mechanical potential energy, the zero of electric potential can be chosen at any point, so the difference in potential, i.e. the voltage, is the quantity which is physically meaningful. The voltage from point A to point B is equal to the work which would have to be done, per unit charge, against or by the electric field to move the charge from A to B. The voltage between the two ends of a path is the total energy required to move a small electric charge along that path, divided by the magnitude of the charge. Mathematically this is expressed as the line integral of the electric field along that path, as written out below. In the general case, both a static electric field and a dynamic electromagnetic field must be included in determining the voltage between two points.

Historically this quantity has also been called tension and pressure. Pressure is now obsolete, but tension is still used, for example within the phrase high tension, which is commonly used in thermionic valve based electronics.

Voltage is defined so that negatively charged objects are pulled towards higher voltages; therefore, the conventional current in a wire or resistor always flows from higher voltage to lower voltage. Current can flow from lower voltage to higher voltage, but only when a source of energy is present to push it against the electric field. This is the case within any electric power source; for example, inside a battery, chemical reactions provide the energy needed for ion current to flow from the negative to the positive terminal.

The electric field is not the only factor determining charge flow in a material, and the electric potential of a material is not even a well-defined quantity, since it varies on the subatomic scale. A more convenient definition of voltage can be found instead in the concept of Fermi level; in this case the voltage between two bodies is the thermodynamic work required to move a unit of charge between them.
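A standard way to write the line-integral definition mentioned above, under the usual sign convention (the notation here is assumed, not taken from the source):

```latex
% Voltage as the line integral of the static electric field E along a
% path from point A to point B (standard sign convention assumed):
V_{AB} \;=\; V_B - V_A \;=\; -\int_{A}^{B} \mathbf{E} \cdot d\boldsymbol{\ell}
```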
25.
Power (physics)
–
In physics, power is the rate of doing work: the amount of energy consumed per unit time. Having no direction, it is a scalar quantity. In the SI system, the unit of power is the joule per second, known as the watt in honour of James Watt; another common and traditional measure is horsepower. Being the rate of doing work, power can be written as P = dW/dt. Because the work integral depends on the trajectory of the point of application of the force and torque, this calculation of work is said to be path dependent.

As a physical concept, power requires both a change in the physical universe and a specified time in which the change occurs. This is distinct from the concept of work, which is measured only in terms of a net change in the state of the physical universe. The output power of a motor is the product of the torque that the motor generates and the angular velocity of its output shaft. The power involved in moving a ground vehicle is the product of the traction force on the wheels and the velocity of the vehicle.

The dimension of power is energy divided by time. The SI unit of power is the watt, which is equal to one joule per second. Other units of power include ergs per second, horsepower, metric horsepower, and foot-pounds per minute. One horsepower is equivalent to 33,000 foot-pounds per minute, or the power required to lift 550 pounds by one foot in one second. Other units include dBm, a logarithmic measure with 1 milliwatt as reference; food calories per hour; and Btu per hour. This shows how power is an amount of energy consumed per unit time.

If ΔW is the amount of work performed during a period of time of duration Δt, the average power over that period is P_avg = ΔW/Δt; it is the average amount of work done or energy converted per unit of time. The average power is simply called power when the context makes it clear. The instantaneous power is then the limiting value of the average power as the time interval Δt approaches zero:

P = lim(Δt→0) P_avg = lim(Δt→0) ΔW/Δt = dW/dt.

In the case of constant power P, the amount of work performed during a period of duration T is given by W = PT.
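A minimal sketch (not from the source) of average versus instantaneous power, using a made-up work function W(t) = k·t² purely for illustration:

```python
# Minimal sketch: average vs. instantaneous power for W(t) = k * t**2.

K = 5.0  # assumed constant, joules per second squared (illustrative)

def work(t: float) -> float:
    """Cumulative work done up to time t, in joules."""
    return K * t**2

def average_power(t0: float, t1: float) -> float:
    """P_avg = delta-W / delta-t over the interval [t0, t1]."""
    return (work(t1) - work(t0)) / (t1 - t0)

def instantaneous_power(t: float, dt: float = 1e-6) -> float:
    """P = dW/dt, approximated by a small finite difference."""
    return (work(t + dt) - work(t)) / dt

print(average_power(0.0, 2.0))     # 10.0 W
print(instantaneous_power(1.0))    # ~10.0 W (exact value is 2*K*t = 10)
```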
26.
Symbol
–
A symbol is a mark, sign, or word that indicates, signifies, or is understood as representing an idea, object, or relationship. Symbols allow people to go beyond what is known or seen by creating linkages between otherwise very different concepts and experiences; all communication is achieved through the use of symbols. Symbols take the form of words, sounds, gestures, ideas, or visual images and are used to convey other ideas and beliefs. For example, a red octagon may be a symbol for STOP; on a map, a blue line might represent a river. Alphabetic letters may be symbols for sounds, and personal names are symbols representing individuals. A red rose may symbolize love and compassion. The variable x, in a mathematical equation, may symbolize the position of a particle in space. In cartography, a collection of symbols forms a legend for a map.

The word derives from the Greek symbolon, meaning token or watchword. It is an amalgam of syn-, "together", and bole, "a throwing, a casting". The sense evolution in Greek is from throwing things together, to contrasting, to comparing, to a token used in comparisons to determine if something is genuine. The meaning "something which stands for something else" was first recorded in 1590.

Later, expanding on what he means by this definition, Campbell says that a symbol, like everything else, shows a double aspect. We must distinguish, therefore, between the sense and the meaning of the symbol. The term meaning can refer only to the first two, but these, today, are in the charge of science, which is the province, as we have said, not of symbols. The ineffable, the unknowable, can be only sensed.

Heinrich Zimmer gives an overview of the nature, and perennial relevance, of symbols. Concepts and words are symbols, just as visions and rituals are; through all of these a transcendent reality is mirrored. They are so many metaphors reflecting and implying something which, though thus variously expressed, is ineffable, though thus rendered multiform. Symbols hold the mind to truth but are not themselves the truth; hence it is delusory to borrow them. Each civilisation, every age, must bring forth its own.

In the book Signs and Symbols, it is stated that a symbol is a visual image or sign representing an idea, a deeper indicator of a universal truth. Symbols are a means of complex communication that often can have multiple levels of meaning; this separates symbols from signs, as signs have only one meaning. Human cultures use symbols to express specific ideologies and social structures and to represent aspects of their specific culture.
27.
Letter case
–
Letter case is the distinction between the letters that are in larger upper case and smaller lower case in the written representation of certain languages. The writing systems that distinguish between upper and lower case have two parallel sets of letters, with each letter in one set usually having an equivalent in the other set. Basically, the two variants are alternative representations of the same letter: they have the same name and pronunciation. Letter case is generally applied in a mixed fashion, with both upper- and lower-case letters appearing in a given piece of text. The choice of case is often prescribed by the grammar of a language or by the conventions of a particular discipline; in mathematics, letter case may indicate the relationship between objects, with upper-case letters often representing superior objects. In some contexts, it is conventional to use only one case.

The terms upper case and lower case can be written as two consecutive words, connected with a hyphen, or as a single word. These terms originated from the layouts of the shallow drawers called type cases used to hold the movable type for letterpress printing: traditionally, the capital letters were stored in a separate case that was located above the case that held the small letters.

Majuscule, for palaeographers, is technically any script in which the letters have very few or very short ascenders and descenders, or none at all. By virtue of their visual impact, this made the term majuscule an apt descriptor for what much later came to be more commonly referred to as uppercase letters.

Minuscule refers to lower-case letters. The word is often spelled miniscule, by association with the word miniature. This has traditionally been regarded as a mistake, but is now so common that some dictionaries tend to accept it as a nonstandard or variant spelling. Miniscule is still less likely, however, to be used in reference to lower-case letters.

The glyphs of lower-case letters can resemble smaller forms of the upper-case glyphs restricted to the base band, or can look hardly related. There is more variation in the height of the minuscules, as some of them have parts that reach higher or lower than the typical size. In Times New Roman, for instance, b, d, f, h, k, l, t are the letters with ascenders, and g, j, p, q, y are the ones with descenders. In addition, with old-style numerals still used by traditional or classical fonts, 6 and 8 make up the ascender set, and 3, 4, 5, 7, and 9 the descender set.

Writing systems using two separate cases are bicameral scripts. Languages that use the Latin, Cyrillic, Greek, Coptic, Armenian, Adlam, Varang Kshiti, Cherokee, and Osage scripts use letter cases in their written form as an aid to clarity. Other bicameral scripts, which are not used for any modern languages, are Old Hungarian and Glagolitic. The Georgian alphabet has several variants, and there were attempts to use them as different cases, but the modern written Georgian language does not distinguish case.
28.
Celsius
–
Celsius, also known as centigrade, is a metric scale and unit of measurement for temperature. As an SI derived unit, it is used by most countries in the world. It is named after the Swedish astronomer Anders Celsius, who developed a similar temperature scale. The degree Celsius can refer to a specific temperature on the Celsius scale as well as a unit to indicate a temperature interval. Before being renamed to honour Anders Celsius in 1948, the unit was called centigrade, from the Latin centum, which means 100, and gradus, which means steps. The scale is based on 0° for the freezing point of water and 100° for the boiling point of water. This scale is widely taught in schools today.

By international agreement the unit degree Celsius and the Celsius scale are currently defined by two different temperatures: absolute zero, and the triple point of VSMOW (specially purified water). This definition also precisely relates the Celsius scale to the Kelvin scale. Absolute zero, the lowest temperature possible, is defined as being precisely 0 K and −273.15 °C. The temperature of the triple point of water is defined as precisely 273.16 K at 611.657 pascals pressure. This definition fixes the magnitude of both the degree Celsius and the kelvin as precisely 1 part in 273.16 of the difference between absolute zero and the triple point of water. Thus, it sets the magnitude of one degree Celsius and that of one kelvin as exactly the same; additionally, it establishes the difference between the two scales' null points as being precisely 273.15 degrees.

In his paper Observations of two persistent degrees on a thermometer, Celsius recounted his experiments showing that the melting point of ice is essentially unaffected by pressure. He also determined with precision how the boiling point of water varied as a function of atmospheric pressure. He proposed that the zero point of his temperature scale, being the boiling point, would be calibrated at the mean barometric pressure at mean sea level. This pressure is known as one standard atmosphere; the BIPM's 10th General Conference on Weights and Measures later defined one standard atmosphere to equal precisely 1,013,250 dynes per square centimetre.

On 19 May 1743 the Lyonnais physicist Jean-Pierre Christin published the design of a mercury thermometer with a scale ascending from the freezing point. In 1744, coincident with the death of Anders Celsius, the Swedish botanist Carolus Linnaeus reversed Celsius's scale; in a report, Linnaeus recounted the temperatures inside the orangery at the University of Uppsala Botanical Garden. Since the 19th century, the scientific and thermometry communities worldwide referred to this scale as the centigrade scale, and temperatures on the scale were often reported simply as degrees or degrees centigrade. More properly, what was defined as centigrade then would now be hectograde. For scientific use, Celsius is the term usually used, with centigrade otherwise continuing to be in common but decreasing use, especially in informal contexts in English-speaking countries.
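A minimal sketch (not from the source) of the exact Celsius–Kelvin relation described above, with Fahrenheit included purely for comparison:

```python
# Minimal sketch: the exact Celsius/Kelvin relation, plus Fahrenheit.

def celsius_to_kelvin(t_c: float) -> float:
    """The two scales' null points differ by exactly 273.15 degrees."""
    return t_c + 273.15

def celsius_to_fahrenheit(t_c: float) -> float:
    """Fahrenheit conversion, shown for comparison only."""
    return t_c * 9.0 / 5.0 + 32.0

print(celsius_to_kelvin(-273.15))   # 0.0 K (absolute zero)
print(celsius_to_kelvin(0.01))      # 273.16 K (triple point of water)
print(celsius_to_fahrenheit(100))   # 212.0 (boiling point at 1 atm)
```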
29.
Mechanics
–
Mechanics is an area of science concerned with the behaviour of physical bodies when subjected to forces or displacements, and the subsequent effects of the bodies on their environment. The scientific discipline has its origins in Ancient Greece with the writings of Aristotle; during the early modern period, scientists such as Khayyam, Galileo, Kepler, and Newton laid the foundation for what is now known as classical mechanics. Classical mechanics is a branch of physics that deals with particles that are either at rest or are moving with velocities significantly less than the speed of light. It can also be defined as a branch of science which deals with the motion of, and forces on, bodies. Historically, classical mechanics came first, while quantum mechanics is a comparatively recent invention. Classical mechanics originated with Isaac Newton's laws of motion in Philosophiæ Naturalis Principia Mathematica; both fields are commonly held to constitute the most certain knowledge that exists about physical nature.

Classical mechanics has especially often been viewed as a model for other so-called exact sciences. Essential in this respect is the relentless use of mathematics in theories, as well as the decisive role played by experiment in generating and testing them. Quantum mechanics is of a broader scope, as it encompasses classical mechanics as a sub-discipline which applies under certain restricted circumstances. According to the correspondence principle, there is no contradiction or conflict between the two subjects; each simply pertains to specific situations. The correspondence principle states that the behavior of systems described by quantum theories reproduces classical physics in the limit of large quantum numbers. Quantum mechanics has superseded classical mechanics at the foundational level and is indispensable for the explanation and prediction of processes at the molecular and atomic scales. However, for macroscopic processes classical mechanics is able to solve problems which are difficult in quantum mechanics and hence remains useful.

Modern descriptions of such behavior begin with a careful definition of such quantities as displacement, time, velocity, acceleration, mass, and force. Until about 400 years ago, however, motion was explained from a very different point of view. Galileo showed that the speed of falling objects increases steadily during the time of their fall; this acceleration is the same for heavy objects as for light ones, provided air friction is discounted. The English mathematician and physicist Isaac Newton improved this analysis by defining force and mass and relating these to acceleration. For objects traveling at speeds close to the speed of light, Newton's laws were superseded by Albert Einstein's theory of relativity. For atomic and subatomic particles, Newton's laws were superseded by quantum theory. For everyday phenomena, however, Newton's three laws of motion remain the cornerstone of dynamics, which is the study of what causes motion.

In analogy to the distinction between quantum and classical mechanics, Einstein's general and special theories of relativity have expanded the scope of Newtonian mechanics. The differences between relativistic and Newtonian mechanics become significant and even dominant as the velocity of a massive body approaches the speed of light. Relativistic corrections are also needed for quantum mechanics. Although general relativity has not been integrated with quantum mechanics, the two theories remain incompatible, a hurdle which must be overcome in developing a theory of everything.
30.
Torque
–
Torque, moment, or moment of force is rotational force. Just as a linear force is a push or a pull, a torque can be thought of as a twist to an object. Loosely speaking, torque is a measure of the turning force on an object such as a bolt or a flywheel. For example, pushing or pulling the handle of a wrench connected to a nut or bolt produces a torque that loosens or tightens the nut or bolt. The symbol for torque is typically τ, the lowercase Greek letter tau; when it is called moment of force, it is denoted by M. The SI unit for torque is the newton metre; for more on the units of torque, see Units.

This article follows US physics terminology in its use of the word torque. In the UK and in US mechanical engineering, this is called moment of force, usually shortened to moment. In US physics and UK physics terminology these terms are interchangeable, unlike in US mechanical engineering. Torque is defined mathematically as the rate of change of angular momentum of an object; this definition implies that one or both of the angular velocity or the moment of inertia of an object are changing. Moment is the general term used for the tendency of one or more applied forces to rotate an object about an axis. For example, a rotational force applied to a shaft causing acceleration, such as a drill bit accelerating from rest, results in a moment called torque. By contrast, a lateral force on a beam produces a moment (a bending moment), but since the angular momentum of the beam is not changing, this bending moment is not called a torque; the same holds for any force couple on an object that has no change to its angular momentum. This article follows the US physics terminology by calling all moments by the term torque, whether or not they cause the angular momentum of an object to change.

The concept of torque, also called moment or couple, originated with the studies of Archimedes on levers. The term torque was apparently introduced into English scientific literature by James Thomson, the brother of Lord Kelvin, in 1884. A force applied perpendicularly to a lever multiplied by its distance from the lever's fulcrum is its torque. A force of three newtons applied two metres from the fulcrum, for example, exerts the same torque as a force of one newton applied six metres from the fulcrum. More generally, the torque on a particle can be defined as the cross product τ = r × F, where r is the particle's position vector relative to the fulcrum and F is the force acting on the particle. Alternatively, τ = rF⊥, where F⊥ is the amount of force directed perpendicularly to the position vector of the particle; any force directed parallel to the particle's position vector does not produce a torque.
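A minimal sketch (not from the source) of the cross-product definition τ = r × F, reproducing the three-newtons-at-two-metres example above with NumPy:

```python
# Minimal sketch: torque as a cross product, in SI units.
import numpy as np

r = np.array([2.0, 0.0, 0.0])   # position vector relative to the fulcrum, m
F = np.array([0.0, 3.0, 0.0])   # applied force, N

tau = np.cross(r, F)            # torque vector, N*m
print(tau)                      # [0. 0. 6.] -- 6 N*m about the z-axis

# The same magnitude follows from tau = r * F_perp: here the full 3 N force
# is already perpendicular to r, so tau = 2 m * 3 N = 6 N*m.
```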
31.
Mass
–
In physics, mass is a property of a physical body. It is the measure of a body's resistance to acceleration when a net force is applied, and it also determines the strength of its gravitational attraction to other bodies. The basic SI unit of mass is the kilogram. Mass is not the same as weight, even though mass is often determined by measuring the object's weight using a spring scale, rather than comparing it directly with known masses. An object on the Moon would weigh less than it does on Earth because of the lower gravity; this is because weight is a force, while mass is the property that determines the strength of this force.

In Newtonian physics, mass can be generalized as the amount of matter in an object. However, at very high speeds, special relativity postulates that energy is an additional source of mass; thus, any body having mass has an equivalent amount of energy. In addition, matter is a loosely defined term in science. There are several distinct phenomena which can be used to measure mass. Active gravitational mass measures the gravitational force exerted by an object, while passive gravitational mass measures the gravitational force exerted on an object in a known gravitational field. Inertial mass determines an object's acceleration in the presence of an applied force: according to Newton's second law of motion, if a body of fixed mass m is subjected to a single force F, its acceleration a is given by F/m. A body's mass also determines the degree to which it generates or is affected by a gravitational field; this is sometimes referred to as gravitational mass.

The standard International System of Units unit of mass is the kilogram. The kilogram is 1000 grams, first defined in 1795 as the mass of one cubic decimetre of water at the melting point of ice. Then in 1889, the kilogram was redefined as the mass of the international prototype kilogram. As of January 2013, there are proposals for redefining the kilogram yet again. In particle physics, mass is often expressed in units of eV/c²; the electronvolt and its multiples, such as the MeV, are commonly used. The atomic mass unit is 1/12 of the mass of a carbon-12 atom, and is convenient for expressing the masses of atoms and molecules. Outside the SI system, other units of mass include the slug, an Imperial unit of mass, and the pound, a unit of both mass and force, used mainly in the United States.
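A minimal sketch (not from the source; the gravitational accelerations are standard approximate values) of the mass/weight distinction described above:

```python
# Minimal sketch: weight is the force W = m*g; mass is the same everywhere.

G_EARTH = 9.81   # surface gravitational acceleration on Earth, m/s^2
G_MOON = 1.62    # surface gravitational acceleration on the Moon, m/s^2

def weight(mass_kg: float, g: float) -> float:
    """Gravitational force on a body, in newtons (Newton's second law)."""
    return mass_kg * g

m = 10.0  # kilograms -- unchanged wherever the object is taken
print(weight(m, G_EARTH))  # 98.1 N on Earth
print(weight(m, G_MOON))   # 16.2 N on the Moon, though the mass is the same
```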
32.
Moment of inertia
–
The moment of inertia of a rigid body determines the torque needed for a desired angular acceleration about a rotational axis. It depends on the body's mass distribution and the axis chosen, with larger moments requiring more torque to change the body's rotation. It is an additive property: the moment of inertia of a composite system is the sum of the moments of inertia of its component subsystems, all taken about the same axis. One of its definitions is the second moment of mass with respect to distance from an axis r: I = ∫_Q r² dm, where the integral is taken over the body Q. For bodies constrained to rotate in a plane, it is sufficient to consider their moment of inertia about an axis perpendicular to the plane.

When a body is rotating, or free to rotate, around an axis, the amount of torque needed to cause any given angular acceleration is proportional to the moment of inertia of the body. Moment of inertia may be expressed in units of kilogram metre squared (kg·m²) in SI units. Moment of inertia plays the role in rotational kinetics that mass plays in linear kinetics: both characterize the resistance of a body to changes in its motion. The moment of inertia depends on how mass is distributed around an axis of rotation. For a point-like mass, the moment of inertia about some axis is given by mr², where r is the distance from the axis, and m is the mass. For an extended body, the moment of inertia is just the sum of all the pieces of mass multiplied by the square of their distances from the axis in question. For an extended body of a regular shape and uniform density, this summation sometimes produces a simple expression that depends on the dimensions, shape, and total mass of the object.

In 1673 Christiaan Huygens introduced this parameter in his study of the oscillation of a body hanging from a pivot, known as a compound pendulum. The term moment of inertia was introduced by Leonhard Euler in his book Theoria motus corporum solidorum seu rigidorum in 1765, and it is incorporated into Euler's second law. Comparison of the oscillation frequency of such a body to that of a simple pendulum consisting of a single point of mass provides a mathematical formulation for moment of inertia of an extended body. Moment of inertia appears in angular momentum, kinetic energy, and Newton's laws of motion for a rigid body as a physical parameter that combines the body's shape and mass. There is a difference in the way moment of inertia appears in planar and spatial movement. The moment of inertia of a flywheel is used in a machine to resist variations in applied torque to smooth its rotational output.

Moment of inertia I is defined as the ratio of the angular momentum L of a system to its angular velocity ω around a principal axis. If the angular momentum of a system is constant, then as the moment of inertia gets smaller, the angular velocity must increase; this occurs when spinning figure skaters pull in their arms or divers curl their bodies into a tuck position during a dive. For a simple pendulum, this definition yields a formula for the moment of inertia I in terms of the mass m of the pendulum and its distance r from the pivot point: I = mr². Thus, moment of inertia depends on both the mass m of a body and its geometry, or shape, as defined by the distance r to the axis of rotation.
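A minimal sketch (not from the source; the masses and positions are illustrative) of the discrete form of the definition above, I = Σ mᵢrᵢ² for a collection of point masses:

```python
# Minimal sketch: moment of inertia of point masses about the z-axis.
import numpy as np

masses = np.array([1.0, 2.0, 0.5])          # kg
positions = np.array([[0.5, 0.0],           # (x, y) coordinates, metres
                      [0.0, 1.0],
                      [2.0, 0.0]])

r_squared = (positions**2).sum(axis=1)      # squared distance from the axis
I = (masses * r_squared).sum()              # kg*m^2
print(I)                                    # 1*0.25 + 2*1.0 + 0.5*4.0 = 4.25
```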
33.
Algebra
–
Algebra is one of the broad parts of mathematics, together with number theory, geometry, and analysis. In its most general form, algebra is the study of mathematical symbols and the rules for manipulating these symbols; as such, it includes everything from elementary equation solving to the study of abstractions such as groups, rings, and fields. The more basic parts of algebra are called elementary algebra; the more abstract parts are called abstract algebra or modern algebra. Elementary algebra is generally considered to be essential for any study of mathematics, science, or engineering, as well as such applications as medicine and economics. Abstract algebra is a major area in advanced mathematics, studied primarily by professional mathematicians.

Elementary algebra differs from arithmetic in the use of abstractions, such as using letters to stand for numbers that are unknown or allowed to take on many values. For example, in x + 2 = 5 the letter x is unknown, but the law of inverses can be used to discover its value: x = 3. In E = mc², the letters E and m are variables, and the letter c is a constant, the speed of light in a vacuum. Algebra gives methods for solving equations and expressing formulas that are much easier than the older method of writing everything out in words.

The word algebra is also used in certain specialized ways. A special kind of mathematical object in abstract algebra is called an algebra, and a mathematician who does research in algebra is called an algebraist. The word algebra comes from the Arabic الجبر, from the title of the book Ilm al-jabr wal-muḳābala by the Persian mathematician and astronomer al-Khwarizmi. The word entered the English language during the fifteenth century, from either Spanish, Italian, or Medieval Latin. It originally referred to the procedure of setting broken or dislocated bones; the mathematical meaning was first recorded in the sixteenth century.

The word algebra has several related meanings in mathematics, as a single word or with qualifiers. As a single word without an article, algebra names a broad part of mathematics. As a single word with an article or in the plural, an algebra or algebras denotes a specific mathematical structure, whose precise definition depends on the author; usually the structure has an addition, a multiplication, and a scalar multiplication. When some authors use the term algebra, they make a subset of the following additional assumptions: associative, commutative, unital, and/or finite-dimensional. In universal algebra, the word algebra refers to a generalization of the above concept. With a qualifier, there is the same distinction: without an article, it means a part of algebra, such as linear algebra or elementary algebra; with an article, it means an instance of some abstract structure, like a Lie algebra. Sometimes both meanings exist for the same qualifier, as in the sentence: Commutative algebra is the study of commutative rings, which are commutative algebras over the integers.
34.
Dimensional analysis
–
Converting from one dimensional unit to another is often somewhat complex. Dimensional analysis, or more specifically the factor-label method, also known as the unit-factor method, is a widely used technique for such conversions using the rules of algebra. The concept of physical dimension was introduced by Joseph Fourier in 1822. Physical quantities that are commensurable have the same dimension and can be directly compared to each other, even if they are originally expressed in differing units of measure. If physical quantities have different dimensions, they cannot be compared in this way; hence, it is meaningless to ask whether a kilogram is greater than, equal to, or less than an hour.

Any physically meaningful equation will have the same dimensions on its left and right sides; checking this property, known as dimensional homogeneity, is a common application of dimensional analysis. Dimensional analysis is routinely used as a check of the plausibility of derived equations and computations, and it is generally used to categorize types of quantities and units based on their relationship to or dependence on other units.

Many parameters and measurements in the sciences and engineering are expressed as a concrete number: a numerical quantity and a corresponding dimensional unit. Often a quantity is expressed in terms of several other quantities; for example, speed is a combination of length and time. Compound relations with "per" are expressed with division, e.g. 60 mi/1 h; other relations can involve multiplication, powers, or combinations thereof.

A base unit is a unit that cannot be expressed as a combination of other units; for example, units for length and time are normally chosen as base units. Units for volume, however, can be factored into the base units of length. Sometimes the names of units obscure the fact that they are derived units. For example, an ampere is a unit of electric current, which is equivalent to electric charge per unit time and is measured in coulombs per second, so 1 A = 1 C/s. Similarly, one newton is 1 kg⋅m/s². Percentages are dimensionless quantities, since they are ratios of two quantities with the same dimensions; in other words, the % sign can be read as 1/100.

Derivatives with respect to a quantity add the dimensions of the variable one is differentiating with respect to in the denominator. Thus, position has the dimension L; the derivative of position with respect to time has dimension LT⁻¹ (length from position, time from the derivative); and the second derivative has dimension LT⁻². In economics, one distinguishes between stocks and flows: a stock has units of some quantity, while a flow is a derivative of a stock and has units of that quantity per time. In some contexts, dimensional quantities are expressed as dimensionless quantities or percentages by omitting some dimensions.
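A minimal sketch (not from the source) of the factor-label method as chained multiplication by unit factors, converting the 60 mi/h example above to metres per second:

```python
# Minimal sketch: the factor-label method as chained unit factors.

MI_TO_M = 1609.344      # exact by definition: 1 mile = 1609.344 m
H_TO_S = 3600.0         # 1 hour = 3600 s

speed_mi_per_h = 60.0
# Multiply by (1609.344 m / 1 mi) and by (1 h / 3600 s); the mi and h
# "labels" cancel, leaving m/s:
speed_m_per_s = speed_mi_per_h * MI_TO_M / H_TO_S
print(speed_m_per_s)    # ~26.82 m/s
```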
35.
Dot product
–
In mathematics, the dot product or scalar product is an algebraic operation that takes two equal-length sequences of numbers and returns a single number; it is sometimes called the inner product in the context of Euclidean space. Algebraically, the dot product is the sum of the products of the corresponding entries of the two sequences of numbers. Geometrically, it is the product of the Euclidean magnitudes of the two vectors and the cosine of the angle between them. The dot product may thus be defined either algebraically or geometrically. The geometric definition is based on the notions of angle and distance; the equivalence of these two definitions relies on having a Cartesian coordinate system for Euclidean space. In such a presentation, the notions of length and angle are not primitive, so the equivalence of the two definitions of the dot product is a part of the equivalence of the classical and the modern formulations of Euclidean geometry. For instance, in three-dimensional space, the dot product of vectors (a₁, a₂, a₃) and (b₁, b₂, b₃) is a₁b₁ + a₂b₂ + a₃b₃.

In Euclidean space, a Euclidean vector is an object that possesses both a magnitude and a direction. A vector can be pictured as an arrow: its magnitude is its length, and its direction is the direction that the arrow points. The magnitude of a vector a is denoted by ‖a‖. The dot product of two Euclidean vectors a and b is defined by a ⋅ b = ‖a‖ ‖b‖ cos θ, where θ is the angle between a and b. In particular, if a and b are orthogonal, then the angle between them is 90° and a ⋅ b = 0.

The scalar projection of a Euclidean vector a in the direction of a Euclidean vector b is given by a_b = ‖a‖ cos θ, where θ is the angle between a and b. In terms of the geometric definition of the dot product, this can be rewritten a_b = a ⋅ b̂, where b̂ = b/‖b‖ is the unit vector in the direction of b. The dot product is thus characterized geometrically by a ⋅ b = a_b ‖b‖ = b_a ‖a‖. The dot product, defined in this manner, is homogeneous under scaling in each variable, and it also satisfies a distributive law, meaning that a ⋅ (b + c) = a ⋅ b + a ⋅ c. These properties may be summarized by saying that the dot product is a bilinear form. Moreover, this bilinear form is positive definite, which means that a ⋅ a is never negative, and is zero if and only if a = 0.

If e₁, …, e_n are the standard basis vectors in Rⁿ, then we may write a = ∑ᵢ aᵢeᵢ and b = ∑ᵢ bᵢeᵢ. The vectors eᵢ are an orthonormal basis, which means that they have unit length and are at right angles to each other. Since these vectors have unit length, eᵢ ⋅ eᵢ = 1, and since they form right angles with each other, eᵢ ⋅ eⱼ = 0 for i ≠ j. Thus in general we can say that eᵢ ⋅ eⱼ = δᵢⱼ, where δᵢⱼ is the Kronecker delta.
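A minimal sketch (not from the source; the two vectors are arbitrary illustrative values) checking that the algebraic and geometric definitions above agree, along with the scalar-projection identity:

```python
# Minimal sketch: algebraic vs. geometric dot product, with NumPy.
import numpy as np

a = np.array([1.0, 3.0, -5.0])
b = np.array([4.0, -2.0, -1.0])

algebraic = float(np.dot(a, b))          # sum of products of entries: 3.0

# Recover the angle between a and b, then apply the geometric definition:
theta = np.arccos(algebraic / (np.linalg.norm(a) * np.linalg.norm(b)))
geometric = np.linalg.norm(a) * np.linalg.norm(b) * np.cos(theta)

# Scalar projection of a onto b: a_b = a . b_hat, with b_hat = b/||b||.
a_b = float(np.dot(a, b / np.linalg.norm(b)))

print(algebraic)                                        # 3.0
print(np.isclose(algebraic, geometric))                 # True
print(np.isclose(a_b * np.linalg.norm(b), algebraic))   # a.b = a_b ||b||
```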
36.
Euclidean vector
–
In mathematics, physics, and engineering, a Euclidean vector is a geometric object that has magnitude and direction. Vectors can be added to other vectors according to vector algebra. A Euclidean vector is frequently represented by a line segment with a definite direction, or graphically as an arrow, connecting an initial point A with a terminal point B, and denoted by A B →. A vector is what is needed to carry the point A to the point B; the term was first used by 18th-century astronomers investigating planetary revolution around the Sun. The magnitude of the vector is the distance between the two points, and the direction refers to the direction of displacement from A to B. These operations and associated laws qualify Euclidean vectors as an example of the more generalized concept of vectors defined simply as elements of a vector space.

Vectors play an important role in physics: the velocity and acceleration of a moving object and the forces acting on it can be described with vectors, and many other physical quantities can be usefully thought of as vectors. Although most of them do not represent distances, their magnitude and direction can still be represented by the length and direction of an arrow. The mathematical representation of a physical vector depends on the coordinate system used to describe it. Other vector-like objects that describe physical quantities and transform in a similar way under changes of the coordinate system include pseudovectors and tensors.

The concept of vector, as we know it today, evolved gradually over a period of more than 200 years; about a dozen people made significant contributions. Giusto Bellavitis abstracted the basic idea in 1835 when he established the concept of equipollence: working in a Euclidean plane, he made equipollent any pair of line segments of the same length and orientation. Essentially he realized an equivalence relation on the pairs of points in the plane. The term vector was introduced by William Rowan Hamilton as part of a quaternion, which is a sum q = s + v of a real number s and a 3-dimensional vector. Like Bellavitis, Hamilton viewed vectors as representative of classes of equipollent directed segments. Hermann Grassmann developed related ideas, but his work was largely neglected until the 1870s. Peter Guthrie Tait carried the quaternion standard after Hamilton; his 1867 Elementary Treatise of Quaternions included extensive treatment of the nabla or del operator ∇. In 1878 Elements of Dynamic was published by William Kingdon Clifford. Clifford simplified the quaternion study by isolating the dot product and cross product of two vectors from the complete quaternion product; this approach made vector calculations available to engineers and others working in three dimensions and skeptical of the fourth. Josiah Willard Gibbs, who was exposed to quaternions through James Clerk Maxwell's Treatise on Electricity and Magnetism, separated off their vector part for independent treatment. The first half of Gibbs's Elements of Vector Analysis, published in 1881, presents what is essentially the modern system of vector analysis. In 1901 Edwin Bidwell Wilson published Vector Analysis, adapted from Gibbs's lectures.

In physics and engineering, a vector is typically regarded as a geometric entity characterized by a magnitude and a direction. It is formally defined as a directed line segment, or arrow, in a Euclidean space.
37.
Cross product
–
In mathematics and vector algebra, the cross product or vector product is a binary operation on two vectors in three-dimensional space, denoted by the symbol ×. Given two linearly independent vectors a and b, the cross product a × b is a vector that is perpendicular to both a and b and therefore normal to the plane containing them. It has many applications in mathematics, physics, and engineering, and it should not be confused with the dot product.

If two vectors have the same direction or if either one has zero length, then their cross product is zero. The cross product is anticommutative and is distributive over addition. The space R³ together with the cross product is an algebra over the real numbers, which is neither commutative nor associative, but is a Lie algebra with the cross product being the Lie bracket. Like the dot product, it depends on the metric of Euclidean space. If the product is limited to non-trivial binary products with vector results, it exists only in three and seven dimensions; if one adds the further requirement that the product be uniquely defined, only the three-dimensional case remains.

The cross product of two vectors a and b is defined only in three-dimensional space and is denoted by a × b; in physics, sometimes the notation a ∧ b is used. If the vectors a and b are parallel, by the above formula, the cross product of a and b is the zero vector 0. The direction of the unit normal n is given by the right-hand rule: pointing the forefinger of the right hand toward a and the middle finger toward b, the vector n comes out of the thumb. Using this rule implies that the cross product is anticommutative, i.e. b × a = −(a × b): by pointing the forefinger toward b first, and then pointing the middle finger toward a, the thumb is forced in the opposite direction.

Using the cross product requires the handedness of the coordinate system to be taken into account: if a left-handed coordinate system is used, the direction of the vector n is given by the left-hand rule. This, however, creates a problem, because transforming from one arbitrary reference system to another should not change the direction of n. The problem is clarified by realizing that the cross product of two vectors is not a true vector, but rather a pseudovector. See cross product and handedness for more detail.

In 1881, Josiah Willard Gibbs, and independently Oliver Heaviside, introduced both the dot product and the cross product using a period and an x, respectively, to denote them. The alternative names scalar product and vector product are widely used in the literature. Both the cross notation and the name cross product were possibly inspired by the fact that each scalar component of a × b is computed by multiplying non-corresponding components of a and b. Conversely, a dot product a ⋅ b involves multiplications between corresponding components of a and b. As explained below, the cross product can be expressed in the form of a determinant of a special 3 × 3 matrix.
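A minimal sketch (not from the source; the two vectors are arbitrary illustrative values) checking the perpendicularity and anticommutativity properties above with NumPy:

```python
# Minimal sketch: basic cross-product identities, checked numerically.
import numpy as np

a = np.array([1.0, 2.0, 0.0])
b = np.array([0.0, 1.0, 3.0])

c = np.cross(a, b)

print(c)                                        # [ 6. -3.  1.]
print(np.isclose(np.dot(c, a), 0.0))            # True: c is perpendicular to a
print(np.isclose(np.dot(c, b), 0.0))            # True: c is perpendicular to b
print(np.allclose(np.cross(b, a), -c))          # True: anticommutativity
print(np.allclose(np.cross(a, a), np.zeros(3))) # True: parallel vectors give 0
```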
38.
Radian
–
The radian is the standard unit of angular measure, used in many areas of mathematics. The length of an arc of a unit circle is numerically equal to the measurement in radians of the angle that it subtends. The unit was formerly an SI supplementary unit, but this category was abolished in 1995; separately, the SI unit of solid angle measurement is the steradian. The radian is represented by the symbol rad, so for example, a value of 1.2 radians could be written as 1.2 rad, 1.2 r, 1.2rad, or 1.2c.

Radian describes the plane angle subtended by a circular arc as the length of the arc divided by the radius of the arc. One radian is the angle subtended at the center of a circle by an arc that is equal in length to the radius of the circle. Conversely, the length of the arc is equal to the radius multiplied by the magnitude of the angle in radians. As the ratio of two lengths, the radian is a pure number that needs no unit symbol, and in mathematical writing the symbol rad is almost always omitted. When quantifying an angle in the absence of any symbol, radians are assumed. It follows that the magnitude in radians of one complete revolution is the length of the entire circumference divided by the radius, or 2πr / r, or 2π. Thus 2π radians is equal to 360 degrees, meaning that one radian is equal to 180/π degrees.

The concept of radian measure, as opposed to the degree of an angle, is normally credited to Roger Cotes in 1714. He described the radian in everything but name, and he recognized its naturalness as a unit of angular measure. The idea of measuring angles by the length of the arc was already in use by other mathematicians; for example, al-Kashi used so-called diameter parts as units, where one part was 1/60 radian. The term radian first appeared in print on 5 June 1873, in examination questions set by James Thomson at Queens College, Belfast. He had used the term as early as 1871, while in 1869, Thomas Muir, then of the University of St Andrews, had vacillated between the terms rad, radial, and radian. In 1874, after a consultation with James Thomson, Muir adopted radian.

As stated, one radian is equal to 180/π degrees; thus, to convert from radians to degrees, multiply by 180/π. The length of the circumference of a circle is given by 2πr, and a complete revolution is 2π radians or 400 gradians, so to convert from radians to gradians multiply by 200/π, and to convert from gradians to radians multiply by π/200.

In mathematics beyond practical geometry, angles are usually measured in radians. This is because radians have a mathematical naturalness that leads to a more elegant formulation of a number of important results; most notably, results in analysis involving trigonometric functions are simple and elegant when the functions' arguments are expressed in radians. Because of these and other properties, the trigonometric functions appear in solutions to problems that are not obviously related to the functions' geometrical meanings.
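A minimal sketch (not from the source) of the conversion factors derived above, using Python's standard math module:

```python
# Minimal sketch: radian/degree/gradian conversions.
import math

def rad_to_deg(x: float) -> float:
    return x * 180.0 / math.pi       # one radian = 180/pi degrees

def rad_to_grad(x: float) -> float:
    return x * 200.0 / math.pi       # one radian = 200/pi gradians

print(rad_to_deg(math.pi))           # 180.0
print(rad_to_grad(math.pi / 2))      # 100.0 (a right angle is 100 gradians)
print(math.radians(180.0))           # 3.14159... (built-in degree-to-radian)
```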
39.
Heat
–
In physics, heat is the amount of energy flowing from one body to another spontaneously due to their temperature difference, or by any means other than through work or the transfer of matter. Thus, energy exchanged as heat during a process changes the energy of each body by equal and opposite amounts. The sign of the quantity of heat indicates the direction of the transfer, for example from system A to system B; negation indicates energy flowing in the opposite direction. While heat flows spontaneously from hot to cold, it is possible to construct a heat pump or refrigeration system that does work to increase the difference in temperature between two systems; conversely, a heat engine reduces an existing temperature difference to do work on another system.

Heat is a consequence of the microscopic motion of particles. When heat is transferred between two objects or systems, the energy of the receiving object's or system's particles increases, and as this occurs, the arrangement of the particles becomes more and more disordered. In other words, heat is related to the concept of entropy. Historically, many energy units for the measurement of heat have been used; the standards-based unit in the International System of Units is the joule. Heat is measured by its effect on the states of interacting bodies, for example, by the amount of ice melted or a change in temperature. The quantification of heat via the temperature change of a body is called calorimetry.

In calorimetry, sensible heat is defined with respect to a specific chosen state variable of the system: sensible heat causes a change of the temperature of the system while leaving the chosen state variable unchanged. Heat transfer that occurs at a constant system temperature but changes the state variable is called latent heat with respect to the variable. For infinitesimal changes, the total incremental heat transfer is then the sum of the latent and sensible heat.

Physicist James Clerk Maxwell, in his 1871 classic Theory of Heat, was one of many who began to build on the already-established idea that heat has something to do with matter in motion; this was the idea put forth by Benjamin Thompson in 1798. One of Maxwell's recommended books was Heat as a Mode of Motion, by John Tyndall. Maxwell outlined four stipulations for the definition of heat: it is something which may be transferred from one body to another, according to the second law of thermodynamics; it is a measurable quantity, and so can be treated mathematically; it cannot be treated as a material substance, because it may be transformed into something that is not a material substance; and heat is one of the forms of energy. This was the way of the historical pioneers of thermodynamics. Maxwell writes that convection as such is not a purely thermal phenomenon; in thermodynamics, convection in general is regarded as transport of internal energy.
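The sensible-heat idea above is commonly computed as Q = m·c·ΔT. A minimal sketch, assuming the standard approximate specific heat capacity of water (the formula and constant are not taken from the source):

```python
# Minimal sketch: sensible heat in calorimetry, Q = m * c * delta_T.

C_WATER = 4186.0  # J/(kg*K), approximate specific heat capacity of water

def sensible_heat(mass_kg: float, delta_t_k: float, c: float = C_WATER) -> float:
    """Heat needed to change a body's temperature by delta_t_k kelvin."""
    return mass_kg * c * delta_t_k

# Heating 0.5 kg of water from 20 deg C to 100 deg C (delta_T = 80 K):
print(sensible_heat(0.5, 80.0))  # 167440.0 J, i.e. about 167 kJ
```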