Dimensional analysis
In engineering and science, dimensional analysis is the analysis of the relationships between different physical quantities by identifying their base quantities and units of measure and tracking these dimensions as calculations or comparisons are performed. The conversion of units from one unit of measure to another is often necessary. Dimensional analysis, or more precisely the factor-label method, also known as the unit-factor method, is a widely used technique for such conversions that applies the rules of algebra. The concept of physical dimension was introduced by Joseph Fourier in 1822. Physical quantities that are of the same kind have the same dimension and can be directly compared to each other, even if they are expressed in differing units of measure. If physical quantities have different dimensions, they cannot be expressed in terms of the same units and cannot be compared in quantity. For example, asking whether a kilogram is larger than an hour is meaningless. Any physically meaningful equation will have the same dimensions on its left and right sides, a property known as dimensional homogeneity.
Checking for dimensional homogeneity is a common application of dimensional analysis, serving as a plausibility check on derived equations and computations. It also serves as a guide and constraint in deriving equations that may describe a physical system in the absence of a more rigorous derivation. Many parameters and measurements in the physical sciences and engineering are expressed as a concrete number: a numerical quantity and a corresponding dimensional unit. Often a quantity is expressed in terms of several other quantities; compound relations with "per" are expressed with division, e.g. 60 mi/1 h. Other relations can involve multiplication, powers, or combinations thereof. A set of base units for a system of measurement is a conventionally chosen set of units, none of which can be expressed as a combination of the others, and in terms of which all the remaining units of the system can be expressed. For example, units for length and time are usually chosen as base units. Units for volume, however, can be factored into the base units of length, so they are considered derived or compound units.
Sometimes the names of units obscure the fact that they are derived units. For example, a newton is a unit of force, defined as 1 N = 1 kg⋅m⋅s−2. Percentages are dimensionless quantities, since they are ratios of two quantities with the same dimensions; in other words, the % sign can be read as "hundredths", since 1% = 1/100. Taking a derivative with respect to a quantity adds the dimension of the variable one is differentiating with respect to, in the denominator. Thus: position has the dimension L (length); its derivative with respect to time, velocity, has dimension L⋅T−1; and the second derivative, acceleration, has dimension L⋅T−2. In economics, one distinguishes between stocks and flows: a stock has units of "units", while a flow, being a derivative of a stock, has units of "units/time". In some contexts, dimensional quantities are expressed as dimensionless quantities or percentages by omitting some dimensions. For example, debt-to-GDP ratios are generally expressed as percentages: total debt outstanding divided by annual GDP. One may argue, however, that in comparing a stock to a flow, annual GDP should have dimensions of currency/time, so debt-to-GDP should have units of years, which indicates that debt-to-GDP is the number of years needed for a constant GDP to pay the debt, if all GDP is spent on the debt and the debt is otherwise unchanged.
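The bookkeeping described above can be sketched in code. This is a toy illustration, not a standard library: dimensions are represented as exponent tuples over the base dimensions (L, M, T), multiplying quantities adds the exponents, and addition is only permitted for identical dimensions.

```python
# Toy dimension tracking: a dimension is a tuple of exponents over (L, M, T).
LENGTH = (1, 0, 0)   # L
MASS   = (0, 1, 0)   # M
TIME   = (0, 0, 1)   # T

def mul_dims(d1, d2):
    """Multiplying quantities adds their dimension exponents."""
    return tuple(a + b for a, b in zip(d1, d2))

def inv_dim(d):
    """Dividing by a quantity negates its exponents."""
    return tuple(-a for a in d)

# Velocity = length / time, i.e. L * T^-1:
VELOCITY = mul_dims(LENGTH, inv_dim(TIME))
print(VELOCITY)        # (1, 0, -1)

# Dimensional homogeneity: only identical dimensions may be added or compared.
print(LENGTH == TIME)  # False: metres cannot be added to seconds
```

The same exponent-adding rule is what makes the dimensions form an abelian group under multiplication, as discussed in the following paragraphs.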
In dimensional analysis, a ratio which converts one unit of measure into another without changing the quantity is called a conversion factor. For example, kPa and bar are both units of pressure, and 100 kPa = 1 bar. The rules of algebra allow both sides of an equation to be divided by the same expression, so this is equivalent to 100 kPa / 1 bar = 1. Since any quantity can be multiplied by 1 without changing it, the expression "100 kPa / 1 bar" can be used to convert from bars to kPa by multiplying it with the quantity to be converted, including the units. For example, 5 bar × 100 kPa / 1 bar = 500 kPa, because 5 × 100 / 1 = 500 and bar/bar cancels out, so 5 bar = 500 kPa. The most basic rule of dimensional analysis is that of dimensional homogeneity: only commensurable quantities (quantities of the same dimension) may be compared, equated, added, or subtracted. However, the dimensions form an abelian group under multiplication, so one may take ratios of incommensurable quantities and may multiply or divide them. For example, it makes no sense to ask whether 1 hour is more, the same, or less than 1 kilometer, as these have different dimensions, nor to add 1 hour to 1 kilometer.
However, it makes perfect sense to ask whether 1 mile is more, the same, or less than 1 kilometer, these being different units of the same dimension of physical quantity. On the other hand, if an object travels 100 km in 2 hours, one may divide these and conclude that the object's average speed was 50 km/h. The rule implies that in a physically meaningful expression, only quantities of the same dimension can be added, subtracted, or compared.
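The factor-label conversion above can be written as a short sketch. The table and function names here are illustrative, not a real units library:

```python
# Conversion factors equal to 1, e.g. 100 kPa / 1 bar.
FACTORS = {("bar", "kPa"): 100.0}

def convert(value, src, dst):
    """Multiply by the conversion factor; the source unit cancels out."""
    return value * FACTORS[(src, dst)]

print(convert(5, "bar", "kPa"))  # 500.0, i.e. 5 bar = 500 kPa
```

A fuller implementation would also track dimensions so that incommensurable conversions (say, hours to kilometers) raise an error rather than return a number.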
Angle
In plane geometry, an angle is the figure formed by two rays, called the sides of the angle, sharing a common endpoint, called the vertex of the angle. Angles formed by two rays lie in a plane. Angles are formed by the intersection of two planes in Euclidean and other spaces; these are called dihedral angles. Angles formed by the intersection of two curves in a plane are defined as the angle determined by the tangent rays at the point of intersection. Similar statements hold in space, for example, the spherical angle formed by two great circles on a sphere is the dihedral angle between the planes determined by the great circles. Angle is used to designate the measure of an angle or of a rotation; this measure is the ratio of the length of a circular arc to its radius. In the case of a geometric angle, the arc is delimited by the sides. In the case of a rotation, the arc is centered at the center of the rotation and delimited by any other point and its image by the rotation; the word angle comes from the Latin word angulus, meaning "corner".
Both are connected with the Proto-Indo-European root *ank-, meaning "to bend" or "bow". Euclid defines a plane angle as the inclination to each other, in a plane, of two lines which meet each other and do not lie straight with respect to each other. According to Proclus, an angle must be either a quality, a quantity, or a relationship; the first concept was used by Eudemus, who regarded an angle as a deviation from a straight line. In mathematical expressions, it is common to use Greek letters to serve as variables standing for the size of some angle. Lower case Roman letters are also used, as are upper case Roman letters in the context of polygons. See the figures in this article for examples. In geometric figures, angles may be identified by the labels attached to the three points that define them. For example, the angle at vertex A enclosed by the rays AB and AC is denoted ∠BAC or BÂC. Sometimes, where there is no risk of confusion, the angle may be referred to simply by its vertex. An angle denoted, say, ∠BAC might refer to any of four angles: the clockwise angle from B to C, the anticlockwise angle from B to C, the clockwise angle from C to B, or the anticlockwise angle from C to B, where the direction in which the angle is measured determines its sign.
However, in many geometrical situations it is obvious from context that the positive angle less than or equal to 180 degrees is meant, in which case no ambiguity arises. Otherwise, a convention may be adopted so that ∠BAC always refers to the anticlockwise angle from B to C, and ∠CAB to the anticlockwise angle from C to B. An angle equal to 0° (not turned) is called a zero angle. Angles smaller than a right angle are called acute angles. An angle equal to 1/4 turn (90°) is called a right angle; two lines that form a right angle are said to be orthogonal, or perpendicular. Angles larger than a right angle and smaller than a straight angle are called obtuse angles. An angle equal to 1/2 turn (180°) is called a straight angle. Angles larger than a straight angle but less than 1 turn are called reflex angles. An angle equal to 1 turn (360°) is called a complete angle, round angle or a perigon. Angles that are not right angles or a multiple of a right angle are called oblique angles. The names and measurement units are shown in a table below. Angles that have the same measure are said to be equal or congruent.
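The classification above can be summarized with a small illustrative helper (not a standard function) that takes a positive angle expressed as a fraction of a full turn:

```python
def classify(turn):
    """Classify a positive angle given as a fraction of a full turn.
    Illustrative helper following the definitions in the text."""
    if turn == 0:
        return "zero"
    if turn < 0.25:
        return "acute"
    if turn == 0.25:
        return "right"
    if turn < 0.5:
        return "obtuse"
    if turn == 0.5:
        return "straight"
    if turn < 1:
        return "reflex"
    return "complete"

print(classify(0.25))  # right
print(classify(0.6))   # reflex
```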
An angle is not dependent upon the lengths of the sides of the angle. Two angles which share terminal sides, but differ in size by an integer multiple of a turn, are called coterminal angles. A reference angle is the acute version of any angle, determined by repeatedly subtracting or adding a straight angle (1/2 turn, 180°, or π radians) to the result as necessary, until the magnitude of the result is an acute angle, a value between 0 and 1/4 turn (90°, or π/2 radians). For example, an angle of 30 degrees has a reference angle of 30 degrees, and an angle of 150 degrees also has a reference angle of 30 degrees (180 − 150). An angle of 750 degrees has a reference angle of 30 degrees (750 − 720). When two straight lines intersect at a point, four angles are formed. Pairwise these angles are named according to their location relative to each other. A pair of angles opposite each other, formed by two intersecting straight lines that form an "X"-like shape, are called vertical angles or opposite angles or vertically opposite angles; they are abbreviated as vert. opp. ∠s. The equality of vertically opposite angles is called the vertical angle theorem.
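The reference-angle rule can be sketched as follows; `reference_angle` is an illustrative helper working in degrees, first reducing to a coterminal angle and then folding into the acute range:

```python
def reference_angle(deg):
    """Reduce an angle in degrees to its acute reference angle."""
    deg = deg % 360        # coterminal angle in [0, 360)
    if deg > 180:
        deg = 360 - deg    # fold reflex angles back below a straight angle
    if deg > 90:
        deg = 180 - deg    # subtract from a straight angle to make it acute
    return deg

for a in (30, 150, 750):
    print(a, "->", reference_angle(a))  # each prints 30
```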
Eudemus of Rhodes attributed the proof to Thales of Miletus. The proposition showed that since both of a pair of vertical angles are supplementary to both of the adjacent angles, the vertical angles are equal in measure. According to a historical note, when Thales visited Egypt, he observed that whenever the Egyptians drew two intersecting lines, they would measure the vertical angles to make sure that they were equal.
Computer data storage
Computer data storage, often called storage or memory, is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers; the central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but larger and cheaper options farther away. Generally, the fast volatile technologies are referred to as "memory", while slower persistent technologies are referred to as "storage". In the Von Neumann architecture, the CPU consists of two main parts: the control unit and the arithmetic logic unit. The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data. Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result; it would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices.
Von Neumann machines differ in having a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions. Most modern computers are von Neumann machines. A modern digital computer represents data using the binary numeral system. Text, pictures, and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 1 or 0. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate the binary representation of the piece of information, or data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes with one byte per character. Data are encoded by assigning a bit pattern to each character, digit, or multimedia object.
Many standards exist for encoding. By adding bits to each encoded unit, redundancy allows the computer to both detect errors in coded data and correct them based on mathematical algorithms. Errors generally occur with low probability due to random bit-value flipping, or "physical bit fatigue" (loss of the physical bit in storage of its ability to maintain a distinguishable value), or due to errors in inter- or intra-computer communication. A random bit flip is corrected upon detection. A bit, or a group of malfunctioning physical bits, is automatically fenced out (taken out of use by the device) and replaced with another functioning equivalent group in the device, where the corrected bit values are restored. The cyclic redundancy check (CRC) method is typically used in communications and storage for error detection; a detected error is then retried. Data compression methods allow in many cases a string of bits to be represented by a shorter bit string, with the original string reconstructed when needed. This utilizes less storage for many types of data at the cost of more computation.
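As a rough illustration of checksum-based error detection, here is a sketch using Python's `zlib.crc32` (standing in for the device-level codes actually used in hardware): a checksum stored alongside the data reveals a simulated bit flip on re-read.

```python
import zlib

# Store a CRC alongside the data when it is written.
data = bytearray(b"the quick brown fox")
stored_crc = zlib.crc32(data)

# Simulate a single random bit flip ("physical bit fatigue" or a
# transmission error), then recompute the CRC on read.
data[3] ^= 0x01
corrupted = zlib.crc32(data) != stored_crc
print(corrupted)  # True: the flipped bit is detected (though not located)
```

Detection alone triggers a retry; correcting errors in place requires a code with more redundancy, such as a Hamming or Reed-Solomon code.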
Analysis of the trade-off between storage cost savings and the costs of related computations and possible delays in data availability is done before deciding whether to keep certain data compressed or not. For security reasons, certain types of data may be kept encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots. Generally, the lower a storage is in the hierarchy, the lower its bandwidth and the greater its access latency from the CPU. This traditional division of storage into primary, secondary and off-line storage is also guided by cost per bit. In contemporary usage, "memory" is usually semiconductor storage read-write random-access memory, typically DRAM or other forms of fast but temporary storage. "Storage" consists of storage devices and their media not directly accessible by the CPU (hard disk drives, optical disc drives, and other devices slower than RAM) but non-volatile. Historically, memory has been called core memory, main memory, real storage or internal memory. Meanwhile, non-volatile storage devices have been referred to as secondary storage, external memory or auxiliary/peripheral storage.
Primary storage, often referred to simply as memory, is the only one directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there, in a uniform manner. Early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic core memory. Core memory remained dominant until the 1970s, when advances in integrated circuit technology allowed semiconductor memory to become economically competitive; this led to modern random-access memory (RAM).
Force
In physics, a force is any interaction that, when unopposed, will change the motion of an object. A force can cause an object with mass to change its velocity, i.e. to accelerate. Force can also be described intuitively as a push or a pull. A force has both magnitude and direction, making it a vector quantity; it is measured in the SI unit of newtons and represented by the symbol F. The original form of Newton's second law states that the net force acting upon an object is equal to the rate at which its momentum changes with time. If the mass of the object is constant, this law implies that the acceleration of an object is directly proportional to the net force acting on the object, is in the direction of the net force, and is inversely proportional to the mass of the object. Concepts related to force include: thrust, which increases the velocity of an object; drag, which decreases the velocity of an object; and torque, which produces changes in the rotational speed of an object. In an extended body, each part usually applies forces on the adjacent parts; such internal mechanical stresses cause no acceleration of that body as the forces balance one another. Pressure, the distribution of many small forces applied over an area of a body, is a simple type of stress that if unbalanced can cause the body to accelerate.
Stress usually causes deformation of solid materials, or flow in fluids. Philosophers in antiquity used the concept of force in the study of stationary and moving objects and simple machines, but thinkers such as Aristotle and Archimedes retained fundamental errors in understanding force. In part this was due to an incomplete understanding of the sometimes non-obvious force of friction, and a consequently inadequate view of the nature of natural motion. A fundamental error was the belief that a force is required to maintain motion, even at a constant velocity. Most of the previous misunderstandings about motion and force were eventually corrected by Galileo Galilei and Sir Isaac Newton. With his mathematical insight, Newton formulated laws of motion that were not improved for nearly three hundred years. By the early 20th century, Einstein developed a theory of relativity that correctly predicted the action of forces on objects with increasing momenta near the speed of light, and also provided insight into the forces produced by gravitation and inertia.
With modern insights into quantum mechanics and technology that can accelerate particles close to the speed of light, particle physics has devised a Standard Model to describe forces between particles smaller than atoms. The Standard Model predicts that exchanged particles called gauge bosons are the fundamental means by which forces are emitted and absorbed. Only four main interactions are known: in order of decreasing strength, they are: strong, electromagnetic, weak, and gravitational. High-energy particle physics observations made during the 1970s and 1980s confirmed that the weak and electromagnetic forces are expressions of a more fundamental electroweak interaction. Since antiquity the concept of force has been recognized as integral to the functioning of each of the simple machines. The mechanical advantage given by a simple machine allowed for less force to be used in exchange for that force acting over a greater distance for the same amount of work. Analysis of the characteristics of forces ultimately culminated in the work of Archimedes, who was especially famous for formulating a treatment of buoyant forces inherent in fluids.
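The trade of force for distance at constant work can be illustrated numerically. This is a sketch of the ideal, frictionless case, with illustrative numbers and function names:

```python
# Mechanical advantage: a simple machine trades force for distance at
# constant work, W = F * d (ideal, frictionless case).
def effort_force(load_force, load_distance, effort_distance):
    # Work in equals work out: F_effort * d_effort = F_load * d_load.
    return load_force * load_distance / effort_distance

# Lifting a 600 N load by 0.5 m while the effort end of a lever moves 1.5 m:
print(effort_force(600, 0.5, 1.5))  # 200.0 N, one third of the load
```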
Aristotle provided a philosophical discussion of the concept of a force as an integral part of Aristotelian cosmology. In Aristotle's view, the terrestrial sphere contained four elements that come to rest at different "natural places" therein. Aristotle believed that motionless objects on Earth, those composed mostly of the elements earth and water, were in their natural place on the ground and that they would stay that way if left alone. He distinguished between the innate tendency of objects to find their "natural place", which led to "natural motion", and unnatural or forced motion, which required the continued application of a force. This theory, based on the everyday experience of how objects move, such as the constant application of a force needed to keep a cart moving, had conceptual trouble accounting for the behavior of projectiles, such as the flight of arrows. The archer moves the projectile only at the start of the flight; while the projectile sails through the air afterwards, no discernible efficient cause acts on it.
Aristotle was aware of this problem and proposed that the air displaced through the projectile's path carries the projectile to its target. This explanation demands a continuum like air for change of place in general. Aristotelian physics began facing criticism in medieval science, first by John Philoponus in the 6th century. The shortcomings of Aristotelian physics would not be fully corrected until the 17th century work of Galileo Galilei, who was influenced by the late medieval idea that objects in forced motion carried an innate force of impetus. Galileo constructed an experiment in which stones and cannonballs were both rolled down an incline to disprove the Aristotelian theory of motion. He showed that the bodies were accelerated by gravity to an extent that was independent of their mass, and argued that objects retain their velocity unless acted on by a force, for example friction. Sir Isaac Newton described the motion of all objects using the concepts of inertia and force, and in doing so he found they obey certain conservation laws.
In 1687, Newton published his thesis Philosophiæ Naturalis Principia Mathematica. In this work Newton set out three laws of motion that to this day are the way forces are described in physics.
Frequency
Frequency is the number of occurrences of a repeating event per unit of time. It is also referred to as temporal frequency, which emphasizes the contrast to spatial frequency and angular frequency. The period is the duration of time of one cycle in a repeating event, so the period is the reciprocal of the frequency. For example: if a newborn baby's heart beats at a frequency of 120 times a minute, its period (the time interval between beats) is half a second. Frequency is an important parameter used in science and engineering to specify the rate of oscillatory and vibratory phenomena, such as mechanical vibrations, audio signals, radio waves, and light. For cyclical processes, such as rotation, oscillations, or waves, frequency is defined as a number of cycles per unit time. In physics and engineering disciplines, such as optics and radio, frequency is usually denoted by a Latin letter f or by the Greek letter ν (nu); the relation between the frequency and the period T of a repeating event or oscillation is given by f = 1/T.
The SI derived unit of frequency is the hertz (Hz), named after the German physicist Heinrich Hertz. One hertz means that an event repeats once per second; if a TV has a refresh rate of 1 hertz, the TV's screen will change its picture once a second. A previous name for this unit was cycles per second; the SI unit for period is the second. A traditional unit of measure used with rotating mechanical devices is revolutions per minute, abbreviated r/min or rpm; 60 rpm equals one hertz. As a matter of convenience, longer and slower waves, such as ocean surface waves, tend to be described by wave period rather than frequency. Short and fast waves, like audio and radio, are usually described by their frequency instead of period; these commonly used conversions are listed below. Angular frequency, usually denoted by the Greek letter ω, is defined as the rate of change of angular displacement, θ, or the rate of change of the phase of a sinusoidal waveform, or as the rate of change of the argument of the sine function: y(t) = sin θ(t) = sin(ωt) = sin(2πft), with dθ/dt = ω = 2πf. Angular frequency is commonly measured in radians per second but, for discrete-time signals, can also be expressed as radians per sampling interval, which is a dimensionless quantity.
Angular frequency is larger than regular frequency by a factor of 2π. Spatial frequency is analogous to temporal frequency, but the time axis is replaced by one or more spatial displacement axes, e.g.: y = sin θ(x) = sin(kx), with dθ/dx = k. Wavenumber, k, is the spatial frequency analogue of angular temporal frequency and is measured in radians per meter. In the case of more than one spatial dimension, wavenumber is a vector quantity. For periodic waves in nondispersive media, frequency has an inverse relationship to the wavelength, λ. Even in dispersive media, the frequency f of a sinusoidal wave is equal to the phase velocity v of the wave divided by the wavelength λ of the wave: f = v/λ. In the special case of electromagnetic waves moving through a vacuum, v = c, where c is the speed of light in a vacuum, and this expression becomes f = c/λ. When waves from a monochromatic source travel from one medium to another, their frequency remains the same; only their wavelength and speed change. Measurement of frequency can be done in the following ways. Calculating the frequency of a repeating event is accomplished by counting the number of times that event occurs within a specific time period, then dividing the count by the length of the time period.
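The relation f = v/λ, with v = c for light in vacuum, can be checked numerically. `frequency` is an illustrative helper, not a library function:

```python
# f = v / wavelength; for electromagnetic waves in vacuum, v = c.
c = 299_792_458.0  # speed of light in vacuum, m/s

def frequency(wavelength_m, speed=c):
    return speed / wavelength_m

# Green light at a wavelength of 500 nm:
f = frequency(500e-9)
print(f"{f:.3e} Hz")  # ~6.0e14 Hz
```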
For example, if 71 events occur within 15 seconds, the frequency is f = 71/15 s ≈ 4.73 Hz. If the number of counts is not large, it is more accurate to measure the time interval for a predetermined number of occurrences, rather than the number of occurrences within a specified time. The latter method introduces a random error into the count of between zero and one count, so on average half a count; this is called gating error and causes an average error in the calculated frequency of Δf = 1/(2T), where T is the timing interval.
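The worked example and the gating-error bound Δf = 1/(2T) can be reproduced directly:

```python
# Counting 71 events in a 15 s gate, as in the example above.
count, T = 71, 15.0
f = count / T                 # measured frequency, Hz
gating_error = 1 / (2 * T)    # average error from the +/- one-count ambiguity
print(round(f, 2), round(gating_error, 4))  # 4.73 0.0333
```

The error bound shrinks as the gate time T grows, which is why slow events are better measured by timing a fixed number of occurrences.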
Acceleration
In physics, acceleration is the rate of change of velocity of an object with respect to time. An object's acceleration is the net result of all forces acting on the object, as described by Newton's Second Law; the SI unit for acceleration is metre per second squared. Accelerations add according to the parallelogram law; the vector of the net force acting on a body has the same direction as the vector of the body's acceleration, its magnitude is proportional to the magnitude of the acceleration, with the object's mass as proportionality constant. For example, when a car starts from a standstill and travels in a straight line at increasing speeds, it is accelerating in the direction of travel. If the car turns, an acceleration occurs toward the new direction; the forward acceleration of the car is called a linear acceleration, the reaction to which passengers in the car experience as a force pushing them back into their seats. When changing direction, this is called radial acceleration, the reaction to which passengers experience as a sideways force.
If the speed of the car decreases, this is an acceleration in the opposite direction of the velocity of the vehicle, sometimes called deceleration (or, in spacecraft, retrograde burning). Passengers experience the reaction to deceleration as a force pushing them forwards. Mathematically, both acceleration and deceleration are treated the same, as they are both changes in velocity; each of these accelerations is felt by passengers until their velocity matches that of the uniformly moving car. An object's average acceleration over a period of time is its change in velocity divided by the duration of the period. Mathematically, ā = Δv/Δt. Instantaneous acceleration, meanwhile, is the limit of the average acceleration over an infinitesimal interval of time. In the terms of calculus, instantaneous acceleration is the derivative of the velocity vector with respect to time: a = lim(Δt→0) Δv/Δt = dv/dt. It can be seen that the integral of the acceleration function a(t) is the velocity function v(t): v = ∫ a dt. As acceleration is defined as the derivative of velocity, v, with respect to time t, and velocity is defined as the derivative of position, x, with respect to time, acceleration can be thought of as the second derivative of x with respect to t: a = dv/dt = d²x/dt². Acceleration has the dimensions of velocity (L/T) divided by time, i.e. L⋅T−2. The SI unit of acceleration is the metre per second squared. An object moving in a circular motion, such as a satellite orbiting the Earth, is accelerating due to the change of direction of motion, although its speed may be constant; in this case it is said to be undergoing centripetal acceleration. Proper acceleration, the acceleration of a body relative to a free-fall condition, is measured by an instrument called an accelerometer. In classical mechanics, for a body with constant mass, the acceleration of the body's center of mass is proportional to the net force vector acting on it: F = ma, so a = F/m, where F is the net force acting on the body, m is the mass of the body, and a is the center-of-mass acceleration. As speeds approach the speed of light, relativistic effects become increasingly large. The velocity of a particle moving on a curved path as a function of time can be written as v = v u_t, with v equal to the speed of travel along the path, and u_t = v/|v| a unit vector tangent to the path, pointing in the direction of motion.
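The limit definition of instantaneous acceleration can be illustrated numerically. For the illustrative velocity function v(t) = 3t², the derivative is a(t) = 6t, and average accelerations over shrinking intervals approach it:

```python
# Average acceleration over an interval: a_avg = (v(t1) - v(t0)) / (t1 - t0).
def v(t):
    return 3 * t**2  # example velocity function; its derivative is a(t) = 6t

def avg_accel(t0, t1):
    return (v(t1) - v(t0)) / (t1 - t0)

# Shrinking the interval around t = 2 approaches the derivative 6t = 12:
for dt in (1.0, 0.1, 0.001):
    print(avg_accel(2, 2 + dt))  # 15.0, then 12.3, then ~12.003
```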
Kilogram
The kilogram or kilogramme (symbol: kg) is the base unit of mass in the International System of Units. Until 20 May 2019, it remains defined by a platinum alloy cylinder, the International Prototype Kilogram (IPK), manufactured in 1889 and stored in Saint-Cloud, a suburb of Paris. After 20 May, it will be defined in terms of fundamental physical constants. Originally, the kilogram was defined as the mass of a litre of water. That was an inconvenient quantity to replicate precisely, so in 1799 a platinum artefact was fashioned to define the kilogram; that artefact, and later the IPK, has been the standard of the unit of mass for the metric system ever since. In spite of best efforts to maintain it, the IPK has diverged from its replicas by approximately 50 micrograms since their manufacture late in the 19th century. This led to efforts to develop measurement technology precise enough to allow replacing the kilogram artefact with a definition based directly on physical phenomena, now scheduled to take place in 2019. The new definition is based on invariant constants of nature, in particular the Planck constant, which will change from being measured to being defined, thereby fixing the value of the kilogram in terms of the second and the metre and eliminating the need for the IPK.
The new definition was approved by the General Conference on Weights and Measures (CGPM) on 16 November 2018. The Planck constant relates a light particle's energy, and hence mass, to its frequency. The new definition only became possible when instruments were devised to measure the Planck constant with sufficient accuracy based on the IPK definition of the kilogram. The gram, 1/1000 of a kilogram, was provisionally defined in 1795 as the mass of one cubic centimetre of water at the melting point of ice. The final kilogram, manufactured as a prototype in 1799 and from which the International Prototype Kilogram was derived in 1875, had a mass equal to the mass of 1 dm3 of water under atmospheric pressure and at the temperature of its maximum density, approximately 4 °C. The kilogram is the only named SI unit with an SI prefix as part of its name. Until the 2019 redefinition of SI base units, it was also the last SI unit still directly defined by an artefact rather than a fundamental physical property that could be independently reproduced in different laboratories.
Three other base units and 17 derived units in the SI system are defined in relation to the kilogram, so its stability is important. The definitions of only eight other named SI units do not depend on the kilogram: those of temperature, time and frequency, length, and angle. The IPK is rarely used or handled. Copies of the IPK kept by national metrology laboratories around the world were compared with the IPK in 1889, 1948, and 1989 to provide traceability of measurements of mass anywhere in the world back to the IPK. The International Prototype Kilogram was commissioned by the General Conference on Weights and Measures (CGPM) under the authority of the Metre Convention, and is in the custody of the International Bureau of Weights and Measures, who hold it on behalf of the CGPM. After the International Prototype Kilogram had been found to vary in mass over time relative to its reproductions, the International Committee for Weights and Measures (CIPM) recommended in 2005 that the kilogram be redefined in terms of a fundamental constant of nature.
At its 2011 meeting, the CGPM agreed in principle that the kilogram should be redefined in terms of the Planck constant, h, but the decision was deferred until 2014. The CIPM subsequently proposed revised definitions of the SI base units for consideration at the 26th CGPM. The formal vote, which took place on 16 November 2018, approved the change, with the new definitions coming into force on 20 May 2019. The accepted redefinition fixes the Planck constant at exactly 6.62607015×10−34 kg⋅m2⋅s−1, thereby defining the kilogram in terms of the second and the metre. Since the second and metre are themselves defined in terms of physical constants, the kilogram is then defined in terms of physical constants only. The avoirdupois pound, used in both the imperial and US customary systems, is now defined in terms of the kilogram. Other traditional units of weight and mass around the world are also now defined in terms of the kilogram, making the kilogram the primary standard for all units of mass on Earth. The word kilogramme or kilogram is derived from the French kilogramme, which itself was a learned coinage, prefixing the Greek stem of χίλιοι khilioi "a thousand" to gramma, a Late Latin term for "a small weight", itself from Greek γράμμα.
The word kilogramme was written into French law in 1795, in the Decree of 18 Germinal, which revised the older system of units introduced by the French National Convention in 1793, where the gravet had been defined as the weight of a cubic centimetre of water, equal to 1/1000 of a grave. In the decree of 1795, the term gramme thus replaced gravet, and kilogramme replaced grave. The French spelling was adopted in Great Britain when the word was used for the first time in English in 1795, with the spelling kilogram being adopted in the United States. In the United Kingdom both spellings are used, with "kilogram" having become by far the more common. UK law regulating the units to be used when trading by weight or measure does not prevent the use of either spelling. In the 19th century the French word kilo, a shortening of kilogramme, was imported into the English language, where it has been used to mean both kilogram and kilometre. While kilo is acceptable in many generalist texts, its use is generally considered inappropriate in formal and technical writing.