1.
Gravity Recovery and Climate Experiment
–
By measuring gravity anomalies, GRACE shows how mass is distributed around the planet and how it varies over time. Data from the GRACE satellites is an important tool for studying Earth's oceans, geology, and climate. The Jet Propulsion Laboratory is responsible for overall mission management under the NASA ESSP program, and the principal investigator is Dr. Byron Tapley of the University of Texas Center for Space Research. The GRACE satellites were launched from Plesetsk Cosmodrome, Russia, on a Rockot launch vehicle on March 17, 2002. The spacecraft were launched to an altitude of approximately 500 km at a near-polar inclination of 89°, and the satellites are separated by approximately 200 km along their orbit track. GRACE has far exceeded its designed five-year lifespan; as of March 2017 the GRACE spacecraft's orbit had decayed by 150 km. GRACE chiefly detects changes in the distribution of water across the planet. Scientists use GRACE data to estimate ocean bottom pressure, which is as important to oceanographers as atmospheric pressure is to meteorologists, and high-resolution static gravity fields estimated from GRACE data have helped improve the understanding of global ocean circulation. The hills and valleys in the ocean's surface are due to currents, and GRACE enables separation of the two effects to better measure ocean currents and their effect on climate. GRACE data have provided a record of ice loss within the ice sheets of Greenland and Antarctica: Greenland was found to lose 280±58 Gt of ice per year between 2003 and 2013, while Antarctica lost 67±44 Gt per year in the same period, and together these equate to about 0.9 mm/yr of sea level rise. GRACE data have also provided insights into regional hydrology inaccessible to other forms of remote sensing; for example, the annual hydrology of the Amazon river basin produces a strong signal when viewed by GRACE.
The most over-stressed aquifer is the Arabian aquifer system, upon which more than 60 million people depend for water. GRACE also detects changes in the gravity field due to geophysical processes. Glacial isostatic adjustment (GIA), the slow rise of land masses once depressed by the weight of ice sheets from the last ice age, is chief among these signals. GIA signals appear as secular trends in gravity field measurements and must be removed to accurately estimate changes in water. GRACE is also sensitive to permanent changes in the gravity field due to earthquakes; for instance, GRACE data have been used to analyze the shifts in the Earth's crust caused by the earthquake that created the 2004 Indian Ocean tsunami. GRACE is also sensitive to variations in the mass of the atmosphere.
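As a rough check on the sea-level figure above, the combined Greenland and Antarctica loss rates can be converted to mm/yr of sea-level rise by spreading the equivalent volume of water over the global ocean surface. This is a minimal sketch; the ocean area is an assumed round value, not a figure from the text.

```python
# Back-of-envelope conversion of GRACE-derived ice-loss rates (Gt/yr) into
# millimetres per year of sea-level rise.
OCEAN_AREA_M2 = 3.61e14   # approximate global ocean surface area (assumed)
GT_TO_M3 = 1.0e9          # 1 Gt of water = 1e12 kg = 1e9 m^3

greenland_gt_per_yr = 280.0   # from the 2003-2013 estimate quoted above
antarctica_gt_per_yr = 67.0

volume_m3 = (greenland_gt_per_yr + antarctica_gt_per_yr) * GT_TO_M3
rise_mm_per_yr = volume_m3 / OCEAN_AREA_M2 * 1000.0
print(f"{rise_mm_per_yr:.2f} mm/yr")  # close to the ~0.9 mm/yr quoted
```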
2.
Earth ellipsoid
–
An Earth ellipsoid is a mathematical figure approximating the shape of the Earth, used as a reference frame for computations in geodesy, astronomy, and the geosciences. Various different ellipsoids have been used as approximations; each is an ellipsoid of revolution whose short axis is approximately aligned with the rotation axis of the Earth. The ellipsoid is defined by the equatorial axis a and the polar axis b; additional parameters are the mass function J2 and the corresponding gravity formula. Many methods exist for determining the axes of an Earth ellipsoid, ranging from meridian arcs up to modern satellite geodesy or the analysis and interconnection of continental geodetic networks. Amongst the different ellipsoids used in national surveys, several are of special importance: the Bessel ellipsoid of 1841 and the international Hayford ellipsoid of 1924. A data set which describes the global average of the Earth's surface curvature is called the mean Earth ellipsoid. It refers to a theoretical coherence between the geographic latitude and the meridional curvature of the geoid. The latter is close to mean sea level, and therefore an ideal Earth ellipsoid has the same volume as the geoid. While the mean Earth ellipsoid is the basis of global geodesy, national surveys often retain regional reference ellipsoids. One practical reason is that the coordinates of millions of boundary stones should remain fixed for a long period: if their reference surface changes, the coordinates themselves also change. However, for international networks, GPS positioning, or astronautics, these regional reasons are less relevant. As knowledge of the Earth's figure becomes increasingly accurate, the International Union of Geodesy and Geophysics (IUGG) usually adapts the axes of the Earth ellipsoid to the best available data. High-precision land surveys can be used to determine the distance between two places at nearly the same longitude by measuring a base line and a chain of triangles.
The distance Δ along the meridian from one end point to a point at the same latitude as the second end point is then calculated by trigonometry. The surface distance Δ is reduced to the corresponding distance at mean sea level. The intermediate distances to points on the meridian at the same latitudes as other stations of the survey may also be calculated. The geographic latitudes of both end points, φs and φf, and possibly of other points, are determined by astrogeodesy. If latitudes are measured at the end points only, the radius of curvature at the midpoint of the meridian arc can be calculated from R = Δ/Δφ. A second meridian arc will allow the derivation of the two parameters required to specify a reference ellipsoid; longer arcs with intermediate latitude determinations can completely determine the ellipsoid.
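The arc-measurement relation above can be sketched numerically: given the reduced meridian distance Δ and the astronomical latitudes of the two end points, the mean radius of curvature of the arc follows from R = Δ/Δφ, with Δφ in radians. The arc length and latitudes below are hypothetical illustration values, not survey data.

```python
import math

def mean_radius_of_curvature(arc_length_m, lat_south_deg, lat_north_deg):
    """Mean radius of curvature R = delta / delta-phi for a meridian arc."""
    dphi = math.radians(lat_north_deg - lat_south_deg)
    return arc_length_m / dphi

# Hypothetical one-degree meridian arc of 111.2 km:
R = mean_radius_of_curvature(111_200.0, 45.0, 46.0)
print(round(R))  # ~6.37e6 m, close to Earth's mean radius
```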
3.
Acceleration
–
Acceleration, in physics, is the rate of change of velocity of an object with respect to time. An object's acceleration is the net result of any and all forces acting on the object, and the SI unit for acceleration is the metre per second squared. Accelerations are vector quantities and add according to the parallelogram law; as a vector, the calculated net force is equal to the product of the object's mass and its acceleration. For example, a car starting from a standstill and travelling in a straight line at increasing speed is accelerating in the direction of travel. If the car turns, there is an acceleration toward the new direction. In this example, we can call the forward acceleration of the car a linear acceleration, which passengers in the car might experience as a force pushing them back into their seats. When changing direction, we call this non-linear acceleration, which passengers might experience as a sideways force. If the speed of the car decreases, this is an acceleration in the opposite direction from the motion of the vehicle, and passengers may experience it as a force lifting them forwards. Mathematically, there is no separate formula for deceleration: both are changes in velocity. Each of these accelerations might be felt by passengers until their velocity matches that of the car. An object's average acceleration over a period of time is its change in velocity divided by the duration of the period; mathematically, ā = Δv/Δt. Instantaneous acceleration, meanwhile, is the limit of the average acceleration over an infinitesimal interval of time. The SI unit of acceleration is the metre per second squared, or metre per second per second, as the velocity in metres per second changes by the acceleration value every second. An object moving in circular motion, such as a satellite orbiting the Earth, is accelerating due to the change of direction of motion; in this case it is said to be undergoing centripetal acceleration.
Proper acceleration, the acceleration of a body relative to a free-fall condition, is measured by an instrument called an accelerometer. As speeds approach the speed of light, relativistic effects become increasingly large. An acceleration can be resolved into two components, called the tangential acceleration and the normal or radial acceleration. Geometrical analysis of space curves, which explains tangent, normal, and binormal, is described by the Frenet–Serret formulas. Uniform or constant acceleration is a type of motion in which the velocity of an object changes by an equal amount in every equal time period. A frequently cited example of uniform acceleration is that of an object in free fall in a uniform gravitational field. The acceleration of a body in the absence of resistances to motion is dependent only on the gravitational field strength g.
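A minimal numerical sketch of two relations above: average acceleration ā = Δv/Δt, and the centripetal acceleration a = v²/r of circular motion. The satellite speed and orbital radius used are assumed round values for a low Earth orbit.

```python
def average_acceleration(v0, v1, dt):
    """Average acceleration: change in velocity over the duration."""
    return (v1 - v0) / dt

def centripetal_acceleration(v, r):
    """Centripetal acceleration of circular motion: a = v^2 / r."""
    return v * v / r

# A car going from rest to 27 m/s in 9 s:
a = average_acceleration(0.0, 27.0, 9.0)
print(a)  # 3.0 m/s^2

# A satellite in low orbit (assumed v ~ 7.6 km/s, r ~ 6.87e6 m):
print(round(centripetal_acceleration(7600.0, 6.87e6), 2))  # ~8.41 m/s^2
```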
4.
Earth
–
Earth, otherwise known as the World or the Globe, is the third planet from the Sun and the only object in the Universe known to harbor life. It is the densest planet in the Solar System and the largest of the four terrestrial planets. According to radiometric dating and other sources of evidence, Earth formed about 4.54 billion years ago. Earth's gravity interacts with other objects in space, especially the Sun and the Moon. During one orbit around the Sun, Earth rotates about its axis over 365 times; Earth's axis of rotation is tilted, producing seasonal variations on the planet's surface. The gravitational interaction between the Earth and Moon causes ocean tides and stabilizes the Earth's orientation on its axis. Earth's lithosphere is divided into several rigid tectonic plates that migrate across the surface over periods of many millions of years. About 71% of Earth's surface is covered with water, mostly by its oceans; the remaining 29% is land consisting of continents and islands that together have many lakes, rivers, and other sources of water that contribute to the hydrosphere. The majority of Earth's polar regions are covered in ice, including the Antarctic ice sheet. Earth's interior remains active, with a solid iron inner core, a liquid outer core that generates the Earth's magnetic field, and a convecting mantle that drives plate tectonics. Within the first billion years of Earth's history, life appeared in the oceans and began to affect the Earth's atmosphere and surface; some geological evidence indicates that life may have arisen as much as 4.1 billion years ago. Since then, the combination of Earth's distance from the Sun and its physical properties has allowed life to persist. In the history of the Earth, biodiversity has gone through long periods of expansion, occasionally punctuated by mass extinction events. Over 99% of all species that ever lived on Earth are extinct.
Estimates of the number of species on Earth today vary widely; over 7.4 billion humans live on Earth and depend on its biosphere and minerals for their survival. Humans have developed diverse societies and cultures; politically, the world has about 200 sovereign states. The modern English word Earth developed from a wide variety of Middle English forms, which derived from an Old English noun most often spelled eorðe. It has cognates in every Germanic language, and their Proto-Germanic root has been reconstructed as *erþō. Originally, earth was written in lowercase, and from early Middle English its definite sense as "the globe" was expressed as the earth. By early Modern English, many nouns were capitalized, and the earth became the Earth. More recently, the name is often simply given as Earth. House styles now vary: Oxford spelling recognizes the lowercase form as the most common, while another convention capitalizes Earth when appearing as a name but writes it in lowercase when preceded by the. It almost always appears in lowercase in colloquial expressions such as "what on earth are you doing?". The oldest material found in the Solar System is dated to 4.5672±0.0006 billion years ago. By 4.54±0.04 Gya the primordial Earth had formed; the formation and evolution of Solar System bodies occurred along with the Sun.
5.
International System of Units
–
The International System of Units (SI) is the modern form of the metric system and is the most widely used system of measurement. It comprises a coherent system of units of measurement built on seven base units; the system also establishes a set of twenty prefixes to the unit names and unit symbols that may be used when specifying multiples and fractions of the units. The system was published in 1960 as the result of an initiative that began in 1948. It is based on the metre–kilogram–second system of units rather than any variant of the centimetre–gram–second system. The motivation for the development of the SI was the diversity of units that had sprung up within the CGS systems. The International System of Units has been adopted by most developed countries; however, the adoption has not been universal in all English-speaking countries. The metric system was first implemented during the French Revolution with just the metre and kilogram as standards of length and mass. In the 1830s Carl Friedrich Gauss laid the foundations for a coherent system based on length, mass, and time. In the 1860s a group working under the auspices of the British Association for the Advancement of Science formulated the requirement for a coherent system of units with base units and derived units. Meanwhile, in 1875, the Treaty of the Metre passed responsibility for verification of the kilogram and metre against agreed standards to new international organisations. In 1921 the Treaty was extended to include all physical quantities, including electrical units originally defined in 1893. The units associated with these quantities were the metre, kilogram, second, ampere, kelvin, and candela; in 1971 a seventh base quantity, amount of substance, represented by the mole, was added to the definition of SI. On 11 July 1792, the commission proposed the names metre, are, litre and grave for the units of length, area, capacity, and mass respectively.
The committee also proposed that multiples and submultiples of these units were to be denoted by decimal-based prefixes such as centi for a hundredth. On 10 December 1799, the law by which the metric system was to be definitively adopted in France was passed. Prior to Gauss's work, the strength of the magnetic field had only been described in relative terms. The technique used by Gauss was to equate the torque induced on a magnet of known mass by the Earth's magnetic field with the torque induced on an equivalent system under gravity. The resultant calculations enabled him to assign dimensions to the magnetic field based on mass, length, and time. A French-inspired initiative for international cooperation in metrology led to the signing in 1875 of the Metre Convention. Initially the convention only covered standards for the metre and the kilogram; one of each was selected at random to become the International prototype metre and International prototype kilogram that replaced the mètre des Archives and kilogramme des Archives respectively. Each member state was entitled to one of each of the prototypes to serve as the national prototype for that country. Initially the convention's prime purpose was a periodic recalibration of national prototype metres. The official language of the Metre Convention is French, and the authoritative version of all official documents published by or on behalf of the CGPM is the French-language version.
6.
Metre
–
The metre, or meter, is the base unit of length in the International System of Units. The metre is defined as the length of the path travelled by light in a vacuum in 1/299792458 of a second. The metre was originally defined in 1793 as one ten-millionth of the distance from the equator to the North Pole; in 1799, it was redefined in terms of a prototype metre bar. In 1960, the metre was redefined in terms of a number of wavelengths of a certain emission line of krypton-86, and in 1983 the current definition was adopted. The imperial inch is defined as 0.0254 metres; one metre is about 3 3⁄8 inches longer than a yard. Metre is the standard spelling of the metric unit for length in nearly all English-speaking nations except the United States and the Philippines, which use meter. Measuring devices are spelled -meter in all variants of English, and the suffix -meter has the same Greek origin as the unit of length; this range of uses is also found in Latin, French, and English, and the Greek root connotes both measurement and moderation. In 1668 the English cleric and philosopher John Wilkins proposed in an essay a decimal-based unit of length. As a result of the French Revolution, the French Academy of Sciences charged a commission with determining a single scale for all measures. In his 1668 essay, Wilkins proposed using Christopher Wren's suggestion of defining the metre using a pendulum with a length which produced a half-period of one second; Christiaan Huygens had observed that length to be 38 Rijnland inches or 39.26 English inches, the equivalent of what is now known to be 997 mm. No official action was taken regarding this suggestion. In the 18th century, there were two approaches to the definition of the unit of length. One favoured Wilkins' approach: to define the metre in terms of the length of a pendulum which produced a half-period of one second.
The other approach was to define the metre as one ten-millionth of the length of a quadrant along the Earth's meridian, that is, the distance from the Equator to the North Pole. This means that the quadrant would have been defined as exactly 10,000,000 metres at that time. To establish a universally accepted foundation for the definition of the metre, more accurate measurements of this meridian were needed. The portion of the meridian chosen, assumed to be the same length as the Paris meridian, was to serve as the basis for the length of the half meridian connecting the North Pole with the Equator.
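The seconds-pendulum approach above can be checked numerically: a half-period of one second means a full period T = 2 s, and the small-angle pendulum formula T = 2π√(L/g) can be solved for the length. Standard gravity is an assumed value here; the result comes out close to the 997 mm figure quoted in the text.

```python
import math

g = 9.80665  # m/s^2, standard gravity (assumed)
T = 2.0      # s, full period corresponding to a one-second half-period

# From T = 2*pi*sqrt(L/g), solve for the pendulum length L:
L = g * (T / (2 * math.pi)) ** 2
print(round(L * 1000))  # pendulum length in mm, ~994
```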
7.
Second
–
The second is the base unit of time in the International System of Units. It is qualitatively defined as the second division of the hour by sixty, the first division being the minute. The SI definition of the second is the duration of 9192631770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom. Seconds may be measured using a mechanical, electrical, or atomic clock. SI prefixes are combined with the word second to denote subdivisions of the second, e.g. the millisecond, the microsecond, and the nanosecond, though SI prefixes may also be used to form multiples of the second, such as the kilosecond. The second is also the base unit of time in other systems of measurement: the centimetre–gram–second, metre–kilogram–second, and metre–tonne–second systems. Absolute zero implies no movement, and therefore zero external radiation effects. The second thus defined is consistent with the ephemeris second, which was based on astronomical measurements. The realization of the second is described briefly in a special publication from the National Institute of Standards and Technology. One international second is equal to 1⁄60 minute, 1⁄3,600 hour, 1⁄86,400 day, and 1⁄31,557,600 Julian year. The Hellenistic astronomers Hipparchus and Ptolemy subdivided the day into sixty parts. They also used simple fractions of an hour, but no sexagesimal unit of the day was ever used as an independent unit of time. The modern second is subdivided using decimals, although the third (1⁄60 of a second) remains in some languages. The earliest clocks to display seconds appeared during the last half of the 16th century. The second became accurately measurable with the development of mechanical clocks keeping mean time, as opposed to the apparent time displayed by sundials. The earliest spring-driven timepiece with a hand which marked seconds is an unsigned clock depicting Orpheus in the Fremersdorf collection.
During the third quarter of the 16th century, Taqi al-Din built a clock with marks every 1/5 minute, and in 1579 Jost Bürgi built a clock for William of Hesse that marked seconds. In 1581, Tycho Brahe redesigned clocks that displayed minutes at his observatory so that they also displayed seconds; however, they were not yet accurate enough for seconds. In 1587, Tycho complained that his four clocks disagreed by plus or minus four seconds. In 1670, London clockmaker William Clement added a seconds pendulum to the original pendulum clock of Christiaan Huygens. From 1670 to 1680, Clement made many improvements to his clock; this clock used an anchor escapement mechanism with a seconds pendulum to display seconds in a small subdial.
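The subdivision chain quoted above can be written out as exact arithmetic (the Julian year is 365.25 days by definition):

```python
# One international second expressed through its larger units.
SECONDS_PER_MINUTE = 60
SECONDS_PER_HOUR = 60 * SECONDS_PER_MINUTE
SECONDS_PER_DAY = 24 * SECONDS_PER_HOUR
SECONDS_PER_JULIAN_YEAR = int(365.25 * SECONDS_PER_DAY)

print(SECONDS_PER_HOUR)         # 3600
print(SECONDS_PER_DAY)          # 86400
print(SECONDS_PER_JULIAN_YEAR)  # 31557600
```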
8.
Newton (unit)
–
The newton is the International System of Units (SI) derived unit of force. It is named after Isaac Newton in recognition of his work on classical mechanics; see below for the conversion factors. One newton is the force needed to accelerate one kilogram of mass at the rate of one metre per second squared in the direction of the applied force. In 1948, the 9th CGPM, resolution 7, adopted the name newton for this unit of force. The MKS system then became the blueprint for today's SI system of units, and the newton thus became the standard unit of force in le Système International d'Unités. This SI unit is named after Isaac Newton; as with every International System of Units unit named for a person, the first letter of its symbol is upper case. Note that "degree Celsius" conforms to this rule because the "d" is lowercase (based on The International System of Units, section 5.2). Newton's second law of motion states that F = ma, where F is the applied force, m is the mass of the object receiving the force, and a is its acceleration. The newton is therefore 1 N = 1 kg⋅m/s2, where the symbols used for the units are N for newton, kg for kilogram, and m for metre. In dimensional analysis, [F] = M L T−2, where F is force, M is mass, L is length, and T is time. At average gravity on Earth, a kilogram mass exerts a force of about 9.8 newtons, and an average-sized apple exerts about one newton of force, which we measure as the apple's weight. For example, the tractive effort of a Class Y steam train and the thrust of an F100 fighter jet engine are both around 130 kN. One kilonewton, 1 kN, is 102.0 kgf (1 kN = 102 kg × 9.81 m/s2), so, for example, a platform rated at 321 kilonewtons will safely support a 32,100-kilogram load. Specifications in kilonewtons are common in safety specifications for the holding values of fasteners and Earth anchors, working loads in tension and in shear, the thrust of rocket engines and launch vehicles, and the clamping forces of the various moulds in injection moulding machines used to manufacture plastic parts.
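The definitions above can be sketched as arithmetic: F = m·a for the newton itself, and the kilonewton-to-kilogram-force conversion used in safety ratings. The apple mass is an assumed round value.

```python
def force_newtons(mass_kg, accel_m_s2):
    """Newton's second law: F = m * a, in newtons."""
    return mass_kg * accel_m_s2

g = 9.80665  # m/s^2, standard gravity (assumed)

print(force_newtons(1.0, 1.0))            # 1.0 N: the defining case
print(round(force_newtons(0.102, g), 2))  # ~1.0 N: weight of a ~102 g apple

# Kilonewtons to kilograms-force, as in the platform-rating example:
kgf_per_kn = 1000.0 / g
print(round(kgf_per_kn, 1))               # 102.0
```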
9.
Kilogram
–
The kilogram or kilogramme is the base unit of mass in the International System of Units and is defined as being equal to the mass of the International Prototype of the Kilogram (IPK). The avoirdupois pound, used in both the imperial and US customary systems, is defined as exactly 0.45359237 kg, making one kilogram approximately equal to 2.2046 avoirdupois pounds; other traditional units of weight and mass around the world are also defined in terms of the kilogram. The gram, 1/1000 of a kilogram, was provisionally defined in 1795 as the mass of one cubic centimetre of water at the melting point of ice. The final kilogram, manufactured as a prototype in 1799 and from which the IPK was derived in 1875, had a mass equal to the mass of 1 dm3 of water at its maximum density. The kilogram is the only SI base unit with an SI prefix as part of its name, and it is also the only SI unit that is still directly defined by an artifact rather than a fundamental physical property that can be reproduced in different laboratories. Three other base units and 17 derived units in the SI system are defined relative to the kilogram; only 8 other units do not require the kilogram in their definition: those of temperature, time and frequency, length, and angle. At its 2011 meeting, the CGPM agreed in principle that the kilogram should be redefined in terms of the Planck constant; the decision was originally deferred until 2014, and in 2014 it was deferred again until the next meeting. There are currently several different proposals for the redefinition; these are described in the Proposed Future Definitions section below. The International Prototype Kilogram is rarely used or handled. In the decree of 1795, the term gramme replaced gravet; the French spelling was adopted in the United Kingdom when the word was used for the first time in English in 1797, with the spelling kilogram being adopted in the United States.
In the United Kingdom both spellings are used, with kilogram having become by far the more common; UK law regulating the units to be used when trading by weight or measure does not prevent the use of either spelling. In the 19th century the French word kilo, a shortening of kilogramme, was imported into the English language, where it has been used to mean both kilogram and kilometre. In 1935 this was adopted by the IEC as the Giorgi system, now known as the MKS system. In 1948 the CGPM commissioned the CIPM to make recommendations for a practical system of units of measurement; this led to the launch of SI in 1960 and the subsequent publication of the SI Brochure. The kilogram is a unit of mass, a property which corresponds to the common perception of how heavy an object is. Mass is an inertial property; that is, it is related to the tendency of an object at rest to remain at rest, or if in motion to remain in motion at a constant velocity. Accordingly, for astronauts in microgravity, no effort is required to hold objects off the cabin floor; they are weightless. However, objects in microgravity still retain their mass and inertia, and the ratio of the force of gravity on two objects, measured by a scale, is equal to the ratio of their masses. On 7 April 1795, the gram was decreed in France to be the weight of a volume of pure water equal to the cube of the hundredth part of the metre.
10.
Drag (physics)
–
In fluid dynamics, drag is a force acting opposite to the relative motion of any object moving with respect to a surrounding fluid. This can exist between two fluid layers or between a fluid and a solid surface. Unlike other resistive forces, such as dry friction, which are nearly independent of velocity, drag forces depend on velocity: drag force is proportional to the velocity for laminar flow and to the squared velocity for turbulent flow. Even though the ultimate cause of drag is viscous friction, turbulent drag is independent of viscosity. Drag forces always decrease fluid velocity relative to the solid object in the fluid's path, as in the case of viscous drag of fluid in a pipe. In the physics of sports, the drag force is necessary to explain the performance of runners, particularly of sprinters. Types of drag are generally divided into the following categories: parasitic drag, consisting of form drag, skin friction, and interference drag; lift-induced drag; and wave drag. The phrase parasitic drag is mainly used in aerodynamics, since for lifting wings drag is in general small compared to lift. For flow around bluff bodies, form and interference drags often dominate. Further, lift-induced drag is only relevant when wings or a lifting body are present, and is therefore usually discussed either in aviation or in the design of semi-planing or planing hulls. Wave drag occurs either when an object is moving through a fluid at or near the speed of sound or when a solid object is moving along a fluid boundary. Drag depends on the properties of the fluid and on the size, shape, and speed of the object. At low Re, CD is asymptotically proportional to Re−1, which means that the drag is linearly proportional to the speed; at high Re, CD is more or less constant. The graph to the right shows how CD varies with Re for the case of a sphere. As mentioned, the drag equation with a constant drag coefficient gives the force experienced by an object moving through a fluid at relatively large velocity. This is also called quadratic drag; the equation is attributed to Lord Rayleigh, who originally used L2 in place of A.
Sometimes a body is a composite of different parts, each with a different reference area. In the case of a wing, the reference areas are the same, and the drag force is in the same ratio to the lift force as the ratio of the drag coefficient to the lift coefficient; therefore, the reference area for a wing is often the wing area rather than the frontal area. For an object with a smooth surface and non-fixed separation points, like a sphere or circular cylinder, the drag coefficient may vary with Reynolds number Re, while for an object with well-defined fixed separation points, like a disk with its plane normal to the flow direction, the drag coefficient is essentially constant.
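A sketch of the quadratic (Rayleigh) drag law discussed above, F_D = ½·ρ·v²·C_D·A. The air density, drag coefficient, and frontal area below are illustrative assumed values, not figures from the text.

```python
def drag_force(rho, v, cd, area):
    """Quadratic drag: F_D = 0.5 * rho * v^2 * C_D * A (SI units)."""
    return 0.5 * rho * v**2 * cd * area

rho_air = 1.225   # kg/m^3, air at sea level (assumed)
cd_sphere = 0.47  # typical high-Re sphere drag coefficient (assumed)
area = 0.5        # m^2, hypothetical frontal area

# Drag on this hypothetical body at 30 m/s:
print(round(drag_force(rho_air, 30.0, cd_sphere, area), 1))  # force in N
```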
11.
Speed
–
In everyday use and in kinematics, the speed of an object is the magnitude of its velocity; it is thus a scalar quantity. Speed has the dimensions of distance divided by time. The SI unit of speed is the metre per second, but the most common unit of speed in everyday usage is the kilometre per hour or, in the US and the UK, miles per hour; for air and marine travel the knot is commonly used. The fastest possible speed at which energy or information can travel, according to special relativity, is the speed of light in a vacuum, c = 299792458 metres per second. Matter cannot quite reach the speed of light, as this would require an infinite amount of energy. In relativity physics, the concept of rapidity replaces the classical idea of speed. The Italian physicist Galileo Galilei is usually credited with being the first to measure speed by considering the distance covered and the time it takes. Galileo defined speed as the distance covered per unit of time; in equation form, this is v = d/t, where v is speed, d is distance, and t is time. A cyclist who covers 30 metres in a time of 2 seconds, for example, has a speed of 15 metres per second. Objects in motion often have variations in speed. If s is the length of the path travelled until time t, the speed equals the time derivative of s; in the special case where the velocity is constant, this can be simplified to v = s/t. The average speed over a finite time interval is the total distance travelled divided by the time duration. Speed at some instant, or assumed constant during a very short period of time, is called instantaneous speed. By looking at a speedometer, one can read the instantaneous speed of a car at any instant. A car travelling at 50 km/h generally goes for less than one hour at a constant speed; if the vehicle continued at that speed for half an hour, it would cover half that distance, and if it continued for one minute, it would cover about 833 m. Different from instantaneous speed, average speed is defined as the total distance covered divided by the time interval.
For example, if a distance of 80 kilometres is driven in 1 hour, the average speed is 80 kilometres per hour; likewise, if 320 kilometres are travelled in 4 hours, the average speed is also 80 kilometres per hour. When a distance in kilometres is divided by a time in hours, the result is the average speed in kilometres per hour. Average speed does not describe the speed variations that may have taken place during shorter time intervals, and so average speed is often quite different from a value of instantaneous speed. If the average speed and the time of travel are known, the distance travelled can be calculated; using this method for an average speed of 80 kilometres per hour on a 4-hour trip, the distance covered is found to be 320 kilometres. Linear speed is the distance travelled per unit of time, while tangential speed is the linear speed of something moving along a circular path.
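The worked examples above can be expressed directly: v̄ = d/t for average speed, the rearrangement d = v̄·t for the 4-hour trip, and the 50 km/h-for-one-minute case.

```python
def average_speed(distance_km, hours):
    """Average speed: total distance divided by total time."""
    return distance_km / hours

def distance(avg_speed_kmh, hours):
    """Distance from average speed and travel time: d = v * t."""
    return avg_speed_kmh * hours

print(average_speed(80.0, 1.0))   # 80.0 km/h
print(average_speed(320.0, 4.0))  # 80.0 km/h
print(distance(80.0, 4.0))        # 320.0 km

# 50 km/h held for one minute, converted to metres:
print(round(50.0 / 60.0 * 1000.0))  # ~833 m
```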
12.
Free fall
–
In Newtonian physics, free fall is any motion of a body where gravity is the only force acting upon it. In the context of general relativity, where gravitation is reduced to a space-time curvature, a body in free fall has no force acting on it. The present article only concerns itself with free fall in the Newtonian domain. An object in the technical sense of free fall may not necessarily be falling down in the usual sense of the term. An object moving upwards would not normally be considered to be falling, but if it is subject only to the force of gravity, it is said to be in free fall; the Moon is thus in free fall. In common usage, the term free fall is used more loosely than in the strict sense defined above: thus, falling through an atmosphere without a parachute or lifting device is also often referred to as free fall. The ancient Greek philosopher Aristotle discussed falling objects in Physics, which was perhaps the first book on mechanics. The Italian scientist Galileo Galilei subjected the Aristotelian theories to experimentation and careful observation, and then combined the results of those experiments with mathematical analysis in an unprecedented way. According to a tale that may be apocryphal, in 1589–92 Galileo dropped two objects of unequal mass from the Leaning Tower of Pisa. Given the speed at which such a fall would occur, it is doubtful that Galileo could have extracted much information from this experiment. Most of his observations of falling bodies were really of bodies rolling down ramps; this slowed things down enough to the point where he was able to measure the time intervals with water clocks and his own pulse. This he repeated a full hundred times, until he had achieved an accuracy such that the deviation between two observations never exceeded one-tenth of a pulse beat. In 1589–92, Galileo wrote De Motu Antiquiora, an unpublished manuscript on the motion of falling bodies. Examples of objects in free fall include: a spacecraft with propulsion off, an object dropped at the top of a drop tube, and an object thrown upward or a person jumping off the ground at low speed.
Technically, an object is in free fall even when moving upwards or instantaneously at rest at the top of its motion: if gravity is the only influence acting, then the acceleration is always downward and has the same magnitude for all bodies, commonly denoted g. Since all objects fall at the same rate in the absence of other forces, objects and people will experience weightlessness in these situations. Examples of objects not in free fall: flying in an aircraft, where there is also an additional force of lift; standing on the ground, where the gravitational force is counteracted by the normal force from the ground; and descending to the Earth using a parachute, which balances the force of gravity with a drag force
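A short sketch (illustrative, with g taken as 9.81 m/s²) shows that a body thrown upward is in free fall throughout its flight: its velocity passes through zero at the top of the arc, yet the downward acceleration g never changes:

```python
G_ACCEL = 9.81  # m/s^2, magnitude of gravitational acceleration near Earth's surface

def velocity(v0, t, g=G_ACCEL):
    """Velocity at time t for a body in free fall (upward positive): v = v0 - g*t."""
    return v0 - g * t

def height(v0, t, g=G_ACCEL):
    """Height above the launch point: y = v0*t - (1/2)*g*t^2."""
    return v0 * t - 0.5 * g * t ** 2

# Thrown upward at 9.81 m/s: momentarily at rest after 1 s, yet still accelerating.
print(velocity(9.81, 1.0))  # 0.0 m/s at the top of the motion
print(height(9.81, 1.0))    # 4.905 m, the peak height
```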
13.
Gravitational constant
–
Its measured value is 6.67408×10−11 m3⋅kg−1⋅s−2. The constant of proportionality, G, in Newton's law of universal gravitation is the gravitational constant. Colloquially, the gravitational constant is also called Big G, for disambiguation with small g, which is the local gravitational field of Earth; the two quantities are related by g = GME/rE2, where ME is the mass and rE the radius of the Earth. In the general theory of relativity, G appears in the Einstein field equations, Rμν − (1/2)Rgμν = (8πG/c4)Tμν, and the scaled gravitational constant κ = 8πG/c4 ≈ 2.071×10−43 s2·m−1·kg−1 is also known as Einstein's constant. The gravitational constant is a constant that is difficult to measure with high accuracy, because the gravitational force is extremely weak compared with the other fundamental forces. In SI units, the 2014 CODATA-recommended value of the constant is the value given above. In cgs units, G can be written as G ≈ 6.674×10−8 cm3·g−1·s−2, and in Planck units G has the numerical value of 1. In astrophysics, it is convenient to measure distances in parsecs, velocities in kilometres per second and masses in solar masses; in these units, the gravitational constant is G ≈ 4.302×10−3 pc·M⊙−1·(km/s)2. In orbital mechanics, the period P of an object in a circular orbit around a spherical object obeys GM = 3πV/P2, where V is the volume inside the radius of the orbit. It follows that P2 = (3π/G)(V/M) ≈ 10.896 hr2·g·cm−3·(V/M). This way of expressing G shows the relationship between the average density of a planet and the period of a satellite orbiting just above its surface. Cavendish measured G implicitly, using a torsion balance invented by the geologist Rev. John Michell: he used a horizontal torsion beam with lead balls whose inertia he could tell by timing the beam's oscillation, and their faint attraction to other balls placed alongside the beam was detectable by the deflection it caused. Cavendish's aim was not actually to measure the gravitational constant, but rather to measure Earth's density relative to water, through precise knowledge of the gravitational interaction. 
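The relation between orbital period and density quoted above is easy to verify numerically. In this sketch (an illustration, using the CODATA value of G quoted in the text), a satellite skimming the surface of a uniform sphere has a period that depends only on the sphere's mean density, not its size:

```python
import math

G = 6.67408e-11  # m^3 kg^-1 s^-2, the 2014 CODATA value quoted above

def skim_orbit_period(density_kg_m3):
    """Period of a circular orbit just above the surface of a uniform sphere.

    From GM = 3*pi*V/P^2 with M = rho*V: P = sqrt(3*pi / (G*rho)),
    independent of the sphere's radius.
    """
    return math.sqrt(3 * math.pi / (G * density_kg_m3))

# Earth's mean density (~5515 kg/m^3) gives the familiar ~84-minute skim orbit.
print(skim_orbit_period(5515) / 60)  # about 84 minutes

# The constant 3*pi/G expressed in hr^2 * g/cm^3, matching the ~10.896 in the text.
print(3 * math.pi / G / 3600**2 / 1000)
```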
In modern units, the density that Cavendish calculated implied a value for G of 6.754×10−11 m3·kg−1·s−2. The accuracy of the measured value of G has increased only modestly since the original Cavendish experiment; G is quite difficult to measure because gravity is much weaker than the other fundamental forces. Published values of G have varied rather broadly, and some recent high-precision measurements are, in fact, mutually exclusive. This led to the 2010 CODATA value by NIST having a 20% larger uncertainty than in 2006; for the 2014 update, CODATA reduced the uncertainty to less than half the 2010 value
14.
Weight
–
In science and engineering, the weight of an object is usually taken to be the force on the object due to gravity. Weight is a vector whose magnitude, often denoted by an italic letter W, is the product of the mass m of the object and the magnitude g of the local gravitational acceleration: W = mg. The unit of measurement for weight is that of force, which in the International System of Units is the newton. For example, an object with a mass of one kilogram has a weight of about 9.8 newtons on the surface of the Earth. In this sense of weight, a body can be weightless only if it is far away from any other mass. Although weight and mass are scientifically distinct quantities, the terms are often confused with each other in everyday use. There is also a tradition within Newtonian physics and engineering which sees weight as that which is measured when one uses scales: there the weight is a measure of the magnitude of the reaction force exerted on a body. Typically, in measuring an object's weight, the object is placed on scales at rest with respect to the Earth; thus, in a state of free fall, the weight would be zero. In this second sense of weight, terrestrial objects can be weightless: ignoring air resistance, the famous apple falling from the tree, on its way to meet the ground near Isaac Newton, is weightless. Further complications in elucidating the various concepts of weight have to do with the theory of relativity, according to which gravity is modelled as a consequence of the curvature of spacetime. In the teaching community, a debate has existed for over half a century on how to define weight for students; the current situation is that multiple sets of concepts co-exist. Discussion of the concepts of heaviness and lightness dates back to the ancient Greek philosophers, who typically viewed these as inherent properties of objects. Plato described weight as the tendency of objects to seek their kin. To Aristotle, weight and levity represented the tendency to restore the natural order of the basic elements: air, earth, fire and water. 
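The gravitational definition of weight is a one-line computation. A minimal sketch (illustrative values; g ≈ 9.8 m/s² at Earth's surface, and the lunar value of about 1.62 m/s² is used for comparison):

```python
EARTH_G = 9.8  # m/s^2, approximate gravitational acceleration at Earth's surface

def weight(mass_kg, g=EARTH_G):
    """Weight as the force due to gravity: W = m*g, in newtons."""
    return mass_kg * g

print(weight(1.0))        # about 9.8 N, the one-kilogram example from the text
print(weight(1.0, 1.62))  # the same mass weighs only ~1.6 N under lunar gravity
```

The second call illustrates the mass/weight distinction made above: the mass is unchanged, while the weight depends on the local value of g.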
He ascribed absolute weight to earth and absolute levity to fire. Archimedes saw weight as a quality opposed to buoyancy, with the conflict between the two determining if an object sinks or floats. The first operational definition of weight was given by Euclid, who defined weight as the heaviness or lightness of one thing, compared to another, as measured by a balance; operational balances had, however, been around much longer. According to Aristotle, weight was the cause of the falling motion of an object
15.
Newton's laws of motion
–
Newton's laws of motion are three physical laws that, together, laid the foundation for classical mechanics. They describe the relationship between a body and the forces acting upon it, and its motion in response to those forces. More precisely, the first law defines the force qualitatively, the second law offers a quantitative measure of the force, and the third asserts that a single isolated force does not exist. These three laws have been expressed in several different ways over nearly three centuries, and can be summarised as follows. The three laws of motion were first compiled by Isaac Newton in his Philosophiæ Naturalis Principia Mathematica, and Newton used them to explain and investigate the motion of many physical objects and systems. For example, in the third volume of the text, Newton showed that these laws of motion, combined with his law of universal gravitation, explained Kepler's laws of planetary motion. Newton's laws are applied to objects which are idealised as single point masses, in the sense that the size and shape of the object's body are neglected to focus on its motion more easily. This can be done when the object is small compared to the distances involved in its analysis, or when the deformation and rotation of the body are of no importance. In this way, even a planet can be idealised as a particle for analysis of its orbital motion around a star. In their original form, Newton's laws of motion are not adequate to characterise the motion of rigid bodies and deformable bodies. Leonhard Euler in 1750 introduced a generalisation of Newton's laws of motion for rigid bodies called Euler's laws of motion; if a body is represented as an assemblage of discrete particles, each governed by Newton's laws of motion, then Euler's laws can be derived from Newton's laws. Euler's laws can, however, be taken as axioms describing the laws of motion for extended bodies, independently of any particle structure. Newton's laws hold only with respect to a certain set of frames of reference called Newtonian or inertial reference frames. Some authors interpret the first law as defining what an inertial reference frame is; other authors do treat the first law as a corollary of the second. The explicit concept of an inertial frame of reference was not developed until long after Newton's death. 
In the given interpretation, mass, acceleration, momentum, and force are assumed to be externally defined quantities. This is the most common interpretation, but not the only one; one can instead consider the laws to be a definition of these quantities. Newtonian mechanics has been superseded by special relativity, but it is still useful as an approximation when the speeds involved are much slower than the speed of light. The first law states that if the net force (the vector sum of all forces acting on an object) is zero, then the velocity of the object is constant. The first law can be stated mathematically, when the mass is a non-zero constant, as ∑F = 0 ⇔ dv/dt = 0. Consequently: an object that is at rest will stay at rest unless a force acts upon it, and an object that is in motion will not change its velocity unless a force acts upon it. This is known as uniform motion: an object continues to do whatever it happens to be doing unless a force is exerted upon it. If it is at rest, it continues in a state of rest; if an object is moving, it continues to move without turning or changing its speed
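The first law's content, that zero net force implies unchanging velocity, can be seen in a tiny simulation step. This sketch (illustrative, not from the original text) advances a velocity by one Euler step of the second law, a = F_net/m:

```python
def net_force(forces):
    """Vector sum of 2D forces, component by component."""
    return tuple(sum(components) for components in zip(*forces))

def euler_step(velocity, forces, mass, dt):
    """Advance the velocity by one Euler step of Newton's second law, a = F/m."""
    fx, fy = net_force(forces)
    return (velocity[0] + fx / mass * dt, velocity[1] + fy / mass * dt)

# Balanced forces: net force is zero, so the velocity is unchanged (first law).
print(euler_step((3.0, 0.0), [(5.0, 0.0), (-5.0, 0.0)], 2.0, 0.1))  # (3.0, 0.0)

# Unbalanced force: the velocity changes (second law).
print(euler_step((3.0, 0.0), [(5.0, 0.0)], 2.0, 0.1))  # (3.25, 0.0)
```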
16.
Sphere
–
A sphere is a perfectly round geometrical object in three-dimensional space that is the surface of a completely round ball: the set of points that are all at the same distance r from a given point. This distance r is the radius of the ball, and the given point is the center of the mathematical ball. The longest straight line through the ball, connecting two points of the sphere, passes through the center, and its length is thus twice the radius. While outside mathematics the terms sphere and ball are often used interchangeably, in mathematics a distinction is made between the sphere (a two-dimensional surface) and the ball (the three-dimensional shape it encloses). The ball and the sphere share the same radius, diameter and center. The surface area of a sphere is A = 4πr2. At any given radius r, the incremental volume δV equals the product of the surface area at radius r and the thickness δr of a shell: δV ≈ A(r)·δr. The total volume is the summation of all shell volumes: V ≈ ∑ A(r)·δr. In the limit as δr approaches zero this equation becomes V = ∫0r A(r′) dr′. Substituting the known volume V = (4/3)πr3 gives (4/3)πr3 = ∫0r A(r′) dr′. Differentiating both sides of this equation with respect to r yields A as a function of r: 4πr2 = A(r), which is generally abbreviated as A = 4πr2. Alternatively, the area element on the sphere is given in spherical coordinates by dA = r2 sin θ dθ dφ. In Cartesian coordinates, the area element is dS = (r/√(r2 − ∑i≠k xi2)) ∏i≠k dxi, for each k. For more generality, see area element. The total area can thus be obtained by integration: A = ∫02π ∫0π r2 sin θ dθ dφ = 4πr2. In three dimensions, the volume inside a sphere is V = (4/3)πr3, where r is the radius of the sphere. Archimedes first derived this formula, which shows that the volume inside a sphere is 2/3 that of a circumscribed cylinder. In modern mathematics, this formula can be derived using integral calculus: at any given x, the incremental volume δV equals the product of the cross-sectional area of the disk at x and its thickness δx: δV ≈ πy2·δx. The total volume is the summation of all disk volumes: V ≈ ∑ πy2·δx. In the limit as δx approaches zero this equation becomes V = ∫−rr πy2 dx. 
At any given x, a right-angled triangle connects x, y and r to the origin; hence, applying the Pythagorean theorem yields y2 = r2 − x2. Substituting y2 with this function of x gives V = ∫−rr π(r2 − x2) dx, which can now be evaluated as V = π[r2x − x3/3]−rr = π(r3 − r3/3) − π(−r3 + r3/3) = (4/3)πr3
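The disk-method integral above can be checked numerically. A small sketch (illustrative, using a midpoint Riemann sum) confirms that summing π(r2 − x2)·δx over [−r, r] converges to (4/3)πr3:

```python
import math

def sphere_volume_numeric(r, n=100_000):
    """Midpoint Riemann sum of the disk method: V ≈ sum of pi*(r^2 - x^2)*dx."""
    dx = 2.0 * r / n
    total = 0.0
    for i in range(n):
        x = -r + (i + 0.5) * dx          # midpoint of the i-th slice
        total += math.pi * (r * r - x * x) * dx
    return total

r = 2.0
exact = 4.0 / 3.0 * math.pi * r ** 3
print(sphere_volume_numeric(r), exact)  # the two values agree closely
```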
17.
Planetary surface
–
Land is the term given to non-liquid planetary surfaces. The term landing is used to describe the collision of an object with a surface, usually at a velocity at which the object can remain intact. In differentiated bodies, the surface is where the crust meets the planetary boundary layer; anything below this is regarded as being sub-surface or sub-marine. Most bodies more massive than super-Earths, including stars and gas giants as well as smaller gas dwarfs, transition contiguously between phases, including gas, liquid and solid; as such they are regarded as lacking surfaces. Planetary surfaces and surface life are of particular interest to humans as the primary habitat of the species; human space exploration and space colonization therefore focus heavily on them. Humans have only directly explored the surfaces of the Earth and the Moon. The vast distances of space and the complexities of spaceflight make direct exploration of even near-Earth objects dangerous and expensive; as such, all other exploration has been indirect, via space probes. Indirect observations by flyby or orbit currently provide insufficient information to confirm the composition and properties of planetary surfaces; much of what is known comes from techniques such as astronomical spectroscopy. Lander spacecraft have explored the surfaces of the planets Mars and Venus; Mars is the only other planet to have had its surface explored by a mobile surface probe. Titan is the only other object of planetary mass to have been explored by a lander. Landers have explored several smaller bodies, including 433 Eros, 25143 Itokawa and Tempel 1. Surface conditions, temperatures and terrain vary significantly due to a number of factors, including albedo, which is often generated by the surface itself. Measures of surface conditions include surface area, surface gravity and surface temperature; surface stability may be affected by erosion through aeolian processes, hydrology, subduction, volcanism, sedimentation or seismic activity. 
Some surfaces are dynamic while others remain unchanged for millions of years. Distance, gravity, atmospheric conditions and unknown factors make exploration both costly and risky; this necessitates the use of space probes for early exploration of planetary surfaces. Many probes are stationary, have a limited study range and generally survive on extraterrestrial surfaces only for a short period; however, mobile probes have surveyed larger surface areas. The first extraterrestrial planetary surface to be explored was the lunar surface, by Luna 2 in 1959. Venera 7 made the first landing of a probe on another planet, on December 15, 1970. NEAR Shoemaker was the first to soft land on an asteroid, 433 Eros, in February 2001, while Hayabusa was the first to return samples from 25143 Itokawa, on 13 June 2010. Huygens soft landed and returned data from Titan on January 14, 2005. There have been many failed attempts; more recently Fobos-Grunt, a sample return mission aimed at exploring the surface of Phobos, failed
18.
Spheroid
–
A spheroid, or ellipsoid of revolution, is a quadric surface obtained by rotating an ellipse about one of its principal axes; in other words, an ellipsoid with two equal semi-diameters. If the ellipse is rotated about its major axis, the result is a prolate spheroid. If the ellipse is rotated about its minor axis, the result is an oblate spheroid. If the generating ellipse is a circle, the result is a sphere. Because of the combined effects of gravity and rotation, the Earth's shape is not quite a sphere but instead is slightly flattened in the direction of its axis of rotation. For that reason, in cartography the Earth is often approximated by an oblate spheroid instead of a sphere; the current World Geodetic System model uses a spheroid whose radius is 6,378.137 km at the equator and 6,356.752 km at the poles. The semi-major axis a is the equatorial radius of the spheroid, and the semi-minor axis c is the distance from centre to pole along the symmetry axis. There are two cases: c < a, an oblate spheroid, and c > a, a prolate spheroid; the case a = c reduces to a sphere. An oblate spheroid with c < a has surface area Soblate = 2πa2(1 + ((1 − e2)/e) artanh e), where e2 = 1 − c2/a2. The oblate spheroid is generated by rotation about the z-axis of an ellipse with semi-major axis a and semi-minor axis c; therefore e may be identified as the eccentricity of that ellipse. A prolate spheroid with c > a has surface area Sprolate = 2πa2(1 + (c/(ae)) arcsin e), where e2 = 1 − a2/c2. The prolate spheroid is generated by rotation about the z-axis of an ellipse with semi-major axis c and semi-minor axis a, and e is again the eccentricity. These formulas are identical in the sense that the formula for Soblate can be used to calculate the surface area of a prolate spheroid and vice versa; however, e then becomes imaginary and can no longer directly be identified with the eccentricity. Both of these results may be cast into many other forms using standard mathematical identities and relations between parameters of the ellipse. The volume inside a spheroid is (4π/3)a2c ≈ 4.19a2c; if A = 2a is the equatorial diameter and C = 2c is the polar diameter, the volume is (π/6)A2C ≈ 0.523A2C. Both of the principal curvatures of a spheroid are always positive, so every point on a spheroid is elliptic. Eccentricity and flattening are just two of the different parameters used to define an ellipse and its solid-body counterparts. The most common shapes for the density distribution of protons and neutrons in an atomic nucleus are spherical, prolate and oblate spheroidal; deformed nuclear shapes occur as a result of the competition between electromagnetic repulsion between protons, surface tension and quantum shell effects. An extreme example of an oblate planet in science fiction is Mesklin, in Hal Clement's novel Mission of Gravity. The prolate spheroid is the shape of the ball in several sports. Several moons of the Solar System approximate prolate spheroids in shape, though they are actually triaxial ellipsoids
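The closed-form expressions above translate directly into code (an illustrative sketch; `artanh` is `math.atanh` in Python). As a sanity check, the WGS spheroid radii quoted above give a surface area close to the familiar value of about 5.1×108 km2 for the Earth:

```python
import math

def spheroid_surface(a, c):
    """Surface area of a spheroid with equatorial radius a and polar radius c."""
    if math.isclose(a, c):
        return 4.0 * math.pi * a * a                      # sphere limit
    if c < a:                                             # oblate case
        e = math.sqrt(1.0 - (c / a) ** 2)
        return 2.0 * math.pi * a * a * (1.0 + (1.0 - e * e) / e * math.atanh(e))
    e = math.sqrt(1.0 - (a / c) ** 2)                     # prolate case
    return 2.0 * math.pi * a * a * (1.0 + c / (a * e) * math.asin(e))

def spheroid_volume(a, c):
    """Volume inside a spheroid: (4*pi/3)*a^2*c, roughly 4.19*a^2*c."""
    return 4.0 * math.pi / 3.0 * a * a * c

# Earth as an oblate spheroid, radii in km as quoted above.
print(spheroid_surface(6378.137, 6356.752))  # ~5.1e8 km^2
print(spheroid_volume(6378.137, 6356.752))   # ~1.08e12 km^3
```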
19.
Net force
–
In physics, net force is the overall force acting on an object. In order to calculate the net force, the body is isolated and the forces acting on it are identified. It is always possible to determine the torque associated with a point of application of a net force so that it maintains the movement of the object under the original system of forces. With its associated torque, the net force becomes the resultant force and has the same effect on the rotational motion of the object as all actual forces taken together. It is possible for a system of forces to define a torque-free resultant force; in this case, the net force, when applied at the proper line of action, has the same effect on the body as all of the forces at their points of application. It is not always possible to find a torque-free resultant force. The sum of forces acting on a particle is called the total force or the net force. The net force is a single force that replaces the effect of the original forces on the particle's motion: it gives the particle the same acceleration as all the actual forces together, as described by Newton's second law of motion. Force is a vector quantity, which means that it has a magnitude and a direction. Graphically, a force is represented as a line segment from its point of application A to a point B which defines its direction; the length of the segment AB represents the magnitude of the force. Vector calculus was developed in the late 1800s and early 1900s; the parallelogram rule used for the addition of forces, however, dates from antiquity and is noted explicitly by Galileo and Newton. The diagram shows the addition of the forces F→1 and F→2: the sum F→ of the two forces is drawn as the diagonal of a parallelogram defined by the two forces. Forces applied to an extended body can have different points of application. Forces are bound vectors and can be added directly only if they are applied at the same point. The net force on a body, applied at a single point with the appropriate torque, is known as the resultant force. 
A force is known as a bound vector, which means it has a direction and magnitude and a point of application. A convenient way to define a force is by a directed segment from a point A to a point B. If we denote the coordinates of these points as A = (Ax, Ay, Az) and B = (Bx, By, Bz), then the force vector applied at A is F = B − A. The length of the vector B − A defines the magnitude of F and is given by |F| = √((Bx − Ax)2 + (By − Ay)2 + (Bz − Az)2). The sum of two forces F1 and F2 applied at A can be computed from the sum of the segments that define them
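The coordinate description above translates directly into code. A minimal sketch (illustrative, representing 3D vectors as tuples):

```python
import math

def force_from_segment(A, B):
    """Force vector defined by the directed segment from A to B: F = B - A."""
    return tuple(b - a for a, b in zip(A, B))

def magnitude(F):
    """|F| = sqrt(Fx^2 + Fy^2 + Fz^2)."""
    return math.sqrt(sum(c * c for c in F))

def add_forces(F1, F2):
    """Parallelogram rule: forces applied at the same point add component-wise."""
    return tuple(f1 + f2 for f1, f2 in zip(F1, F2))

F = force_from_segment((0.0, 0.0, 0.0), (3.0, 4.0, 0.0))
print(magnitude(F))                     # 5.0
print(add_forces(F, (1.0, -4.0, 0.0)))  # (4.0, 0.0, 0.0)
```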
20.
Plumb bob
–
A plumb bob, or plummet, is a weight, usually with a pointed tip on the bottom, suspended from a string and used as a vertical reference line, or plumb-line; it is essentially the vertical equivalent of a water level. The instrument has been used since at least the time of ancient Egypt to ensure that constructions are plumb. It is also used in surveying, to establish the nadir with respect to gravity of a point in space, and with a variety of instruments to set the instrument exactly over a survey marker or to transcribe positions onto the ground for placing a marker. The plumb in plumb-bob comes from the fact that such tools were originally made of lead (Latin: plumbum); the adjective plumb developed by extension, as did the noun aplomb. Until the modern age, plumb-bobs were used on most tall structures to provide vertical datum lines for the building measurements. A section of the scaffolding would hold a plumb line, which was centered over a datum mark on the floor; as the building proceeded upward, the line would also be taken higher. Many cathedral spires, domes and towers still have brass datum marks inlaid into their floors, which signify the center of the structure above. A plumb-bob and line alone can determine only a vertical, although they can measure angles if they are mounted on a suitable scale. The early skyscrapers used heavy plumb-bobs, hung on wire in their elevator shafts. A plumb bob may be immersed in a container of water, molasses, very viscous oils or other liquids to dampen any swinging movement, functioning as a shock absorber. Students of figure drawing will also make use of a plumb line to find the vertical axis through the center of gravity of their subject. The device used may be a purpose-made plumb line, or simply a makeshift device made from a piece of string and a weight; this plumb line is important for lining up anatomical geometries and visualizing the subject's center of balance. 
See also: bob, centre of mass (used to find the centre of mass of a 2D shape of uniform density), chalk line, vertical direction
21.
Arctic Ocean
–
The Arctic Ocean is the smallest and shallowest of the world's five major oceans. Alternatively, the Arctic Ocean can be seen as the northernmost part of the all-encompassing World Ocean. Located mostly in the Arctic north polar region in the middle of the Northern Hemisphere, the Arctic Ocean is almost completely surrounded by Eurasia and North America. It is partly covered by sea ice throughout the year and almost completely in winter; the summer shrinking of the ice has been quoted at 50%. The US National Snow and Ice Data Center uses satellite data to provide a record of Arctic sea ice cover. The Arctic may become ice free for the first time in human history within a few years or by 2040. For much of European history, the north polar regions remained largely unexplored and their geography conjectural. The Greek explorer Pytheas was probably describing loose sea ice known today as growlers or bergy bits, and his Thule was probably Norway. Early cartographers were unsure whether to draw the region around the North Pole as land or water; the makers of navigational charts, more conservative than some of the more fanciful cartographers, tended to leave the region blank. This lack of knowledge of what lay north of the shifting barrier of ice gave rise to a number of conjectures. In England and other European nations, the myth of an Open Polar Sea was persistent; John Barrow, longtime Second Secretary of the British Admiralty, promoted exploration of the region from 1818 to 1845 in search of it. In the United States in the 1850s and 1860s, explorers such as Elisha Kane claimed to have seen part of this elusive body of water. Even quite late in the century, the eminent authority Matthew Fontaine Maury included a description of the Open Polar Sea in his textbook The Physical Geography of the Sea. Nevertheless, as all the explorers who travelled closer and closer to the pole reported, the polar ice cap is quite thick. Fridtjof Nansen was the first to make a nautical crossing of the Arctic Ocean. 
The first surface crossing of the ocean was led by Wally Herbert in 1969, in a dog sled expedition from Alaska to Svalbard, with air support; the first nautical transit of the pole was made in 1958 by the submarine USS Nautilus. Since 1937, Soviet and Russian manned drifting ice stations have extensively monitored the Arctic Ocean; scientific settlements were established on the drift ice and carried thousands of kilometres by ice floes. In World War II, the European region of the Arctic Ocean was heavily contested. The Arctic Ocean occupies a roughly circular basin and covers an area of about 14,056,000 km2, almost the size of Antarctica. The coastline is 45,390 km long, and the ocean is surrounded by the land masses of Eurasia, North America and Greenland, and by several islands. It is connected to the Pacific Ocean by the Bering Strait and to the Atlantic Ocean through the Greenland Sea. Countries bordering the Arctic Ocean are Russia, Norway, Iceland, Greenland, Canada and the United States. There are several ports and harbours around the Arctic Ocean, including in Alaska; in Canada, ships may anchor at Churchill in Manitoba, Nanisivik in Nunavut, and Tuktoyaktuk or Inuvik in the Northwest Territories
22.
Kuala Lumpur
–
Kuala Lumpur, officially the Federal Territory of Kuala Lumpur, and more commonly called KL, is the national capital of Malaysia as well as its largest city. Rated as an Alpha world city, Kuala Lumpur is a global city in Malaysia which covers an area of 243 km2 and has an estimated population of 1.73 million as of 2016. Greater Kuala Lumpur, also known as the Klang Valley, is an urban agglomeration of 7.25 million people as of 2017, and is among the fastest growing regions in South-East Asia in terms of population. Kuala Lumpur is the seat of the Parliament of Malaysia. The city was once home to the executive and judicial branches of the federal government, but they were moved to Putrajaya in early 1999; some sections of the judiciary still remain in the city of Kuala Lumpur. The official residence of the Malaysian King, the Istana Negara, is situated in Kuala Lumpur. Kuala Lumpur is the cultural, financial and economic centre of Malaysia due to its position as the capital as well as being a key city. Kuala Lumpur is one of three Federal Territories of Malaysia, enclaved within the state of Selangor, on the central west coast of Peninsular Malaysia. Since the 1990s, the city has played host to international sporting, political and cultural events, including the 1998 Commonwealth Games. Kuala Lumpur has undergone rapid development in recent decades and is home to the tallest twin buildings in the world, the Petronas Twin Towers, which have become an iconic symbol of Malaysia's futuristic development. Kuala Lumpur means muddy confluence: kuala is the point where two rivers join together, or an estuary, and lumpur means mud. One suggestion is that it was named after Sungai Lumpur; it was recorded in 1824 that Sungei Lumpoor was the most important tin-producing settlement up the Klang River. It has also been proposed that Kuala Lumpur was originally named Pengkalan Lumpur, in the same way that Klang was once called Pengkalan Batu. 
Another suggestion is that the name was initially the Cantonese word lam-pa, meaning flooded jungle or decayed jungle. There is, however, no firm contemporary evidence for these suggestions other than anecdotes; it is also possible that the name is a corrupted form of an earlier, forgotten name. It is unknown who founded or named the settlement called Kuala Lumpur. Kuala Lumpur was originally a small hamlet of just a few houses and shops at the confluence of Sungai Gombak and Sungai Klang before it grew into a town. Tin miners landed at Kuala Lumpur and continued their journey on foot to Ampang, where the first mine was opened
23.
Mexico City
–
Mexico City, or City of Mexico, is the capital and most populous city of Mexico. As an alpha global city, Mexico City is one of the most important financial centers in the Americas. It is located in the Valley of Mexico, a large valley in the high plateaus at the center of Mexico, at an altitude of 2,240 metres. The city consists of sixteen municipalities. The 2009 estimated population for the city proper was approximately 8.84 million people, with a land area of 1,485 square kilometres. Greater Mexico City had a gross domestic product of US$411 billion in 2011. The city was responsible for generating 15.8% of Mexico's gross domestic product; as a stand-alone country, in 2013, Mexico City would have been the fifth-largest economy in Latin America, five times as large as Costa Rica's and about the same size as Peru's. Mexico's capital is both the oldest capital city in the Americas and one of two founded by Amerindians, the other being Quito. In 1524, the municipality of Mexico City was established, known as México Tenochtitlán, and Mexico City served as the political, administrative and financial center of a major part of the Spanish colonial empire. After independence from Spain was achieved, the Federal District was created in 1824. Since residents gained the right to elect a local government in 1997, the left-wing Party of the Democratic Revolution has controlled both the headship of government and the legislature. In recent years, the local government has passed a wave of liberal policies, such as abortion on request, a limited form of euthanasia, no-fault divorce, and same-sex marriage. On January 29, 2016, it ceased to be called the Federal District and is now in transition to become the country's 32nd federal entity, giving it a level of autonomy comparable to that of a state. Because of a clause in the Mexican Constitution, however, as the seat of the powers of the federation, it can never become a state. The city of Mexico-Tenochtitlan was founded by the Mexica people in 1325. 
According to legend, the Mexicas' principal god, Huitzilopochtli, indicated the site where they were to build their home by presenting an eagle perched on a cactus with a snake in its beak. Between 1325 and 1521, Tenochtitlan grew in size and strength, eventually dominating the other city-states around Lake Texcoco; by the time the Spaniards arrived, the Aztec Empire had reached much of Mesoamerica, touching both the Gulf of Mexico and the Pacific Ocean. After landing in Veracruz, Spanish explorer Hernán Cortés advanced upon Tenochtitlan with the aid of many of the native peoples. Cortés put Moctezuma under house arrest, hoping to rule through him, but was eventually driven from the city. The Aztecs thought the Spaniards were permanently gone, and they elected a new king, Cuitláhuac, but he soon died; the next king was Cuauhtémoc. Cortés began a siege of Tenochtitlan in May 1521; for three months, the city suffered from the lack of food and water as well as the spread of smallpox brought by the Europeans. Cortés and his allies landed their forces in the south of the island, and the Spaniards practically razed Tenochtitlan during the final siege of the conquest. Cortés first settled in Coyoacán, but decided to rebuild the Aztec site to erase all traces of the old order. He did not establish a territory under his own personal rule, but remained loyal to the Spanish crown
24.
Singapore
–
Singapore, officially the Republic of Singapore, and sometimes referred to as the Lion City or the Little Red Dot, is a sovereign city-state in Southeast Asia. It lies one degree north of the equator, at the southern tip of peninsular Malaysia. Singapore's territory consists of one main island along with 62 other islets; since independence, extensive land reclamation has increased its total size by 23%. During the Second World War, Singapore was occupied by Japan. After early years of turbulence, and despite lacking natural resources and a hinterland, the nation developed rapidly as an Asian Tiger economy, based on external trade and its workforce. Singapore is a global commerce, finance and transport hub; the country has also been identified as a tax haven. Singapore ranks 5th internationally and first in Asia on the UN Human Development Index, and is ranked highly in education, healthcare, life expectancy, quality of life, personal safety and housing, but does not fare well on the Democracy Index. Although income inequality is high, 90% of homes are owner-occupied. 38% of Singapore's 5.6 million residents are permanent residents and other foreign nationals. There are four official languages on the island: English, Malay, Mandarin and Tamil; English is its common language, and most Singaporeans are bilingual. Singapore is a multiparty parliamentary republic, with a Westminster system of unicameral parliamentary government; the People's Action Party has won every election since self-government in 1959. Despite the Lion City name, it is unlikely that lions ever lived on the island; Sang Nila Utama, the Srivijayan prince said to have founded and named the island Singapura, perhaps saw a Malayan tiger. There are, however, other suggestions for the origin of the name; the central island has also been called Pulau Ujong as far back as the third century CE, literally island at the end in Malay. 
In 1299, according to the Malay Annals, the Kingdom of Singapura was founded on the island by Sang Nila Utama. These Indianized kingdoms, a term coined by George Cœdès, were characterized by surprising resilience, political integrity and administrative stability. In 1613, Portuguese raiders burned down the settlement, which by then was part of the Johor Sultanate, and the wider maritime region and much of its trade was under Dutch control for the following period. In 1824 the entire island became a British possession after a further treaty with the Sultan, as well as the Temenggong. In 1826, Singapore became part of the Straits Settlements, under the jurisdiction of British India. Prior to Raffles' arrival, there were only about a thousand people living on the island, mostly indigenous Malays along with a handful of Chinese; by 1860 the population had swelled to over 80,000. Many of these early immigrants came to work on the pepper and gambier plantations
25.
Oslo
–
Oslo is the capital and most populous city of Norway. It constitutes both a county and a municipality. Founded in the year 1040, and established as a kaupstad or trading place in 1048 by Harald Hardrada, the city was elevated to a bishopric in 1070 and became the capital under Haakon V of Norway around 1300. Norway entered personal unions with Denmark from 1397 to 1523 and again from 1536 to 1814. After being destroyed by a fire in 1624, the city was moved closer to Akershus Fortress during the reign of Christian IV of Denmark and renamed Christiania in his honour. It was established as a municipality on 1 January 1838. Following a spelling reform, it was known as Kristiania from 1877 to 1925, at which time its original Norwegian name was restored. Oslo is the economic and governmental centre of Norway; the city is also a hub of Norwegian trade, banking, industry and shipping. It is an important centre for maritime industries and maritime trade in Europe. The city is home to many companies within the maritime sector, some of which are among the world's largest shipping companies and shipbrokers. Oslo is a pilot city of the Council of Europe and the European Commission intercultural cities programme. Oslo is considered a global city, ranked Beta World City in studies carried out by the Globalization and World Cities Study Group. It was ranked number one in terms of quality of life among large European cities in the European Cities of the Future 2012 report by fDi magazine. A survey conducted by ECA International in 2011 placed Oslo as the second most expensive city in the world for living expenses, after Tokyo. In 2013 Oslo tied with the Australian city of Melbourne as the fourth most expensive city in the world. As of 1 January 2016, the municipality of Oslo had a population of 658,390, while the population of the city's urban area was 942,084. The metropolitan area had a population of 1.71 million.
During the early 2000s the population was increasing at record rates. This growth stems for the most part from international immigration and related high birth rates, but also from intra-national migration. The immigrant population in the city is growing faster than the Norwegian population. As of 1 January 2016, the municipality of Oslo had a population of 658,390. The urban area extends beyond the boundaries of the municipality into the surrounding county of Akershus; the total population of this agglomeration is 942,084. To the north and east, wide forested hills rise above the city, giving the location the shape of a giant amphitheatre. The urban municipality of Oslo and the county of Oslo are two parts of the same entity, making Oslo the only city in Norway where two administrative levels are integrated.
26.
Helsinki
–
Helsinki is the capital and largest city of Finland. It is in the region of Uusimaa, in southern Finland, on the shore of the Gulf of Finland. Helsinki has a population of 629,512 and an urban area population of 1,231,595. Helsinki is located some 80 kilometres north of Tallinn, Estonia, and 400 km east of Stockholm, Sweden; Helsinki has close historical connections with these cities. The Helsinki metropolitan area includes the urban core of Helsinki, Espoo, Vantaa and Kauniainen. It is the world's northernmost metropolitan area of over one million people. The Helsinki metropolitan area is the fourth largest metropolitan area in the Nordic countries. Helsinki is Finland's major political, educational, financial, cultural, and research center, as well as one of northern Europe's major cities. Approximately 75% of foreign companies operating in Finland have settled in the Helsinki region. The nearby municipality of Vantaa is the location of Helsinki Airport, with frequent service to various destinations in Europe and Asia. In 2009, Helsinki was chosen to be the World Design Capital for 2012 by the International Council of Societies of Industrial Design. The city was the venue for the 1952 Summer Olympics and the 52nd Eurovision Song Contest in 2007. In 2011, Monocle magazine ranked Helsinki the most liveable city in the world in its Liveable Cities Index 2011, and in the Economist Intelligence Unit's August 2015 Liveability survey, assessing the best and worst cities to live in globally, Helsinki placed among the world's top ten cities. Helsinki is used to refer to the city in most languages; the Swedish name Helsingfors is the original official name of the city. The Finnish name probably comes from Helsinga and similar names used for the river now known as the Vantaa River. Helsingfors comes from the name of the surrounding parish, Helsinge, and the rapids which flowed through the original village.
Helsinki was part of the Grand Duchy of Finland in the Russian Empire. One suggestion for the origin of the name Helsinge is that it originated with medieval Swedish settlers who came from Hälsingland in Sweden. Others have proposed that the name derives from the Swedish word helsing. Other Scandinavian cities at similar geographic locations were given similar names at the time, for example Helsingør and Helsingborg. The name Helsinki has been used in Finnish official documents and in Finnish-language newspapers since 1819, when the decrees issued in Helsinki were dated with Helsinki as the place of issue. This is how the form Helsinki came to be used in written Finnish. In Helsinki slang the city is called Stadi; the nickname Hesa is not used by natives of the city. Helsset is the Northern Sami name of Helsinki.
27.
Centrifugal force
–
In Newtonian mechanics, the centrifugal force is an inertial force directed away from the axis of rotation that appears to act on all objects when viewed in a rotating reference frame, i.e. when they are analyzed in a rotating coordinate system. The term has also been used for the reactive centrifugal force, a reaction to a centripetal force. The centrifugal force is an outward force apparent in a rotating reference frame. All measurements of position and velocity must be made relative to some frame of reference. An inertial frame of reference is one that is not accelerating; the use of an inertial frame of reference, which will be the case for all elementary calculations, is often not explicitly stated but may generally be assumed unless stated otherwise. In terms of an inertial frame of reference, the centrifugal force does not exist; all calculations can be performed using only Newton's laws of motion. In its current usage the term centrifugal force has no meaning in an inertial frame. In an inertial frame, an object that has no force acting on it travels in a straight line. When measurements are made with respect to a rotating reference frame, however, the same motion appears curved, and if it is desired to apply Newton's laws in the rotating frame, it is necessary to introduce new, fictitious forces. Consider a stone being whirled round on a string, in a horizontal plane. The only real force acting on the stone in the horizontal plane is the tension in the string. There are no other forces acting on the stone, so there is a net force on the stone in the horizontal plane. In an inertial frame of reference, were it not for this net force acting on the stone, the stone would travel in a straight line, according to Newton's first law of motion. In order to keep the stone moving in a circular path, this force, known as the centripetal force, must be continuously applied to the stone. As soon as it is removed, the stone moves in a straight line. In this inertial frame, the concept of centrifugal force is not required, as all motion can be properly described using only real forces and Newton's laws of motion.
In a frame of reference rotating with the stone around the same axis, the stone appears stationary. However, the tension in the string is still acting on the stone. If Newton's laws were applied in their usual form, the stone would accelerate in the direction of the net applied force, towards the axis of rotation, which it does not do. The centrifugal force, a fictitious force directed away from the axis, is introduced to account for this: with this new force, the net force on the stone is zero. With the addition of this extra inertial or fictitious force, Newton's laws can be applied in the rotating frame as if it were an inertial frame.
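The whirling-stone comparison above can be made concrete with a small sketch (illustrative values; the mass, radius and angular velocity are arbitrary): in the inertial frame the net force is the string tension, producing centripetal acceleration ω²r; in the rotating frame the fictitious centrifugal force mω²r exactly balances the tension.

```python
import math

def stone_on_string(m, r, omega):
    """Forces on a stone whirled on a string of length r at angular velocity omega.

    Returns (net force in the inertial frame, net force in the rotating frame).
    """
    tension = m * omega**2 * r   # real centripetal force, directed toward the axis
    # Inertial frame: the tension is the net force; it bends the path into a circle.
    net_inertial = tension
    # Rotating frame: the stone is at rest, so the fictitious centrifugal force
    # m * omega**2 * r (directed away from the axis) must balance the tension.
    centrifugal = m * omega**2 * r
    net_rotating = tension - centrifugal
    return net_inertial, net_rotating

inertial, rotating = stone_on_string(m=0.5, r=1.0, omega=2 * math.pi)  # one revolution/s
print(inertial)   # net force in the inertial frame, N
print(rotating)   # zero: Newton's laws hold in the rotating frame with the extra force
```

The zero in the rotating frame is the whole point: the centrifugal term is not a new physical interaction, only the bookkeeping needed to keep Newton's laws usable there.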
28.
Equatorial bulge
–
An equatorial bulge is a difference between the equatorial and polar diameters of a planet, due to the centrifugal force exerted by its rotation. A rotating body tends to form an oblate spheroid rather than a sphere. The Earth has an equatorial bulge of 42.77 km: its diameter measured across the equatorial plane is 42.77 km more than that measured between the poles. An observer standing at sea level on either pole, therefore, is 21.36 km closer to Earth's centrepoint than if standing at sea level on the equator. The value of Earth's radius may be approximated by the average of these radii. An often-cited result of Earth's equatorial bulge is that the highest point on Earth, measured from the center outwards, is the peak of Mount Chimborazo in Ecuador, rather than Mount Everest; but since the ocean, like the Earth and the atmosphere, also bulges, Chimborazo is not as high above sea level as Everest is. Viewing the globe as a series of rotating discs, the radius R toward the poles gets very small, and thus a smaller centripetal force is produced for the same rotational velocity. Moving towards the equator, v² increases much faster than R. In addition, because the Earth's dense core is included in the cross-sectional disc at the equator, it contributes more to the mass of the disc. Similarly, there is a bulge in the envelope of the oceans surrounding Earth: sea level at the equator is 21.36 km farther from the Earth's centre than sea level at the poles. Gravity tends to contract a celestial body into a sphere, the shape for which all the mass is as close to the center of gravity as possible. To get a feel for the type of equilibrium that is involved, imagine someone seated in a swivel chair with weights in their hands. If the person in the chair pulls the weights towards them, they are doing work and their rotation rate increases. The increase of rotation rate is so strong that at the faster rotation rate the required centripetal force is larger than with the starting rotation rate.
Something analogous to this occurs in planet formation. As long as the proto-planet is still too oblate to be in equilibrium, the release of gravitational potential energy on contraction keeps driving the increase in rotational kinetic energy. As the contraction proceeds, the rotation rate keeps going up; hence the force required for further contraction keeps going up. There is a point where the increase of rotational energy on further contraction would be larger than the release of gravitational potential energy. The contraction process can only proceed up to that point, so it halts there. When the equilibrium state has been reached, large-scale conversion of energy to heat ceases; in that sense the equilibrium state is the lowest state of energy that can be reached. The Earth's rotation rate is still slowing down, though gradually. Estimates of how fast the Earth was rotating in the past vary, because it is not known exactly how the Moon was formed. Estimates of the Earth's rotation 500 million years ago are around 20 modern hours per day. The Earth's rate of rotation is slowing down mainly because of tidal interactions with the Moon and the Sun.
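The scale of the rotational effect behind the bulge can be checked with a quick back-of-envelope calculation (the angular velocity, equatorial radius and standard gravity below are conventional reference values): the centripetal acceleration ω²R at the equator is only a fraction of a percent of surface gravity, which is why the bulge, while large in absolute terms, is small relative to Earth's radius.

```python
import math

# Rough comparison of the rotational (centripetal) acceleration at the equator
# with surface gravity, using standard reference figures.
omega = 2 * math.pi / 86164.1   # Earth's angular velocity, rad/s (one sidereal day)
R_eq = 6378137.0                # equatorial radius, m
g = 9.80665                     # standard gravity, m/s^2

a_centripetal = omega**2 * R_eq   # ~0.034 m/s^2
print(a_centripetal)              # acceleration "lost" to rotation at the equator
print(a_centripetal / g)          # a few tenths of a percent of gravity
```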
29.
Geoid
–
The geoid is the shape that the surface of the oceans would take under the influence of Earth's gravity and rotation alone, in the absence of other influences such as winds and tides. This surface is extended through the continents. All points on a geoid surface have the same gravity potential energy. The geoid can be defined at any value of gravitational potential, such as within the Earth's crust or far out in space. It does not correspond to the actual surface of Earth's crust, but to a surface which can only be known through extensive gravitational measurements and calculations. It is often described as the true figure of the Earth. The surface of the geoid is higher than the reference ellipsoid wherever there is a positive gravity anomaly, and lower wherever there is a negative one. The geoid surface is irregular, unlike the reference ellipsoid, which is a mathematically idealized representation of the physical Earth, but it is far smoother than the physical surface: although the physical Earth has excursions of +8,848 m and −429 m, the geoid's deviation from the ellipsoid is only on the order of ±100 m. If the ocean surface were isopycnic and undisturbed by tides, currents, or weather, it would closely approximate the geoid. The permanent deviation between the geoid and mean sea level is called ocean surface topography. If the continental land masses were criss-crossed by a series of tunnels or canals, the sea level in these canals would also very nearly coincide with the geoid. This means that when traveling by ship, one does not notice the undulations of the geoid; the local vertical is always perpendicular to the geoid. Likewise, spirit levels will always be parallel to the geoid. A GPS receiver on a ship may, during the course of a long voyage, indicate height variations, even though the ship will always be at sea level. This is because GPS satellites, orbiting about the center of gravity of the Earth, measure heights relative to a geocentric reference ellipsoid; to obtain one's geoidal height, a raw GPS reading must be corrected. Conversely, height determined by spirit leveling from a tidal measurement station, as in traditional land surveying, yields a height above the geoid.
Modern GPS receivers have a grid implemented inside, from which they obtain the geoid height over the World Geodetic System (WGS) ellipsoid at the current position. They are then able to correct the height above the WGS ellipsoid to the height above the WGS84 geoid. When the height is not zero on a ship, the discrepancy is due to various other factors such as ocean tides, atmospheric pressure and local sea surface topography. The gravitational field of the Earth is neither perfect nor uniform. Even if the Earth were a perfect sphere covered in water, the water level would not be the same everywhere; instead, it would be higher or lower depending on the particular strength of gravity in that location. Spherical harmonics are used to approximate the shape of the geoid. The current best such set of spherical harmonic coefficients is EGM96. The geoid is a particular equipotential surface, and is somewhat involved to compute. The gradient of this potential also provides a model of the gravitational acceleration.
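The correction described above reduces to a single subtraction once the geoid undulation N is known for the location. A minimal sketch (the numeric values are hypothetical; a real receiver would look N up in a gridded model such as EGM96):

```python
def orthometric_height(h_ellipsoidal, geoid_undulation):
    """Height above the geoid (approx. mean sea level) from a raw GPS height.

    h_ellipsoidal    -- height above the WGS84 reference ellipsoid, metres
    geoid_undulation -- geoid height N above the ellipsoid at this location,
                        metres (interpolated from a model such as EGM96)
    """
    return h_ellipsoidal - geoid_undulation

# Hypothetical example: a raw GPS reading of 48.3 m at a spot where the geoid
# lies 46.1 m above the ellipsoid gives an elevation of about 2.2 m.
print(orthometric_height(48.3, 46.1))
```

Note that N can be negative (geoid below the ellipsoid), in which case the corrected height is larger than the raw GPS reading.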
30.
Space Shuttle
–
The Space Shuttle was a partially reusable low Earth orbital spacecraft system operated by the U.S. National Aeronautics and Space Administration (NASA), as part of the Space Shuttle program. Its official program name was Space Transportation System (STS), taken from a 1969 plan for a system of reusable spacecraft of which it was the only item funded for development. The first of four orbital test flights occurred in 1981, leading to operational flights beginning in 1982. Five complete Shuttle systems were built and used on a total of 135 missions from 1981 to 2011; the Shuttle fleet's total mission time was 1322 days, 19 hours, 21 minutes and 23 seconds. Shuttle components included the Orbiter Vehicle (OV), a pair of solid rocket boosters (SRBs), and the external tank (ET). The Shuttle was launched vertically, like a rocket, with the two SRBs operating in parallel with the OV's three main engines, which were fueled from the ET. The SRBs were jettisoned before the vehicle reached orbit, and the ET was jettisoned just before orbit insertion. At the conclusion of the mission, the orbiter fired its Orbital Maneuvering System (OMS) to de-orbit and re-enter the atmosphere. The orbiter then glided as a spaceplane to a landing, usually at the Shuttle Landing Facility of the Kennedy Space Center (KSC) or Rogers Dry Lake at Edwards Air Force Base. After landing at Edwards, the orbiter was flown back to the KSC on the Shuttle Carrier Aircraft. The first orbiter, Enterprise, was built in 1976 for use in Approach and Landing Tests. Four fully operational orbiters were initially built: Columbia, Challenger, Discovery, and Atlantis. Of these, two were lost in accidents, Challenger in 1986 and Columbia in 2003, with a total of fourteen astronauts killed. A fifth operational orbiter, Endeavour, was built in 1991 to replace Challenger. The Space Shuttle was retired from service upon the conclusion of Atlantis's final flight on July 21, 2011. Nixon's post-Apollo NASA budgeting withdrew support of all system components except the Shuttle.
The vehicle consisted of a spaceplane orbiter for orbit and re-entry, fueled by liquid hydrogen and liquid oxygen tanks. The first of four orbital test flights occurred in 1981, leading to operational flights beginning in 1982, all launched from the Kennedy Space Center, Florida. The system was retired from service in 2011 after 135 missions; the program ended after Atlantis landed at the Kennedy Space Center on July 21, 2011. Major missions included launching numerous satellites and interplanetary probes and conducting space science experiments. The first orbiter vehicle, named Enterprise, was built for the initial Approach and Landing Tests phase and lacked engines, heat shielding, and other equipment necessary for orbital flight. A total of five operational orbiters were built, and of these, two were destroyed in accidents. The Shuttle was used for orbital space missions by NASA, the US Department of Defense, the European Space Agency, Japan, and Germany. The United States funded Shuttle development and operations except for the Spacelab modules used on D1; SL-J was partially funded by Japan.
31.
Earth radius
–
Earth radius is the distance from the Earth's center to its surface, about 6,371 km. This length is also used as a unit of distance, especially in astronomy and geology. This article deals primarily with spherical and ellipsoidal models of the Earth; see Figure of the Earth for a more complete discussion of the models. The Earth is only approximately spherical, so no single value serves as its natural radius. Distances from points on the surface to the center range from 6,353 km to 6,384 km. Several different ways of modeling the Earth as a sphere each yield a mean radius of 6,371 km. The term can also mean some kind of average of such distances. Aristotle, writing in On the Heavens around 350 BC, reports that the mathematicians guessed the circumference of the Earth to be 400,000 stadia. Due to uncertainty about which stadion variant Aristotle meant, scholars have interpreted Aristotle's figure to be anywhere from highly accurate to almost double the true value. The first known scientific measurement and calculation of the radius of the Earth was performed by Eratosthenes about 240 BC. Estimates of the accuracy of Eratosthenes's measurement range from within 0.5% to within 17%; as with Aristotle's report, uncertainty in the accuracy of his measurement is due to modern uncertainty over which stadion definition he used. Earth's rotation, internal density variations, and external tidal forces cause its shape to deviate systematically from a perfect sphere; local topography increases the variance, resulting in a surface of profound complexity. Our descriptions of the Earth's surface must be simpler than reality in order to be tractable; hence, we create models to approximate characteristics of the Earth's surface, generally relying on the simplest model that suits the need. Each of the models in common use involves some notion of the geometric radius. Strictly speaking, spheres are the only solids to have radii.
In the case of the geoid and ellipsoids, the distance from any point on the model to the specified center is called "a radius of the Earth" or "the radius of the Earth at that point". It is also common to refer to any mean radius of a model as "the radius of the Earth". When considering the Earth's real surface, on the other hand, it is uncommon to refer to a radius; rather, elevation above or below sea level is useful. Regardless of the model, any radius falls between the polar minimum of about 6,357 km and the equatorial maximum of about 6,378 km. Hence, the Earth deviates from a sphere by only a third of a percent. While specific values differ, the concepts in this article generalize to any major planet.
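The mean-radius and "third of a percent" figures quoted above follow directly from the ellipsoid's two defining radii. A short check, using the commonly cited WGS84 values (a sketch, not a full geodetic computation):

```python
# The IUGG mean radius R1 is defined as (2a + b) / 3, where a is the
# equatorial radius and b the polar radius (WGS84 values used here).
a = 6378.137   # equatorial radius, km
b = 6356.752   # polar radius, km

R_mean = (2 * a + b) / 3
print(round(R_mean, 1))           # 6371.0 km, the usual "radius of the Earth"

flattening_pct = (a - b) / a * 100
print(round(flattening_pct, 2))   # ~0.34%: "a third of a percent"
```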
32.
Shell theorem
–
In classical mechanics, the shell theorem gives gravitational simplifications that can be applied to objects inside or outside a spherically symmetrical body. This theorem has particular application to astronomy. Isaac Newton proved the shell theorem, which states that a spherically symmetric body affects external objects gravitationally as though all of its mass were concentrated at a point at its centre, and that if the body is a spherically symmetric shell, no net force is exerted by the shell on any object inside it. A corollary is that inside a solid sphere of constant density, the gravitational force varies linearly with distance from the centre. This is easy to see: take a point within such a sphere, at distance r from the centre; then you can ignore all the shells of greater radius, according to the shell theorem, and the remaining mass m is proportional to r³, so the force, proportional to m/r², is proportional to r. These results were important to Newton's analysis of planetary motion; they are not immediately obvious. The derivations below focus on gravity, but the results can easily be generalized to the electrostatic force; moreover, the results can be generalized to the case of general ellipsoidal bodies. A solid, spherically symmetric body can be modelled as a number of concentric, infinitesimally thin spherical shells. If one of these shells can be treated as a point mass, then the body as a whole can be as well. Consider one such shell (note: dθ appearing in the diagram refers to the small angle, not the arc length). Applying Newton's universal law of gravitation, the sum of the forces due to the mass elements in the shaded band is dF = Gm dM / s². However, there is partial cancellation due to the vector nature of the force: only the component of the force along the line joining the point mass to the shell's centre contributes to the net force. The total force on m, then, is simply the sum of the forces exerted by all the bands. These two relations link the three parameters θ, s and φ that appear in the integral together. As θ increases from 0 to π radians, φ varies from its initial value 0 up to a maximal value before finally returning to zero at θ = π. s, on the other hand, increases from the initial value r − R to the final value r + R as θ increases from 0 to π radians. For the shell between radius x and x + dx, dM can be expressed as a function of x.
For a point inside the shell, the difference is that when θ is equal to zero, φ takes the value π radians and s the value R − r. As θ increases from 0 to π radians, φ decreases from its initial value π radians to zero, and s increases from the initial value R − r to the final value R + r.
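Both statements of the theorem can be checked numerically by summing the band contributions described above. The sketch below (with G, the shell mass M and the shell radius R all set to 1 for convenience) integrates dF = G dM cos φ / s² over θ with a simple midpoint rule; outside the shell the result approaches GM/r², inside it approaches zero.

```python
import math

def shell_force(r, R=1.0, M=1.0, G=1.0, n=100000):
    """Net gravitational force per unit test mass from a thin spherical shell.

    Sums the axial contributions of the circular bands of the shell, indexed
    by the angle theta from the axis through the test mass, as in the
    derivation above.
    """
    total = 0.0
    dtheta = math.pi / n
    for i in range(n):
        theta = (i + 0.5) * dtheta
        dM = (M / 2.0) * math.sin(theta) * dtheta          # mass of the band
        s2 = r * r + R * R - 2 * r * R * math.cos(theta)   # squared distance to band
        s = math.sqrt(s2)
        # Only the force component along the line to the shell's centre survives:
        cos_phi = (r - R * math.cos(theta)) / s
        total += G * dM * cos_phi / s2
    return total

print(shell_force(2.0))   # outside the shell: ~ G*M / r^2 = 0.25
print(shell_force(0.5))   # inside the shell: ~ 0
```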
33.
Inverse-square law
–
An inverse-square law is any physical law stating that a specified physical quantity is inversely proportional to the square of the distance from its source. The fundamental cause for this can be understood as geometric dilution corresponding to point-source radiation into three-dimensional space. Newton's law of gravitation follows an inverse-square law, as do the effects of electric, magnetic, light and sound phenomena. The inverse-square law generally applies when some force, energy, or other conserved quantity is evenly radiated outward from a point source in three-dimensional space; hence, the intensity of radiation passing through any unit area is inversely proportional to the square of the distance from the point source. Gauss's law is similarly applicable, and can be used with any physical quantity that acts in accordance with the inverse-square relationship. Gravitation is the attraction between objects with mass; the force is always attractive and acts along the line joining them. If the distribution of matter in each body is spherically symmetric, then the objects can be treated as point masses without approximation, as shown in the shell theorem. Otherwise, if we want to calculate the attraction between massive bodies, we need to add all the point-point attraction forces vectorially, and the net attraction might not be exact inverse square. As the law of gravitation, this law was suggested in 1645 by Ismael Bullialdus. But Bullialdus did not accept Kepler's second and third laws, nor did he appreciate Christiaan Huygens's solution for circular motion; indeed, Bullialdus maintained the Sun's force was attractive at aphelion. Robert Hooke and Giovanni Alfonso Borelli both expounded gravitation in 1666 as an attractive force. By 1679, Hooke thought gravitation had inverse-square dependence and communicated this in a letter to Isaac Newton. Measurements show that the deviation of the exponent from 2 is less than one part in 10¹⁵. More generally, the irradiance, i.e. the intensity, of a spherical wavefront varies inversely with the square of the distance from the source. For non-isotropic radiators such as parabolic antennas, headlights, and lasers, the effective origin is located far behind the beam aperture.
If you are close to the origin, you don't have to go far to double the radius, so the signal drops quickly. When you are far from the origin and still have a signal, as with a laser, you have to travel very far to double the radius and reduce the signal further. This means you have a stronger signal, or antenna gain, in the direction of the narrow beam relative to a wide beam in all directions from an isotropic antenna. In photography and stage lighting, the inverse-square law is used to determine the "fall off", or the difference in illumination on a subject as it moves closer to or further from the light source. The fractional reduction in electromagnetic fluence for indirectly ionizing radiation with increasing distance from a point source can be calculated using the inverse-square law. Since emissions from a point source have radial directions, they intercept a sphere centred on the source at perpendicular incidence. The area of such a spherical shell is 4πr², where r is the distance from the center. At large distances from the source, this power is distributed over larger and larger spherical surfaces as the distance from the source increases.
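The geometric dilution argument reduces to spreading a fixed power over the sphere of area 4πr². A minimal sketch (the 100 W source power is arbitrary) showing that doubling the distance quarters the intensity:

```python
import math

def intensity(power, r):
    """Intensity of an isotropic point source: power spread over a sphere of area 4*pi*r^2."""
    return power / (4 * math.pi * r ** 2)

# Doubling the distance spreads the same power over four times the area,
# so the intensity drops to one quarter.
P = 100.0  # source power in watts (arbitrary)
print(intensity(P, 1.0) / intensity(P, 2.0))  # ratio is 4
```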
34.
Physical geodesy
–
Physical geodesy is the study of the physical properties of the gravity field of the Earth, the geopotential, with a view to their application in geodesy. Traditional geodetic instruments such as theodolites rely on the gravity field for orienting their vertical axis along the local plumb line or local vertical direction with the aid of a spirit level. After that, vertical angles are obtained with respect to this local vertical. Levelling instruments again are used to obtain geopotential differences between points on the Earth's surface; these can then be expressed as height differences by conversion to metric units. The vector triad is the orthonormal set of base vectors in space, pointing along the X, Y, Z coordinate axes. Note that both gravity and its potential contain a contribution from the centrifugal pseudo-force due to the Earth's rotation. We can write W = V + Φ, where V is the potential of the gravitational field and W that of the gravity field. It can be shown that the centrifugal force field, in a reference frame co-rotating with the Earth, has a potential Φ associated with it; this can be verified by taking the gradient operator of the expression. Here, X, Y and Z are geocentric coordinates. Gravity is commonly measured in units of m·s⁻², which can also be expressed as newtons per kilogram of attracted mass. Potential is expressed as gravity times distance, in m²·s⁻²: travelling one metre in the direction of a gravity vector of strength 1 m·s⁻² will increase your potential by 1 m²·s⁻². The units can also be changed to joules per kilogram of attracted mass. A more convenient unit is the GPU, or geopotential unit: it equals 10 m²·s⁻². This means that travelling one metre in the vertical direction, i.e. the direction of the 9.8 m·s⁻² ambient gravity, will change your potential by approximately 1 GPU, which again means that the difference in geopotential of a point with sea level, expressed in GPU, approximately equals its height in metres. To a rough approximation, the Earth is a sphere, or to a much better approximation, an ellipsoid.
It is more accurate to approximate the geopotential by a field that has the Earth reference ellipsoid as one of its equipotential surfaces. The most recent Earth reference ellipsoid is GRS80, or Geodetic Reference System 1980, which the Global Positioning System uses as its reference. Its geometric parameters include the semi-major axis a = 6378137.0 m. A geopotential field U is constructed, being the sum of a gravitational potential Ψ and the known centrifugal potential Φ, that has the GRS80 reference ellipsoid as one of its equipotential surfaces. For practical purposes it makes sense to choose the zero point of normal gravity potential to be that of the reference ellipsoid. Due to the irregularity of the Earth's true gravity field, the equilibrium figure of sea water, or the geoid, deviates from the reference ellipsoid. The separation between the two surfaces is called the undulation of the geoid, symbol N, and is closely related to the disturbing potential.
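The unit relationships in the preceding paragraphs can be sketched numerically. This is only an illustration of the conventions given in the text (it is not a gravity model; the 9.8 m·s⁻² ambient gravity and the 5 m climb are example values): a geopotential difference divided by gravity recovers height, and the same difference expressed in GPU is numerically close to the height in metres.

```python
# Unit relationships between gravity, geopotential and height, as described
# in the text (illustrative values only, not a full gravity model).
g = 9.8          # ambient gravity, m/s^2
GPU = 10.0       # one geopotential unit, m^2/s^2

# Potential gained by travelling 5 m upward against gravity:
delta_W = g * 5.0            # 49 m^2/s^2
print(delta_W / GPU)         # 4.9 GPU -- numerically close to the height in metres
print(delta_W / g)           # 5.0 m -- recovering height from a geopotential difference
```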
35.
Topography
–
Topography is the study of the shape and features of the surface of the Earth and other observable astronomical objects, including planets, moons, and asteroids. The topography of an area can also refer to the shapes and features themselves. This field of geoscience and planetary science is concerned with local detail in general, including not only relief but also natural and artificial features. This meaning is common in the United States, where topographic maps with elevation contours have made topography synonymous with relief. The older sense of topography as the study of place still has currency in Europe. Topography in a narrow sense involves the recording of relief or terrain, the three-dimensional quality of the surface, and the identification of specific landforms; this is also known as geomorphometry. In modern usage, this involves generation of elevation data in digital form. It is often considered to include the representation of the landform on a map by a variety of techniques, including contour lines and hypsometric tints. The term topography originated in ancient Greece and continued in ancient Rome; the word comes from the Greek τόπος ("place") and -γραφία ("writing"). In classical literature this refers to writing about a place or places. In Britain, and in Europe in general, the word topography is still sometimes used in its original sense. Detailed military surveys in Britain were called Ordnance Surveys, and this term was used into the 20th century as a generic term for topographic surveys. The earliest scientific surveys in France were called the Cassini maps after the family who produced them over four generations. The term topographic surveys appears to be American in origin; the earliest detailed surveys in the United States were made by the "Topographical Bureau of the Army", formed during the War of 1812, which became the Corps of Topographical Engineers in 1838.
In the 20th century, the term started to be used to describe surface description in other fields where mapping in a broader sense is used. An objective of topography is to determine the position of any feature, or more generally any point, in terms of both a horizontal coordinate system, such as latitude and longitude, and altitude. Identifying features and recognizing typical landform patterns are also part of the field. There are a variety of approaches to studying topography; which method to use depends on the scale and size of the area under study, its accessibility, and the quality of existing surveys. Work on one of the first topographic maps was begun in France by Giovanni Domenico Cassini. In areas where there has been an extensive direct survey and mapping program, the compiled data forms the basis of basic digital elevation datasets such as USGS DEM data. This data must often be cleaned to eliminate discrepancies between surveys, but it forms a valuable set of information for large-scale analysis. The original American topographic surveys involved not only the recording of relief but also the identification of landmark features and vegetative land cover. Remote sensing is a general term for geodata collection at a distance from the subject area. Besides their role in photogrammetry, aerial and satellite imagery can be used to identify and delineate terrain features; certainly they have become more and more a part of geovisualization, whether in maps or GIS systems.
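Much of geomorphometry on digital elevation data comes down to finite differences on an elevation grid. A minimal sketch, using a made-up 3×3 elevation patch and an assumed 30 m grid spacing, of how slope is derived at a cell by central differences:

```python
import math

# Hypothetical 3x3 elevation patch, metres (rows run north to south).
dem = [
    [100.0, 101.0, 103.0],
    [ 99.0, 100.0, 102.0],
    [ 97.0,  99.0, 101.0],
]
cell = 30.0  # assumed grid spacing in metres

# Central differences at the middle cell give the elevation gradient:
dz_dx = (dem[1][2] - dem[1][0]) / (2 * cell)
dz_dy = (dem[2][1] - dem[0][1]) / (2 * cell)

# Slope is the angle of the steepest-descent direction.
slope_deg = math.degrees(math.atan(math.hypot(dz_dx, dz_dy)))
print(round(slope_deg, 2))   # gentle slope of a few degrees for this patch
```

Real DEM processing adds edge handling, map projections, and no-data cells, but the per-cell arithmetic is this simple.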
36.
Geology
–
Geology is an earth science concerned with the solid Earth, the rocks of which it is composed, and the processes by which they change over time. Geology can also refer generally to the study of the solid features of any terrestrial planet. Geology gives insight into the history of the Earth by providing the evidence for plate tectonics and the evolutionary history of life. Geology also plays a role in engineering and is a major academic discipline. The majority of geological data comes from research on solid Earth materials. These typically fall into one of two categories: rock and unconsolidated material. The majority of research in geology is associated with the study of rock, as rock provides the primary record of the majority of the geologic history of the Earth. There are three types of rock: igneous, sedimentary, and metamorphic. The rock cycle is an important concept in geology which illustrates the relationships between these three types of rock and magma. When a rock crystallizes from melt, it is an igneous rock. A rock can be turned into a metamorphic rock by heat and pressure, and can be weathered, eroded, deposited, and lithified, ultimately becoming a sedimentary rock. Sedimentary rock may also be re-eroded and redeposited, and metamorphic rock may undergo additional metamorphism. All three types of rocks may be re-melted; when this happens, a new magma is formed, from which an igneous rock may once again crystallize. Geologists also study unlithified material, which typically comes from more recent deposits. These materials are superficial deposits which lie above the bedrock; because of this, the study of such material is often known as Quaternary geology. This includes the study of sediment and soils, including studies in geomorphology and sedimentology. The theory of plate tectonics is supported by several types of observations, including seafloor spreading and the global distribution of mountain terrain and seismicity.
This coupling between rigid plates moving on the surface of the Earth and the convecting mantle is called plate tectonics. The development of plate tectonics provided a physical basis for many observations of the solid Earth. Long linear regions of geologic features could be explained as plate boundaries: mid-ocean ridges, high regions on the seafloor where hydrothermal vents and volcanoes exist, were explained as divergent boundaries, where two plates move apart; arcs of volcanoes and earthquakes were explained as convergent boundaries, where one plate subducts under another; and transform boundaries, such as the San Andreas Fault system, resulted in widespread powerful earthquakes. Plate tectonics also provided a mechanism for Alfred Wegener's theory of continental drift, a driving force for crustal deformation, and a new setting for the observations of structural geology.
37.
Plate tectonics
–
The theoretical model builds on the concept of continental drift developed during the first few decades of the 20th century; the geoscientific community accepted plate-tectonic theory after seafloor spreading was validated in the late 1950s. The lithosphere, which is the rigid outermost shell of a planet, is broken up into tectonic plates. The Earth's lithosphere is composed of seven or eight major plates. Where the plates meet, their relative motion determines the type of boundary: convergent, divergent, or transform. Earthquakes, volcanic activity, mountain-building, and oceanic trench formation occur along plate boundaries. The relative movement of the plates typically ranges from zero to 100 mm annually. Tectonic plates are composed of oceanic lithosphere and thicker continental lithosphere, each topped by its own kind of crust. Along convergent boundaries, subduction carries plates into the mantle; the material lost is balanced by the formation of new crust along divergent margins by seafloor spreading. In this way, the total surface area of the lithosphere remains the same. This prediction of plate tectonics is also referred to as the conveyor-belt principle; earlier theories, since disproven, proposed gradual shrinking or gradual expansion of the globe. Tectonic plates are able to move because the Earth's lithosphere has greater strength than the underlying asthenosphere. Lateral density variations in the mantle result in convection. Plate movement is thought to be driven by a combination of the motion of the seafloor away from the spreading ridge and drag, with downward suction, at subduction zones. Another explanation lies in the different forces generated by tidal forces of the Sun and Moon. The relative importance of each of these factors and their relationship to each other is unclear. The outer layers of the Earth are divided into the lithosphere and asthenosphere; this division is based on differences in mechanical properties and in the method of heat transfer.
Mechanically, the lithosphere is cooler and more rigid, while the asthenosphere is hotter and flows more easily. In terms of heat transfer, the lithosphere loses heat by conduction, whereas the asthenosphere also transfers heat by convection and has a nearly adiabatic temperature gradient. The key principle of plate tectonics is that the lithosphere exists as separate and distinct tectonic plates. Plate motions range from a typical 10–40 mm/year up to about 160 mm/year for the fastest-moving plates. The driving mechanism behind this movement is described below. Tectonic lithosphere plates consist of lithospheric mantle overlain by either or both of two types of crustal material: oceanic crust and continental crust. Average oceanic lithosphere is typically 100 km thick; its thickness is a function of its age: as time passes, it conductively cools.
38.
Gravity anomaly
–
A gravity anomaly is the difference between the observed acceleration of free fall, or gravity, on a planet's surface, and the corresponding value predicted from a model of the planet's gravity field. Typically the model is based on simplifying assumptions, such as that, under its self-gravitation and rotational motion, the planet assumes the figure of an ellipsoid of revolution. Gravity on the surface of this ellipsoid is given by a simple closed formula that depends only on latitude. As such, gravity anomalies describe the variations of the actual gravity field around the model field. These anomalies are of substantial geophysical and geological interest; cleanly extracting the response of the local sub-surface geology is the typical goal of applied geophysics. Lateral variations in gravity anomalies are related to anomalous density distributions within the Earth, so gravity measurements help us to understand the internal structure of the planet. Synthetic calculations show that the gravity signature of a thickened crust is negative and large in absolute value. Bouguer anomalies are usually negative in the mountains because they involve reducing out the attraction of the mountain mass; typical anomalies in the Central Alps are −150 milligals. Comparatively local anomalies are used in applied geophysics. At scales between entire mountain ranges and ore bodies, Bouguer anomalies may indicate rock types. For example, the northeast–southwest trending high across central New Jersey represents a graben of Triassic age largely filled with dense basalts. Salt domes are typically expressed in gravity maps as lows, because salt has a low density compared to the rocks the dome intrudes. Anomalies can help to distinguish sedimentary basins whose fill differs in density from that of the surrounding region (see Gravity Anomalies of Britain). In geodesy and geophysics, the usual theoretical model is the gravity on the surface of a reference ellipsoid such as WGS84.
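As a concrete illustration, the latitude-only formula for gravity on a reference ellipsoid can be evaluated with Somigliana's closed form and WGS84 constants; this is a minimal sketch, and the observed value g_obs below is a made-up number for illustration only:

```python
import math

def normal_gravity(lat_deg):
    """Somigliana closed-form normal gravity on the WGS84 reference
    ellipsoid, in m/s^2, as a function of geodetic latitude only."""
    ge = 9.7803253359        # gravity at the equator (m/s^2)
    k = 0.00193185265241     # Somigliana's constant
    e2 = 0.00669437999013    # first eccentricity squared
    s2 = math.sin(math.radians(lat_deg)) ** 2
    return ge * (1 + k * s2) / math.sqrt(1 - e2 * s2)

# A raw gravity anomaly is simply observed minus model gravity.
g_obs = 9.8065                                        # hypothetical observation (m/s^2)
anomaly_mgal = (g_obs - normal_gravity(45.0)) * 1e5   # 1 mGal = 1e-5 m/s^2
```

The formula reproduces the familiar equator-to-pole increase of roughly 0.05 m/s²; the residual after subtracting it is what carries the geological signal.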
The elevation of the point where each gravity measurement was taken must be reduced to a reference datum so the whole profile can be compared; this is called the free-air correction, and when combined with the removal of theoretical gravity it leaves the free-air anomaly. Next, we have to correct for the effects of any material between the point where gravimetry was done and the geoid. To do this we model the material in between as being made up of an infinite number of slabs of thickness t. These slabs have no lateral variation in density, but each slab may have a different density than the one above or below it; this is called the Bouguer correction. A terrain correction, computed from a terrain model, accounts for the effects of rapid lateral changes in density, e.g. the edge of a plateau, cliffs, or steep mountains. For these reductions, different methods are used. Gravity changes as we move away from the surface of the Earth; for this reason we compensate with the free-air anomaly: application of the normal gradient of 0.3086 mGal/m, but no terrain model. This anomaly amounts to a shift of the measurement point, together with the whole shape of the terrain. This simple method is ideal for many geodetic applications. The simple Bouguer anomaly adds downward reduction by the Bouguer gradient.
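A minimal sketch of these two reductions, working in mGal and metres; the standard crustal density of 2670 kg/m³ is an assumption, as is the infinite-slab model with no terrain correction:

```python
TWO_PI_G = 4.1934e-10   # 2*pi*G in SI units

def free_air_anomaly(g_obs_mgal, gamma_mgal, h_m):
    """Free-air anomaly: observed minus theoretical gravity, with the
    normal gradient 0.3086 mGal/m applied over the elevation h."""
    return g_obs_mgal - gamma_mgal + 0.3086 * h_m

def simple_bouguer_anomaly(fa_mgal, h_m, density=2670.0):
    """Simple Bouguer anomaly: subtract the attraction of an infinite
    slab of thickness h (no lateral density variation, no terrain model)."""
    slab_mgal = TWO_PI_G * density * h_m * 1e5   # convert m/s^2 to mGal
    return fa_mgal - slab_mgal
```

For a station 1000 m above the datum, the free-air correction adds 308.6 mGal while the Bouguer slab removes roughly 112 mGal of mountain-mass attraction, which is why Bouguer anomalies tend to be strongly negative in high terrain.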
39.
Sea level
–
Mean sea level (MSL) is an average level of the surface of one or more of Earth's oceans from which heights such as elevations may be measured. A common and relatively straightforward mean sea-level standard is the midpoint between mean low and mean high tide at a particular location. Sea levels can be affected by many factors and are known to have varied greatly over geological time scales. Careful measurement of variations in MSL can offer insights into ongoing climate change. The term "above sea level" generally refers to above mean sea level. Precise determination of a mean sea level is a difficult problem because of the many factors that affect it. Sea level varies quite a lot on several scales of time because the sea is in constant motion, affected by tides, wind, atmospheric pressure, local gravitational differences, temperature, salinity, and so forth. The easiest way MSL may be calculated is by selecting a location and averaging the measured level at that point: for example, a period of 19 years of hourly level observations may be averaged and used to determine the mean sea level at some measurement point. One measures the values of MSL with respect to the land; hence a change in MSL can result from a real change in sea level, or from a change in the height of the land on which the tide gauge operates. In the UK, the Ordnance Datum is the mean sea level measured at Newlyn in Cornwall between 1915 and 1921; prior to 1921, the datum was MSL at the Victoria Dock. In Hong Kong, mPD is a surveying term meaning "metres above Principal Datum" and refers to heights above a datum 1.230 m below the average sea level. In France, the Marégraphe in Marseilles has measured the sea level continuously since 1883; it is used as the official sea level for part of continental Europe and the main part of Africa.
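The 19-year averaging described above is, at its simplest, an arithmetic mean of the gauge readings; the sinusoidal series below is synthetic data standing in for real hourly observations:

```python
import math

def mean_sea_level(hourly_heights):
    """MSL as the arithmetic mean of hourly tide-gauge readings,
    measured relative to the gauge's local benchmark on land."""
    return sum(hourly_heights) / len(hourly_heights)

# Synthetic stand-in for 19 years of hourly readings: a semidiurnal tide
# (~12.42 h period, 0.8 m amplitude) oscillating about a 2.0 m level.
readings = [2.0 + 0.8 * math.sin(2 * math.pi * t / 12.42)
            for t in range(19 * 365 * 24)]
msl = mean_sea_level(readings)
```

Averaged over so many cycles, the tidal oscillation cancels out and the mean converges on the underlying 2.0 m level.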
Elsewhere in Europe, vertical elevation references are made to the Amsterdam Peil elevation. Satellite altimeters have been making precise measurements of sea level since the launch of TOPEX/Poseidon in 1992. A joint mission of NASA and CNES, TOPEX/Poseidon was followed by Jason-1 in 2001. Height above mean sea level (AMSL) is the elevation or altitude of an object relative to the average sea level datum. It is used in aviation, where some heights are recorded and reported with respect to sea level, and in the atmospheric sciences. An alternative is to base height measurements on an ellipsoid model of the entire Earth: in aviation, the ellipsoid known as World Geodetic System 84 is increasingly used to define heights; however, differences of up to 100 metres exist between this ellipsoid height and mean tidal height. Another alternative is to use a vertical datum such as NAVD88. When referring to geographic features such as mountains on a topographic map, the elevation of a mountain denotes the highest point or summit and is typically illustrated as a small circle on the map with the AMSL height shown in metres, feet, or both. In the rare case that a location is below sea level, the elevation is negative; for one such case, see Amsterdam Airport Schiphol.
40.
Pendulum
–
A pendulum is a weight suspended from a pivot so that it can swing freely. When a pendulum is displaced sideways from its resting, equilibrium position and released, the restoring force combined with the pendulum's mass causes it to oscillate about the equilibrium position, swinging back and forth. The time for one complete cycle, a left swing and a right swing, is called the period. The period depends on the length of the pendulum and also, to a slight degree, on the amplitude. Pendulums are also used in instruments such as accelerometers and seismometers. Historically they were used as gravimeters to measure the acceleration of gravity in geophysical surveys. The word "pendulum" is New Latin, from the Latin pendulus, meaning "hanging". The simple gravity pendulum is an idealized mathematical model of a pendulum: a weight (the bob) on the end of a massless cord suspended from a pivot. When given an initial push, it will swing back and forth at a constant amplitude. Real pendulums are subject to friction and air drag, so the amplitude of their swings declines. For small swings, the period T ≈ 2π√(L/g); it is independent of the mass of the bob and is approximately the same for different-sized swings. This property, called isochronism, is the reason pendulums are so useful for timekeeping: successive swings take nearly the same amount of time even as the amplitude changes. For larger amplitudes, the period increases gradually with amplitude, so it is longer than given by the small-angle formula. For example, at an amplitude of θ0 = 23° it is 1% larger. The period increases asymptotically as θ0 approaches 180°, because θ0 = 180° is an unstable equilibrium point for the pendulum. The length L used to calculate the period of the simple pendulum is the distance from the pivot point to the center of mass of the bob. Any swinging rigid body free to rotate about a horizontal axis is called a compound pendulum or physical pendulum.
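The amplitude dependence can be checked numerically. The exact period involves a complete elliptic integral, which the arithmetic-geometric mean evaluates in a few iterations; a short sketch, with the 23° figure from the text falling out directly:

```python
import math

def small_angle_period(L, g=9.80665):
    """Small-angle approximation T0 = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(L / g)

def exact_period(L, theta0, g=9.80665):
    """Exact simple-pendulum period T = T0 / AGM(1, cos(theta0/2)),
    using the arithmetic-geometric mean to evaluate the elliptic integral."""
    a, b = 1.0, math.cos(theta0 / 2)
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return small_angle_period(L, g) / a

# Ratio of true period to small-angle period at a 23-degree amplitude.
ratio = exact_period(1.0, math.radians(23)) / small_angle_period(1.0)
```

The computed ratio comes out close to 1.010, matching the 1% figure quoted above, and grows without bound as the amplitude approaches 180°.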
The appropriate equivalent length L for calculating the period of any such pendulum is the distance from the pivot to the center of oscillation. This point is located under the center of mass at a distance from the pivot traditionally called the radius of oscillation; if most of the mass is concentrated in a relatively small bob compared to the pendulum length, the center of oscillation is close to the center of mass. Substituting this expression above, the period T of a compound pendulum for sufficiently small oscillations is given by T = 2π√(I/(mgR)), where I is the moment of inertia about the pivot and R is the distance from the pivot to the center of mass. For example, a rigid rod of length L pivoted about one end has moment of inertia I = mL²/3.
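The rod example can be worked through in a few lines; since the radius of oscillation is I/(mR) = (mL²/3)/(mL/2) = 2L/3, the rod swings with the same period as a simple pendulum two-thirds its length:

```python
import math

G = 9.80665  # standard gravity (m/s^2)

def compound_period(I, m, R, g=G):
    """Physical-pendulum period T = 2*pi*sqrt(I / (m*g*R)); I is the moment
    of inertia about the pivot, R the pivot-to-center-of-mass distance."""
    return 2 * math.pi * math.sqrt(I / (m * g * R))

# Uniform rod of length L pivoted at one end: I = m*L^2/3, R = L/2.
m, L = 1.0, 1.0
T_rod = compound_period(m * L**2 / 3, m, L / 2)

# Equivalent simple pendulum of length 2L/3 (the radius of oscillation).
T_equiv = 2 * math.pi * math.sqrt((2 * L / 3) / G)
```

Both expressions give the same period for a one-metre rod, a little over 1.6 seconds.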