1.
Physics
–
Physics is the natural science that involves the study of matter and its motion and behavior through space and time, along with related concepts such as energy and force. One of the most fundamental scientific disciplines, the main goal of physics is to understand how the universe behaves. Physics is one of the oldest academic disciplines, perhaps the oldest through its inclusion of astronomy. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the mechanisms of other sciences while opening new avenues of research in areas such as mathematics. Physics also makes significant contributions through advances in new technologies that arise from theoretical breakthroughs; in recognition of this, the United Nations named 2005 the World Year of Physics. Astronomy is the oldest of the natural sciences; the stars and planets were often a target of worship, believed to represent the gods. While the explanations for these phenomena were often unscientific and lacking in evidence, according to Asger Aaboe the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy. The most notable innovations were in the field of optics and vision, which came from the works of many scientists like Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi and Avicenna. The most notable work was The Book of Optics, written by Ibn al-Haytham, in which he was not only the first to disprove the ancient Greek idea about vision but also came up with a new theory. In the book he was also the first to study the phenomenon of the pinhole camera; many later European scholars and fellow polymaths, from Robert Grosseteste and Leonardo da Vinci to René Descartes, Johannes Kepler and Isaac Newton, were in his debt.
Indeed, the influence of Ibn al-Haytham's Optics ranks alongside that of Newton's work of the same title; the translation of The Book of Optics had a huge impact on Europe. From it, later European scholars were able to build devices like those Ibn al-Haytham had built, and from this such important things as eyeglasses, magnifying glasses and telescopes were developed. Physics became a separate science when early modern Europeans used experimental and quantitative methods to discover what are now considered to be the laws of physics. Newton also developed calculus, the mathematical study of change, which provided new mathematical methods for solving physical problems. The discovery of new laws in thermodynamics, chemistry, and electromagnetics resulted from greater research efforts during the Industrial Revolution as energy needs increased. However, inaccuracies in classical mechanics for very small objects and very high velocities led to the development of modern physics in the 20th century. Modern physics began in the early 20th century with the work of Max Planck in quantum theory and Albert Einstein's theory of relativity; both of these theories came about due to inaccuracies in classical mechanics in certain situations. Quantum mechanics would come to be pioneered by Werner Heisenberg, Erwin Schrödinger and others; from this early work, and work in related fields, the Standard Model of particle physics was derived. Areas of mathematics in general are important to this field, such as the study of probabilities. In many ways, physics stems from ancient Greek philosophy.
2.
Truss
–
In engineering, a truss is a structure that consists of two-force members only, where the members are organized so that the assemblage as a whole behaves as a single object. A two-force member is a component where force is applied to only two points. In this typical context, external forces and reactions to those forces are considered to act only at the nodes and result in forces in the members that are either tensile or compressive. For straight members, moments are explicitly excluded because, and only because, all the joints in a truss are treated as revolutes, as is necessary for the links to be two-force members. A planar truss is one in which all members and nodes lie within a two-dimensional plane, while a space truss has members and nodes that extend into three dimensions. The top beams in a truss are called top chords and are typically in compression; the bottom beams are called bottom chords and are typically in tension. The interior beams are called webs, and the areas inside the webs are called panels. Truss derives from the Old French word trousse, from around 1200, which means collection of things bound together. The term truss has often been used to describe any assembly of members such as a cruck frame or a couple of rafters. One engineering definition is: a truss is a single-plane framework of individual structural members connected at their ends to form a series of triangles to span a large distance. A truss typically consists of straight members connected at joints, traditionally termed panel points. Trusses are typically composed of triangles because of the stability of that shape. A triangle is the simplest geometric figure that will not change shape when the lengths of the sides are fixed; in comparison, both the angles and the lengths of a four-sided figure must be fixed for it to retain its shape. The joint at which a truss is designed to be supported is commonly referred to as the Munter Point. The simplest form of a truss is one single triangle.
This type of truss is seen in a roof consisting of rafters and a ceiling joist; because of the stability of this shape and the methods of analysis used to calculate the forces within it, it is sometimes called a simple truss. The traditional diamond-shape bicycle frame, which utilizes two conjoined triangles, is an example of a simple truss. A planar truss lies in a single plane; planar trusses are used in parallel to form roofs and bridges. The depth of a truss, or the height between the upper and lower chords, is what makes it an efficient structural form; a solid girder or beam of equal strength would have substantial weight and material cost as compared to a truss. For a given span, a deeper truss will require less material in the chords; an optimum depth of the truss will maximize the efficiency.
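As a sketch of how member forces follow from this geometry, the snippet below resolves the forces in the simplest one-triangle truss: a symmetric pair of rafters with a horizontal tie (bottom chord), loaded vertically at the apex and simply supported at the two base nodes. The function name and the sign convention (positive for tension, negative for compression) are illustrative choices, not a standard API.

```python
import math

def simple_truss_forces(span, height, load):
    """Axial member forces in a symmetric one-triangle truss with a
    vertical load at the apex and supports at the two base nodes.
    Returns (rafter_force, tie_force); positive = tension."""
    theta = math.atan2(height, span / 2)   # rafter angle from horizontal
    reaction = load / 2                    # symmetric support reactions
    rafter = -reaction / math.sin(theta)   # sloped members are in compression
    tie = reaction / math.tan(theta)       # bottom chord carries tension
    return rafter, tie
```

For a 6 m span, 4 m rise and a 10 kN apex load, each rafter carries 6.25 kN of compression while the tie carries 3.75 kN of tension, consistent with the top chords being in compression and the bottom chord in tension.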
3.
Compression (physics)
–
In mechanics, compression is the application of balanced inward forces to different points on a material or structure. It is contrasted with tension or traction, the application of balanced outward forces, and with shearing forces, directed so as to displace layers of the material parallel to each other. The compressive strength of materials and structures is an important engineering consideration. In uniaxial compression the forces are directed along one direction only, so that they act towards decreasing the object's length along that direction. If the stress vector itself is opposite to x, the material is said to be under compression or pure compressive stress along x. In a solid, the amount of compression generally depends on the direction x. If the stress vector is purely compressive and has the same magnitude for all directions, the material is said to be under isotropic compression; this is the only type of static compression that liquids and gases can bear. In a mechanical longitudinal wave, or compression wave, the medium is displaced in the wave's direction of travel. When put under compression, every material will suffer some deformation, even if imperceptible, that causes the average relative positions of its atoms and molecules to change. The deformation may be permanent, or may be reversed when the compression forces disappear; in the latter case, the deformation gives rise to reaction forces that oppose the compression forces, and may eventually balance them. Liquids and gases cannot bear steady uniaxial or biaxial compression; they will deform promptly and permanently. However, they can bear isotropic compression, and may be compressed in other ways momentarily, for instance in a sound wave. The deformation may not be uniform and may not be aligned with the compression forces; what happens in the directions where there is no compression depends on the material. Most materials will expand in those directions, but some special materials will remain unchanged or even contract. By inducing compression, mechanical properties such as compressive strength or modulus of elasticity can be measured.
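The quantities mentioned above, average compressive stress and the elastic shortening governed by the modulus of elasticity, can be sketched as follows. This is a minimal sketch assuming uniform uniaxial compression within the elastic range; the function names are illustrative.

```python
def compressive_stress(force, area):
    """Average compressive stress sigma = F / A (Pa, for N and m^2)."""
    return force / area

def axial_shortening(force, length, area, youngs_modulus):
    """Elastic shortening of a member under axial compression:
    delta = F * L / (A * E), from sigma = E * strain."""
    return force * length / (area * youngs_modulus)
```

For example, a 1 kN load on a 0.01 m² cross-section gives a stress of 100 kPa, and a 2 m steel column (E ≈ 200 GPa) of that section shortens by about a micrometre.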
Compression machines range from small table-top systems to ones with over 53 MN capacity. Gases are often stored and shipped in highly compressed form, to save space; slightly compressed air or other gases are also used to fill balloons, rubber boats, and other inflatable structures. Compressed liquids are used in hydraulic equipment and in fracking. In internal combustion engines the explosive mixture gets compressed before it is ignited; in the Otto cycle, for instance, the second stroke of the piston effects the compression of the charge which has been drawn into the cylinder by the first forward stroke. This compression, moreover, obviates the shock which would otherwise be caused by the admission of the steam for the return stroke. See also: Buckling, Container compression test, Compression member, Compressive strength, Longitudinal wave, P-wave, Rarefaction, Strength of materials.
4.
Potential energy
–
In physics, potential energy is energy possessed by a body by virtue of its position relative to others, stresses within itself, electric charge, and other factors. The unit for energy in the International System of Units is the joule. The term potential energy was introduced by the 19th-century Scottish engineer and physicist William Rankine, although it has links to Greek philosopher Aristotle's concept of potentiality. Potential energy is associated with forces that act on a body in such a way that the work done by these forces on the body depends only on the initial and final positions of the body in space. These forces, which are called potential forces, can be represented at every point in space by vectors expressed as gradients of a scalar function called potential. Potential energy is thus the energy an object has by virtue of its position relative to other objects. Potential energy is associated with restoring forces such as a spring or the force of gravity. The action of stretching the spring or lifting the mass is performed by an external force that works against the force field of the potential. This work is said to be stored in the field as potential energy. If the external force is removed, the field acts on the body to perform the work as it moves the body back to the initial position. Suppose a ball of mass m is held at a height h; if the acceleration of free fall is g, the weight of the ball is mg, and the work needed to raise it to that height, stored as gravitational potential energy, is mgh. There are various types of potential energy, each associated with a particular type of force. Chemical potential energy, such as the energy stored in fossil fuels, is the work of the Coulomb force during rearrangement of mutual positions of electrons and nuclei in atoms and molecules. Thermal energy usually has two components: the kinetic energy of random motions of particles and the potential energy of their mutual positions.
Forces derivable from a potential are also called conservative forces. The work done by a conservative force is W = −ΔU, where ΔU is the change in the potential energy associated with the force. The negative sign provides the convention that work done against a force field increases potential energy. Common notations for potential energy are U, V, and Ep. Potential energy is closely linked with forces; in this case, the force can be defined as the negative of the vector gradient of the potential field. If the work for an applied force is independent of the path, then the work done by the force is evaluated at the start and end of the trajectory of the point of application.
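The relation W = −ΔU can be checked numerically. The sketch below uses the two restoring forces mentioned above, gravity and a spring; the function names and the default g value are illustrative assumptions.

```python
def gravitational_pe(m, h, g=9.81):
    """U = m * g * h, with the reference level taken at h = 0."""
    return m * g * h

def spring_pe(k, x):
    """Elastic potential energy of a spring stretched by x: U = 0.5 * k * x**2."""
    return 0.5 * k * x**2

def work_done_by_force(u_initial, u_final):
    """W = -(U_final - U_initial), i.e. W = -delta U for a conservative force."""
    return -(u_final - u_initial)
```

Stretching a k = 100 N/m spring by 0.2 m stores 2 J; the spring force then does −2 J of work during the stretch, matching the sign convention in the text.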
5.
Force
–
In physics, a force is any interaction that, when unopposed, will change the motion of an object. In other words, a force can cause an object with mass to change its velocity; force can also be described intuitively as a push or a pull. A force has both magnitude and direction, making it a vector quantity; it is measured in the SI unit of newtons and represented by the symbol F. The original form of Newton's second law states that the net force acting upon an object is equal to the rate at which its momentum changes with time. In an extended body, each part usually applies forces on the adjacent parts; such internal mechanical stresses cause no acceleration of that body as the forces balance one another. Pressure, the distribution of small forces applied over an area of a body, is a simple type of stress that if unbalanced can cause the body to accelerate. Stress usually causes deformation of solid materials, or flow in fluids. Early thinkers retained fundamental errors in understanding force; in part this was due to an incomplete understanding of the sometimes non-obvious force of friction. A fundamental error was the belief that a force is required to maintain motion, even at a constant velocity. Most of the previous misunderstandings about motion and force were eventually corrected by Galileo Galilei and Sir Isaac Newton. With his mathematical insight, Sir Isaac Newton formulated laws of motion that were not improved on for nearly three hundred years. The Standard Model predicts that exchanged particles called gauge bosons are the fundamental means by which forces are emitted and absorbed. Only four main interactions are known; in order of decreasing strength, they are: strong, electromagnetic, weak, and gravitational. High-energy particle physics observations made during the 1970s and 1980s confirmed that the weak and electromagnetic forces are expressions of a more fundamental electroweak interaction. Since antiquity the concept of force has been recognized as integral to the functioning of each of the simple machines.
The mechanical advantage given by a machine allowed for less force to be used in exchange for that force acting over a greater distance for the same amount of work. Analysis of the characteristics of forces ultimately culminated in the work of Archimedes, who was famous for formulating a treatment of buoyant forces inherent in fluids. Aristotle provided a philosophical discussion of the concept of a force as an integral part of Aristotelian cosmology. In Aristotle's view, the terrestrial sphere contained four elements that come to rest at different natural places therein. Aristotle believed that objects on Earth, those composed mostly of the elements earth and water, were in their natural place on the ground. He distinguished between the tendency of objects to find their natural place, which led to natural motion, and unnatural or forced motion.
6.
Newton (unit)
–
The newton is the International System of Units derived unit of force. It is named after Isaac Newton in recognition of his work on classical mechanics; see below for the conversion factors. One newton is the force needed to accelerate one kilogram of mass at the rate of one metre per second squared in the direction of the applied force. In 1948, the 9th CGPM, in resolution 7, adopted the name newton for this unit of force. The MKS system then became the blueprint for today's SI system of units, and the newton thus became the standard unit of force in le Système International d'Unités. This SI unit is named after Isaac Newton; as with every International System of Units unit named for a person, the first letter of its symbol is upper case. Note that degree Celsius conforms to this rule because the d is lowercase. — Based on The International System of Units, section 5.2. Newton's second law of motion states that F = ma, where F is the applied force, m is the mass of the object receiving the force, and a is the resulting acceleration. The newton is therefore 1 N = 1 kg·m/s², where the following symbols are used for the units: N for newton, kg for kilogram, m for metre, and s for second. In dimensional analysis, F = MLT⁻², where F is force, M is mass, L is length, and T is time. At average gravity on Earth, a kilogram mass exerts a force of about 9.8 newtons. An average-sized apple exerts about one newton of force, which we measure as the apple's weight. For example, the tractive effort of a Class Y steam train and the thrust of an F100 fighter jet engine are both around 130 kN. One kilonewton, 1 kN, is 102.0 kgf: 1 kN = 102 kg × 9.81 m/s². So, for example, a platform rated at 321 kilonewtons will safely support a 32,100-kilogram load. Specifications in kilonewtons are common in safety specifications for the holding values of fasteners and Earth anchors, working loads in tension and in shear, the thrust of rocket engines and launch vehicles, and the clamping forces of the various moulds in injection-moulding machines used to manufacture plastic parts.
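The definition F = ma and the kilonewton-to-kilogram-force conversion above can be sketched numerically. The function names are illustrative, and the conversion uses the standard acceleration of gravity, 9.80665 m/s².

```python
STANDARD_GRAVITY = 9.80665  # m/s^2, standard acceleration of gravity

def force_newtons(mass_kg, accel_ms2):
    """F = m * a: force in newtons on a mass accelerated at accel_ms2."""
    return mass_kg * accel_ms2

def kilonewtons_to_kgf(kn):
    """Express kilonewtons as kilograms-force: 1 kN is about 102 kgf."""
    return kn * 1000 / STANDARD_GRAVITY
```

Accelerating 1 kg at 1 m/s² takes exactly 1 N, and 1 kN converts to roughly 102 kgf, matching the rule of thumb quoted in the text.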
7.
Acceleration
–
Acceleration, in physics, is the rate of change of velocity of an object with respect to time. An object's acceleration is the net result of any and all forces acting on the object, and the SI unit for acceleration is the metre per second squared. Accelerations are vector quantities and add according to the parallelogram law; as a vector, the calculated net force is equal to the product of the object's mass and its acceleration. For example, a car accelerates when it starts from a standstill and travels in a straight line at increasing speeds. If the car turns, there is an acceleration toward the new direction. In this example, we can call the forward acceleration of the car a linear acceleration, which passengers in the car might experience as a force pushing them back into their seats. When changing direction, we call this non-linear acceleration, which passengers might experience as a sideways force. If the speed of the car decreases, this is an acceleration in the opposite direction from the direction of the vehicle, sometimes called deceleration. Passengers may experience deceleration as a force lifting them forwards. Mathematically, there is no separate formula for deceleration; both are changes in velocity. Each of these accelerations might be felt by passengers until their velocity matches that of the car. An object's average acceleration over a period of time is its change in velocity divided by the duration of the period; mathematically, a̅ = Δv/Δt. Instantaneous acceleration, meanwhile, is the limit of the average acceleration over an infinitesimal interval of time. The SI unit of acceleration is the metre per second squared, or metre per second per second, as the velocity in metres per second changes by the acceleration value every second. An object moving in a circular motion—such as a satellite orbiting the Earth—is accelerating due to the change of direction of motion, and in this case it is said to be undergoing centripetal acceleration.
Proper acceleration, the acceleration of a body relative to a free-fall condition, is measured by an instrument called an accelerometer. As speeds approach the speed of light, relativistic effects become increasingly large. The acceleration of a particle moving along a curve can be resolved into two components, called the tangential acceleration and the normal or radial acceleration. Geometrical analysis of space curves, which explains tangent, normal and binormal, is described by the Frenet–Serret formulas. Uniform or constant acceleration is a type of motion in which the velocity of an object changes by an equal amount in every equal time period. A frequently cited example of uniform acceleration is that of an object in free fall in a uniform gravitational field. The acceleration of a falling body in the absence of resistances to motion is dependent only on the gravitational field strength g.
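The average-acceleration definition and the uniform-acceleration case above can be sketched as follows; the function names are illustrative.

```python
def average_acceleration(delta_v, delta_t):
    """Average acceleration: a = delta_v / delta_t (m/s^2 for m/s and s)."""
    return delta_v / delta_t

def velocity_after(v0, a, t):
    """Velocity under uniform acceleration: v = v0 + a * t."""
    return v0 + a * t
```

A car going from 0 to 20 m/s in 4 s averages 5 m/s² of acceleration; an object in free fall (g ≈ 9.81 m/s², neglecting air resistance) reaches about 19.6 m/s after two seconds.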
8.
Net force
–
In physics, net force is the overall force acting on an object. In order to calculate the net force, the body is isolated and its interactions with the environment are modelled as forces. It is always possible to determine the torque associated with a point of application of a net force so that it maintains the movement of the object under the original system of forces. With its associated torque, the net force becomes the resultant force and has the same effect on the rotational motion of the object as all actual forces taken together. It is possible for a system of forces to define a torque-free resultant force; in this case, the net force when applied at the proper line of action has the same effect on the body as all of the forces at their points of application. It is not always possible to find a torque-free resultant force. The sum of forces acting on a particle is called the total force or the net force. The net force is a single force that replaces the effect of the original forces on the particle's motion. It gives the particle the same acceleration as all the actual forces together, as described by Newton's second law of motion. Force is a vector quantity, which means that it has a magnitude and a direction. Graphically, a force is represented as a line segment from its point of application A to a point B which defines its direction; the length of the segment AB represents the magnitude of the force. Vector calculus was developed in the late 1800s and early 1900s; the parallelogram rule used for the addition of forces, however, dates from antiquity and is noted explicitly by Galileo and Newton. The diagram shows the addition of the forces F→1 and F→2; the sum F→ of the two forces is drawn as the diagonal of a parallelogram defined by the two forces. Forces applied to a body can have different points of application. Forces are bound vectors and can be added only if they are applied at the same point. The net force on a body applied at a single point with the appropriate torque is known as the resultant force.
A force is known as a bound vector, which means it has a direction and magnitude and a point of application. A convenient way to define a force is by a line segment from a point A to a point B. If we denote the coordinates of these points as A = (Ax, Ay, Az) and B = (Bx, By, Bz), the vector B − A defines the force, and its length gives the magnitude of F: |F| = √((Bx − Ax)² + (By − Ay)² + (Bz − Az)²). The sum of two forces F1 and F2 applied at A can be computed from the sum of the segments that define them.
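The segment construction and the parallelogram rule described above can be sketched in a few lines; the function names are illustrative.

```python
import math

def force_vector(a, b):
    """The force defined by the segment from point A to point B: F = B - A."""
    return tuple(bi - ai for ai, bi in zip(a, b))

def magnitude(f):
    """|F| = sqrt(Fx^2 + Fy^2 + Fz^2)."""
    return math.sqrt(sum(c * c for c in f))

def add_forces(f1, f2):
    """Parallelogram rule: component-wise sum of forces applied at the same point."""
    return tuple(c1 + c2 for c1, c2 in zip(f1, f2))
```

The segment from (0, 0, 0) to (3, 4, 0) defines a force of magnitude 5, and the component-wise sum is the diagonal of the parallelogram spanned by the two forces.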
9.
Cross section (geometry)
–
In geometry and science, a cross section is the intersection of a body in three-dimensional space with a plane, or the analog in higher-dimensional space. Cutting an object into slices creates many parallel cross sections. Conic sections – circles, ellipses, parabolas, and hyperbolas – are formed by cross-sections of a cone at various different angles, as seen in the diagram at left. Any planar cross-section passing through the center of an ellipsoid forms an ellipse on its surface. A cross-section of a cylinder is a circle if the cross-section is parallel to the cylinder's base, or an ellipse with non-zero eccentricity if it is neither parallel nor perpendicular to the base. If the cross-section is perpendicular to the base it consists of two parallel line segments, unless it is just tangent to the cylinder, in which case it is a single line segment. A cross section of a polyhedron is a polygon. If a cross section of a probability density function is taken for a fixed value of the density, the result is an iso-density contour; for the normal distribution, these contours are ellipses. A cross section can be used to visualize the partial derivative of a function with respect to one of its arguments, as shown at left. In economics, a production function f specifies the output that can be produced by various quantities x and y of inputs, typically labor and capital. The production function of a firm or a society can be plotted in three-dimensional space. Also in economics, a cardinal or ordinal utility function u gives the degree of satisfaction of a consumer obtained by consuming quantities w and v of two goods. Cross sections are used in anatomy to illustrate the inner structure of an organ. A cross section of a tree trunk, as shown at left, reveals growth rings that can be used to find the age of the tree. Cavalieri's principle states that solids with corresponding cross-sections of equal areas have equal volumes.
The cross-sectional area of an object when viewed from a particular angle is the total area of the orthographic projection of the object from that angle. For example, a cylinder of height h and radius r has A′ = πr² when viewed along its central axis, and A′ = 2rh when viewed from a direction perpendicular to its central axis; a sphere of radius r has A′ = πr² when viewed from any angle. For a convex body, each ray through the object from the viewer's perspective crosses just two surfaces. See also: Descriptive geometry, Exploded view drawing, Graphical projection, Plans.
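The projected-area formulas for the cylinder and sphere can be sketched as follows. This assumes the perpendicular view of a cylinder projects to a 2r × h rectangle; the function names are illustrative.

```python
import math

def cylinder_projected_area(r, h, along_axis=True):
    """Orthographic projected area of a cylinder: pi*r^2 viewed along its
    central axis, or 2*r*h (a rectangle) viewed perpendicular to it."""
    return math.pi * r * r if along_axis else 2 * r * h

def sphere_projected_area(r):
    """A sphere of radius r projects to a disc of area pi*r^2 from any angle."""
    return math.pi * r * r
```

A cylinder with r = 2 and h = 5 projects to about 12.57 square units along its axis but 20 square units side-on, while a sphere's projection is the same from every direction.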
10.
Vibration
–
Vibration is a mechanical phenomenon whereby oscillations occur about an equilibrium point. The word comes from Latin vibrationem. The oscillations may be periodic, such as the motion of a pendulum, or random, such as the movement of a tire on a gravel road. Vibration can be desirable: for example, the motion of a tuning fork, or the reed in a woodwind instrument or harmonica. In many cases, however, vibration is undesirable, wasting energy and creating unwanted sound. For example, the vibrational motions of engines, electric motors, or any mechanical device in operation are typically unwanted. Such vibrations could be caused by imbalances in the rotating parts, uneven friction, or the meshing of gear teeth. Careful designs usually minimize unwanted vibrations. The studies of sound and vibration are closely related. Sound, or pressure waves, are generated by vibrating structures; hence, attempts to reduce noise are often related to issues of vibration. Free vibration occurs when a mechanical system is set in motion with an initial input and then allowed to vibrate freely. Examples of this type of vibration are pulling a child back on a swing and letting go, or hitting a tuning fork and letting it ring. The mechanical system vibrates at one or more of its natural frequencies and damps down to motionlessness. Forced vibration is when a time-varying disturbance is applied to a mechanical system. The disturbance can be a periodic and steady-state input, a transient input, or a random input; the periodic input can be a harmonic or a non-harmonic disturbance. Damped vibration occurs when the energy of a vibrating system is gradually dissipated by friction and other resistances; the vibrations gradually reduce or change in frequency or intensity or cease. Vibration testing is accomplished by introducing a forcing function into a structure, usually with some type of shaker. Alternately, a DUT (device under test) is attached to the table of a shaker. Vibration testing is performed to examine the response of a device under test to a defined vibration environment. The measured response may be fatigue life, resonant frequencies, or squeak and rattle sound output.
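The natural frequency at which a freely vibrating system oscillates can be sketched for the simplest case, a mass on a spring. This is an assumed single-degree-of-freedom, undamped model; the function name is illustrative.

```python
import math

def natural_frequency_hz(k, m):
    """Undamped natural frequency of a mass-spring system:
    omega_n = sqrt(k / m) in rad/s, f_n = omega_n / (2*pi) in Hz."""
    return math.sqrt(k / m) / (2 * math.pi)
```

A 10 kg mass on a 1000 N/m spring has a natural angular frequency of 10 rad/s, or about 1.59 Hz; a free vibration, such as the struck tuning fork above, rings at frequencies determined this way by the system's stiffness and mass.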
Squeak and rattle testing is performed with a special type of quiet shaker that produces very low sound levels while in operation. For relatively low-frequency forcing, servohydraulic shakers are used; for higher frequencies, electrodynamic shakers are used. Generally, one or more input or control points located on the DUT side of a fixture are kept at a specified acceleration; other response points may experience a maximum vibration level (resonance) or a minimum vibration level (anti-resonance).
11.
Pulley
–
A pulley is a wheel on an axle or shaft that is designed to support movement and change of direction of a taut cable or belt; a pulley carried in a supporting shell that does not transfer power to a shaft, but is used to guide the cable or exert a force, is referred to as a block. A pulley may also be called a sheave or drum and may have a groove or grooves between two flanges around its circumference. The drive element of a pulley system can be a rope, cable, belt, or chain. Hero of Alexandria identified the pulley as one of six simple machines used to lift weights. Pulleys are assembled to form a block and tackle in order to provide mechanical advantage to apply large forces. Pulleys are also assembled as part of belt and chain drives in order to transmit power from one rotating shaft to another. A set of pulleys assembled so that they rotate independently on the same axle form a block; two blocks with a rope attached to one of the blocks and threaded through the pulleys form a block and tackle. A block and tackle is assembled so one block is attached to a fixed mounting point and the other is attached to the moving load; the ideal mechanical advantage of the block and tackle is equal to the number of parts of the rope that support the moving block. This system is included in the list of simple machines identified by Renaissance scientists. If the rope and pulley system does not dissipate or store energy, then its mechanical advantage is the number of parts of the rope that act on the load. This can be shown as follows: consider the set of pulleys that form the moving block and the parts of the rope that support this block. If there are p of these parts of the rope supporting the load W, then a force balance on the moving block shows that the tension in each part must be W/p. This means the force on the rope is T = W/p; thus, the block and tackle reduces the input force by the factor p. The simplest theory of operation for a pulley system assumes that the pulleys and lines are weightless and frictionless, and it is also assumed that the lines do not stretch. In equilibrium, the forces on the moving block must sum to zero. In addition, the tension in the rope must be the same for each of its parts; this means that the two parts of the rope supporting the moving block must each support half the load.
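The tension argument above, T = W/p for p rope parts supporting the moving block, can be sketched directly. This assumes the ideal case stated in the text (weightless, frictionless, inextensible rope); the function names are illustrative.

```python
def rope_tension(load, parts):
    """Ideal rope tension in a block and tackle: T = W / p, where p is the
    number of rope parts supporting the moving block."""
    return load / parts

def mechanical_advantage(parts):
    """Ideal mechanical advantage equals the number of supporting rope parts."""
    return parts
```

Lifting a 400 N load with four rope parts supporting the moving block requires only 100 N of pull, a mechanical advantage of 4.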
There are different types of pulley systems. Fixed: a fixed pulley has an axle mounted in bearings attached to a supporting structure. A fixed pulley changes the direction of the force on a rope or belt that moves along its circumference; mechanical advantage is gained by combining a fixed pulley with a movable pulley or another fixed pulley of a different diameter.
12.
Newton's laws of motion
–
Newton's laws of motion are three physical laws that, together, laid the foundation for classical mechanics. They describe the relationship between a body and the forces acting upon it, and its motion in response to those forces. More precisely, the first law defines the force qualitatively, and the second law offers a quantitative measure of the force. These three laws have been expressed in different ways over nearly three centuries, and can be summarised as follows. The three laws of motion were first compiled by Isaac Newton in his Philosophiæ Naturalis Principia Mathematica; Newton used them to explain and investigate the motion of many physical objects and systems. For example, in the third volume of the text, Newton showed that these laws of motion, combined with his law of universal gravitation, explained Kepler's laws of planetary motion. Newton's laws are applied to objects which are idealised as single point masses, in the sense that the size and shape of the object's body are neglected to focus on its motion more easily. This can be done when the object is small compared to the distances involved in its analysis, or when the deformation and rotation of the body are of no importance. In this way, even a planet can be idealised as a particle for analysis of its orbital motion around a star. In their original form, Newton's laws of motion are not adequate to characterise the motion of rigid bodies and deformable bodies. Leonhard Euler in 1750 introduced a generalisation of Newton's laws of motion for rigid bodies called Euler's laws of motion; if a body is represented as an assemblage of discrete particles, each governed by Newton's laws of motion, then Euler's laws can be derived from Newton's laws. Euler's laws can, however, be taken as axioms describing the laws of motion for extended bodies. Newton's laws hold only with respect to a certain set of frames of reference called Newtonian or inertial reference frames. Some authors interpret the first law as defining what an inertial reference frame is; other authors treat the first law as a corollary of the second. The explicit concept of an inertial frame of reference was not developed until long after Newton's death.
In the interpretation given here, mass, acceleration, momentum, and force are assumed to be externally defined quantities. This is the most common, but not the only, interpretation: one can instead consider the laws to be a definition of these quantities. Newtonian mechanics has been superseded by special relativity, but it is still useful as an approximation when the speeds involved are much slower than the speed of light. The first law states that if the net force (the vector sum of all forces acting on an object) is zero, then the velocity of the object is constant. The first law can be stated mathematically, when the mass is a non-zero constant, as ∑F = 0 ⇔ dv/dt = 0. Consequently, an object that is at rest will stay at rest unless a force acts upon it, and an object that is in motion will not change its velocity unless a force acts upon it. This is known as uniform motion: an object continues to do whatever it happens to be doing unless a force is exerted upon it. If it is at rest, it continues in a state of rest; if an object is moving, it continues to move without turning or changing its speed.
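The statement ∑F = 0 ⇔ dv/dt = 0 can be illustrated numerically by stepping dv/dt = F/m forward in time: with zero net force the velocity never changes. This is a minimal sketch in one dimension using simple Euler integration; the function name is illustrative.

```python
def integrate_velocity(v0, net_force, mass, dt, steps):
    """Euler-integrate dv/dt = F/m for a constant net force.
    With net_force = 0 the velocity stays exactly at v0 (first law)."""
    v = v0
    for _ in range(steps):
        v += (net_force / mass) * dt
    return v
```

A 2 kg body coasting at 3 m/s with zero net force is still at 3 m/s after any number of steps, while a constant 10 N force accelerates it by 5 m/s every second, as the second law requires.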
13.
Mass
–
In physics, mass is a property of a physical body. It is the measure of an object's resistance to acceleration when a net force is applied. It also determines the strength of its gravitational attraction to other bodies. The basic SI unit of mass is the kilogram. Mass is not the same as weight, even though mass is often determined by measuring the object's weight using a spring scale, rather than comparing it directly with known masses. An object on the Moon would weigh less than it does on Earth because of the lower gravity, but it would still have the same mass; this is because weight is a force, while mass is the property that determines the strength of this force. In Newtonian physics, mass can be generalized as the amount of matter in an object. However, at very high speeds, special relativity postulates that energy is an additional source of mass; thus, any body having mass has an equivalent amount of energy. In addition, matter is a loosely defined term in science. There are several distinct phenomena which can be used to measure mass. Active gravitational mass measures the gravitational force exerted by an object; passive gravitational mass measures the gravitational force exerted on an object in a known gravitational field. Inertial mass determines an object's acceleration in the presence of an applied force: according to Newton's second law of motion, if a body of fixed mass m is subjected to a single force F, its acceleration a is given by F/m. A body's mass also determines the degree to which it generates or is affected by a gravitational field; this is sometimes referred to as gravitational mass. The standard International System of Units unit of mass is the kilogram. The kilogram is 1000 grams, first defined in 1795 as the mass of one cubic decimetre of water at the melting point of ice. Then in 1889, the kilogram was redefined as the mass of the prototype kilogram. As of January 2013, there were proposals for redefining the kilogram yet again.
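The distinction between mass and weight above can be sketched directly: the mass is the same everywhere, while the weight W = mg depends on the local gravity. The gravity values below are approximate surface figures, and the names are illustrative.

```python
EARTH_GRAVITY = 9.81  # m/s^2, approximate surface gravity of Earth
MOON_GRAVITY = 1.62   # m/s^2, approximate surface gravity of the Moon

def weight(mass_kg, g):
    """Weight is the gravitational force on a mass: W = m * g (newtons)."""
    return mass_kg * g
```

A 10 kg object weighs about 98 N on Earth but only about 16 N on the Moon, even though its mass, and hence its resistance to acceleration, is unchanged.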
In this context, the mass has units of eV/c²; the electronvolt and its multiples, such as the MeV, are commonly used in particle physics. The atomic mass unit is 1/12 of the mass of a carbon-12 atom and is convenient for expressing the masses of atoms and molecules. Outside the SI system, other units of mass include the slug, an Imperial unit of mass, and the pound, a unit of both mass and force, used mainly in the United States.
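The conversions between these units and the kilogram follow directly from defined constants; the electron mass below (0.511 MeV/c²) is used as a familiar example:

```python
# Exact/CODATA constants:
EV = 1.602176634e-19    # J per electronvolt (exact)
C = 299792458.0         # m/s, speed of light (exact)
U = 1.66053906660e-27   # kg per atomic mass unit (CODATA 2018)

# A mass quoted as 0.511 MeV/c² (the electron) converted to kilograms:
m_electron = 0.511e6 * EV / C**2    # ≈ 9.11e-31 kg

# A carbon-12 atom has a mass of exactly 12 u by definition of the unit:
m_carbon12 = 12 * U                 # ≈ 1.99e-26 kg
```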
14.
Friction
–
Friction is the force resisting the relative motion of solid surfaces, fluid layers, and material elements sliding against each other. There are several types of friction. Dry friction resists relative lateral motion of two solid surfaces in contact; it is subdivided into static friction between non-moving surfaces and kinetic friction between moving surfaces. Fluid friction describes the friction between layers of a viscous fluid that are moving relative to each other. Lubricated friction is a case of fluid friction where a lubricant fluid separates two solid surfaces. Skin friction is a component of drag, the force resisting the motion of a fluid across the surface of a body. Internal friction is the force resisting motion between the elements making up a solid material while it undergoes deformation. When surfaces in contact move relative to each other, the friction between the two surfaces converts kinetic energy into thermal energy. This property can have dramatic consequences, as illustrated by the use of friction created by rubbing pieces of wood together to start a fire. Kinetic energy is converted to thermal energy whenever motion with friction occurs. Another important consequence of many types of friction can be wear, which may lead to performance degradation and/or damage to components; friction is thus a component of the science of tribology. Friction is not itself a fundamental force. Dry friction arises from a combination of adhesion, surface roughness, and surface deformation. The complexity of these interactions makes the calculation of friction from first principles impractical and necessitates the use of empirical methods for analysis. Friction is a non-conservative force: work done against friction is path dependent, and in the presence of friction some energy is always lost in the form of heat, so mechanical energy is not conserved. The Greeks, including Aristotle, Vitruvius, and Pliny the Elder, were interested in the cause and mitigation of friction.
They were aware of differences between static and kinetic friction, with Themistius stating in 350 A.D. that it is easier to further the motion of a moving body than to move a body at rest. The classic laws of sliding friction were discovered by Leonardo da Vinci in 1493, a pioneer in tribology, and these laws were rediscovered by Guillaume Amontons in 1699. Amontons explained the nature of friction in terms of surface irregularities. The understanding of friction was further developed by Charles-Augustin de Coulomb, who considered the influence of sliding velocity, temperature, and humidity. The distinction between static and dynamic friction is made in Coulomb's friction law, although this distinction was already drawn by Johann Andreas von Segner in 1758. John Leslie was equally skeptical about the role of adhesion proposed by Desaguliers; in Leslie's view, friction should be seen as a time-dependent process of flattening, pressing down asperities, which creates new obstacles in what were cavities before.
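The Amontons–Coulomb model that grew out of this work can be sketched as follows; the coefficients and forces are illustrative, not measured values:

```python
mu_static = 0.6       # static friction coefficient (assumed)
mu_kinetic = 0.4      # kinetic friction coefficient (assumed)
normal_force = 50.0   # N, load pressing the surfaces together

# Friction is proportional to the normal load and independent of the
# apparent contact area (Amontons' laws):
max_static = mu_static * normal_force    # 30 N before sliding begins
kinetic = mu_kinetic * normal_force      # 20 N once sliding

applied = 25.0                 # N, push on the block
slides = applied > max_static  # False: static friction still holds
```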
15.
String vibration
–
A vibration in a string is a wave. Resonance causes a vibrating string to produce a sound with constant frequency if the length or tension of the string is correctly adjusted; vibrating strings are the basis of string instruments such as guitars, cellos, and pianos. Let Δx be the length of a piece of string, m its mass, and μ its linear density. Suppose the horizontal component of tension in the string is a constant, T; if both angles the string makes with the horizontal are small, then the tensions on either side are equal and the net horizontal force is zero. The vertical displacement y then satisfies the wave equation, in which the coefficient of the second time derivative term is equal to v⁻²; thus the speed of propagation is v = √(T/μ). Once the speed of propagation is known, the frequency of the sound produced by the string can be calculated. The speed of propagation of a wave is equal to the wavelength λ divided by the period τ, or multiplied by the frequency f: v = λ/τ = λf. If the length of the string is L, the fundamental harmonic is the one produced by the vibration whose nodes are the two ends of the string, so L is half of the wavelength of the fundamental harmonic. Hence one obtains Mersenne's laws: f = v/(2L) = (1/(2L))√(T/μ), where T is the tension, μ is the linear density, and L is the length of the vibrating part of the string. When a vibrating string is viewed on a screen whose refresh rate is close to the string's frequency, the string appears to vibrate slowly; this is called the stroboscopic effect, and the rate at which the string seems to vibrate is the difference between the frequency of the string and the refresh rate of the screen. The same can happen with a fluorescent lamp, at a rate that is the difference between the frequency of the string and the frequency of the alternating current. In daylight and other non-oscillating light sources, this effect does not occur and the string appears still but thicker. A similar but more controllable effect can be obtained using a stroboscope; this device allows matching the frequency of the flash lamp to the frequency of vibration of the string. In a dark room, this clearly shows the waveform.
Otherwise, one can use bending or, perhaps more easily, adjust the machine heads, to obtain the same frequency as, or a multiple of, the AC frequency to achieve the same effect. For example, in the case of a guitar, the 6th string pressed at the third fret gives a G at 97.999 Hz; a slight adjustment can alter it to 100 Hz, exactly one octave above the 50 Hz alternating-current frequency used in Europe and most countries in Africa.
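Mersenne's law above can be evaluated directly; the tension and linear density below are assumed values of roughly guitar-string magnitude, not measurements:

```python
import math

T = 100.0     # N, string tension (assumed)
mu = 0.006    # kg/m, linear density (assumed)
L = 0.648     # m, vibrating length (typical guitar scale)

v = math.sqrt(T / mu)    # speed of propagation on the string
f = v / (2 * L)          # fundamental frequency, f = (1/2L)*sqrt(T/mu)

# Halving the vibrating length doubles the frequency: one octave up.
f_half = v / (2 * (L / 2))
```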
16.
Frequency
–
Frequency is the number of occurrences of a repeating event per unit time. It is also referred to as temporal frequency, which emphasizes the contrast to spatial frequency. The period is the duration of time of one cycle in a repeating event; for example, if a newborn baby's heart beats at a frequency of 120 times a minute, its period—the time interval between beats—is half a second. Frequency is an important parameter used in science and engineering to specify the rate of oscillatory and vibratory phenomena, such as mechanical vibrations, audio signals, and radio waves. For cyclical processes, such as rotation, oscillations, or waves, in physics and engineering disciplines such as optics, acoustics, and radio, frequency is usually denoted by a Latin letter f or by the Greek letter ν (nu). For a simple periodic motion, the relation between the frequency and the period T is given by f = 1/T. The SI unit of frequency is the hertz, named after the German physicist Heinrich Hertz; a previous name for this unit was cycles per second. The SI unit for period is the second. A traditional unit of measure used with rotating mechanical devices is revolutions per minute, abbreviated r/min or rpm. As a matter of convenience, longer and slower waves, such as ocean surface waves, tend to be described by wave period, while short and fast waves, like audio and radio, are usually described by their frequency instead of period. Spatial frequency is analogous to temporal frequency, but the time axis is replaced by one or more spatial displacement axes: for a waveform y = sin θ, the spatial rate of phase change dθ/dx = k is the wavenumber, and in the case of more than one spatial dimension, wavenumber is a vector quantity. For periodic waves in nondispersive media, frequency has an inverse relationship to the wavelength. Even in dispersive media, the frequency f of a sinusoidal wave is equal to the phase velocity v of the wave divided by the wavelength λ of the wave. In the special case of electromagnetic waves moving through a vacuum, v = c, where c is the speed of light in a vacuum, and this expression becomes f = c/λ.
When waves from a monochromatic source travel from one medium to another, their frequency remains the same—only their wavelength and speed change. Frequency can be measured by counting the number of occurrences of a repeating event within a timing interval and dividing by its length; for example, if 71 events occur within 15 seconds, the frequency is 71/15 ≈ 4.73 Hz. This counting method introduces a random error into the count of between zero and one count, so on average half a count. This is called gating error and causes an average error in the calculated frequency of Δf = 1/(2Tm), or a fractional error of Δf/f = 1/(2fTm), where Tm is the timing interval. This error decreases with frequency, so it is a problem at low frequencies where the number of counts N is small. An older method of measuring the frequency of rotating or vibrating objects is to use a stroboscope.
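The counting method and its gating error can be sketched numerically, using the 71-events-in-15-seconds example from the text:

```python
counts = 71       # events observed
Tm = 15.0         # s, timing interval

f = counts / Tm           # measured frequency, ≈ 4.73 Hz
df = 1 / (2 * Tm)         # average gating error, Δf = 1/(2·Tm)
fractional = df / f       # Δf/f = 1/(2·f·Tm): worse at low frequencies
```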
17.
Transverse wave
–
A transverse wave is a moving wave that consists of oscillations occurring perpendicular to the direction of energy transfer. If a transverse wave is moving in the positive x-direction, its oscillations are in the up-and-down directions that lie in the y–z plane. Light is an example of a transverse wave. With regard to transverse waves in matter, the displacement of the medium is perpendicular to the direction of propagation of the wave; a ripple in a pond and a wave on a string are easily visualized as transverse waves. Transverse waves are waves that oscillate perpendicularly to the direction of propagation. If you anchor one end of a ribbon or string and hold the other end in your hand, you can create transverse waves by moving your hand up and down. Notice, though, that you can also launch waves by moving your hand side-to-side: there are two independent directions in which wave motion can occur, in this case the y and z directions mentioned above. Continuing with the string example, if you carefully move your hand in a clockwise circle, you will launch waves in the form of a left-handed helix as they propagate away; similarly, moving your hand in a counter-clockwise circle launches a right-handed helix. These phenomena of simultaneous motion in two directions go beyond the kinds of waves you can create on the surface of water; in general, a wave on a string can be two-dimensional. Two-dimensional transverse waves exhibit a phenomenon called polarization. A wave produced by moving your hand in a line, up and down for instance, is a linearly polarized wave, a special case. A wave produced by moving your hand in a circle is a circularly polarized wave. If your motion is not strictly in a line or a circle, your hand will describe an ellipse and the wave will be elliptically polarized. Electromagnetic waves behave in this same way, although it is slightly harder to see: electromagnetic waves are also two-dimensional transverse waves. Ray theory does not describe phenomena such as interference and diffraction, which require wave theory.
You can think of light in terms of rays: in optics, a light ray is a line or curve that is perpendicular to the light's wavefronts. Light rays bend at the interface between two media and may be curved in a medium in which the refractive index changes. Geometric optics describes how rays propagate through an optical system. In a light wave, each of the two fields, the electric and the magnetic, exhibits two-dimensional transverse wave behavior; a linearly polarized light wave and a transverse plane wave of the kind that could occur on a water surface are both examples of linear polarization.
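The polarization states described above can be sampled numerically: a linear polarization oscillates along one transverse axis, while a circular polarization keeps a constant displacement magnitude as it rotates (the angular frequency here is an arbitrary choice):

```python
import math

omega = 2 * math.pi                   # rad/s, assumed angular frequency
ts = [i / 100 for i in range(100)]    # one period of samples

# (y, z) displacement of a single point on the medium:
linear = [(math.cos(omega * t), 0.0) for t in ts]                    # y only
circular = [(math.cos(omega * t), math.sin(omega * t)) for t in ts]  # y and z

# For circular polarization, |displacement| is constant in time:
radii = [math.hypot(y, z) for y, z in circular]
```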
18.
Eigenvalues and eigenvectors
–
In linear algebra, an eigenvector or characteristic vector of a linear transformation is a non-zero vector whose direction does not change when that linear transformation is applied to it. This condition can be written as the equation T(v) = λv. There is a correspondence between n-by-n square matrices and linear transformations from an n-dimensional vector space to itself; for this reason, it is equivalent to define eigenvalues and eigenvectors using either the language of matrices or the language of linear transformations. Geometrically, an eigenvector corresponding to a real eigenvalue points in a direction that is stretched by the transformation; if the eigenvalue is negative, the direction is reversed. Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German word eigen for proper, inherent, own, individual, special, specific, peculiar, or characteristic. In essence, an eigenvector v of a linear transformation T is a non-zero vector that does not change direction when T is applied to it: applying T to the eigenvector only scales the eigenvector by the scalar value λ. This condition can be written as the equation T(v) = λv, referred to as the eigenvalue equation or eigenequation. In general, λ may be any scalar; for example, λ may be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero or complex. The Mona Lisa example provides a simple illustration: each point on the painting can be represented as a vector pointing from the center of the painting to that point. The linear transformation in this example is called a shear mapping; the vectors pointing to each point in the original image are tilted right or left and made longer or shorter by the transformation.
Notice that points along the horizontal axis do not move at all when this transformation is applied; therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction. Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either. Linear transformations can take different forms, mapping vectors in a variety of vector spaces; alternatively, the transformation could take the form of an n-by-n matrix. For a matrix, eigenvalues and eigenvectors can be used to decompose the matrix. The set of all eigenvectors of T corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace or characteristic space of T. If the set of eigenvectors of T forms a basis of the domain of T, then T can be represented by a diagonal matrix in that basis. Eigenvalues are often introduced in the context of linear algebra or matrix theory; historically, however, they arose in the study of quadratic forms. In the 18th century Euler studied the rotational motion of a rigid body and discovered the importance of the principal axes.
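The shear-mapping example can be checked with a few lines of arithmetic; the shear factor 0.5 is an illustrative choice:

```python
shear = [[1.0, 0.5],
         [0.0, 1.0]]    # 2x2 shear matrix: tilts vertical lines sideways

def apply(m, v):
    """Apply a 2x2 matrix m to a 2-vector v."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

horizontal = [1.0, 0.0]              # points along the horizontal axis
image = apply(shear, horizontal)     # unchanged: T(v) = 1·v
# So [1, 0] is an eigenvector with eigenvalue 1, while e.g. [0, 1]
# is tilted to [0.5, 1] and is not an eigenvector.
tilted = apply(shear, [0.0, 1.0])
```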
19.
Scale of harmonics
–
The scale of harmonics is a musical scale based on the noded positions of the natural harmonics existing on a string. This musical scale is present on the guqin, regarded as one of the first string instruments with a musical scale, and most fret positions appearing on non-Western string instruments are equal to positions of this scale. Unexpectedly, these positions are actually the corresponding undertones of the overtones from the harmonic series. The distance from the nut to a fret is a smaller number than the distance from the fret to the bridge. On the guqin, the left end of the dotted scale is a mirror image of the right end. The instrument is played with flageolet tones as well as by pressing the strings onto the wood. The flageolets appear at the harmonic positions of the overtone series, so these positions are marked as the musical scale of this instrument; the flageolet positions also represent the harmonic consonant relation of the string part with the open string. The guqin has one anomaly in its scale: it represents the first six harmonics and the eighth harmonic, while the seventh harmonic is left out. This tone is nevertheless consonant in relation to the open string, though it has a lesser consonant relation to all other harmonic positions. A Vietnamese monochord, called the Đàn bầu, also functions with the scale of harmonics; on this instrument only the right half of the scale is present, up to the limit of the first seven overtones, with dots on the string lengths 1/2, 1/3, 1/4, 1/5, and 1/6. Harry Partch's otonality tone selection, from his utonality and otonality concept, consists of the complement pitches of the overtones: for instance, the frequency ratio 5:4 is equal to 4/5 of the length, and 4/5 is the complement of 1/5. Eivind Groven used the seljefløyte as the basis for his research; the flute uses only the upper harmonic scale. The scale is also present on the Moodswinger.
Although the Moodswinger functions quite differently from a guqin, the scale occurs on this instrument even though it is not played in a just intonation tuning. See Harry Partch, Genesis of a Music: An Account of a Creative Work, Its Roots and Its Fulfillments, on otonality and utonality.
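The noded positions and their frequency ratios can be listed programmatically; the 100 Hz open-string fundamental is an arbitrary reference:

```python
from fractions import Fraction

# Dotted node positions as fractions of the string length from one end,
# as on the Đàn bầu: 1/2, 1/3, 1/4, 1/5, 1/6.
positions = [Fraction(1, n) for n in range(2, 7)]

# Harmonic n sounds at n times the open-string fundamental:
open_f = 100.0  # Hz, assumed fundamental
harmonic_freqs = [n * open_f for n in range(2, 7)]

# Partch-style complement: the ratio 5:4 uses 4/5 of the length,
# the complement of the 1/5 node position.
complement = 1 - Fraction(1, 5)   # 4/5
```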
20.
Stringed instrument
–
String instruments, stringed instruments, or chordophones are musical instruments that produce sound from vibrating strings when the performer plays or sounds the strings in some manner. Musicians play some string instruments by plucking the strings with their fingers or a plectrum, and others by hitting the strings with a wooden hammer or by rubbing the strings with a bow. In some keyboard instruments, such as the harpsichord or piano, a key mechanism plucks or strikes the strings. With bowed instruments, the player rubs the strings with a horsehair bow, causing them to vibrate; with a hurdy-gurdy, the musician operates a wheel that rubs the strings. Bowed instruments include the string instruments of the Classical music orchestra, all of which can also be plucked with the fingers. Some types of string instrument are mainly plucked, such as the harp. In the Hornbostel-Sachs scheme of musical instrument classification, used in organology, string instruments are called chordophones; other examples include the sitar, rebab, banjo, mandolin, and ukulele. In most string instruments, the vibrations are transmitted to the body of the instrument, which often incorporates some sort of hollow or enclosed area. The body of the instrument also vibrates, along with the air inside it, and the vibration of the body and the enclosed hollow or chamber makes the vibration of the string more audible to the performer and audience. The body of most string instruments is hollow; some, however—such as the electric guitar and other instruments that rely on electronic amplification—may have a solid wood body. Archaeological digs have identified some of the earliest stringed instruments in Ancient Mesopotamian sites, like the lyres of Ur. The development of lyre instruments required the technology to create a tuning mechanism to tighten and loosen the string tension.
During the medieval era, instrument development varied from country to country. Middle Eastern rebecs represented breakthroughs in terms of shape and strings, with a half-pear shape using three strings. Early versions of the violin and fiddle, by comparison, emerged in Europe through instruments such as the gittern, a four-stringed precursor to the guitar; these instruments typically used catgut and other materials, including silk, for their strings. String instrument design was refined during the Renaissance and into the Baroque period of musical history: violins and guitars became more consistent in design and were roughly similar to what we use in the 2000s. By the 19th century, the guitar became more associated with six-string models. In big bands of the 1920s, the guitar played backing chords. The development of the electric guitar provided guitarists with an instrument that was built to connect to guitar amplifiers, which contained a power amplifier and a loudspeaker. Electric guitars have magnetic pickups, volume control knobs, and an output jack. In the 1960s, larger, more powerful guitar amplifiers were developed, called stacks.
21.
Structural load
–
Structural loads or actions are forces, deformations, or accelerations applied to a structure or its components. Loads cause stresses, deformations, and displacements in structures; assessment of their effects is carried out by the methods of structural analysis. Excess load, or overloading, may cause structural failure, so this possibility should be either considered in the design or strictly controlled. Mechanical structures, such as aircraft, satellites, rockets, space stations, and ships, are also subject to structural loads. Engineers often evaluate structural loads based upon published regulations, contracts, or specifications, and accepted technical standards are used for testing and inspection. Dead loads are static forces that are relatively constant for an extended time; they can be in tension or compression, and the term can refer to a laboratory test method or to the normal usage of a material or structure. Live loads are usually unstable or moving loads; these dynamic loads may involve considerations such as impact, momentum, vibration, the slosh dynamics of fluids, etc. An impact load is one whose time of application on a material is less than one-third of the natural period of vibration of that material. Cyclic loads on a structure can lead to fatigue, i.e. cumulative damage; these loads can be repeated loadings on a structure or can be due to vibration. Structural loads are an important consideration in the design of buildings. Building codes require that structures be designed and built to safely resist all actions that they are likely to face during their service life; minimum loads or actions are specified in these building codes for types of structures, geographic locations, usage, and materials of construction. Structural loads are split into categories by their originating cause. In terms of the actual load on a structure, there is no difference between dead or live loading, but the split occurs for use in safety calculations or ease of analysis on complex models.
To meet the requirement that design strength be higher than maximum loads, building codes prescribe load factors for structural design. These load factors are, roughly, a ratio of the theoretical design strength to the maximum load expected in service. The dead load includes loads that are relatively constant over time, including the weight of the structure itself; the roof is also a dead load. Dead loads are also known as permanent or static loads. Building materials are not dead loads until constructed in their permanent position. IS 875-1987 gives unit weights of building materials, parts, and components. Live loads, or imposed loads, are temporary and of short duration; these dynamic loads may involve considerations such as impact, momentum, vibration, slosh dynamics of fluids, and material fatigue.
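A factored-load check of the kind these codes prescribe might look like the sketch below; the 1.2 and 1.6 factors follow the common ASCE 7 combination 1.2D + 1.6L and are an assumption here, since actual codes and factors vary by jurisdiction:

```python
dead_load = 40.0   # kN, permanent load (self-weight, roof, finishes)
live_load = 25.0   # kN, imposed load (occupants, furniture)

# Factored demand that the design strength must exceed
# (assumed factors from the ASCE 7 combination 1.2D + 1.6L):
design_load = 1.2 * dead_load + 1.6 * live_load   # 88 kN
```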
22.
Stress (mechanics)
–
For example, when a solid vertical bar is supporting a weight, each particle in the bar pushes on the particles immediately below it. When a liquid is in a container under pressure, each particle gets pushed against by all the surrounding particles, and the container walls and the pressure-inducing surface push against them in reaction. These macroscopic forces are actually the net result of a very large number of intermolecular forces and collisions between the particles in those molecules. Stress inside a material may arise by various mechanisms, such as stress applied by external forces to the bulk material or to its surface. Any strain (deformation) of a material generates an internal elastic stress, analogous to the reaction force of a spring. In liquids and gases, only deformations that change the volume generate persistent elastic stress; however, if the deformation is gradually changing with time, even in fluids there will usually be some viscous stress opposing that change. Elastic and viscous stresses are usually combined under the name mechanical stress. Significant stress may exist even when deformation is negligible or non-existent, and stress may exist in the absence of external forces; such built-in stress is important, for example, in prestressed concrete and tempered glass. Stress may also be imposed on a material without the application of net forces, for example by changes in temperature or chemical composition. Stress that exceeds certain strength limits of the material will result in permanent deformation or even change its crystal structure and chemical composition. In some branches of engineering, the term stress is occasionally used in a looser sense as a synonym of internal force; for example, in the analysis of trusses, it may refer to the total traction or compression force acting on a beam. Since ancient times humans have been consciously aware of stress inside materials.
Until the 17th century the understanding of stress was largely intuitive and empirical. With the mathematical tools developed afterwards, Augustin-Louis Cauchy was able to give the first rigorous and general mathematical model for stress in a homogeneous medium. Cauchy observed that the force across an imaginary surface is a linear function of its normal vector. The understanding of stress in liquids started with Newton, who provided a formula for friction forces in parallel laminar flow. Stress is defined as the force across a small boundary per unit area of that boundary; following the basic premises of continuum mechanics, stress is a macroscopic concept. In a fluid at rest the force is perpendicular to the surface; in a solid, or in a flow of viscous liquid, the force F may not be perpendicular to S. Hence the stress across a surface must be regarded as a vector quantity, not a scalar; moreover, its direction and magnitude depend on the orientation of S. Thus the stress state of the material must be described by a tensor, called the stress tensor; with respect to any chosen coordinate system, the Cauchy stress tensor can be represented as a symmetric 3×3 matrix of real numbers.
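The force-per-area definition is easy to sketch for the vertical-bar example that opens this entry (dimensions and load are illustrative):

```python
F = 5000.0     # N, axial force carried by the bar
A = 0.0004     # m², cross-section of a 20 mm x 20 mm bar

sigma = F / A  # normal stress: 12.5 MPa

# Uniaxial tension along x, written as the symmetric 3x3 stress tensor:
stress = [[sigma, 0.0, 0.0],
          [0.0,   0.0, 0.0],
          [0.0,   0.0, 0.0]]
```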
23.
Tensor
–
In mathematics, tensors are geometric objects that describe linear relations between geometric vectors, scalars, and other tensors. Elementary examples of such relations include the dot product, the cross product, and linear maps. Geometric vectors, often used in physics and engineering applications, and scalars themselves are also tensors. Given a coordinate basis or fixed frame of reference, a tensor can be represented as an organized multidimensional array of numerical values. The order of a tensor is the dimensionality of the array needed to represent it; for example, a linear map is represented by a matrix in a basis and therefore is a 2nd-order tensor, a vector is represented as a 1-dimensional array in a basis and is a 1st-order tensor, and scalars are single numbers and are thus 0th-order tensors. Because they express a relationship between vectors, tensors themselves must be independent of a particular choice of coordinate system; the precise form of the transformation law under a change of coordinates determines the type of the tensor. The tensor type is a pair of natural numbers (n, m), where n is the number of contravariant indices and m is the number of covariant indices; the total order of a tensor is the sum of these two numbers. The tensor concept enabled an alternative formulation of the differential geometry of a manifold in the form of the Riemann curvature tensor. There are several approaches to defining tensors; although seemingly different, the approaches just describe the same geometric concept using different languages and at different levels of abstraction. For example, a linear operator is represented in a basis as a two-dimensional square n × n array. The numbers in the array are known as the scalar components of the tensor or simply its components; they are denoted by indices giving their position in the array, as subscripts and superscripts. For example, the components of an order-2 tensor T could be denoted Tij; whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below.
The total number of indices required to identify each component uniquely is equal to the dimension of the array. However, the term rank generally has another meaning in the context of matrices. Just as the components of a vector change when we change the basis of the vector space, the components of a tensor also change under such a transformation. Each tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis.
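The change-of-basis law for a 2nd-order tensor, T' = R T Rᵀ, can be verified in two dimensions; the diagonal tensor and the 90° rotation are illustrative choices:

```python
import math

theta = math.pi / 2                       # rotate the basis by 90°
R = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

T = [[2.0, 0.0],                          # a 2nd-order tensor with
     [0.0, 1.0]]                          # distinct diagonal components

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(m):
    return [[m[j][i] for j in range(2)] for i in range(2)]

T_new = matmul(matmul(R, T), transpose(R))   # components in the new basis
# The 90° rotation swaps the diagonal components: T_new ≈ [[1, 0], [0, 2]].
```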
24.
Velocity
–
The velocity of an object is the rate of change of its position with respect to a frame of reference, and is a function of time. Velocity is equivalent to a specification of an object's speed and direction of motion. Velocity is an important concept in kinematics, the branch of classical mechanics that describes the motion of bodies. Velocity is a vector quantity: both magnitude and direction are needed to define it. The scalar absolute value of velocity is called speed, a coherent derived unit whose quantity is measured in the SI system as metres per second (m/s or m⋅s⁻¹). For example, "5 metres per second" is a scalar, whereas "5 metres per second east" is a vector. If there is a change in speed, direction, or both, then the object has a changing velocity and is said to be undergoing an acceleration. To have a constant velocity, an object must have a constant speed in a constant direction; constant direction constrains the object to motion in a straight path, thus a constant velocity means motion in a straight line at a constant speed. For example, a car moving at a constant 20 kilometres per hour in a circular path has a constant speed but not a constant velocity, because its direction changes; hence, the car is considered to be undergoing an acceleration. Speed describes only how fast an object is moving, whereas velocity gives both how fast and in what direction the object is moving. If a car is said to travel at 60 km/h, its speed has been specified; however, if the car is said to move at 60 km/h to the north, its velocity has now been specified. The big difference can be noticed when we consider movement around a circle: the average velocity is calculated by considering only the displacement between the starting and end points, while the average speed considers only the total distance traveled. Velocity is defined as the rate of change of position with respect to time; average velocity can be calculated as v̄ = Δx/Δt. The average velocity is always less than or equal to the average speed of an object.
This can be seen by realizing that while distance is always strictly increasing, displacement can increase or decrease in magnitude as well as change direction. In the one-dimensional case, the area under a velocity vs. time graph is the displacement, x: in calculus terms, the integral of the velocity function v(t) is the displacement function x(t). In the figure, this corresponds to the area under the curve labeled s (s being an alternative notation for displacement). The derivative of the position with respect to time gives the change in position divided by the change in time, which is velocity. Although velocity is defined as the rate of change of position, it is often common to start with an expression for an object's acceleration. As seen by the three green tangent lines in the figure, an object's instantaneous acceleration at a point in time is the slope of the line tangent to the curve of a velocity vs. time graph at that point. In other words, acceleration is defined as the derivative of velocity with respect to time; from there, we can obtain an expression for velocity as the area under an acceleration vs. time graph.
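The area-under-the-graph relations can be checked with a simple Euler integration of a constant acceleration (the 2 m/s² value is arbitrary):

```python
a = 2.0               # m/s², constant acceleration (assumed)
dt = 0.001            # s, integration step
v, x = 0.0, 0.0       # start from rest at the origin

for _ in range(3000):     # integrate over 3 seconds
    v += a * dt           # accumulating area under the a-t graph
    x += v * dt           # accumulating area under the v-t graph

# Closed form: v = a*t = 6 m/s and x = a*t**2/2 = 9 m; the Euler sums
# approach these values as dt shrinks.
```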
25.
Weight
–
In science and engineering, the weight of an object is usually taken to be the force on the object due to gravity. Weight is a vector whose magnitude, often denoted by an italic letter W, is the product of the mass m of the object and the magnitude of the local gravitational acceleration g, i.e. W = mg. The unit of measurement for weight is that of force, which in the International System of Units is the newton. For example, an object with a mass of one kilogram has a weight of about 9.8 newtons on the surface of the Earth. In this sense of weight, a body can be weightless only if it is far away from any other mass. Although weight and mass are scientifically distinct quantities, the terms are often confused with each other in everyday use. There is also a tradition within Newtonian physics and engineering which sees weight as that which is measured when one uses scales; there the weight is a measure of the magnitude of the reaction force exerted on a body. Typically, in measuring an object's weight, the object is placed on scales at rest with respect to the earth; thus, in a state of free fall, the weight would be zero. In this second sense of weight, terrestrial objects can be weightless: ignoring air resistance, the famous apple falling from the tree, on its way to meet the ground near Isaac Newton, is weightless. Further complications in elucidating the various concepts of weight have to do with the theory of relativity, according to which gravity is modelled as a consequence of the curvature of spacetime. In the teaching community, a debate has existed for over half a century on how to define weight for students; the current situation is that multiple sets of concepts co-exist. Discussion of the concepts of heaviness and lightness dates back to the ancient Greek philosophers; these were typically viewed as inherent properties of objects. Plato described weight as the tendency of objects to seek their kin. To Aristotle, weight and levity represented the tendency to restore the natural order of the basic elements: air, earth, fire, and water.
He ascribed absolute weight to earth and absolute levity to fire. Archimedes saw weight as a quality opposed to buoyancy, with the conflict between the two determining if an object sinks or floats. The first operational definition of weight was given by Euclid, who defined weight as "the heaviness or lightness of one thing, compared to another"; operational balances had, however, been around much longer. According to Aristotle, weight was the cause of the falling motion of an object
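The relation W = mg stated above can be shown with a couple of lines of code. This is a minimal sketch; the lunar value of g used below is an assumption for illustration, not from the text.

```python
# Sketch: weight as the gravitational force W = m*g, illustrating why a
# 1 kg mass "weighs" about 9.8 N on Earth but much less on the Moon.
def weight(mass_kg, g=9.80665):
    """Return the weight in newtons of a mass (kg) under acceleration g (m/s^2)."""
    return mass_kg * g

print(weight(1.0))          # ~9.8 N on Earth (standard gravity)
print(weight(1.0, g=1.62))  # ~1.6 N on the Moon (g assumed ~1.62 m/s^2)
```

The same mass thus has a different weight in a different gravitational field, which is the distinction between mass and weight the entry describes.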
26.
Gravity of Earth
–
The gravity of Earth, which is denoted by g, refers to the acceleration that is imparted to objects due to the distribution of mass within the Earth. In SI units this acceleration is measured in metres per second squared or, equivalently, in newtons per kilogram; this quantity is sometimes referred to informally as little g. The precise strength of Earth's gravity varies depending on location; the nominal average value at the Earth's surface, known as standard gravity, is by definition 9.80665 m/s2. This quantity is denoted variously as gn, ge, g0, or gee. The weight of an object on the Earth's surface is the downwards force on that object, given by Newton's second law of motion, or F = ma. Gravitational acceleration contributes to the acceleration, but other factors, such as the rotation of the Earth, also contribute. The Earth is not spherically symmetric, but is slightly flatter at the poles while bulging at the Equator; there are consequently slight deviations in both the magnitude and direction of gravity across its surface. The net force as measured by a scale and plumb bob is called effective gravity or apparent gravity; effective gravity includes other factors that affect the net force. These factors vary and include, for example, the centrifugal force at the surface from the Earth's rotation. Effective gravity on the Earth's surface varies by around 0.7%; in large cities, it ranges from 9.766 m/s2 in Kuala Lumpur, Mexico City, and Singapore to 9.825 m/s2 in Oslo and Helsinki. The surface of the Earth is rotating, so it is not an inertial frame of reference. At latitudes nearer the Equator, the centrifugal force produced by Earth's rotation is larger than at polar latitudes. This counteracts the Earth's gravity to a small degree, up to a maximum of 0.3% at the Equator, and the same two factors influence the direction of the effective gravity. 
Gravity decreases with altitude as one rises above the Earth's surface, because greater altitude means greater distance from the Earth's centre. All other things being equal, an increase in altitude from sea level to 9,000 metres causes a weight decrease of about 0.29%. It is a misconception that astronauts in orbit are weightless because they have flown high enough to escape the Earth's gravity. In fact, at an altitude of 400 kilometres, equivalent to a typical orbit of the Space Shuttle, gravity is still nearly 90% as strong as at the surface. Weightlessness actually occurs because orbiting objects are in free-fall. The effect of ground elevation depends on the density of the ground. A person flying at 30,000 ft above sea level over mountains will feel more gravity than someone at the same elevation over open water; however, a person standing on the Earth's surface feels less gravity when the elevation is higher. The following formula approximates the Earth's gravity variation with altitude: gh = g0 (re / (re + h))^2, where gh is the gravitational acceleration at height h above sea level, re is the Earth's mean radius, and g0 is the standard gravitational acceleration
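The altitude formula gh = g0 (re / (re + h))^2 is easy to evaluate directly. A minimal sketch, using the Earth's mean radius of roughly 6,371 km:

```python
# Sketch: gravity fall-off with altitude, g_h = g0 * (r_e / (r_e + h))**2.
G0 = 9.80665        # standard gravity, m/s^2
R_E = 6_371_000.0   # Earth's mean radius, m

def gravity_at_altitude(h_m):
    """Approximate gravitational acceleration (m/s^2) at height h (m) above sea level."""
    return G0 * (R_E / (R_E + h_m)) ** 2

print(gravity_at_altitude(0))        # 9.80665 at sea level
print(gravity_at_altitude(400_000))  # ~8.7 at Space Shuttle orbital altitude
```

At 400 km the result is close to 90% of the sea-level value, which is why orbital weightlessness must be explained by free fall rather than by an absence of gravity.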
27.
Hooke's law
–
Hooke's law is a principle of physics that states that the force needed to extend or compress a spring by some distance X is proportional to that distance. That is, F = kX, where k is a constant factor characteristic of the spring: its stiffness. The law is named after the 17th-century British physicist Robert Hooke. He first stated the law in 1676 as a Latin anagram, and he published the solution of his anagram in 1678 as ut tensio, sic vis ("as the extension, so the force"). Hooke states in the 1678 work that he was aware of the law already in 1660. An elastic body or material for which this equation can be assumed is said to be linear-elastic or Hookean. Hooke's law is only a linear approximation to the real response of springs, and many materials will deviate from it well before their elastic limits are reached. On the other hand, Hooke's law is an accurate approximation for most solid bodies, as long as the forces and deformations are small enough. For this reason, Hooke's law is used extensively in all branches of science and engineering. It is also the principle behind the spring scale and the manometer. The modern theory of elasticity generalizes Hooke's law to say that the strain of an object or material is proportional to the stress applied to it. In this general form, Hooke's law makes it possible to deduce the relation between strain and stress for complex objects in terms of the properties of the materials they are made of. Consider a simple helical spring that has one end attached to some fixed object, and suppose that the spring has reached a state of equilibrium, where its length is not changing anymore. Let X be the amount by which the free end of the spring was displaced from its relaxed position. Hooke's law states that F = kX or, equivalently, X = F/k, where k is a positive real number characteristic of the spring. Moreover, the same formula holds when the spring is compressed. According to this formula, the graph of the applied force F as a function of the displacement X will be a straight line passing through the origin. 
Hooke's law for a spring is often stated under the convention that F is the restoring force exerted by the spring on whatever is pulling its free end. In that case, the equation becomes F = −kX, since the direction of the restoring force is opposite to that of the displacement
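The sign convention just described, F = −kX, can be sketched in a few lines. The stiffness value below is chosen purely for illustration.

```python
# Sketch: Hooke's law F = -k*x, using the convention that F is the restoring
# force exerted by the spring on whatever displaces its free end.
def spring_force(k, x):
    """Restoring force (N) of a spring with stiffness k (N/m) displaced by x (m)."""
    return -k * x

k = 200.0                      # stiffness, N/m (illustrative value)
print(spring_force(k, 0.05))   # about -10 N: spring pulls back against a 5 cm stretch
print(spring_force(k, -0.05))  # about +10 N: spring pushes back against compression
```

Both cases show the force opposing the displacement, which is exactly what the minus sign in F = −kX encodes.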
28.
Special relativity
–
In physics, special relativity is the generally accepted and experimentally well-confirmed physical theory regarding the relationship between space and time. In Albert Einstein's original pedagogical treatment, it is based on two postulates: the laws of physics are invariant in all inertial systems, and the speed of light in a vacuum is the same for all observers, regardless of the motion of the light source. It was originally proposed in 1905 by Albert Einstein in the paper On the Electrodynamics of Moving Bodies. As of today, special relativity is the most accurate model of motion at any speed. Even so, the Newtonian mechanics model remains useful as a simple approximation at small velocities relative to the speed of light. Not until Einstein developed general relativity, to incorporate general frames of reference, was the term "special relativity" employed; a translation that has often been used is "restricted relativity", since "special" really means "special case". The theory has replaced the notion of an absolute universal time with the notion of a time that is dependent on reference frame. Rather than an invariant time interval between two events, there is an invariant spacetime interval. A defining feature of special relativity is the replacement of the Galilean transformations of Newtonian mechanics with the Lorentz transformations. Time and space cannot be defined separately from each other; rather, space and time are interwoven into a single continuum known as spacetime. Events that occur at the same time for one observer can occur at different times for another. The theory is "special" in that it applies only in the special case where the curvature of spacetime due to gravity is negligible. In order to include gravity, Einstein formulated general relativity in 1915. Special relativity, contrary to some outdated descriptions, is capable of handling accelerations as well as accelerated frames of reference. 
In conditions of free fall, a locally Lorentz-invariant frame that abides by special relativity can be defined at sufficiently small scales, even in curved spacetime. Galileo Galilei had already postulated that there is no absolute and well-defined state of rest; Einstein extended this principle so that it accounted for the constant speed of light, a phenomenon that had been recently observed in the Michelson–Morley experiment. He also postulated that it holds for all the laws of physics. Einstein discerned two fundamental propositions that seemed to be the most assured, regardless of the exact validity of the known laws of either mechanics or electrodynamics. These propositions were the constancy of the speed of light and the independence of physical laws from the choice of inertial system. The Principle of Invariant Light Speed states that light is always propagated in empty space with a definite velocity c which is independent of the state of motion of the emitting body; that is, light in vacuum propagates with the speed c in at least one system of inertial coordinates. Following Einstein's original presentation of special relativity in 1905, many different sets of postulates have been proposed in various alternative derivations; however, the most common set of postulates remains those employed by Einstein in his original paper
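The Lorentz transformations mentioned above are governed by the Lorentz factor γ = 1/√(1 − v²/c²), which quantifies time dilation. A minimal numerical sketch:

```python
# Sketch: the Lorentz factor gamma = 1/sqrt(1 - v^2/c^2), which governs
# time dilation and length contraction in special relativity.
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def lorentz_factor(v):
    """Lorentz factor for a speed v (m/s), valid for v < c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

print(lorentz_factor(0.0))      # 1.0: no dilation at rest
print(lorentz_factor(0.6 * C))  # 1.25: a clock moving at 0.6c runs slow by that factor
```

At everyday speeds γ is indistinguishable from 1, which is why Newtonian mechanics remains an excellent approximation at small velocities relative to c.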
29.
Color confinement
–
Color confinement, often simply called confinement, is the phenomenon that color-charged particles cannot be isolated singly, and therefore cannot be directly observed. Quarks, by default, clump together to form groups, or hadrons; the two types of hadrons are the mesons and the baryons. The constituent quarks in a group cannot be separated from their parent hadron. The reasons for quark confinement are somewhat complicated; no analytic proof exists that quantum chromodynamics should be confining. The current theory is that confinement is due to the force-carrying gluons having color charge. As any two electrically charged particles separate, the electric fields between them diminish quickly, allowing electrons to become unbound from atomic nuclei. However, as a quark–antiquark pair separates, the gluon field forms a narrow tube of color field between them. This is quite different from the behavior of the field of a pair of positive and negative electric charges. Because of this behavior of the gluon field, a strong force between the quark pair acts constantly, regardless of their distance, with a force of around 10,000 newtons. As a result, when quarks are produced in accelerators, instead of seeing the individual quarks in detectors, scientists see jets of many color-neutral particles. This process is called hadronization, fragmentation, or string breaking. In a non-confining theory, the action of such a loop (a Wilson loop) is proportional to its perimeter; however, in a confining theory, the action of the loop is instead proportional to its area. Since the area will be proportional to the separation of the quark–antiquark pair, free quarks are suppressed. Mesons are allowed in such a picture, since a loop containing another loop in the opposite direction will have only a small area between the two loops. Besides QCD in four dimensions, another model which exhibits confinement is the Schwinger model. 
Compact Abelian gauge theories also exhibit confinement in 2 and 3 spacetime dimensions. Confinement has recently been found in elementary excitations of magnetic systems called spinons. Besides the quark confinement idea, there is a possibility that the color charge of quarks gets fully screened by the gluonic color surrounding the quark. Exact solutions of SU(3) classical Yang–Mills theory which provide full screening of the color charge of a quark have been found. However, such classical solutions do not take into account non-trivial properties of the QCD vacuum; therefore, the significance of such full gluonic screening solutions for a separated quark is not clear
30.
Quark
–
A quark is an elementary particle and a fundamental constituent of matter. Quarks combine to form composite particles called hadrons, the most stable of which are protons and neutrons. Due to a phenomenon known as color confinement, quarks are never directly observed or found in isolation; they can be found only within hadrons, such as baryons and mesons. For this reason, much of what is known about quarks has been drawn from observations of the hadrons themselves. Quarks have various intrinsic properties, including electric charge, mass, color charge, and spin. There are six types of quarks, known as flavors: up, down, strange, charm, bottom, and top. Up and down quarks have the lowest masses of all quarks. The heavier quarks rapidly change into up and down quarks through a process of particle decay: the transformation from a higher mass state to a lower mass state. Because of this, up and down quarks are generally stable and the most common in the universe, whereas strange, charm, bottom, and top quarks can only be produced in high-energy collisions. For every quark flavor there is a corresponding type of antiparticle, known as an antiquark. The quark model was proposed by physicists Murray Gell-Mann and George Zweig in 1964. Accelerator experiments have provided evidence for all six flavors; the top quark was the last to be discovered, at Fermilab in 1995. The Standard Model is the theoretical framework describing all the known elementary particles. This model contains six flavors of quarks, named up, down, strange, charm, bottom, and top. Antiparticles of quarks are called antiquarks, and are denoted by a bar over the symbol for the corresponding quark, such as u̅ for an up antiquark. As with antimatter in general, antiquarks have the same mass, mean lifetime, and spin as their respective quarks. 
Quarks are spin-1⁄2 particles, implying that they are fermions according to the spin–statistics theorem. They are subject to the Pauli exclusion principle, which states that no two identical fermions can simultaneously occupy the same quantum state. This is in contrast to bosons, any number of which can be in the same state. Unlike leptons, quarks possess color charge, which causes them to engage in the strong interaction. The resulting attraction between different quarks causes the formation of composite particles known as hadrons. There are two families of hadrons: baryons, with three valence quarks, and mesons, with a valence quark and an antiquark. The most common baryons are the proton and the neutron, the building blocks of the atomic nucleus. A great number of hadrons are known, most of them differentiated by their quark content. The existence of exotic hadrons with more valence quarks, such as tetraquarks and pentaquarks, has been conjectured but not proven; however, on 13 July 2015, the LHCb collaboration at CERN reported results consistent with pentaquark states. Elementary fermions are grouped into three generations, each comprising two leptons and two quarks
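The quark charge assignments (+2/3 e for up-type, −1/3 e for down-type) let us recover hadron charges by simple arithmetic, as a minimal sketch:

```python
# Sketch: hadron electric charge as the sum of its valence-quark charges,
# using the Standard Model assignments (+2/3 e for u,c,t; -1/3 e for d,s,b).
from fractions import Fraction

CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3),
          "c": Fraction(2, 3), "s": Fraction(-1, 3),
          "t": Fraction(2, 3), "b": Fraction(-1, 3)}

def hadron_charge(quarks):
    """Total charge (in units of e) of a hadron given its valence quark content."""
    return sum(CHARGE[q] for q in quarks)

print(hadron_charge("uud"))  # 1: the proton (two up, one down)
print(hadron_charge("udd"))  # 0: the neutron (one up, two down)
```

Using exact fractions avoids floating-point noise and makes the +1 and 0 results come out exactly.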
31.
String theory
–
In physics, string theory is a theoretical framework in which the point-like particles of particle physics are replaced by one-dimensional objects called strings. It describes how these strings propagate through space and interact with each other. On distance scales larger than the string scale, a string looks just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string. In string theory, one of the vibrational states of the string corresponds to the graviton; thus string theory is a theory of quantum gravity. String theory is a broad and varied subject that attempts to address a number of deep questions of fundamental physics. Despite much work on these problems, it is not known to what extent string theory describes the real world or how much freedom the theory allows to choose the details. String theory was first studied in the late 1960s as a theory of the strong nuclear force. Subsequently, it was realized that the very properties that made string theory unsuitable as a theory of nuclear physics made it a promising candidate for a quantum theory of gravity. The earliest version of string theory, bosonic string theory, incorporated only the class of particles known as bosons. It later developed into superstring theory, which posits a connection called supersymmetry between bosons and the class of particles called fermions. In late 1997, theorists discovered an important relationship called the AdS/CFT correspondence. One of the challenges of string theory is that the full theory does not have a satisfactory definition in all circumstances. Another issue is that the theory is thought to describe an enormous landscape of possible universes, and these issues have led some in the community to criticize these approaches to physics and to question the value of continued research on string theory unification. 
In the twentieth century, two theoretical frameworks emerged for formulating the laws of physics. One of these frameworks was Albert Einstein's general theory of relativity, a theory that explains the force of gravity and the structure of space and time. The other was quantum mechanics, a completely different formalism for describing physical phenomena using probability. In spite of their successes, there are still many problems that remain to be solved. One of the deepest problems in modern physics is the problem of quantum gravity: the general theory of relativity is formulated within the framework of classical physics, whereas the other fundamental forces are described within the framework of quantum mechanics. In addition to the problem of developing a consistent theory of quantum gravity, there are many other fundamental problems in the physics of atomic nuclei, black holes, and the early universe. String theory is a framework that attempts to address these questions
32.
Energy
–
In physics, energy is the property that must be transferred to an object in order to perform work on, or to heat, the object; it can be converted in form, but not created or destroyed. The SI unit of energy is the joule, which is the energy transferred to an object by the mechanical work of moving it a distance of 1 metre against a force of 1 newton. Mass and energy are closely related; for example, with a sensitive enough scale, one could measure an increase in mass after heating an object. Living organisms require available energy to stay alive, such as the energy humans get from food. Civilisation gets the energy it needs from energy resources such as fossil fuels and nuclear fuel. The processes of Earth's climate and ecosystem are driven by the radiant energy Earth receives from the Sun. The total energy of a system can be subdivided and classified in various ways. It may be convenient, for example, to distinguish gravitational energy, thermal energy, several kinds of potential energy, and electric energy. Many of these classes overlap; for instance, thermal energy usually consists partly of kinetic and partly of potential energy. Some types of energy are a mix of both potential and kinetic energy; an example is mechanical energy, which is the sum of kinetic and potential energy in a system. Whenever physical scientists discover that a certain phenomenon appears to violate the law of energy conservation, new forms are typically added that account for the discrepancy. Heat and work are special cases in that they are not properties of systems; in general, we cannot measure how much heat or work is present in an object, but rather only how much energy is transferred among objects in certain ways during the occurrence of a given process. Heat and work are measured as positive or negative depending on which side of the transfer we view them from. The distinctions between different kinds of energy are not always clear-cut. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness. 
The modern analog of this property, kinetic energy, differs from vis viva only by a factor of two. In 1807, Thomas Young was possibly the first to use the term "energy" instead of vis viva in its modern sense, and Gustave-Gaspard Coriolis described kinetic energy in 1829 in its modern sense. The law of conservation of energy was also first postulated in the early 19th century, and applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity, such as momentum. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat. These developments led to the theory of conservation of energy, formalized largely by William Thomson as the field of thermodynamics
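Conservation of mechanical energy, the sum of kinetic and potential energy mentioned above, can be checked numerically for a freely falling object. This is a minimal sketch with illustrative numbers, neglecting air resistance.

```python
# Sketch: conservation of mechanical energy for an object in free fall.
# Kinetic (1/2*m*v^2) plus potential (m*g*h) energy stays constant.
import math

G = 9.80665  # standard gravity, m/s^2

def mechanical_energy(mass, height, speed):
    """Total mechanical energy (J) of a mass (kg) at a height (m) with a speed (m/s)."""
    return 0.5 * mass * speed**2 + mass * G * height

# A 2 kg object released from 10 m; after falling to 5 m its speed is sqrt(2*g*5)
v = math.sqrt(2 * G * 5.0)
print(mechanical_energy(2.0, 10.0, 0.0))  # total energy at release
print(mechanical_energy(2.0, 5.0, v))     # the same total halfway down
```

The two printed totals agree: potential energy lost in the fall reappears exactly as kinetic energy.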
33.
Continuum mechanics
–
Continuum mechanics is a branch of mechanics that deals with the analysis of the kinematics and the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles. The French mathematician Augustin-Louis Cauchy was the first to formulate such models in the 19th century, and research in the area continues today. Modeling an object as a continuum assumes that the substance of the object completely fills the space it occupies. Continuum mechanics deals with physical properties of solids and fluids which are independent of any particular coordinate system in which they are observed. These physical properties are represented by tensors, which are mathematical objects that have the required property of being independent of coordinate system. These tensors can be expressed in coordinate systems for computational convenience. Materials, such as solids, liquids and gases, are composed of molecules separated by space, and on a microscopic scale materials have cracks and discontinuities. A continuum, by contrast, is a body that can be continually sub-divided into infinitesimal elements with properties being those of the bulk material. More specifically, the continuum hypothesis/assumption hinges on the concept of a representative volume element (RVE). This condition provides a link between an experimentalist's and a theoretician's viewpoint on constitutive equations, as well as a way of spatial and statistical averaging of the microstructure; the latter then provides a basis for stochastic finite elements. The levels of SVE and RVE link continuum mechanics to statistical mechanics. The RVE may be assessed only in a limited way via experimental testing: when the constitutive response becomes spatially homogeneous. Specifically for fluids, the Knudsen number is used to assess to what extent the approximation of continuity can be made. As a concrete example, consider car traffic on a highway, with just one lane for simplicity. 
Somewhat surprisingly, and in a tribute to its effectiveness, continuum mechanics effectively models the movement of cars via a partial differential equation for the density of cars. The familiarity of this situation empowers us to understand a little of the continuum–discrete dichotomy underlying continuum modelling in general. To start modelling, define: x measures distance along the highway; t is time; ρ(x,t) is the density of cars on the highway; and u(x,t) is their speed. Cars do not appear and disappear. Consider any group of cars, from the car at the back of the group located at x = a to the particular car at the front located at x = b. The total number of cars in this group is N = ∫ab ρ dx. Since cars are conserved, dN/dt = 0; differentiating the integral, whose endpoints move with the cars, gives ∫ab [∂ρ/∂t + ∂(ρu)/∂x] dx = 0. The only way an integral can be zero for all intervals is if the integrand is zero for all x. Consequently, conservation derives the first-order nonlinear conservation PDE ∂ρ/∂t + ∂(ρu)/∂x = 0 for all positions on the highway. This conservation PDE applies not only to car traffic but also to fluids, solids, crowds, animals, plants, bushfires and financial traders. This PDE is one equation with two unknowns, so another equation is needed to form a well-posed problem
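The conservation PDE ∂ρ/∂t + ∂(ρu)/∂x = 0 can be sketched numerically. This toy example, which is my own illustration rather than a method from the text, assumes a constant car speed u and uses a simple upwind finite-difference scheme on a periodic highway; the key check is that the total number of cars is conserved by the update.

```python
# Sketch: discretizing d(rho)/dt + d(rho*u)/dx = 0 with an upwind scheme,
# assuming constant car speed U on a periodic (ring) highway.
import numpy as np

NX, U, DX, DT = 100, 1.0, 1.0, 0.5   # grid points, speed, spacings (CFL = U*DT/DX = 0.5)
rho = np.zeros(NX)
rho[40:60] = 1.0                     # an initial "platoon" of cars

total_before = rho.sum() * DX        # total number of cars, N = integral of rho dx
for _ in range(50):
    # upwind update: flux rho*U enters each cell from the cell behind it
    rho = rho - U * DT / DX * (rho - np.roll(rho, 1))
total_after = rho.sum() * DX

print(total_before, total_after)     # equal: the scheme conserves cars
```

The platoon drifts down the highway, but its integral (the car count) stays fixed, which is exactly the dN/dt = 0 statement the PDE was derived from.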
34.
Fall factor
–
In climbing, the fall factor f is the ratio of the height h a climber falls before the rope begins to stretch to the length L of rope available to absorb the energy of the fall: f = h/L. The impact force is defined as the maximum tension in the rope when a climber falls. Using the elastic modulus E = kL/q, which is a constant for a given rope, the impact force depends only on the fall factor f, i.e. on the ratio h/L, and on the cross section q of the rope. The more rope that is available, the softer the rope becomes, which just compensates for the higher fall energy. The maximum force on the climber is Fmax reduced by the climber's weight mg. The above formula can be obtained from the law of conservation of energy at the time of maximum tension, i.e. maximum elongation of the rope. The mass m0 used in standardized fall tests is 80 kg. Dry friction leads to an effective rope length smaller than the available length L and thus increases the impact force. Dry friction is also responsible for the drag a climber has to overcome in order to move forward; it can be expressed by an effective mass of the rope that the climber has to pull, which is always larger than the rope mass itself. It depends exponentially on the sum of the angles of the direction changes the climber has made. A fall factor of two is the maximum that is possible in a lead climbing fall, since the length of an arrested fall cannot exceed two times the length of the rope. Normally, a factor-2 fall can occur only when a lead climber who has placed no protection falls past the belayer. As soon as the climber clips the rope into protection above the belay, the distance of the fall as a function of rope length is lessened. In falls occurring on a via ferrata, fall factors can be much higher; this is possible because the length of rope between harness and carabiner is short and fixed, while the distance the climber can fall depends on the gaps between anchor points of the safety cable
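The dependence of impact force on fall factor can be sketched with the standard equation for an ideally elastic rope, Fmax = mg + √((mg)² + 2·E·q·mg·f). The rope parameter E·q below is an assumed illustrative value, not a figure from the text.

```python
# Sketch: the standard impact-force equation for a climbing fall,
# F_max = m*g + sqrt((m*g)**2 + 2*E*q*m*g*f), for an ideally elastic rope.
# Eq (the product of rope modulus E and cross section q) is an assumed value.
import math

def impact_force(mass, fall_factor, Eq=20_000.0, g=9.80665):
    """Maximum rope tension (N) for a climber of given mass (kg) and fall factor."""
    w = mass * g  # climber's weight
    return w + math.sqrt(w**2 + 2 * Eq * w * fall_factor)

# The standardized test mass is 80 kg; a factor-2 fall is the lead-climbing maximum
print(impact_force(80.0, 2.0))
print(impact_force(80.0, 0.5))  # more rope out (smaller f) gives a softer catch
```

Note that the force depends on h and L only through their ratio f = h/L, which is the central point of the entry.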
35.
Surface tension
–
Surface tension is the elastic tendency of a fluid surface which makes it acquire the least surface area possible. Surface tension allows insects, usually denser than water, to float on a water surface. At liquid–air interfaces, surface tension results from the greater attraction of liquid molecules to each other than to the molecules in the air. The net effect is an inward force at the surface that causes the liquid to behave as if its surface were covered with a stretched elastic membrane; thus, the surface comes under tension from the imbalanced forces. Because of the relatively high attraction of water molecules for each other through a web of hydrogen bonds, water has a higher surface tension than most other liquids. Surface tension is an important factor in the phenomenon of capillarity. Surface tension has the dimension of force per unit length, or of energy per unit area. The two are equivalent, but when referring to energy per unit of area, it is common to use the term surface energy. In materials science, surface tension is used for either surface stress or surface free energy. The cohesive forces among liquid molecules are responsible for the phenomenon of surface tension. In the bulk of the liquid, each molecule is pulled equally in every direction by neighboring liquid molecules. The molecules at the surface do not have the same molecules on all sides of them and therefore are pulled inwards. This creates some internal pressure and forces liquid surfaces to contract to the minimal area. Surface tension is responsible for the shape of liquid droplets: although easily deformed, droplets of water tend to be pulled into a spherical shape by the imbalance in cohesive forces of the surface layer. In the absence of other forces, including gravity, drops of virtually all liquids would be approximately spherical. The spherical shape minimizes the necessary wall tension of the surface layer according to Laplace's law. 
Another way to view surface tension is in terms of energy. A molecule in contact with a neighbor is in a lower state of energy than if it were alone. The interior molecules have as many neighbors as they can possibly have, but the boundary molecules are missing neighbors and therefore have a higher energy. For the liquid to minimize its energy state, the number of higher-energy boundary molecules must be minimized. The minimized number of boundary molecules results in a minimal surface area. As a result of surface area minimization, a surface will assume the smoothest shape it can, since any curvature in the surface shape results in greater area and hence higher energy. Consequently, the surface will push back against any curvature in much the same way as a ball pushed uphill will push back to minimize its gravitational potential energy. Bubbles in pure water are unstable; the addition of surfactants, however, can have a stabilizing effect on the bubbles
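Laplace's law, mentioned above for droplets, gives the pressure difference across a spherical surface as Δp = 2γ/r. A minimal sketch, using a textbook value for the surface tension of water:

```python
# Sketch: the Laplace pressure across a spherical droplet, delta_p = 2*gamma/r,
# showing why smaller droplets have a higher internal pressure.
def laplace_pressure(gamma, radius):
    """Pressure difference (Pa) across a spherical liquid surface of radius r (m)."""
    return 2.0 * gamma / radius

GAMMA_WATER = 0.072  # surface tension of water near room temperature, N/m

print(laplace_pressure(GAMMA_WATER, 1e-3))  # ~144 Pa for a 1 mm droplet
print(laplace_pressure(GAMMA_WATER, 1e-6))  # ~144,000 Pa for a 1 micron droplet
```

The inverse dependence on radius is why surface tension dominates the shape of small droplets while gravity flattens large ones.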
36.
Ultimate tensile strength
–
Ultimate tensile strength (UTS) is the capacity of a material to withstand loads tending to elongate it; in other words, tensile strength resists tension, whereas compressive strength resists compression. Ultimate tensile strength is measured by the maximum stress that a material can withstand while being stretched or pulled before breaking. In the study of strength of materials, tensile strength, compressive strength, and shear strength can be analyzed independently. Some materials break very sharply, without plastic deformation, in what is called a brittle failure; others, which are more ductile, including most metals, experience some plastic deformation before fracture. The UTS is usually found by performing a tensile test and recording the engineering stress versus strain; the highest point of the stress–strain curve is the UTS. It is an intensive property; therefore its value does not depend on the size of the test specimen. However, it is dependent on other factors, such as the preparation of the specimen and the presence or otherwise of surface defects. Tensile strengths are rarely used in the design of ductile members, but they are important in brittle members. They are tabulated for common materials such as alloys, composite materials, ceramics, plastics, and wood. Tensile strength can be defined for liquids as well as solids under certain conditions. Tensile strength is defined as a stress, which is measured as force per unit area; for some non-homogeneous materials it can be reported just as a force or as a force per unit width. In the International System of Units, the unit is the pascal. Many materials can display linear elastic behavior, defined by a linear stress–strain relationship, as shown in the left figure up to point 3. Beyond this elastic region, for ductile materials such as steel, deformations are plastic. A plastically deformed specimen does not completely return to its original size and shape when unloaded; for many applications, plastic deformation is unacceptable, and the yield strength is used as the design limitation. The reversal point is the maximum stress on the engineering stress–strain curve. 
The UTS is not used in the design of ductile static members because design practices dictate the use of the yield stress. It is, however, used for quality control, because of the ease of testing, and to roughly determine material types for unknown samples. The UTS is a common engineering parameter when designing members made of brittle material, because such materials have no yield point. Typically, the testing involves taking a sample with a fixed cross-sectional area and pulling it at a constant strain rate until it breaks. When testing some metals, indentation hardness correlates linearly with tensile strength. This important relation permits economically important nondestructive testing of bulk metal deliveries with lightweight, even portable equipment, such as hand-held Rockwell hardness testers
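Since tensile strength is defined as a stress, force per unit area, a tensile-test reading reduces to simple arithmetic. This is a minimal sketch; the 400 MPa figure used for comparison is a typical textbook value for mild steel, not one taken from the text.

```python
# Sketch: engineering stress sigma = F / A0 (force over original cross-sectional
# area) and a comparison against a tabulated ultimate tensile strength.
def engineering_stress(force_n, area_m2):
    """Engineering stress (Pa) from an applied force and the original cross section."""
    return force_n / area_m2

UTS_MILD_STEEL = 400e6  # ~400 MPa, an assumed typical value for mild steel

area = 1e-4                         # a 1 cm^2 bar
stress = engineering_stress(30_000.0, area)
print(stress / 1e6)                 # ~300 MPa
print(stress < UTS_MILD_STEEL)      # True: this load is below the ultimate strength
```

Because both force and area are divided out per specimen, the resulting stress is the size-independent (intensive) quantity the entry describes.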