Wear is the damaging, gradual removal or deformation of material at solid surfaces. Causes of wear can be mechanical or chemical; the study of wear and related processes is referred to as tribology. Wear in machine elements, together with other processes such as fatigue and creep, causes functional surfaces to degrade, eventually leading to material failure or loss of functionality. Wear therefore has large economic relevance, as first outlined in the Jost Report. Wear of metals occurs by plastic displacement of surface and near-surface material and by detachment of particles that form wear debris; the particle size may vary from millimeters to nanometers. Wear may occur through contact with other metals, nonmetallic solids, flowing liquids, or solid particles or liquid droplets entrained in flowing gases; the wear rate is affected by factors such as the type of loading, type of motion, and temperature. Depending on the tribosystem, different wear types and wear mechanisms can be observed. Wear is classified according to so-called wear types, which occur in isolation or in complex interaction.
Common types of wear include adhesive wear, abrasive wear, surface fatigue, fretting wear, erosive wear, and corrosion and oxidation wear. Other, less common types of wear are impact, cavitation, and diffusive wear. Each wear type is caused by one or more wear mechanisms; for example, the primary wear mechanism of adhesive wear is adhesion. Wear mechanisms and their sub-mechanisms often overlap and occur in a synergistic manner, producing a greater rate of wear than the sum of the individual wear mechanisms. Adhesive wear can be found between surfaces during frictional contact and refers to unwanted displacement and attachment of wear debris and material compounds from one surface to another. Two adhesive wear types can be distinguished: adhesive wear proper, caused by relative motion, direct contact and plastic deformation, which create wear debris and transfer material from one surface to another; and cohesive adhesive forces, which hold two surfaces together even though they are separated by a measurable distance, with or without any actual transfer of material.
Adhesive wear occurs when two bodies slide over or are pressed into each other, which promotes material transfer. It can be described as plastic deformation of very small fragments within the surface layers. The asperities, or microscopic high points, found on each surface affect the severity with which fragments of oxides are pulled off and added to the other surface, partly because of strong adhesive forces between atoms but also because of the accumulation of energy in the plastic zone between the asperities during relative motion. The type of mechanism and the amplitude of surface attraction vary between different materials but are amplified by an increase in the density of surface energy. Most solids will adhere on contact to some extent. However, naturally occurring oxide films and contaminants generally suppress adhesion, and spontaneous exothermic chemical reactions between surfaces generally produce a substance of low energy status in the absorbed species. Adhesive wear can lead to an increase in roughness and the creation of protrusions above the original surface.
In industrial manufacturing, this is referred to as galling, in which the lump breaches the oxidized surface layer and connects to the underlying bulk material, enhancing the possibility of stronger adhesion and plastic flow around the lump. Abrasive wear occurs when a hard, rough surface slides across a softer surface. ASTM International defines it as the loss of material due to hard particles or hard protuberances that are forced against and move along a solid surface. Abrasive wear is classified according to the type of contact and the contact environment; the type of contact determines the mode of abrasive wear. The two modes of abrasive wear are known as two-body and three-body abrasive wear. Two-body wear occurs when hard particles remove material from the opposite surface; the common analogy is material being displaced by a cutting or plowing operation. Three-body wear occurs when the particles are not constrained and are free to roll and slide along a surface. The contact environment determines whether the wear is classified as open or closed: an open contact environment occurs when the surfaces are sufficiently displaced to be independent of one another. A number of factors influence abrasive wear and hence the manner of material removal.
Several different mechanisms have been proposed to describe the manner in which the material is removed. Three identified mechanisms of abrasive wear are plowing, cutting, and fragmentation. Plowing occurs when material is displaced to the side, away from the wear particles, resulting in the formation of grooves that do not involve direct material removal; the displaced material forms ridges adjacent to the grooves, which may be removed by subsequent passage of abrasive particles. Cutting occurs when material is separated from the surface in the form of primary debris, or microchips, with little or no material displaced to the sides of the grooves; this mechanism closely resembles conventional machining. Fragmentation occurs when material is separated from a surface by a cutting process and the indenting abrasive causes localized fracture of the wear material; these cracks then freely propagate locally around the wear groove, resulting in additional material removal by spalling. Abrasive wear can be measured as loss of mass by the Taber Abrasion Test according to ISO 9352 or ASTM D 4060.
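Results of the Taber test are conventionally reported as a wear index, the mass loss per 1000 abrasion cycles, I = (A − B) × 1000 / C. A minimal sketch of that calculation; the masses and cycle count below are made-up illustrations, not figures from the text:

```python
# Sketch: Taber wear index, I = (A - B) * 1000 / C, where A and B are the
# specimen masses (mg) before and after the test and C is the number of
# abrasion cycles. Example values are illustrative assumptions.
def taber_wear_index(mass_before_mg, mass_after_mg, cycles):
    """Mass loss in mg per 1000 abrasion cycles."""
    return (mass_before_mg - mass_after_mg) * 1000.0 / cycles

# e.g. 35 mg lost over 1000 cycles -> wear index 35.0
index = taber_wear_index(12_500.0, 12_465.0, 1000)
```

A lower index means better abrasion resistance, since less mass is lost per cycle.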
Surface fatigue is a process by which the surface of a material is weakened by cyclic loading, one type of general material fatigue. Fatigue wear is produced when wear particles are detached by cyclic crack growth of microcracks on the surface.
Lubrication is the process or technique of using a lubricant to reduce friction and/or wear in a contact between two surfaces. The study of lubrication is a discipline in the field of tribology. Lubricants can be solids, solid/liquid dispersions, liquid–liquid dispersions, or gases. Fluid-lubricated systems are designed so that the applied load is partially or completely carried by hydrodynamic or hydrostatic pressure, which reduces solid-body interactions. Depending on the degree of surface separation, different lubrication regimes can be distinguished. Adequate lubrication allows smooth, continuous operation of machine elements, reduces the rate of wear, and prevents excessive stresses or seizures at bearings; when lubrication breaks down, components can rub destructively against each other, causing heat, local welding, destructive damage, and failure. As the load increases on the contacting surfaces, three distinct situations can be observed with respect to the mode of lubrication, called lubrication regimes. Fluid film lubrication is the regime in which, through viscous forces, the load is supported by the lubricant within the space or gap between the parts in relative motion, and solid–solid contact is avoided.
In hydrostatic lubrication, external pressure is applied to the lubricant in the bearing to maintain the fluid lubricant film where it would otherwise be squeezed out. In hydrodynamic lubrication, the motion of the contacting surfaces, as well as the design of the bearing, pumps lubricant around the bearing to maintain the lubricating film; this design of bearing may wear when started, stopped, or reversed, as the lubricant film breaks down. The basis of the hydrodynamic theory of lubrication is the Reynolds equation; the governing equations of the hydrodynamic theory of lubrication and some analytical solutions can be found in the references. Elastohydrodynamic lubrication occurs mostly for nonconforming surfaces or higher load conditions, where the bodies suffer elastic strains at the contact; such strain creates a load-bearing area, which provides an almost parallel gap for the fluid to flow through. Much as in hydrodynamic lubrication, the motion of the contacting bodies generates a flow-induced pressure, which acts as the bearing force over the contact area.
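The Reynolds equation can be illustrated in its simplest form, the one-dimensional steady case d/dx(h³ dp/dx) = 6μU dh/dx with ambient pressure at both ends, which shows how a converging film generates load-carrying pressure. The finite-difference sketch below uses illustrative geometry and viscosity values that are assumptions, not figures from the text:

```python
# Sketch: 1-D steady incompressible Reynolds equation for a linear
# slider bearing,  d/dx( h^3 dp/dx ) = 6*mu*U*dh/dx,  p(0) = p(L) = 0.
# Geometry and fluid properties below are illustrative assumptions.
import numpy as np

def slider_pressure(h1, h2, L, mu, U, n=201):
    """Finite-difference solution of the film pressure p(x)."""
    x = np.linspace(0.0, L, n)
    h = h1 + (h2 - h1) * x / L              # linear film-thickness profile
    dx = x[1] - x[0]
    h3 = h ** 3
    h3m = 0.5 * (h3[:-1] + h3[1:])          # h^3 evaluated at cell faces
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0               # ambient pressure at both ends
    for i in range(1, n - 1):
        A[i, i - 1] = h3m[i - 1]
        A[i, i] = -(h3m[i - 1] + h3m[i])
        A[i, i + 1] = h3m[i]
        b[i] = 6.0 * mu * U * 0.5 * (h[i + 1] - h[i - 1]) * dx
    return x, np.linalg.solve(A, b)

# Converging oil film: 50 um inlet, 25 um outlet, 0.1 m pad,
# viscosity 0.05 Pa*s, sliding speed 1 m/s.
x, p = slider_pressure(50e-6, 25e-6, 0.1, 0.05, 1.0)
load = float(np.sum(0.5 * (p[:-1] + p[1:]) * np.diff(x)))  # N per metre width
```

Because the film converges in the direction of sliding, the computed pressure is positive over the whole pad; its integral is the load capacity per unit width.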
In such high-pressure regimes, the viscosity of the fluid may rise considerably. At full-film elastohydrodynamic lubrication the generated lubricant film completely separates the surfaces. Contact between raised solid features, or asperities, can still occur, leading to a mixed-lubrication or boundary-lubrication regime. In addition to the Reynolds equation, elastohydrodynamic theory considers the elastic deflection equation, since in this regime the elastic deformation of the surfaces contributes to the lubricant film thickness. In boundary lubrication, the hydrodynamic effects are negligible and the bodies come into closer contact at their asperities. At the elevated temperature and pressure conditions, chemically reactive constituents of the lubricant react with the contact surface, forming a tenacious, resistant layer or film on the moving solid surfaces that is capable of supporting the load, so that major wear or breakdown is avoided. Boundary lubrication is defined as the regime in which the load is carried by the surface asperities rather than by the lubricant.
Mixed lubrication is the regime in between the full-film elastohydrodynamic and boundary lubrication regimes: the generated lubricant film is not enough to separate the bodies completely, but hydrodynamic effects are considerable. Besides supporting the load, the lubricant may have to perform other functions as well; for instance, it may cool the contact areas and remove wear products. While carrying out these functions, the lubricant is replaced from the contact areas either by the relative movement or by externally induced forces. Lubrication is required for the correct operation of mechanical systems such as pistons, cams, turbines, cutting tools, etc., where without lubrication the pressure between the surfaces in close proximity would generate enough heat for rapid surface damage, which in a coarsened condition may weld the surfaces together, causing seizure. In some applications, such as piston engines, the film between the piston and the cylinder wall also seals the combustion chamber, preventing combustion gases from escaping into the crankcase.
Tribology is the science and engineering of interacting surfaces in relative motion. It includes the study and application of the principles of friction and wear. Tribology is interdisciplinary, drawing on many academic fields, including physics, materials science, mathematics, and engineering. People who work in the field of tribology are referred to as tribologists. The word tribology derives from the Greek root τριβ- of the verb τρίβω, tribo, "I rub" in classic Greek, and the suffix -logy from -λογία, -logia, "study of" or "knowledge of". Peter Jost coined the word in 1966, in the eponymous report which highlighted the cost of friction and corrosion to the UK economy. Despite the recent naming of the field of tribology, quantitative studies of friction can be traced as far back as 1493, when Leonardo da Vinci first noted the two fundamental 'laws' of friction. According to da Vinci, frictional resistance was the same for two different objects of the same weight but making contact over different widths and lengths.
He also observed that the force needed to overcome friction doubles as the weight doubles. However, da Vinci's findings remained unpublished in his notebooks. The two fundamental 'laws' of friction were first published by Guillaume Amontons, with whose name they are now associated. They state that: the force of friction acting between two sliding surfaces is proportional to the load pressing the surfaces together; and the force of friction is independent of the apparent area of contact between the two surfaces. Although not universally applicable, these simple statements hold for a wide range of systems. These laws were further developed by Charles-Augustin de Coulomb, who noticed that static friction force may depend on the contact time and that sliding friction may depend on sliding velocity, normal force, and contact area. In 1798, Charles Hatchett and Henry Cavendish carried out the first reliable test on frictional wear: in a study commissioned by the Privy Council of the UK, they used a simple reciprocating machine to evaluate the wear rate of gold coins.
In 1860, Theodor Reye proposed Reye's hypothesis. In 1953, John Frederick Archard developed the Archard equation, which describes sliding wear and is based on the theory of asperity contact. Other pioneers of tribology research are the Australian physicist Frank Philip Bowden and the British physicist David Tabor, both of the Cavendish Laboratory. Together they wrote the seminal textbook The Friction and Lubrication of Solids. Michael J. Neale was another leader in the field during the mid-to-late 1900s; he specialized in solving problems in machine design by applying his knowledge of tribology. Neale was respected as an educator with a gift for integrating theoretical work with his own practical experience to produce easy-to-understand design guides. The Tribology Handbook, which he first edited in 1973 and updated in 1995, is still used around the world and forms the basis of numerous training courses for engineering designers. Duncan Dowson surveyed the history of tribology in his 1997 book History of Tribology.
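The Archard equation relates the worn volume V to the normal load W, the sliding distance L, and the hardness H of the softer surface through a dimensionless wear coefficient K: V = K·W·L/H. A minimal sketch with illustrative values (the coefficient, load, and hardness below are assumptions, not from the text):

```python
# Sketch: the Archard wear equation, V = K * W * L / H, where V is the
# worn volume, K a dimensionless wear coefficient, W the normal load,
# L the sliding distance and H the hardness of the softer surface.
# Example values are illustrative assumptions.
def archard_wear_volume(K, load_N, distance_m, hardness_Pa):
    """Worn volume in m^3 predicted by the Archard equation."""
    return K * load_N * distance_m / hardness_Pa

# e.g. K ~ 1e-3, 10 N load, 1 km of sliding, hardness ~ 1 GPa
V = archard_wear_volume(1e-3, 10.0, 1000.0, 1e9)   # -> 1e-8 m^3
```

Note that a harder surface (larger H) or a smaller wear coefficient K directly reduces the predicted worn volume.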
This covers developments from prehistory, through early civilizations, and highlights the key developments up to the end of the twentieth century. The term tribology came into use following The Jost Report, published in 1966, which highlighted the huge cost of friction and corrosion to the UK economy. As a result, the UK government established several national centres to address tribological problems. Since then the term has diffused into the international community, with many specialists now identifying as "tribologists". Despite considerable research since the Jost Report, the global impact of friction and wear on energy consumption, economic expenditure, and carbon dioxide emissions remains considerable. In 2017, Kenneth Holmberg and Ali Erdemir attempted to quantify their impact worldwide, considering the four main energy-consuming sectors: transport, manufacturing, power generation, and residential. The following were concluded: in total, ~23% of the world's energy consumption originates from tribological contacts.
Of that, 20% is used to overcome friction and 3% to remanufacture worn parts and spare equipment due to wear and wear-related failures. By taking advantage of new technologies for friction reduction and wear protection, energy losses due to friction and wear in vehicles and other equipment worldwide could be reduced by 40% in the long term and 18% in the short term. On a global scale, these savings would amount to 1.4% of GDP annually and 8.7% of total energy consumption in the long term. The largest short-term energy savings are envisioned in transport and in power generation, while the potential savings in the manufacturing and residential sectors are estimated to be ~10%. In the longer term, the savings would be 55%, 40%, 25%, and 20%, respectively. Implementing advanced tribological technologies could reduce global carbon dioxide emissions by as much as 1,460 million tonnes of carbon dioxide equivalent and result in 450,000 million euros of cost savings in the short term. In the long term, the reduction could be as large as 3,140 MtCO2 and the cost savings 970,000 million euros.
Classical tribology, covering such applications as ball bearings, gear drives, and brakes, was developed in the context of mechanical engineering. But in recent decades tribology has expanded to qualitatively new fields of application, in particular micro- and nanotechnology as well as biology and medicine. The word friction comes from the Latin frictionem, meaning "a rubbing".
Interferometry is a family of techniques in which waves, usually electromagnetic waves, are superimposed, causing the phenomenon of interference, which is used to extract information. Interferometry is an important investigative technique in the fields of astronomy, fiber optics, engineering metrology, optical metrology, seismology, quantum mechanics and particle physics, plasma physics, remote sensing, biomolecular interactions, surface profiling, mechanical stress/strain measurement, and optometry. Interferometers are used in science and industry for the measurement of small displacements, refractive index changes, and surface irregularities. In most interferometers, light from a single source is split into two beams that travel in different optical paths, which are then combined again to produce interference; the resulting interference fringes give information about the difference in optical path lengths. In analytical science, interferometers are used to measure lengths and the shape of optical components with nanometer precision.
In Fourier transform spectroscopy they are used to analyze light containing features of absorption or emission associated with a substance or mixture. An astronomical interferometer consists of two or more separate telescopes that combine their signals, offering a resolution equivalent to that of a telescope of diameter equal to the largest separation between its individual elements. Interferometry makes use of the principle of superposition to combine waves in a way that will cause the result of their combination to have some meaningful property, diagnostic of the original state of the waves; this works because when two waves with the same frequency combine, the resulting intensity pattern is determined by the phase difference between the two waves. Waves that are in phase will undergo constructive interference while waves that are out of phase will undergo destructive interference. Waves that are neither fully in phase nor fully out of phase will have an intermediate intensity pattern, which can be used to determine their relative phase difference.
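The in-phase and out-of-phase behaviour described above follows from the standard two-beam interference formula, I = I1 + I2 + 2√(I1·I2)·cos Δφ. A minimal sketch (the function name is mine; the formula is the usual one for two equal-frequency waves):

```python
# Sketch: intensity of two superposed equal-frequency waves,
# I = I1 + I2 + 2*sqrt(I1*I2)*cos(dphi): constructive at dphi = 0,
# destructive at dphi = pi. Values are illustrative.
import math

def combined_intensity(I1, I2, dphi):
    """Intensity resulting from two interfering beams with phase difference dphi."""
    return I1 + I2 + 2.0 * math.sqrt(I1 * I2) * math.cos(dphi)

bright = combined_intensity(1.0, 1.0, 0.0)        # in phase -> 4.0
dark = combined_intensity(1.0, 1.0, math.pi)      # out of phase -> ~0.0
```

For equal beams, intermediate phase differences produce intensities between these two extremes, which is what lets an interferometer recover the relative phase.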
Most interferometers use light or some other form of electromagnetic wave. Typically, a single incoming beam of coherent light is split into two identical beams by a beam splitter; each of these beams travels a different route, called a path, and they are recombined before arriving at a detector. The path difference, the difference in the distance traveled by each beam, creates a phase difference between them, and it is this introduced phase difference that creates the interference pattern between the initially identical waves. If a single beam has been split along two paths, the phase difference is diagnostic of anything that changes the phase along the paths; this could be a physical change in the path length itself or a change in the refractive index along the path. As seen in Fig. 2a and 2b, the observer has a direct view of mirror M1 seen through the beam splitter and sees a reflected image M′2 of mirror M2. The fringes can be interpreted as the result of interference between light coming from the two virtual images S′1 and S′2 of the original source S.
The characteristics of the interference pattern depend on the nature of the light source and the precise orientation of the mirrors and beam splitter. In Fig. 2a, the optical elements are oriented so that S′1 and S′2 are in line with the observer, and the resulting interference pattern consists of circles centered on the normal to M1 and M′2. If, as in Fig. 2b, M1 and M′2 are tilted with respect to each other, the interference fringes will generally take the shape of conic sections, but if M′1 and M′2 overlap, the fringes near the axis will be straight, parallel, and equally spaced. If S is an extended source rather than a point source as illustrated, the fringes of Fig. 2a must be observed with a telescope set at infinity, while the fringes of Fig. 2b will be localized on the mirrors. Use of white light will result in a pattern of colored fringes; the central fringe representing equal path length may be light or dark depending on the number of phase inversions experienced by the two beams as they traverse the optical system.
Interferometers and interferometric techniques may be categorized by a variety of criteria. In homodyne detection, the interference occurs between two beams at the same wavelength. The phase difference between the two beams results in a change in the intensity of the light on the detector; the resulting intensity of the light after the mixing of these two beams is measured, or the pattern of interference fringes is viewed or recorded. Most of the interferometers discussed in this article fall into this category. The heterodyne technique, by contrast, is used for shifting an input signal into a new frequency range as well as amplifying a weak input signal. A weak input signal of frequency f1 is mixed with a strong reference frequency f2 from a local oscillator; the nonlinear combination of the input signals creates two new signals, one at the sum f1 + f2 of the two frequencies and the other at the difference f1 − f2. These new frequencies are called heterodynes. Typically only one of the new frequencies is desired, and the other signal is filtered out of the output of the mixer.
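The sum and difference frequencies can be demonstrated numerically: multiplying two sinusoids (a simple stand-in for the mixer's nonlinearity) produces spectral peaks at f1 + f2 and f1 − f2. The sample rate and frequencies below are arbitrary illustrations, not values from the text:

```python
# Sketch: heterodyne mixing. The product of two sinusoids at f1 and f2
# contains components at f1 + f2 and |f1 - f2|, since
# sin(a)*sin(b) = 0.5*[cos(a - b) - cos(a + b)]. Shown via an FFT.
import numpy as np

fs = 10_000.0                        # sample rate, Hz (one full second)
t = np.arange(0, 1.0, 1 / fs)
f1, f2 = 440.0, 1000.0               # illustrative input and LO frequencies
mixed = np.sin(2 * np.pi * f1 * t) * np.sin(2 * np.pi * f2 * t)

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peaks = freqs[spectrum > 0.25 * spectrum.max()]
# peaks lie at |f1 - f2| = 560 Hz and f1 + f2 = 1440 Hz
```

In a real heterodyne receiver one of these two components would then be selected by a filter, as described above.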
The output signal will have an intensity proportional to the product of the amplitudes of the input signals. The most important and most widely used application of the heterodyne technique is the superheterodyne radio receiver, in which the weak signal from the antenna is shifted to an intermediate frequency at which it can be conveniently amplified and processed.
Pieter van Musschenbroek
Pieter van Musschenbroek was a Dutch scientist. He was a professor in Duisburg and Leiden, where he held positions in mathematics, philosophy, and astronomy. He is credited with the invention of the first capacitor, in 1746: the Leyden jar. He performed pioneering work on the buckling of compressed struts, and was one of the first scientists to provide detailed descriptions of testing machines for tension and flexure testing. An early example of a problem in dynamic plasticity was described in his 1739 paper. Pieter van Musschenbroek was born on 14 March 1692 in Leiden, Dutch Republic. His father was Johannes van Musschenbroek and his mother was Margaretha van Straaten. The Van Musschenbroeks, originally from Flanders, had lived in the city of Leiden since circa 1600. His father was an instrument maker who made scientific instruments such as air pumps and telescopes. Van Musschenbroek attended Latin school until 1708, where he studied Greek, French, High German, and Spanish. He studied medicine at Leiden University and received his doctorate in 1715.
He attended lectures by John Theophilus Desaguliers and Isaac Newton in London, and finished his study in philosophy in 1719. Musschenbroek belonged to the tradition of Dutch thinkers who popularised the ontological argument of God's design; he is the author of Oratio de sapientia divina. In 1719, he became professor of philosophy at the University of Duisburg, and in 1721 professor of medicine there. In 1723, he became professor at the University of Utrecht, and in 1726 professor in astronomy. Musschenbroek's Elementa Physica played an important part in the transmission of Isaac Newton's ideas in physics to Europe. In November 1734 he was elected a Fellow of the Royal Society, and in 1739 he returned to Leiden. During his studies at Leiden University, Van Musschenbroek had become interested in electrostatics. At that time, transient electrical energy could be generated by friction machines, but there was no way to store it. Musschenbroek and his student Andreas Cunaeus discovered that the energy could be stored, in work that also involved Jean-Nicolas-Sébastien Allamand as a collaborator.
The apparatus was a glass jar partly filled with water, into which a metal rod was inserted. Van Musschenbroek communicated this discovery to René Réaumur in January 1746, and it was Abbé Nollet, the translator of Musschenbroek's letter from Latin, who named the invention the 'Leyden jar'. Soon afterwards, it transpired that a German scientist, Ewald von Kleist, had independently constructed a similar device in late 1745, shortly before Musschenbroek. In 1754, he became an honorary professor at the Imperial Academy of Science in Saint Petersburg, and he had been elected a foreign member of the Royal Swedish Academy of Sciences in 1747. Van Musschenbroek died on 19 September 1761 in Leiden. His works include: Elementa Physica; Dissertationes physicae experimentalis et geometricae de magnete; Tentamina experimentorum naturalium in Accademia del Cimento; Institutiones physicae; Beginsels der Natuurkunde, Beschreeven ten dienste der Landgenooten, door Petrus van Musschenbroek, Waar by gevoegd is eene beschryving Der nieuwe en onlangs uitgevonden Luchtpompen, met haar gebruik tot veel proefnemingen; Aeris praestantia in humoribus corporis humani; Oratio de sapientia divina; and Institutiones logicae.
Internal combustion engine
An internal combustion engine (ICE) is a heat engine in which the combustion of a fuel occurs with an oxidizer in a combustion chamber that is an integral part of the working fluid flow circuit. In an internal combustion engine, the expansion of the high-temperature and high-pressure gases produced by combustion applies direct force to some component of the engine; the force is applied to pistons, turbine blades, a rotor, or a nozzle. This force moves the component over a distance, transforming chemical energy into useful mechanical energy. The first commercially successful internal combustion engine was created by Étienne Lenoir around 1859, and the first modern internal combustion engine was created in 1876 by Nikolaus Otto. The term internal combustion engine usually refers to an engine in which combustion is intermittent, such as the more familiar four-stroke and two-stroke piston engines, along with variants such as the six-stroke piston engine and the Wankel rotary engine. A second class of internal combustion engines uses continuous combustion: gas turbines, jet engines, and most rocket engines, each of which is also an internal combustion engine operating on the same principle as described above.
Firearms are also a form of internal combustion engine. In contrast, in external combustion engines, such as steam or Stirling engines, energy is delivered to a working fluid not consisting of, mixed with, or contaminated by combustion products. Working fluids can be air, hot water, pressurized water, or liquid sodium, heated in a boiler. While there are many stationary applications, most ICEs are used in mobile applications and are the dominant power supply for vehicles such as cars and boats. An ICE is fed with fossil fuels like natural gas or petroleum products such as gasoline, diesel fuel, or fuel oil. There is a growing usage of renewable fuels like biodiesel for compression-ignition (CI) engines and bioethanol or methanol for spark-ignition (SI) engines. Hydrogen is sometimes used and can be obtained from either fossil fuels or renewable energy. Various scientists and engineers contributed to the development of internal combustion engines.
In 1791, John Barber developed the gas turbine. In 1794, Thomas Mead patented a gas engine. Also in 1794, Robert Street patented an internal combustion engine, the first to use liquid fuel, and built an engine around that time. In 1798, John Stevens built the first American internal combustion engine. In 1807, the French engineers Nicéphore and Claude Niépce ran a prototype internal combustion engine, the Pyréolophore, using controlled dust explosions; this engine powered a boat on a river in France. The same year, the Swiss engineer François Isaac de Rivaz built an internal combustion engine ignited by an electric spark. In 1823, Samuel Brown patented the first internal combustion engine to be applied industrially. In 1854 in the UK, the Italian inventors Eugenio Barsanti and Felice Matteucci tried to patent "Obtaining motive power by the explosion of gases", although the application did not progress to the granted stage. In 1860, the Belgian Jean Joseph Etienne Lenoir produced a gas-fired internal combustion engine. In 1864, Nikolaus Otto patented the first atmospheric gas engine.
In 1872, the American George Brayton invented the first commercial liquid-fuelled internal combustion engine. In 1876, Nikolaus Otto, working with Gottlieb Daimler and Wilhelm Maybach, patented the compressed-charge, four-cycle engine. In 1879, Karl Benz patented a reliable two-stroke gasoline engine, and in 1886 he began the first commercial production of motor vehicles with the internal combustion engine. In 1892, Rudolf Diesel developed the compression-ignition engine. In 1926, Robert Goddard launched the first liquid-fueled rocket, and in 1939 the Heinkel He 178 became the world's first jet aircraft. At one time, the word engine meant any piece of machinery, a sense that persists in expressions such as siege engine; a "motor" is any machine that produces mechanical power. Traditionally, electric motors are not referred to as "engines". In boating, an internal combustion engine installed in the hull is referred to as an engine, while the engines that sit on the transom are referred to as motors. Reciprocating piston engines are by far the most common power source for land and water vehicles, including automobiles, ships and, to a lesser extent, locomotives.
Rotary engines of the Wankel design are used in some automobiles and motorcycles. Where high power-to-weight ratios are required, internal combustion engines also appear in the form of combustion turbines or Wankel engines. Powered aircraft typically use an ICE, which may be a reciprocating engine; airplanes can instead use jet engines, and helicopters can instead employ turboshafts. In addition to providing propulsion, airliners may employ a separate ICE as an auxiliary power unit. Wankel engines are fitted to many unmanned aerial vehicles. ICEs also drive some of the large electric generators; they are found in the form of combustion turbines in combined-cycle power plants with a typical electrical output in the range of 100 MW to 1 GW. The high-temperature exhaust is used to superheat water to run a steam turbine, so the efficiency is higher because more energy is extracted from the fuel than could be extracted by the combustion engine alone.
In vertebrate anatomy, hip refers to either an anatomical region or a joint. The hip region is located lateral and anterior to the gluteal region, inferior to the iliac crest, and overlying the greater trochanter of the femur, or "thigh bone". In adults, three of the bones of the pelvis have fused into the hip bone, or acetabulum, which forms part of the hip region. The hip joint, scientifically referred to as the acetabulofemoral joint, is the joint between the femur and acetabulum of the pelvis, and its primary function is to support the weight of the body in both static and dynamic postures. The hip joints also have important roles in retaining balance and in maintaining the pelvic inclination angle. Pain of the hip may be the result of numerous causes, including nervous, infectious, trauma-related, and genetic ones. The proximal femur is largely covered by muscles and, as a consequence, the greater trochanter is often the only palpable bony structure in the hip region. The hip joint is a synovial joint formed by the articulation of the rounded head of the femur and the cup-like acetabulum of the pelvis.
It forms the primary connection between the bones of the lower limb and the axial skeleton of the trunk and pelvis. Both joint surfaces are covered with a strong but lubricated layer called articular hyaline cartilage. The cup-like acetabulum forms at the union of three pelvic bones: the ilium, ischium, and pubis. The Y-shaped growth plate that separates them, the triradiate cartilage, is fused definitively at ages 14–16. The hip is a special type of spheroidal or ball-and-socket joint in which the roughly spherical femoral head is largely contained within the acetabulum and has an average radius of curvature of 2.5 cm. The acetabulum grasps almost half the femoral ball, a grip augmented by a ring-shaped fibrocartilaginous lip, the acetabular labrum, which extends the joint beyond the equator. The joint space between the femoral head and the superior acetabulum is between 2 and 7 mm. The head of the femur is attached to the shaft by a thin neck region that is prone to fracture in the elderly, mainly due to the degenerative effects of osteoporosis.
The acetabulum is oriented inferiorly and anteriorly, while the femoral neck is directed superiorly and anteriorly. The transverse angle of the acetabular inlet can be determined by measuring the angle between a line passing from the superior to the inferior acetabular rim and the horizontal plane; the sagittal angle of the acetabular inlet is the angle between a line passing from the anterior to the posterior acetabular rim and the sagittal plane. It measures 7° at birth and increases to 17° in adults. Wiberg's centre-edge angle is the angle between a vertical line and a line from the centre of the femoral head to the most lateral part of the acetabulum, as seen on an anteroposterior radiograph. The vertical-centre-anterior margin angle is formed by a vertical line and a line from the centre of the femoral head to the anterior edge of the dense shadow of the subchondral bone posterior to the anterior edge of the acetabulum, with the radiograph taken from the false profile view, that is, a lateral view rotated 25 degrees towards becoming frontal.
The articular cartilage angle is the angle formed between a line parallel to the weight-bearing dome, that is, the acetabular sourcil or "roof", and the horizontal plane, or a line connecting the corner of the triangular cartilage and the lateral acetabular rim. In normal hips of children aged between 11 and 24 months, it has been estimated to be on average 20°, ranging between 18° and 25°, and it becomes progressively lower with age. Suggested cutoff values to classify the angle as abnormally increased include 30° up to 4 months of age and 25° up to 2 years of age. The angle between the longitudinal axes of the femoral neck and shaft, called the caput-collum-diaphyseal or CCD angle, measures approximately 150° in newborns and 126° in adults. An abnormally small angle is known as coxa vara and an abnormally large angle as coxa valga. Because changes in the shape of the femur affect the knee, coxa valga is often combined with genu varum, while coxa vara leads to genu valgum. Changes in the CCD angle are the result of changes in the stress patterns applied to the hip joint.
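In practice, angles such as the CCD angle are measured between two digitized axes on a radiograph, and the angle between two direction vectors follows from the dot product. A minimal geometric sketch; the vectors and helper name below are hypothetical illustrations, not clinical data:

```python
# Sketch: angle between two digitized axes, e.g. the CCD angle between
# the femoral-neck axis and the femoral-shaft axis on a radiograph.
# The vectors below are hypothetical illustrations, not clinical data.
import math

def axis_angle_deg(u, v):
    """Angle in degrees between two 2-D direction vectors."""
    dot = u[0] * v[0] + u[1] * v[1]
    return math.degrees(math.acos(dot / (math.hypot(*u) * math.hypot(*v))))

shaft = (0.0, 1.0)                          # shaft axis, pointing proximally
theta = math.radians(126.0)                 # the adult CCD angle from the text
neck = (math.sin(theta), math.cos(theta))   # neck axis tilted 126 deg away
angle = axis_angle_deg(shaft, neck)         # -> 126.0
```

The same dot-product computation applies to the other radiographic angles described above once the relevant reference lines are expressed as vectors.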
Such changes, caused for example by a dislocation, change the trabecular patterns inside the bones. Two continuous trabecular systems emerging on the auricular surface of the sacroiliac joint meander and criss-cross each other down through the hip bone, the femoral head, and the femoral shaft. In the hip bone, one system arises on the upper part of the auricular surface and converges onto the posterior surface of the greater sciatic notch, from where its trabeculae are reflected to the inferior part of the acetabulum; the other system emerges on the lower part of the auricular surface, converges at the level of the superior gluteal line, and is reflected laterally onto the upper part of the acetabulum. In the femur, the first system lines up with a system arising from the lateral part of the femoral shaft to stretch to the inferior portion of the femoral neck and head; the other system lines up with a system in the femur stretching from the medial part of the femoral shaft to the superior part of the femoral head. On the lateral side of the hip joint, the fascia lata is strengthened to form the iliotibial tract.