1.
Physics
–
Physics is the natural science that involves the study of matter and its motion and behavior through space and time, along with related concepts such as energy and force. One of the most fundamental disciplines, its main goal is to understand how the universe behaves. Physics is one of the oldest academic disciplines, perhaps the oldest through its inclusion of astronomy. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the mechanisms of other sciences while opening new avenues of research in areas such as mathematics. Physics also makes significant contributions through advances in new technologies that arise from theoretical breakthroughs; in recognition of this, the United Nations named 2005 the World Year of Physics. Astronomy is the oldest of the natural sciences; the stars and planets were often a target of worship, believed to represent the gods. While the explanations for these phenomena were often unscientific and lacking in evidence, according to Asger Aaboe the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy. The most notable innovations were in the field of optics and vision, which came from the works of many scientists such as Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi and Avicenna. The most notable work was The Book of Optics, written by Ibn al-Haytham, in which he not only was the first to disprove the ancient Greek idea about vision but also put forward a new theory. In the book, he was also the first to study the phenomenon of the pinhole camera. Many later European scholars and fellow polymaths, from Robert Grosseteste and Leonardo da Vinci to René Descartes, Johannes Kepler and Isaac Newton, were in his debt.
Indeed, the influence of Ibn al-Haytham's Optics ranks alongside that of Newton's work of the same title, and the translation of The Book of Optics had a huge impact on Europe. From it, later European scholars were able to build devices like those Ibn al-Haytham had built, and from this such important things as eyeglasses, magnifying glasses and telescopes were developed. Physics became a separate science when early modern Europeans used experimental and quantitative methods to discover what are now considered to be the laws of physics. Newton also developed calculus, the study of change, which provided new mathematical methods for solving physical problems. The discovery of new laws in thermodynamics, chemistry, and electromagnetics resulted from greater research efforts during the Industrial Revolution as energy needs increased. However, inaccuracies in classical mechanics for very small objects and very high velocities led to the development of modern physics in the 20th century. Modern physics began in the early 20th century with the work of Max Planck in quantum theory and Albert Einstein's theory of relativity; both of these theories came about due to inaccuracies in classical mechanics in certain situations. Quantum mechanics would come to be pioneered by Werner Heisenberg, Erwin Schrödinger and Paul Dirac; from this early work, and work in related fields, the Standard Model of particle physics was derived. Areas of mathematics in general, such as the study of probabilities, are important to this field. In many ways, physics stems from ancient Greek philosophy.
2.
Measurement
–
Measurement is the assignment of a number to a characteristic of an object or event, which can be compared with other objects or events. The scope and application of a measurement depend on the context; in fields such as statistics and the social and behavioral sciences, measurements can have multiple levels, including nominal, ordinal, interval, and ratio scales. Measurement is a cornerstone of trade, science, and technology. Historically, many measurement systems existed for the varied fields of human existence to facilitate comparisons in these fields. Often these were achieved by local agreements between trading partners or collaborators. Since the 18th century, developments progressed towards unifying, widely accepted standards that resulted in the modern International System of Units. This system reduces all physical measurements to a combination of seven base units. The science of measurement is pursued in the field of metrology. The measurement of a property may be categorized by the following criteria: type, magnitude, unit, and uncertainty. These enable unambiguous comparisons between measurements. The type or level of measurement is a taxonomy for the methodological character of a comparison. For example, two states of a property may be compared by ratio, difference, or ordinal preference; the type is commonly not explicitly expressed, but implicit in the definition of a measurement procedure. The magnitude is the value of the characterization, usually obtained with a suitably chosen measuring instrument. A unit assigns a mathematical weighting factor to the magnitude that is derived as a ratio to the property of an artifact used as a standard or a natural physical quantity. An uncertainty represents the random and systematic errors of the measurement procedure; errors are evaluated by methodically repeating measurements and considering the accuracy and precision of the measuring instrument.
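As a small illustration of evaluating uncertainty by methodically repeating measurements, the following Python sketch summarizes a set of repeated readings by a magnitude (the mean) and an uncertainty (the sample standard deviation). The readings themselves are invented for the example.

```python
import statistics

def summarize(readings):
    """Summarize repeated measurements: magnitude as the mean,
    uncertainty as the sample standard deviation of the readings."""
    mean = statistics.mean(readings)
    spread = statistics.stdev(readings)
    return mean, spread

# Five hypothetical length readings, in metres
readings = [1.02, 0.98, 1.00, 1.01, 0.99]
mean, spread = summarize(readings)
```

In practice a full uncertainty budget would also account for the instrument's own accuracy, but the statistical spread of repeated readings is the starting point.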
Measurements most commonly use the International System of Units as a comparison framework. The system defines seven fundamental units: kilogram, metre, candela, second, ampere, kelvin, and mole. Because the units are tied to physical constants, a measurement unit can only ever change through increased accuracy in determining the value of the constant it is tied to. An early effort to tie a unit of length to a natural standard was Charles Sanders Peirce's work on the metre, which directly influenced the Michelson–Morley experiment; Michelson and Morley cite Peirce, and improve on his method. With the exception of a few fundamental quantum constants, units of measurement are derived from historical agreements; nothing inherent in nature dictates that an inch has to be a certain length, nor that a mile is a better measure of distance than a kilometre. Over the course of history, however, standards of measurement evolved, first for convenience and then for necessity. Laws regulating measurement were originally developed to prevent fraud in commerce; today units are generally defined on a scientific basis, so that, for example, the yard is defined as exactly 0.9144 metres. In the United States, the National Institute of Standards and Technology, a division of the United States Department of Commerce, regulates commercial measurements. Before SI units were adopted around the world, the British systems of English units and later imperial units were used in Britain and the Commonwealth. The system came to be known as U.S. customary units in the United States and is still in use there and in a few Caribbean countries.
3.
Electric charge
–
Electric charge is the physical property of matter that causes it to experience a force when placed in an electromagnetic field. There are two types of charge, positive and negative. Like charges repel and unlike charges attract; an absence of net charge is referred to as neutral. An object is negatively charged if it has an excess of electrons, and positively charged if it has a deficit of electrons. The SI derived unit of charge is the coulomb. In electrical engineering, it is also common to use the ampere-hour. The symbol Q often denotes charge. Early knowledge of how charged substances interact is now called classical electrodynamics, and is still accurate for problems that don't require consideration of quantum effects. The electric charge is a conserved property of some subatomic particles. Electrically charged matter is influenced by, and produces, electromagnetic fields; the interaction between a moving charge and an electromagnetic field is the source of the electromagnetic force, which is one of the four fundamental forces. The elementary charge e is approximately 1.602×10−19 coulombs. The proton has a charge of +e, and the electron has a charge of −e. The study of charged particles, and how their interactions are mediated by photons, is called quantum electrodynamics. Charge is the property of forms of matter that exhibit electrostatic attraction or repulsion in the presence of other matter. Electric charge is a property of many subatomic particles. The charges of free-standing particles are integer multiples of the elementary charge e. Michael Faraday, in his electrolysis experiments, was the first to note the discrete nature of electric charge; Robert Millikan's oil-drop experiment demonstrated this fact directly, and measured the elementary charge. By convention, the charge of an electron is −1, while that of a proton is +1. Charged particles whose charges have the same sign repel one another, and particles whose charges have different signs attract.
The charge of an antiparticle equals that of the corresponding particle, but with opposite sign. Quarks have fractional charges of either −1/3 or +2/3, but free-standing quarks have never been observed. The electric charge of an object is the sum of the electric charges of the particles that make it up. An ion is an atom that has lost one or more electrons, giving it a net positive charge, or that has gained one or more electrons, giving it a net negative charge.
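Since free-standing charges are integer multiples of the elementary charge e, the net charge of an ion can be computed directly from its particle counts. A minimal Python sketch (the function name and the sodium-ion example are chosen for illustration; the value of e is approximate):

```python
E = 1.602e-19  # elementary charge in coulombs (approximate value)

def net_charge_coulombs(protons, electrons):
    """Net charge of a collection of particles: each proton contributes +e,
    each electron -e, so the result is always an integer multiple of e."""
    return (protons - electrons) * E

# A sodium ion Na+ has 11 protons but only 10 electrons,
# so its net charge is +1 elementary charge.
q = net_charge_coulombs(11, 10)
```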
4.
Speed of light
–
The speed of light in vacuum, commonly denoted c, is a universal physical constant important in many areas of physics. Its exact value is 299792458 metres per second; it is exact because the unit of length, the metre, is defined from this constant. According to special relativity, c is the maximum speed at which all matter and hence information in the universe can travel. It is the speed at which all massless particles and changes of the associated fields travel in vacuum. Such particles and waves travel at c regardless of the motion of the source or the reference frame of the observer. In the theory of relativity, c interrelates space and time. The speed at which light propagates through transparent materials, such as glass or air, is less than c; similarly, the speed of radio waves in wire cables is slower than c. The ratio between c and the speed v at which light travels in a material is called the refractive index n of the material. In communicating with distant space probes, it can take minutes to hours for a message to get from Earth to the spacecraft, and the light seen from stars left them many years ago, allowing the study of the history of the universe by looking at distant objects. The finite speed of light also limits the theoretical maximum speed of computers. The speed of light can be used in time-of-flight measurements to measure large distances to high precision. Ole Rømer first demonstrated in 1676 that light travels at a finite speed by studying the apparent motion of Jupiter's moon Io. In 1865, James Clerk Maxwell proposed that light was an electromagnetic wave. In 1905, Albert Einstein postulated that the speed of light c with respect to any inertial frame is a constant and is independent of the motion of the light source. He explored the consequences of that postulate by deriving the theory of relativity, and in doing so showed that the parameter c had relevance outside of the context of light and electromagnetism.
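The relations above, v = c/n for light in a material and delay = distance/c for signal travel time, can be sketched in a few lines of Python. The refractive index and the Earth–Moon distance used below are rough illustrative values.

```python
C = 299_792_458  # speed of light in vacuum, m/s (exact by definition of the metre)

def speed_in_medium(n):
    """Speed of light in a material with refractive index n: v = c / n."""
    return C / n

def one_way_delay_seconds(distance_m):
    """Time for a signal travelling at c to cover the given distance."""
    return distance_m / C

# Light in glass (n ~ 1.5) travels at roughly two-thirds of c
v_glass = speed_in_medium(1.5)

# Earth-Moon distance, roughly 384,400 km: a radio message takes over a second
delay = one_way_delay_seconds(384_400_000)
```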
After centuries of increasingly precise measurements, in 1975 the speed of light was known to be 299792458 m/s with a measurement uncertainty of 4 parts per billion. In 1983, the metre was redefined in the International System of Units as the distance travelled by light in vacuum in 1/299792458 of a second; as a result, the numerical value of c in metres per second is now fixed exactly by the definition of the metre. The speed of light in vacuum is usually denoted by a lowercase c. Historically, the symbol V was used as an alternative symbol for the speed of light, introduced by James Clerk Maxwell in 1865. In 1856, Wilhelm Eduard Weber and Rudolf Kohlrausch had used c for a different constant, later shown to equal √2 times the speed of light in vacuum; in 1894, Paul Drude redefined c with its modern meaning. Einstein used V in his original German-language papers on special relativity in 1905, but in 1907 he switched to c. Sometimes c is used for the speed of waves in any material medium, and c0 for the speed of light in vacuum; this article uses c exclusively for the speed of light in vacuum.
5.
Speed
–
In everyday use and in kinematics, the speed of an object is the magnitude of its velocity; it is thus a scalar quantity. Speed has the dimensions of distance divided by time. The SI unit of speed is the metre per second, but the most common unit of speed in everyday usage is the kilometre per hour or, in the US and the UK, miles per hour. For air and marine travel the knot is commonly used. The fastest possible speed at which energy or information can travel, according to special relativity, is the speed of light in a vacuum, c = 299792458 metres per second. Matter cannot quite reach the speed of light, as this would require an infinite amount of energy. In relativity physics, the concept of rapidity replaces the classical idea of speed. Italian physicist Galileo Galilei is usually credited with being the first to measure speed by considering the distance covered and the time it takes. Galileo defined speed as the distance covered per unit of time; in equation form, this is v = d/t, where v is speed, d is distance, and t is time. A cyclist who covers 30 metres in a time of 2 seconds, for example, has a speed of 15 metres per second. Objects in motion often have variations in speed. If s is the length of the path travelled until time t, the speed equals the time derivative of s, v = ds/dt; in the special case where the velocity is constant, this can be simplified to v = s/t. The average speed over a time interval is the total distance travelled divided by the time duration. Speed at some instant, or assumed constant during a very short period of time, is called instantaneous speed. By looking at a speedometer, one can read the instantaneous speed of a car at any instant. A car travelling at 50 km/h generally goes for less than one hour at that constant speed, but if it did travel at that speed for a full hour, it would cover 50 km. If the vehicle continued at that speed for half an hour, it would cover half that distance, 25 km. If it continued for only one minute, it would cover about 833 m. Different from instantaneous speed, average speed is defined as the total distance covered divided by the time interval.
For example, if a distance of 80 kilometres is driven in 1 hour, the average speed is 80 kilometres per hour. Likewise, if 320 kilometres are travelled in 4 hours, the average speed is also 80 kilometres per hour: when a distance in kilometres is divided by a time in hours, the result is in kilometres per hour. Average speed does not describe the speed variations that may have taken place during shorter time intervals, and so average speed is often quite different from a value of instantaneous speed. If the average speed and the time of travel are known, the distance travelled can be calculated by rearranging the definition to d = vt. Using this equation for an average speed of 80 kilometres per hour on a 4-hour trip, the distance covered is found to be 320 kilometres. Linear speed is the distance travelled per unit of time, while tangential speed is the linear speed of something moving along a circular path.
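The average-speed examples above reduce to the definition v = d/t and its rearrangement d = vt. A short Python sketch of both (the function names are chosen for the example):

```python
def average_speed(distance_km, hours):
    """Average speed: total distance divided by the time interval."""
    return distance_km / hours

def distance_travelled(avg_speed_kmh, hours):
    """Rearranged definition: d = v * t."""
    return avg_speed_kmh * hours

v1 = average_speed(80, 1)      # 80 km in 1 hour  -> 80 km/h
v2 = average_speed(320, 4)     # 320 km in 4 hours -> also 80 km/h
d = distance_travelled(80, 4)  # 80 km/h for 4 hours -> 320 km
```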
6.
System of measurement
–
A system of measurement is a collection of units of measurement and rules relating them to each other. Systems of measurement have historically been important, regulated and defined for the purposes of science and commerce. Systems of measurement in modern use include the metric system, the imperial system, and United States customary units. The French Revolution gave rise to the metric system, and this has spread around the world. In most systems, length, mass, and time are base quantities; later developments in science showed that either electric charge or electric current could be added to extend the set of base quantities by which many other metrological units could be easily defined. Other quantities, such as power and speed, are derived from the base set: for example, speed is distance per unit of time. Historically, local arrangements were satisfactory in their own contexts, and the preference for a more universal and consistent system only gradually spread with the growth of science. Changing a measurement system has substantial financial and cultural costs, which must be offset against the advantages to be obtained from using a more rational system. However, pressure built up, including from scientists and engineers, for conversion to a more rational system. The unifying characteristic is that there was some definition based on some standard; eventually cubits and strides gave way to customary units to meet the needs of merchants and scientists. In the metric system and other recent systems, a single basic unit is used for each base quantity. Often secondary units are derived from the basic units by multiplying by powers of ten. Thus the basic unit of length is the metre, and a distance of 1.234 m is 1,234 millimetres. Metrication is complete or nearly complete in almost all countries, but US customary units are heavily used in the United States and to some degree in Liberia. Traditional Burmese units of measurement are used in Burma. U.S.
units are used in limited contexts in Canada due to the large volume of trade; there is also considerable use of Imperial weights and measures, despite de jure Canadian conversion to metric. In the United States, metric units are used almost universally in science, widely in the military, and partially in industry, but customary units predominate in household use. At retail stores, the liter is a commonly used unit for volume, especially on bottles of beverages. Some other standard non-SI units are still in use, such as nautical miles and knots in aviation. Metric systems of units have evolved since the adoption of the first well-defined system in France in 1795. During this evolution the use of these systems has spread throughout the world, first to non-English-speaking countries, and then to English-speaking countries.
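Because metric secondary units are powers of ten of the basic unit, conversions reduce to multiplying or dividing by those factors, as in the 1.234 m = 1,234 mm example above. A minimal Python sketch (the factor table covers only a few illustrative length units):

```python
# Metric prefixes are powers of ten of the base unit (here, the metre)
PREFIX_FACTORS = {"km": 1000.0, "m": 1.0, "cm": 0.01, "mm": 0.001}

def convert_length(value, from_unit, to_unit):
    """Convert by passing through the base unit (the metre)."""
    metres = value * PREFIX_FACTORS[from_unit]
    return metres / PREFIX_FACTORS[to_unit]

mm = convert_length(1.234, "m", "mm")   # 1.234 m expressed in millimetres
km = convert_length(1500.0, "m", "km")  # 1500 m expressed in kilometres
```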
7.
Dimensional analysis
–
Converting from one dimensional unit to another is often somewhat complex. Dimensional analysis, or more specifically the factor-label method, also known as the unit-factor method, is a widely used technique for such conversions using the rules of algebra. The concept of physical dimension was introduced by Joseph Fourier in 1822. Physical quantities that are commensurable have the same dimension and can be directly compared to each other, even if they are originally expressed in differing units of measure. If physical quantities have different dimensions, they cannot be compared by similar units; hence, it is meaningless to ask whether a kilogram is greater than, equal to, or less than an hour. Any physically meaningful equation will have the same dimensions on its left and right sides. Checking for this dimensional homogeneity is a common application of dimensional analysis. Dimensional analysis is routinely used as a check of the plausibility of derived equations and computations. It is also generally used to categorize types of quantities and units based on their relationship to or dependence on other units. Many parameters and measurements in the sciences and engineering are expressed as a concrete number, a numerical quantity together with a dimensional unit. Often a quantity is expressed in terms of other quantities; for example, speed is a combination of length and time. Compound relations with "per" are expressed with division, e.g. 60 mi/1 h; other relations can involve multiplication, powers, or combinations thereof. A base unit is a unit that cannot be expressed as a combination of other units; for example, units for length and time are normally chosen as base units. Units for volume, however, can be factored into the base units of length, so they are considered derived units. Sometimes the names of units obscure the fact that they are derived units; for example, an ampere is a unit of electric current, which is equivalent to electric charge per unit time and is measured in coulombs per second, so 1 A = 1 C/s.
Similarly, one newton is 1 kg⋅m/s2. Percentages are dimensionless quantities, since they are ratios of two quantities with the same dimensions; in other words, the % sign can be read as 1/100. Taking a derivative with respect to a quantity adds the dimension of the variable one is differentiating with respect to, in the denominator. Thus, position has the dimension L; the derivative of position with respect to time has dimension LT−1, length from position, time from the derivative; and the second derivative has dimension LT−2. In economics, one distinguishes between stocks and flows: a stock has units of units (say, dollars), while a flow is a derivative of a stock and has units of units per time (say, dollars per year). In some contexts, dimensional quantities are expressed as dimensionless quantities or percentages by omitting some dimensions.
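Dimensional homogeneity checks like the ones described above can be mechanized by representing a dimension as a tuple of exponents over the base dimensions. The sketch below uses (length, mass, time) exponent triples; multiplying quantities adds exponents, dividing subtracts them. The representation is a simplification chosen for this example.

```python
# A dimension is a tuple of exponents of (length, mass, time)
LENGTH = (1, 0, 0)
MASS = (0, 1, 0)
TIME = (0, 0, 1)

def mul(a, b):
    """Multiplying two quantities adds their dimension exponents."""
    return tuple(x + y for x, y in zip(a, b))

def div(a, b):
    """Dividing two quantities subtracts dimension exponents."""
    return tuple(x - y for x, y in zip(a, b))

SPEED = div(LENGTH, TIME)        # L T^-1
ACCELERATION = div(SPEED, TIME)  # L T^-2, the second derivative of position

def homogeneous(lhs, rhs):
    """A physically meaningful equation has equal dimensions on both sides."""
    return lhs == rhs

# d = v * t is homogeneous: [v][t] = (L T^-1)(T) = L
ok = homogeneous(LENGTH, mul(SPEED, TIME))
```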
8.
Nondimensionalization
–
Nondimensionalization is the partial or full removal of units from an equation involving physical quantities by a suitable substitution of variables. This technique can simplify and parameterize problems where measured units are involved, and it is closely related to dimensional analysis. In some physical systems, the term scaling is used interchangeably with nondimensionalization, to suggest that certain quantities are better measured relative to some appropriate unit. These units refer to quantities intrinsic to the system, rather than to external units such as SI units. Nondimensionalization is not the same as converting extensive quantities in an equation to intensive quantities. Nondimensionalization can also recover characteristic properties of a system: for example, if a system has an intrinsic resonance frequency, length, or time constant, nondimensionalization can recover these values. The technique is especially useful for systems that can be described by differential equations. One important use is in the analysis of control systems. Many illustrative examples of nondimensionalization originate from simplifying differential equations, because a large body of physical problems can be formulated in terms of differential equations. An example of an application outside differential equations is dimensional analysis; another example is normalization in statistics. Measuring devices are practical examples of nondimensionalization occurring in everyday life: measuring devices are calibrated relative to some known unit, and subsequent measurements are made relative to this standard. Then, the absolute value of the measurement is recovered by scaling with respect to the standard. Suppose a pendulum is swinging with a particular period T. For such a system, it is advantageous to perform calculations relating to the swinging relative to T; in some sense, this is normalizing the measurement with respect to the period. Measurements made relative to an intrinsic property of a system will apply to other systems which also have the same intrinsic property.
It also allows one to compare a common property of different implementations of the same system. Nondimensionalization determines in a systematic manner the characteristic units of a system to use, without relying heavily on prior knowledge of the system's intrinsic properties. In fact, nondimensionalization can suggest the parameters which should be used for analyzing a system; however, it is necessary to start with an equation that describes the system appropriately. The last three steps of the procedure are usually specific to the problem where nondimensionalization is applied, but almost all systems require the first two steps to be performed.
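The pendulum example above can be sketched in Python: times are scaled by the system's intrinsic period T, giving a dimensionless time τ = t/T in which one full swing always spans τ = 1, whatever the pendulum's length. The small-angle period formula T = 2π√(L/g) and the 1 m example are standard but chosen here for illustration.

```python
import math

def characteristic_period(length_m, g=9.81):
    """Small-angle pendulum period, T = 2*pi*sqrt(L/g) - the system's
    intrinsic time scale."""
    return 2 * math.pi * math.sqrt(length_m / g)

def to_dimensionless_time(t_seconds, length_m):
    """Nondimensionalize a time by scaling with the intrinsic period T."""
    return t_seconds / characteristic_period(length_m)

T = characteristic_period(1.0)       # about 2.0 s for a 1 m pendulum
tau = to_dimensionless_time(T, 1.0)  # exactly one period, so tau = 1.0
```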
9.
Algebraic expression
–
In mathematics, an algebraic expression is an expression built up from integer constants, variables, and the algebraic operations. For example, 3x² − 2xy + c is an algebraic expression. Since taking the square root is the same as raising to the power 1/2, √((1 − x²)/(1 + x²)) is also an algebraic expression. By contrast, transcendental numbers like π and e are not algebraic. A rational expression is an expression that may be rewritten to a rational fraction by using the properties of the arithmetic operations; in other words, a rational expression is an expression which may be constructed from the variables using only the four arithmetic operations. Thus, (3x² − 2xy + c)/(y³ − 1) is a rational expression. A rational equation is an equation in which two rational fractions of the form P/Q are set equal to each other. These expressions obey the same rules as fractions. The equations can be solved by cross-multiplying; division by zero is undefined, so a solution causing formal division by zero is rejected. A solution of a polynomial equation expressed in terms of the basic arithmetic operations and root extractions is called an algebraic solution, but the Abel–Ruffini theorem states that algebraic solutions do not exist for all such equations of degree n ≥ 5. By convention, letters at the beginning of the alphabet are used to represent constants, and those toward the end of the alphabet are used to represent variables. They are usually written in italics. Also by convention, terms with the highest power are written on the left; for example, x² is written to the left of x. When a coefficient is one, it is usually omitted; likewise when the exponent is one, and, when the exponent is zero, the result is always 1. The table below summarizes how algebraic expressions compare with other types of mathematical expressions by the type of elements they may contain. A rational algebraic expression is an expression that can be written as a quotient of polynomials.
An irrational algebraic expression is one that is not rational, such as √(x + 4).
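Solving a rational equation by cross-multiplying, and rejecting any root that causes formal division by zero, can be shown concretely. The sketch below solves the illustrative equation x/(x − 2) = 2/(x − 2): cross-multiplying gives x² − 4x + 4 = 0, whose only root x = 2 must be rejected because it makes the denominators zero.

```python
import math

# Cross-multiplying x/(x - 2) = 2/(x - 2) gives x*(x - 2) = 2*(x - 2),
# i.e. the quadratic x**2 - 4*x + 4 = 0 with coefficients:
a, b, c = 1.0, -4.0, 4.0

# Solve the quadratic with the usual formula
disc = b * b - 4 * a * c
roots = {(-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1)}

# Reject any root that makes a denominator of the original equation zero
valid = {r for r in roots if r - 2 != 0}
```

Here `roots` contains only 2.0, and `valid` ends up empty: the equation has no solution, even though the cross-multiplied polynomial does.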
10.
Elementary particle
–
In particle physics, an elementary particle or fundamental particle is a particle whose substructure is unknown; thus, it is unknown whether it is composed of other particles. A particle containing two or more elementary particles is a composite particle. Soon, subatomic constituents of the atom were identified. As the 1930s opened, the electron and the proton had been observed, along with the photon. Via quantum theory, protons and neutrons were found to contain quarks, up quarks and down quarks, now considered elementary particles. And within a molecule, the electron's three degrees of freedom (charge, spin, orbital) can separate via the wavefunction into three quasiparticles. Yet a free electron, one which is not orbiting a nucleus and hence lacks orbital motion, appears unsplittable. Meanwhile, an elementary boson mediating gravitation, the graviton, remains hypothetical. All elementary particles are, depending on their spin, either bosons or fermions. These are differentiated via the spin-statistics theorem of quantum statistics: particles of half-integer spin exhibit Fermi–Dirac statistics and are fermions; particles of integer spin exhibit Bose–Einstein statistics and are bosons. In the Standard Model, elementary particles are represented for predictive utility as point particles. Though extremely successful, the Standard Model is limited to the microcosm by its omission of gravitation, and has some parameters arbitrarily added but unexplained. According to the current models of big bang nucleosynthesis, the composition of visible matter of the universe should be about 75% hydrogen and 25% helium. Neutrons are made up of one up and two down quarks, while protons are made of two up and one down quark. Since the other elementary particles are so light or so rare when compared to atomic nuclei, their contribution to the mass is negligible. Therefore, one can conclude that most of the mass of the universe consists of protons and neutrons.
Some estimates imply that there are roughly 10⁸⁰ baryons in the observable universe; the number of protons in the observable universe is called the Eddington number. Other estimates imply that roughly 10⁹⁷ elementary particles exist in the universe, mostly photons and gravitons. However, the Standard Model is widely considered to be an effective theory rather than a truly fundamental one. The 12 fundamental fermionic flavours are divided into three generations of four particles each. Six of the particles are quarks. The remaining six are leptons, three of which are neutrinos, and the other three of which have an electric charge of −1: the electron and its two cousins, the muon and the tau.
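The quark compositions given above (proton: two up and one down; neutron: one up and two down) can be checked by summing the quarks' fractional charges exactly with Python's Fraction type. The charges +2/3 and −1/3, in units of the elementary charge e, are the standard values.

```python
from fractions import Fraction

# Quark charges in units of the elementary charge e
UP = Fraction(2, 3)
DOWN = Fraction(-1, 3)

def total_charge(quarks):
    """A composite particle's charge is the sum of its constituents' charges."""
    return sum(quarks, Fraction(0))

proton = total_charge([UP, UP, DOWN])     # 2/3 + 2/3 - 1/3 = +1
neutron = total_charge([UP, DOWN, DOWN])  # 2/3 - 1/3 - 1/3 =  0
```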
11.
Nature
–
Nature, in the broadest sense, is the natural, physical, or material world or universe. Nature can refer to the phenomena of the physical world, and also to life in general. The study of nature is a large part of science. Although humans are part of nature, human activity is often understood as a separate category from other natural phenomena. The word nature is derived from the Latin word natura, meaning "essential qualities, innate disposition", and in ancient times literally meant "birth". Natura is a Latin translation of the Greek word physis, which originally related to the intrinsic characteristics that plants, animals, and other features of the world develop of their own accord. This usage continued during the advent of the modern scientific method in the last several centuries. Within the various uses of the word today, nature often refers to geology and wildlife. For example, manufactured objects and human interaction generally are not considered part of nature, unless qualified as, for example, "human nature" or "the whole of nature". Depending on the particular context, the term natural might also be distinguished from the unnatural or the supernatural. Earth is the only planet known to support life, and its natural features are the subject of many fields of scientific research. Within the solar system, it is third closest to the sun; it is the largest terrestrial planet, and its most prominent climatic features are its two large polar regions, two relatively narrow temperate zones, and a wide equatorial tropical to subtropical region. Precipitation varies widely with location, from several metres of water per year to less than a millimetre. About 71 percent of the Earth's surface is covered by salt-water oceans. The remainder consists of continents and islands, with most of the inhabited land in the Northern Hemisphere. Earth has evolved through geological and biological processes that have left traces of the original conditions. The outer surface is divided into several gradually migrating tectonic plates. The interior remains active, with a thick layer of plastic mantle and an iron-filled core. This iron core is composed of a solid inner phase and a fluid outer phase.
Convective motion in the core generates electric currents through dynamo action, and these in turn generate the Earth's magnetic field. The atmospheric conditions have been significantly altered from the original conditions by the presence of life-forms, which create an ecological balance that stabilizes the surface conditions. Geology is the science and study of the solid and liquid matter that constitutes the Earth. The geology of an area evolves through time as rock units are deposited and inserted, and deformational processes change their shapes and locations.
12.
Prototype
–
A prototype is an early sample, model, or release of a product built to test a concept or process or to act as a thing to be replicated or learned from. It is a term used in a variety of contexts, including semantics, design, and electronics. A prototype is used to evaluate a new design to enhance precision by system analysts and users. Prototyping serves to provide specifications for a real, working system rather than a theoretical one; in some design workflow models, creating a prototype is the step between the formalization and the evaluation of an idea. The word prototype derives from the Greek πρωτότυπον prototypon, "primitive form", neutral of πρωτότυπος prototypos, "original, primitive", from πρῶτος protos, "first", and τύπος typos, "impression". A Working Prototype represents all or nearly all of the functionality of the final product. A Visual Prototype represents the size and appearance, but not the functionality. A User Experience Prototype represents enough of the appearance and function of the product that it can be used for user research. A Functional Prototype captures both the function and appearance of the design, though it may be created with different techniques. A Paper Prototype is a printed or hand-drawn representation of the user interface of a software product. A prototype can differ from the final product in several ways. Materials: in some cases, the final production materials may still be undergoing development themselves. Process: mass-production processes are often unsuitable for making a small number of parts, so prototypes may be made using different fabrication processes than the final product; differences in fabrication process may lead to differences in the appearance of the prototype as compared to the final product. Verification: the final product may be subject to a number of quality assurance tests to verify conformance with drawings or specifications.
These tests may involve custom inspection fixtures, statistical sampling methods, and other techniques; prototypes, by contrast, are generally made with much closer individual inspection and the assumption that some adjustment or rework will be part of the fabrication process. Prototypes may also be exempted from some requirements that apply to the final product. Engineers and prototype specialists will attempt to minimize the impact of these differences on the intended role of the prototype. Engineers and prototyping specialists seek to understand the limitations of prototypes to exactly simulate the characteristics of their intended design; it is important to realize that, by their very definition, prototypes will represent some compromise from the final production design. Due to differences in materials, processes, and design fidelity, it is possible that a prototype may fail to perform acceptably even though the design itself may have been sound. In general, it can be expected that individual prototype costs will be greater than the final production costs due to inefficiencies in materials and processes. Prototypes are also used to revise the design for the purposes of reducing costs through optimization, and it is possible to use prototype testing to reduce the risk that a design may not perform as intended; however, prototypes generally cannot eliminate all risk. As an alternative, rapid prototyping or rapid application development techniques are used for the initial prototypes, which implement part, but not all, of the complete design.
13.
Subatomic particle
–
In the physical sciences, subatomic particles are particles much smaller than atoms. There are two types: elementary particles, which according to current theories are not made of other particles, and composite particles. Particle physics and nuclear physics study these particles and how they interact. In particle physics, the concept of a particle is one of several concepts inherited from classical physics, but it also reflects the understanding that at the quantum scale matter behaves very differently from what everyday experience would suggest. The idea of a particle underwent serious rethinking when experiments showed that light could behave like a stream of particles as well as exhibit wave-like properties; this led to the new concept of wave–particle duality to reflect that quantum-scale particles behave like both particles and waves. Another new concept, the uncertainty principle, states that some of their properties taken together, such as their simultaneous position and momentum, cannot be measured exactly. In more recent times, wave–particle duality has been shown to apply not only to photons but to increasingly massive particles as well. Interactions of particles in the framework of quantum field theory are understood as creation and annihilation of quanta of corresponding fundamental interactions. This blends particle physics with field theory. Any subatomic particle, like any particle in three-dimensional space that obeys the laws of quantum mechanics, can be either a boson or a fermion. Various extensions of the Standard Model predict the existence of a graviton particle. Composite subatomic particles are bound states of two or more elementary particles: for example, a proton is made of two up quarks and one down quark, while the atomic nucleus of helium-4 is composed of two protons and two neutrons. The neutron is made of two down quarks and one up quark. Composite particles include all hadrons, which comprise the baryons and the mesons. In special relativity, the energy of a particle at rest equals its mass times the speed of light squared, E = mc2.
That is, mass can be expressed in terms of energy. If a particle has a frame of reference in which it lies at rest, then it has a positive rest mass and is referred to as massive. Baryons tend to have greater mass than mesons, which in turn tend to be heavier than leptons, and it is also certain that any particle with an electric charge is massive. Particles that cannot be brought to rest are massless; these include the photon and the gluon, although the latter cannot be isolated. Through the work of Albert Einstein, Satyendra Nath Bose, Louis de Broglie, and many others, current scientific theory holds that all particles also have a wave nature. This has been verified not only for elementary particles but also for compound particles like atoms. Interactions between particles have been scrutinized for many centuries, and a few simple laws underpin how particles behave in collisions and interactions. These are the basics of Newtonian mechanics, a series of statements and equations in Philosophiae Naturalis Principia Mathematica. The negatively charged electron has a mass equal to about 1⁄1837 that of a hydrogen atom.
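The rest-energy relation E = mc2 can be checked numerically. A minimal sketch, using approximate CODATA constants (the numeric values are assumptions supplied here, not taken from this article):

```python
# Rest energy of the electron from E = m * c**2.
# Constants are approximate CODATA values; illustrative only.
m_e = 9.10938e-31      # electron mass, kg
c = 2.99792458e8       # speed of light, m/s
eV = 1.602177e-19      # joules per electronvolt

E_joules = m_e * c**2              # rest energy in joules
E_MeV = E_joules / eV / 1e6        # same energy expressed in MeV
print(f"electron rest energy: {E_MeV:.3f} MeV")  # ~0.511 MeV
```

The familiar 0.511 MeV electron rest energy falls out directly, illustrating how mass can be expressed in terms of energy.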
14.
Vacuum
–
Vacuum is space void of matter. The word stems from the Latin adjective vacuus, meaning "vacant" or "void". An approximation to such vacuum is a region with a gaseous pressure much less than atmospheric pressure. In engineering and applied physics, on the other hand, vacuum refers to any space in which the pressure is lower than atmospheric pressure. The Latin term in vacuo is used to describe an object that is surrounded by a vacuum. The quality of a partial vacuum refers to how closely it approaches a perfect vacuum; other things equal, lower gas pressure means higher-quality vacuum. For example, a typical vacuum cleaner produces enough suction to reduce air pressure by around 20%. Ultra-high vacuum chambers, common in chemistry, physics, and engineering, operate below one trillionth of atmospheric pressure; outer space is an even higher-quality vacuum, with the equivalent of just a few hydrogen atoms per cubic meter on average. In the study of electromagnetism in the 19th century, vacuum was thought to be filled with a medium called aether; in modern particle physics, the vacuum state is considered the ground state of a field. Vacuum has been a frequent topic of philosophical debate since ancient Greek times. Evangelista Torricelli produced the first laboratory vacuum in 1643, and other experimental techniques were developed as a result of his theories of atmospheric pressure. A torricellian vacuum is created by filling with mercury a tall glass container closed at one end and then inverting it into a reservoir of mercury. Vacuum became an industrial tool in the 20th century with the introduction of incandescent light bulbs and vacuum tubes. The recent development of human spaceflight has raised interest in the impact of vacuum on human health. The word vacuum comes from Latin vacuum, "an empty space, void", noun use of the neuter of vacuus, meaning "empty", related to vacare, meaning "to be empty". Vacuum is one of the few words in the English language that contains two consecutive letters u. Historically, there has been dispute over whether such a thing as a vacuum can exist.
Ancient Greek philosophers debated the existence of a vacuum, or void, in the context of atomism. Aristotle believed that no void could occur naturally, because the denser surrounding material continuum would immediately fill any incipient rarity that might give rise to a void. Almost two thousand years after Plato, René Descartes also proposed a geometrically based alternative theory of atomism, without the problematic nothing–everything dichotomy of void. The explanation of a clepsydra, or water clock, was a popular topic in the Middle Ages. Although a simple wine skin sufficed to demonstrate a partial vacuum in principle, the philosopher Al-Farabi concluded that air's volume can expand to fill available space, and he suggested that the concept of a perfect vacuum was incoherent. However, according to Nader El-Bizri, the physicist Ibn al-Haytham and the Mu'tazili theologians disagreed with Aristotle and Al-Farabi; using geometry, Ibn al-Haytham mathematically demonstrated that place is the imagined three-dimensional void between the inner surfaces of a containing body.
15.
Length
–
In geometric measurements, length is the most extended dimension of an object. In the International System of Quantities, length is any quantity with the dimension of distance. In other contexts, length is the measured dimension of an object: for example, it is possible to cut a length of wire that is shorter than the wire's thickness. Length may be distinguished from height, which is vertical extent, and from width or breadth, which is the distance from side to side. Length is a measure of one dimension, whereas area is a measure of two dimensions and volume is a measure of three dimensions. In most systems of measurement, the unit of length is a base unit. Measurement has been important ever since humans settled from nomadic lifestyles and started using building materials, occupying land, and trading with neighbours. As society has become more technologically oriented, much higher accuracies of measurement are required in a diverse set of fields. One of the oldest units of measurement used in the ancient world was the cubit, the length of the arm from the tip of the finger to the elbow. This could then be subdivided into shorter units like the foot, hand, or finger, though the cubit could vary considerably due to the different sizes of people. After Albert Einstein's special relativity, length can no longer be thought of as being constant in all reference frames: a ruler that is one metre long in one frame of reference will not be one metre long in a frame that is moving at a velocity relative to the first frame, so the length of an object is variable depending on the observer. In the physical sciences and engineering, when one speaks of units of length, the word length is synonymous with distance. There are several units that are used to measure length. In the International System of Units, the basic unit of length is the metre, now defined in terms of the speed of light. The centimetre and the kilometre, derived from the metre, are also commonly used units. In U. S.
customary units, the English or Imperial system of units, commonly used units of length are the inch, the foot, the yard, and the mile. Units used to denote distances in the vastness of space, as in astronomy, are much longer than those typically used on Earth and include the astronomical unit and the light-year. See also: Dimension, Distance, Orders of magnitude, Reciprocal length, Smoot, Unit of length.
16.
Mass
–
In physics, mass is a property of a physical body. It is the measure of a body's resistance to acceleration when a net force is applied, and it also determines the strength of its gravitational attraction to other bodies. The basic SI unit of mass is the kilogram. Mass is not the same as weight, even though mass is often determined by measuring the object's weight using a spring scale rather than comparing it directly with known masses. An object on the Moon would weigh less than it does on Earth because of the lower gravity; this is because weight is a force, while mass is the property that determines the strength of this force. In Newtonian physics, mass can be generalized as the amount of matter in an object; however, at very high speeds, special relativity shows that energy is an additional source of mass. Thus, any body having mass has an equivalent amount of energy. In addition, matter is a loosely defined term in science. There are several distinct phenomena which can be used to measure mass. Active gravitational mass measures the gravitational force exerted by an object; passive gravitational mass measures the force exerted on an object in a known gravitational field; inertial mass determines an object's acceleration in the presence of an applied force. According to Newton's second law of motion, if a body of fixed mass m is subjected to a single force F, its acceleration a is given by a = F/m. A body's mass also determines the degree to which it generates or is affected by a gravitational field; this is sometimes referred to as gravitational mass. The standard International System of Units unit of mass is the kilogram. The kilogram is 1000 grams, first defined in 1795 as the mass of one cubic decimetre of water at the melting point of ice. Then, in 1889, the kilogram was redefined as the mass of the international prototype kilogram. As of January 2013, there were proposals for redefining the kilogram yet again.
In particle physics, mass is often expressed in units of eV/c2; the electronvolt and its multiples, such as the MeV, are commonly used. The atomic mass unit is 1/12 of the mass of a carbon-12 atom and is convenient for expressing the masses of atoms and molecules. Outside the SI system, other units of mass include the slug, an Imperial unit of mass, and the pound, a unit of both mass and force, used mainly in the United States
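The relationship between the kilogram, the atomic mass unit, and eV/c2 can be illustrated numerically. A minimal sketch, using approximate CODATA constants (the numeric values below are assumptions supplied here, not from this article):

```python
# Converting a mass in atomic mass units (u) to its equivalent in MeV/c^2.
# Constants are approximate CODATA values; illustrative only.
U_IN_KG = 1.660539e-27   # 1 atomic mass unit, kg
C = 2.99792458e8         # speed of light, m/s
EV = 1.602177e-19        # joules per electronvolt

def u_to_MeV_per_c2(mass_u):
    """Mass in u -> mass in MeV/c^2 (via E = m c^2)."""
    return mass_u * U_IN_KG * C**2 / EV / 1e6

print(f"1 u = {u_to_MeV_per_c2(1):.2f} MeV/c^2")  # ~931.49
```

The familiar conversion factor of roughly 931.5 MeV/c2 per atomic mass unit drops out, showing why eV/c2 is convenient in particle physics while the u suits atoms and molecules.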
17.
Time
–
Time is the indefinite continued progress of existence and events that occur in apparently irreversible succession from the past through the present to the future. Time is often referred to as the fourth dimension, along with the three spatial dimensions. Time has long been an important subject of study in religion, philosophy, and science; diverse fields such as business, industry, sports, the sciences, and the performing arts all incorporate some notion of time into their respective measuring systems. Two contrasting viewpoints on time divide prominent philosophers. One view is that time is part of the fundamental structure of the universe—a dimension independent of events, in which events occur in sequence. Isaac Newton subscribed to this realist view, and hence it is referred to as Newtonian time. The opposing view, in the tradition of Gottfried Leibniz and Immanuel Kant, holds that time is neither an event nor a thing, and thus is not itself measurable. Time in physics is unambiguously operationally defined as what a clock reads. Time is one of the seven fundamental physical quantities in both the International System of Units and the International System of Quantities. Time is used to define other quantities—such as velocity—so defining time in terms of such quantities would result in circularity of definition. The operational definition leaves aside the question of whether there is something called time, apart from the counting activity just mentioned, that flows. Investigations of a single continuum called spacetime bring questions about space into questions about time, questions that have their roots in the works of early students of natural philosophy. Furthermore, it may be that there is a subjective component to time. Temporal measurement has occupied scientists and technologists, and was a motivation in navigation.
Periodic events and periodic motion have long served as standards for units of time; examples include the apparent motion of the sun across the sky, the phases of the moon, the swing of a pendulum, and the beat of a heart. Currently, the international unit of time, the second, is defined by measuring the electronic transition frequency of caesium atoms. Time is also of significant social importance, having economic value as well as personal value, due to an awareness of the limited time in each day. In day-to-day life, the clock is consulted for periods less than a day, whereas the calendar is consulted for periods longer than a day; increasingly, personal electronic devices display both calendars and clocks simultaneously. The number that marks the occurrence of an event as to hour or date is obtained by counting from a fiducial epoch—a central reference point. Artifacts from the Paleolithic suggest that the moon was used to reckon time as early as 6,000 years ago. Lunar calendars were among the first to appear, with years of either 12 or 13 lunar months. Without intercalation to add days or months to some years, seasons quickly drift in a calendar based solely on twelve lunar months
18.
Temperature
–
A temperature is an objective comparative measurement of hot or cold, measured by a thermometer. Several scales and units exist for measuring temperature, the most common being Celsius, Fahrenheit, and, especially in science, Kelvin. Absolute zero is denoted as 0 K on the Kelvin scale and −273.15 °C on the Celsius scale. The kinetic theory offers a valuable but limited account of the behavior of the materials of macroscopic bodies, especially of fluids. Temperature is important in all fields of science, including physics, geology, chemistry, atmospheric sciences, and medicine. The Celsius scale is used for temperature measurements in most of the world; because of its 100-degree interval, it is also called a centigrade scale. The United States commonly uses the Fahrenheit scale, on which water freezes at 32 °F and boils at 212 °F at sea-level atmospheric pressure. Many scientific measurements use the Kelvin temperature scale, named in honor of the Scottish physicist who first defined it; it is a thermodynamic or absolute temperature scale. Its zero point, 0 K, is defined to coincide with the coldest physically possible temperature, and its degrees are defined through thermodynamics. Absolute zero occurs at 0 K = −273.15 °C. For historical reasons, the triple point temperature of water is fixed at 273.16 units of the measurement increment. Temperature is one of the principal quantities in the study of thermodynamics. There is a variety of kinds of temperature scale, and it may be convenient to classify them as empirically and theoretically based. Empirical temperature scales are historically older, while theoretically based scales arose in the middle of the nineteenth century. Empirically based temperature scales rely directly on measurements of simple physical properties of materials; for example, the length of a column of mercury confined in a capillary tube depends largely on temperature. Such scales are valid only within convenient ranges of temperature.
For example, above the boiling point of mercury, a mercury-in-glass thermometer is impracticable, and a material is of no use as a thermometer near one of its phase-change temperatures. In spite of these restrictions, most generally used practical thermometers are of the empirically based kind. In particular, empirical thermometry was used for calorimetry, which contributed greatly to the discovery of thermodynamics. Nevertheless, empirical thermometry has serious drawbacks when judged as a basis for theoretical physics. Theoretically based temperature scales are based directly on theoretical arguments, especially those of thermodynamics and kinetic theory; they rely on theoretical properties of idealized devices and materials.
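The fixed points quoted above (absolute zero at −273.15 °C, water freezing at 32 °F and boiling at 212 °F) pin down the conversions between the three scales. A minimal sketch of those conversions:

```python
# Conversions between the Kelvin, Celsius, and Fahrenheit scales,
# using the fixed points cited in the text.

def kelvin_to_celsius(k):
    """Celsius is Kelvin shifted so that 0 C = 273.15 K."""
    return k - 273.15

def celsius_to_fahrenheit(c):
    """Fahrenheit spans 180 degrees between freezing (32 F) and boiling (212 F)."""
    return c * 9 / 5 + 32

print(kelvin_to_celsius(0))                          # absolute zero: -273.15 C
print(celsius_to_fahrenheit(kelvin_to_celsius(0)))   # same point: ~ -459.67 F
print(celsius_to_fahrenheit(0), celsius_to_fahrenheit(100))  # 32.0 212.0
```

Note that the Kelvin and Celsius scales share the same degree size, while a Fahrenheit degree is 5/9 as large.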
19.
Electric current
–
An electric current is a flow of electric charge. In electric circuits this charge is often carried by moving electrons in a wire. It can also be carried by ions in an electrolyte, or by both ions and electrons, as in an ionised gas. The SI unit for measuring an electric current is the ampere, and electric current is measured using a device called an ammeter. Electric currents cause Joule heating, which creates light in incandescent light bulbs. They also create magnetic fields, which are used in motors, inductors, and generators. The particles that carry the charge in an electric current are called charge carriers. In metals, one or more electrons from each atom are loosely bound to the atom; these conduction electrons are the charge carriers in metal conductors. The conventional symbol for current is I, which originates from the French phrase intensité de courant, or current intensity; the quantity is often referred to simply as current. The I symbol was used by André-Marie Ampère, after whom the unit of current is named, in formulating the eponymous Ampère's force law, and the notation travelled from France to Great Britain, where it became standard. In a conductive material, the moving charged particles which constitute the electric current are called charge carriers. In other materials, notably the semiconductors, the carriers can be positive or negative, and positive and negative charge carriers may even be present at the same time. A flow of positive charges gives the same electric current, and has the same effect in a circuit, as an equal flow of negative charges in the opposite direction. Since current can be the flow of either positive or negative charges, or both, a convention is needed: the direction of current is arbitrarily defined as the direction in which positive charges flow. This is called the conventional direction of the current I; if the actual charges flow in the opposite direction, the variable I has a negative value. When analyzing electrical circuits, the actual direction of current through a specific circuit element is usually unknown.
Consequently, the directions of currents are often assigned arbitrarily
20.
Boltzmann constant
–
The Boltzmann constant, named after Ludwig Boltzmann, is a physical constant relating the average kinetic energy of particles in a gas with the temperature of the gas. It is the gas constant R divided by the Avogadro constant NA. The Boltzmann constant has the dimension of energy divided by temperature, the same as entropy. The accepted value in SI units is 1.38064852×10−23 J/K. The Boltzmann constant, k, is a bridge between macroscopic and microscopic physics. Introducing the Boltzmann constant transforms the ideal gas law into the alternative form pV = NkT, where N is the number of particles; for n = 1 mol, N is equal to the number of particles in one mole, the Avogadro number. Given a thermodynamic system at an absolute temperature T, the average thermal energy carried by each microscopic degree of freedom in the system is on the order of magnitude of kT/2. In classical statistical mechanics, this average is predicted to hold exactly for homogeneous ideal gases. Monatomic ideal gases possess three degrees of freedom per atom, corresponding to the three spatial directions, which means a thermal energy of 3kT/2 per atom. This corresponds very well with experimental data. The thermal energy can be used to calculate the root-mean-square speed of the atoms, which turns out to be inversely proportional to the square root of the atomic mass. The root-mean-square speeds found at room temperature accurately reflect this, ranging from about 1370 m/s for helium down to about 240 m/s for xenon. Kinetic theory gives the average pressure p for an ideal gas as p = (1/3)(N/V) m⟨v2⟩, where ⟨v2⟩ is the mean squared speed. Combination with the ideal gas law pV = NkT shows that the average translational kinetic energy is (1/2) m⟨v2⟩ = (3/2) kT. Considering that the translational motion velocity vector v has three degrees of freedom gives the energy per degree of freedom as one third of that, namely kT/2. Diatomic gases, for example, possess a total of six degrees of freedom per molecule that are related to atomic motion.
Again, it is the energy-like quantity kT that takes central importance; consequences of this include the Arrhenius equation in chemical kinetics. In statistical mechanics, the entropy S of a system is related to the number of its microscopic configurations, or microstates, W, by S = k ln W. Such is its importance that this equation is inscribed on Boltzmann's tombstone. The constant of proportionality k serves to make the statistical mechanical entropy equal to the classical thermodynamic entropy of Clausius, ΔS = ∫ dQ/T. One could choose instead a rescaled dimensionless entropy in microscopic terms such that S′ = ln W and ΔS′ = ∫ dQ/(kT). This is a more natural form, and this rescaled entropy exactly corresponds to Shannon's subsequent information entropy. The characteristic energy kT is thus the energy required to increase the rescaled entropy by one nat. The iconic terse form of the equation S = k ln W on Boltzmann's tombstone is in fact due to Planck
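The root-mean-square speed mentioned above follows directly from (1/2) m⟨v2⟩ = (3/2) kT, giving v_rms = sqrt(3kT/m). A minimal sketch using approximate constants (the numeric values are assumptions supplied here, not from this article):

```python
import math

# RMS speed of an ideal-gas atom: (1/2) m <v^2> = (3/2) k T
# implies v_rms = sqrt(3 k T / m). Approximate constants; illustrative only.
K_B = 1.38064852e-23   # Boltzmann constant, J/K
U = 1.660539e-27       # atomic mass unit, kg

def v_rms(mass_u, temperature_k):
    """RMS speed in m/s for an atom of the given mass (in u) at temperature T (in K)."""
    m = mass_u * U
    return math.sqrt(3 * K_B * temperature_k / m)

print(f"helium at 300 K: {v_rms(4.0026, 300):.0f} m/s")  # ~1370 m/s
```

The inverse-square-root dependence on mass is visible directly: heavier xenon (131.29 u) comes out near 240 m/s at the same temperature.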
21.
International System of Units
–
The International System of Units (SI) is the modern form of the metric system, and is the most widely used system of measurement. It comprises a coherent system of units of measurement built on seven base units. The system also establishes a set of twenty prefixes to the unit names and unit symbols that may be used when specifying multiples and fractions of the units. The system was published in 1960 as the result of an initiative that began in 1948. It is based on the metre-kilogram-second system of units rather than any variant of the centimetre-gram-second system. The motivation for the development of the SI was the diversity of units that had sprung up within the CGS systems. The International System of Units has been adopted by most developed countries; however, the adoption has not been universal in all English-speaking countries. The metric system was first implemented during the French Revolution, with just the metre and kilogram as standards of length and mass. In the 1830s Carl Friedrich Gauss laid the foundations for a coherent system based on length, mass, and time. In the 1860s a group working under the auspices of the British Association for the Advancement of Science formulated the requirement for a coherent system of units with base units and derived units. Meanwhile, in 1875, the Treaty of the Metre passed responsibility for verification of the kilogram and the metre to international bodies, and in 1921 the Treaty was extended to include all physical quantities, including electrical units originally defined in 1893. The units associated with these quantities were the metre, kilogram, second, ampere, kelvin, and candela. In 1971, a seventh base quantity, amount of substance, represented by the mole, was added to the definition of SI. On 11 July 1792, the committee proposed the names metre, are, litre and grave for the units of length, area, capacity, and mass, respectively.
The committee also proposed that multiples and submultiples of these units were to be denoted by decimal-based prefixes, such as centi for a hundredth. On 10 December 1799, the law by which the metric system was to be definitively adopted in France was passed. Gauss later made the first absolute measurements of the Earth's magnetic field; prior to this, the strength of the magnetic field had only been described in relative terms. The technique used by Gauss was to equate the torque induced on a suspended magnet of known mass by the Earth's magnetic field with the torque induced on an equivalent system under gravity. The resultant calculations enabled him to assign dimensions to the magnetic field based on mass, length, and time. A French-inspired initiative for international cooperation in metrology led to the signing in 1875 of the Metre Convention. Initially the convention only covered standards for the metre and the kilogram; one of each was selected at random to become the International prototype metre and International prototype kilogram, which replaced the mètre des Archives and kilogramme des Archives respectively. Each member state was entitled to one of each of the prototypes to serve as the national prototype for that country. Initially the convention's prime purpose was the periodic recalibration of national prototype metres. The official language of the Metre Convention is French, and the definitive version of all official documents published by or on behalf of the CGPM is the French-language version
22.
Coulomb's law
–
Coulomb's law, or Coulomb's inverse-square law, is a law of physics describing the force interacting between static electrically charged particles. The force of interaction between the charges is attractive if the charges have opposite signs and repulsive if they are like-signed. The law was first published in 1785 by French physicist Charles-Augustin de Coulomb and was essential to the development of the theory of electromagnetism. It is analogous to Isaac Newton's inverse-square law of universal gravitation. Coulomb's law can be used to derive Gauss's law, and vice versa. The law has been tested extensively, and all observations have upheld the law's principle. Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers. Thales of Miletus was incorrect in believing the attraction was due to a magnetic effect; much later, William Gilbert coined the New Latin word electricus to refer to the property of attracting small objects after being rubbed. This association gave rise to the English words electric and electricity; however, these early investigators did not generalize or elaborate on the force between charges. In 1767, Joseph Priestley conjectured that the force between charges varied as the inverse square of the distance. In 1769, Scottish physicist John Robison announced that, according to his measurements, the force between like charges varied approximately as the inverse square of the distance. In the early 1770s, the dependence of the force between charged bodies upon both distance and charge had already been discovered, but not published, by Henry Cavendish of England. Finally, in 1785, Coulomb published his first three reports of electricity and magnetism, in which he stated his law; this publication was essential to the development of the theory of electromagnetism. The torsion balance consists of a bar suspended from its middle by a thin fiber, which acts as a very weak torsion spring. In Coulomb's experiment, the bar was an insulating rod with a metal-coated ball attached to one end.
The ball was charged with a known charge of static electricity, and a second charged ball of the same polarity was brought near it. The two charged balls repelled one another, twisting the fiber through an angle, which could be read from a scale on the instrument. By knowing how much force it took to twist the fiber through a given angle, Coulomb was able to calculate the force between the balls. The force acts along the straight line joining the two charges. If the two charges have the same sign, the electrostatic force between them is repulsive; if they have different signs, the force between them is attractive. Coulomb's law can also be stated as a mathematical expression. The vector form of the equation calculates the force F1 applied on q1 by q2; if r12 is used instead, then the effect on q2 can be found. It can also be calculated using Newton's third law: F2 = −F1.
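The scalar form of the law can be evaluated directly: F = k q1 q2 / r2, with the Coulomb constant k = 1/(4πε0). A minimal sketch (the value of k is a standard constant assumed here, not given in the text):

```python
# Magnitude of the electrostatic force between two point charges,
# F = K_E * q1 * q2 / r**2, with K_E = 1/(4*pi*eps0).
K_E = 8.9875517873681764e9   # Coulomb constant, N*m^2/C^2

def coulomb_force(q1, q2, r):
    """Force in newtons; a positive result means repulsion (like signs)."""
    return K_E * q1 * q2 / r**2

# Two 1 microcoulomb charges held 1 m apart repel with roughly 9 mN:
print(coulomb_force(1e-6, 1e-6, 1.0))
```

The sign convention mirrors the text: like-signed charges give a positive (repulsive) force, opposite signs a negative (attractive) one.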
23.
Planck constant
–
The Planck constant is a physical constant that is the quantum of action, central in quantum mechanics. First recognized in Max Planck's work on black-body radiation, it was later associated by Einstein with the light quantum, which behaved in some respects as an electrically neutral particle and was eventually called the photon. The Planck–Einstein relation connects the particulate photon energy E with its associated wave frequency f: E = hf. This energy is extremely small in terms of ordinarily perceived everyday objects. Since the frequency f, wavelength λ, and speed of light c are related by f = c/λ, the relation can also be expressed as E = hc/λ. This leads to another relationship involving the Planck constant: with p denoting the linear momentum of a particle, the de Broglie wavelength λ of the particle is given by λ = h/p. In applications where it is natural to use the angular frequency, it is often useful to absorb a factor of 2π into the Planck constant. The resulting constant is called the reduced Planck constant or Dirac constant; it is equal to the Planck constant divided by 2π and is denoted ħ: ħ = h/(2π). The energy of a photon with angular frequency ω, where ω = 2πf, is given by E = ħω, while its linear momentum relates to the angular wavenumber k by p = ħk; this was confirmed by experiments soon afterwards. This holds throughout quantum theory, including electrodynamics. These two relations are the temporal and spatial component parts of the special relativistic expression using 4-vectors: Pμ = (E/c, p) = ħKμ = ħ(ω/c, k). Classical statistical mechanics requires the existence of h. Eventually, following upon Planck's discovery, it was recognized that physical action cannot take on an arbitrary value; instead, it must be an integer multiple of a very small quantity, the quantum of action. This is the old quantum theory developed by Bohr and Sommerfeld, in which particle trajectories exist but are hidden. In modern quantum mechanics, by contrast, there is no definite value of the action as classically defined. Related to this is the concept of energy quantization, which existed in old quantum theory and also exists in altered form in modern quantum physics.
Classical physics cannot explain either the quantization of energy or the lack of classical particle motion. In many cases, such as for monochromatic light or for atoms, quantization of energy also implies that only certain energy levels are allowed. The Planck constant has dimensions of physical action, i. e. energy multiplied by time, or momentum multiplied by distance. In SI units, the Planck constant is expressed in joule-seconds (J⋅s), equivalently kg⋅m2⋅s−1. The value of the Planck constant is h = 6.626070040×10−34 J⋅s = 4.135667662×10−15 eV⋅s. The value of the reduced Planck constant is ħ = h/(2π) = 1.054571800×10−34 J⋅s = 6.582119514×10−16 eV⋅s
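The Planck–Einstein relation E = hf = hc/λ can be evaluated numerically. A minimal sketch using the constant values quoted above (the eV conversion factor is an additional assumption supplied here):

```python
# Photon energy from the Planck-Einstein relation E = h*f = h*c/lambda.
H = 6.626070040e-34   # Planck constant, J*s (value quoted in the text)
C = 2.99792458e8      # speed of light, m/s
EV = 1.602177e-19     # joules per electronvolt

def photon_energy_eV(wavelength_m):
    """Energy in eV of a photon with the given wavelength in metres."""
    return H * C / wavelength_m / EV

# Green light at 550 nm carries roughly 2.25 eV per photon:
print(f"{photon_energy_eV(550e-9):.2f} eV")
```

The tiny magnitude of h is what makes single-photon energies imperceptibly small on everyday scales, as the text notes.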
24.
Quantum Hall effect
–
The quantum Hall effect is observed in two-dimensional electron systems subjected to low temperatures and strong magnetic fields, in which the Hall conductance takes on quantized values of the form σ = ν e2/h, where e is the elementary charge and h is the Planck constant. The prefactor ν is known as the filling factor, and can take on either integer or fractional values; the quantum Hall effect is referred to as the integer or fractional quantum Hall effect depending on whether ν is an integer or a fraction. The striking feature of the integer quantum Hall effect is the persistence of the quantization as the electron density is varied. The fractional quantum Hall effect is more complicated, as its existence relies fundamentally on electron–electron interactions; although the microscopic origins of the fractional quantum Hall effect are unknown, there are several phenomenological approaches that provide accurate approximations. For example, the effect can be thought of as an integer quantum Hall effect, not of electrons but of charge–flux composites known as composite fermions. In 1988, it was proposed that there was a quantum Hall effect without Landau levels; this is referred to as the quantum anomalous Hall effect. There is also a newer concept of the quantum spin Hall effect, an analogue of the quantum Hall effect in which spin currents flow instead of charge currents. The quantization of the Hall conductance has the important property of being exceedingly precise: actual measurements of the Hall conductance have been found to be integer or fractional multiples of e2/h to nearly one part in a billion. This phenomenon, referred to as exact quantization, has been shown to be a subtle manifestation of the principle of gauge invariance. It has allowed for the definition of a new practical standard for electrical resistance, based on the resistance quantum RK = h/e2, named after Klaus von Klitzing, the discoverer of exact quantization. Since 1990, a fixed conventional value RK-90 has been used in resistance calibrations worldwide. The quantum Hall effect also provides an extremely precise independent determination of the fine-structure constant. Several researchers subsequently observed the effect in experiments carried out on the inversion layer of MOSFETs.
For this finding, von Klitzing was awarded the 1985 Nobel Prize in Physics. The link between exact quantization and gauge invariance was subsequently found by Robert Laughlin, who connected the quantized conductivity to the quantized charge transport in a Thouless charge pump. Most integer quantum Hall experiments are now performed on gallium arsenide heterostructures; in 2007, the integer quantum Hall effect was reported in graphene at temperatures as high as room temperature, and in the oxide ZnO–MgxZn1−xO. In two dimensions, when electrons are subjected to a magnetic field they follow circular cyclotron orbits. When the system is treated quantum mechanically, these orbits are quantized; the energy levels of these quantized orbitals take on the discrete values En = ħωc(n + 1/2), where ωc = eB/m is the cyclotron frequency. For strong magnetic fields, each Landau level is highly degenerate. The integers that appear in the Hall effect are examples of topological quantum numbers; they are known in mathematics as the first Chern numbers and are related to Berry's phase. A striking model of much interest in this context is the Azbel–Harper–Hofstadter model, whose quantum phase diagram is the Hofstadter butterfly; the vertical axis of the diagram is the strength of the magnetic field and the horizontal axis is the chemical potential, which fixes the electron density.
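The resistance quantum mentioned above, RK = h/e2, can be computed directly from the fundamental constants. A minimal sketch using approximate CODATA values (the numeric constants are assumptions supplied here, not from this article):

```python
# The von Klitzing constant R_K = h / e**2, the quantized Hall resistance.
H = 6.626070040e-34          # Planck constant, J*s
E_CHARGE = 1.6021766208e-19  # elementary charge, C

R_K = H / E_CHARGE**2
print(f"R_K = {R_K:.3f} ohm")  # ~25812.807 ohm

# At integer filling factor nu, the Hall resistance is R_K / nu:
for nu in (1, 2, 4):
    print(nu, R_K / nu)
```

Because RK depends only on h and e, measuring the quantized Hall resistance gives a resistance standard tied to fundamental constants rather than to any physical artifact.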
25.
Gravitational constant
–
Its measured value is approximately 6.67408×10⁻¹¹ m³⋅kg⁻¹⋅s⁻². The constant of proportionality, G, in Newton's law of universal gravitation is the gravitational constant. Colloquially, the gravitational constant is also called "Big G", for disambiguation with "small g", which is the local gravitational field of Earth; the two quantities are related by g = GM_E/r_E². In the general theory of relativity, the Einstein field equations, R_μν − (1/2)R g_μν = (8πG/c⁴)T_μν, contain the scaled gravitational constant κ = 8πG/c⁴ ≈ 2.071×10⁻⁴³ s²·m⁻¹·kg⁻¹, also known as Einstein's constant. The gravitational constant is difficult to measure with high accuracy, because the gravitational force is extremely weak compared with the other fundamental forces. In SI units, the 2014 CODATA-recommended value of the constant is 6.67408×10⁻¹¹ m³⋅kg⁻¹⋅s⁻². In cgs units, G can be written as G ≈ 6.674×10⁻⁸ cm³·g⁻¹·s⁻², and in Planck units G has the numerical value 1. In astrophysics, it is convenient to measure distances in parsecs and velocities in kilometers per second; in these units, the gravitational constant is G ≈ 4.302×10⁻³ pc·M⊙⁻¹·(km/s)². In orbital mechanics, the period P of an object in circular orbit around a spherical object obeys GM = 3πV/P², where V is the volume inside the radius of the orbit. It follows that P² = (3π/G)(V/M) ≈ 10.896 hr²·(g/cm³)·(V/M). This way of expressing G shows the relationship between the mean density of a planet and the period of a satellite orbiting just above its surface. Cavendish measured G implicitly, using a torsion balance invented by the geologist Rev. John Michell; he used a horizontal torsion beam with lead balls whose inertia he could determine by timing the beam's oscillation. Their faint attraction to other balls placed alongside the beam was detectable by the deflection it caused. Cavendish's aim was not actually to measure the gravitational constant, but rather to measure the Earth's density relative to water, through precise knowledge of the gravitational interaction. 
In modern units, the density that Cavendish calculated implied a value for G of 6.754×10⁻¹¹ m³·kg⁻¹·s⁻². The accuracy of the measured value of G has increased only modestly since the original Cavendish experiment. G is quite difficult to measure, because gravity is much weaker than the other fundamental forces. Published values of G have varied rather broadly, and some recent high-precision measurements are, in fact, mutually exclusive. This led to the 2010 CODATA value published by NIST having a 20% larger uncertainty than in 2006; for the 2014 update, CODATA reduced the uncertainty to less than half the 2010 value
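The density–period relation quoted above can be checked numerically. A small Python sketch (the function name is illustrative, and Earth's mean density of about 5.51 g/cm³ is an assumed input, not from the source):

```python
import math

# Sketch: for a circular orbit just above the surface of a uniform sphere,
# GM = 3*pi*V/P^2 implies P^2 = 3*pi/(G*rho) -- the period depends only
# on the mean density rho of the central body.
G = 6.67408e-11  # m^3 kg^-1 s^-2 (2014 CODATA value quoted in the text)

def surface_orbit_period_hours(rho_g_per_cm3):
    """Orbital period (hours) just above the surface of a body of given density."""
    rho = rho_g_per_cm3 * 1000.0            # g/cm^3 -> kg/m^3
    P = math.sqrt(3 * math.pi / (G * rho))  # seconds
    return P / 3600.0

# Earth's mean density is about 5.51 g/cm^3:
print(round(surface_orbit_period_hours(5.51), 2))  # ~1.41 hours
```

This matches the text's constant: sqrt(10.896 hr² / 5.51) ≈ 1.41 hours.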
26.
Fine-structure constant
–
It is related to the elementary charge e, which characterizes the strength of the coupling of an elementary charged particle with the electromagnetic field, by the formula 4πε₀ℏcα = e². Being a dimensionless quantity, it has the same numerical value of about 1⁄137 in all systems of units. Arnold Sommerfeld introduced the fine-structure constant in 1916. The definition reflects the relationship between α and the elementary charge e, which equals √(4πε₀ℏcα). In electrostatic cgs units, the unit of charge, the statcoulomb, is defined so that the Coulomb constant ke, or the permittivity factor 4πε₀, is 1; the constant then takes the expression α = e²/ℏc, as commonly found in older physics literature. In natural units, commonly used in high-energy physics, where ε₀ = c = ℏ = 1, the value of the fine-structure constant is α = e²/4π. As such, the fine-structure constant is just another, albeit dimensionless, quantity determining the elementary charge. The 2014 CODATA recommended value of α is α = e²/(4πε₀ℏc) = 0.0072973525664, with a relative standard uncertainty of 0.32 parts per billion. For reasons of convenience, historically the value of the reciprocal of the constant is often specified; the 2014 CODATA recommended value is α⁻¹ = 137.035999139. The theory of QED predicts a relationship between the dimensionless magnetic moment of the electron and the fine-structure constant α; the most precise value of α obtained this way corresponds to α⁻¹ = 137.035999173. This measurement of α has a precision of 0.25 parts per billion, and this value and uncertainty are about the same as the latest experimental results. The fine-structure constant α has several physical interpretations: it is the square of the ratio of the elementary charge to the Planck charge, α = (e/q_P)²; and it is the ratio of the velocity of the electron in the first circular orbit of the Bohr model of the atom to the speed of light in vacuum, which is Sommerfeld's original physical interpretation. 
Then the square of α is the ratio between the Hartree energy and the electron rest energy. The theory does not predict its value; therefore, α must be determined experimentally. In fact, α is one of the roughly 20 empirical parameters in the Standard Model of particle physics whose value is not determined within the Standard Model. In the electroweak theory unifying the weak interaction with electromagnetism, α is absorbed into two other coupling constants associated with the electroweak gauge fields; in this theory, the electromagnetic interaction is treated as a mixture of interactions associated with the electroweak fields. The strength of the electromagnetic interaction varies with the energy scale. Given the optical properties of graphene, the absorption value for normal-incident light on graphene in vacuum would then be given by πα/(1 + πα/2)², or about 2.24%, and the transmission by 1/(1 + πα/2)², or about 97.75%
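The graphene percentages quoted above follow directly from α. A short Python check (a sketch using the 2014 CODATA value of α quoted in the text):

```python
import math

# Sketch: visible-light absorption and transmission of suspended graphene,
# expressed through the fine-structure constant alpha.
alpha = 0.0072973525664  # 2014 CODATA value quoted in the text
pa = math.pi * alpha

transmission = 1.0 / (1.0 + pa / 2.0) ** 2  # ~0.9775
absorption = pa / (1.0 + pa / 2.0) ** 2     # ~0.0224

print(f"absorption   ~ {absorption * 100:.2f}%")    # ~2.24%
print(f"transmission ~ {transmission * 100:.2f}%")  # ~97.75%
```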
27.
Invariant mass
–
More precisely, it is a characteristic of the system's total energy and momentum that is the same in all frames of reference related by Lorentz transformations. If a center-of-momentum frame exists for the system, then the invariant mass of the system is equal to its mass in that rest frame. In other reference frames, where the system's momentum is nonzero, the total mass of the system is greater than the invariant mass. Due to mass–energy equivalence, the rest energy of the system is simply the invariant mass times the speed of light squared; similarly, the total energy of the system is its total mass times the speed of light squared. Systems whose four-momentum is a null vector have zero invariant mass. A physical object or particle moving faster than the speed of light would have a space-like four-momentum, and these do not appear to exist. Any time-like four-momentum possesses a frame where the momentum is zero; in this case, the invariant mass is positive and is referred to as the rest mass. If objects within a system are in relative motion, then the invariant mass of the whole system will differ from the sum of the objects' rest masses. This is also equal to the total energy of the system divided by c². See mass–energy equivalence for a discussion of definitions of mass. For example, a scale would measure the kinetic energy of the molecules in a bottle of gas to be part of the invariant mass of the bottle, and thus also its rest mass. The same is true for massless particles in such a system, which add invariant mass and also rest mass to systems. For an isolated massive system, the center of mass of the system moves in a straight line with a steady sub-luminal velocity; thus, an observer can always be placed to move along with it. In this frame, which is the center-of-momentum frame, the total momentum is zero. In this frame, which exists under these assumptions, the invariant mass of the system is equal to the system's total energy divided by c². 
This total energy in the center-of-momentum frame is the minimum energy which the system may be observed to have. Note that, for the reasons above, such a rest frame does not exist for a single photon; when two or more photons move in different directions, however, a center-of-mass frame does exist. Thus, the mass of a system of several photons moving in different directions is positive, even though rest mass and invariant mass are zero for each individual photon; the photons may nonetheless add mass to the invariant mass of the system. For this reason, invariant mass is in general not an additive quantity. Consider the simple case of a two-body system, where object A is moving towards another object B which is initially at rest
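The non-additivity of invariant mass is easy to see numerically. A minimal Python sketch, working in units with c = 1 (the function and the sample four-momenta are illustrative, not from the source):

```python
import math

# Sketch: invariant mass of a system from its total four-momentum,
# M = sqrt((sum E)^2 - |sum p|^2), in units with c = 1.

def invariant_mass(particles):
    """particles: list of (E, px, py, pz) four-momenta, c = 1."""
    E = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

# A single photon has zero invariant mass:
print(invariant_mass([(1.0, 1.0, 0.0, 0.0)]))   # 0.0
# Two photons moving in opposite directions have positive invariant mass,
# even though each photon's individual invariant mass is zero:
print(invariant_mass([(1.0, 1.0, 0.0, 0.0),
                      (1.0, -1.0, 0.0, 0.0)]))  # 2.0
```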
28.
Quantum gravity
–
Quantum gravity is a field of theoretical physics that seeks to describe gravity according to the principles of quantum mechanics, in regimes where quantum effects cannot be ignored. The current understanding of gravity is based on Albert Einstein's general theory of relativity. The necessity of a quantum mechanical description of gravity is sometimes said to follow from the fact that one cannot consistently couple a classical system to a quantum one. This is false, as is shown, for example, by Wald's explicit construction of a consistent semiclassical theory; the problem is rather that the theory one gets in this way is not renormalizable and therefore cannot be used to make meaningful physical predictions. As a result, theorists have taken up more radical approaches to the problem of quantum gravity. A theory of quantum gravity that is also a grand unification of all known interactions is sometimes referred to as a theory of everything. Because quantum-gravitational effects lie far beyond the reach of current experiments, quantum gravity is a mainly theoretical enterprise. Much of the difficulty in meshing these theories at all energy scales comes from the different assumptions that the theories make about how the universe works: quantum field theory, if conceived of as a theory of particles, presupposes a fixed background spacetime, while general relativity models gravity as a curvature of space-time that changes as a gravitational mass moves. Historically, the most obvious way of combining the two ran quickly into what is known as the renormalization problem. Another possibility is to focus on fields rather than on particles, which are just one way of characterizing certain fields in very special spacetimes. This solves worries about consistency, but does not appear to lead to a quantum version of the full general theory of relativity. Quantum gravity can also be treated as an effective field theory. Effective quantum field theories come with some high-energy cutoff, beyond which we do not expect the theory to provide a good description of nature. 
The infinities then become large but finite quantities depending on this finite cutoff scale, and this same logic works just as well for the highly successful theory of low-energy pions as for quantum gravity. Indeed, the first quantum-mechanical corrections to graviton scattering and Newton's law of gravitation have been explicitly computed. In fact, gravity is in some ways a much better quantum field theory than the Standard Model. Specifically, the problem of combining quantum mechanics and gravity becomes an issue only at very high energies. This problem must be put in context, however. While there is no concrete proof of the existence of gravitons, quantized theories of matter may necessitate their existence; detecting one would result in the classification of the graviton as a force particle similar to the photon of the electromagnetic field. Many of the notions of a unified theory of physics proposed since the 1970s assume, and to some degree depend upon, the existence of the graviton
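The energy scale at which the combination problem becomes acute can be estimated by dimensional analysis. A short Python sketch computing the Planck energy E_P = √(ℏc⁵/G) (constant values as commonly tabulated; not taken from the source):

```python
import math

# Sketch: the Planck energy, the scale at which quantum-gravitational
# effects are expected to become important.
hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
e = 1.602176634e-19     # J per eV

E_P = math.sqrt(hbar * c**5 / G)  # Planck energy, joules
print(f"Planck energy ~ {E_P / e / 1e9:.2e} GeV")  # ~1.22e19 GeV
```

At roughly 10¹⁹ GeV this is about fifteen orders of magnitude beyond collider energies, which is why quantum gravity remains a mainly theoretical enterprise.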
29.
Atomic units
–
Atomic units form a system of natural units which is especially convenient for atomic physics calculations. There are two different kinds, Hartree atomic units and Rydberg atomic units, which differ in the choice of the unit of mass. In Hartree units, the speed of light is approximately 137. Atomic units are often abbreviated a.u. or au, not to be confused with the same abbreviation used for astronomical units, arbitrary units, and absorbance units in other contexts. Atomic units, like SI units, have a unit of mass, a unit of length, and so on; however, their use and notation are somewhat different from SI. Suppose a particle with mass m has 3.4 times the mass of the electron. The value of m can be written in three ways. First, m = 3.4 mₑ: this is the clearest notation, where the unit is included explicitly as a symbol. Second, m = 3.4 a.u.: this notation is ambiguous; here it means that m is 3.4 times the atomic unit of mass, but if a length L were 3.4 times the atomic unit of length, the notation would look the same, so the dimension needs to be inferred from context. Third, m = 3.4: this notation is similar to the previous one and has the same dimensional ambiguity; it comes from setting the atomic units to 1, in this case mₑ = 1. Four fundamental constants form the basis of the atomic units, and therefore their numerical values in atomic units are unity by definition: the electron mass mₑ, the elementary charge e, the reduced Planck constant ℏ, and the Coulomb constant 1/(4πε₀). Dimensionless physical constants retain their values in any system of units; of particular importance is the fine-structure constant α = e²/(4πε₀ℏc) ≈ 1/137. This immediately gives the value of the speed of light expressed in atomic units, c = 1/α ≈ 137. Below are given a few derived units; some of them have names and symbols assigned, as indicated in the table. There are two variants of atomic units, depending on whether they are used in conjunction with SI or Gaussian-cgs units for electromagnetism. Although the units written above are the same either way, the units related to magnetism are not. In the SI system, the atomic unit for magnetic field is 1 a.u. = ℏ/(e a₀²) = 2.35×10⁵ T = 2.35×10⁹ G, and in the Gaussian-cgs unit system it is 1 a.u. = e/a₀² = 1.72×10³ T = 1.72×10⁷ G
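The derived atomic units can be reconstructed from the four defining constants. A short Python sketch (a derivation using CODATA-style SI values; not from the source):

```python
import math

# Sketch: deriving a few Hartree atomic units in SI from the four constants
# that are set to 1: electron mass, elementary charge, hbar, Coulomb constant.
hbar = 1.054571817e-34   # J*s
m_e = 9.1093837015e-31   # kg
e = 1.602176634e-19      # C
eps0 = 8.8541878128e-12  # F/m
c = 2.99792458e8         # m/s
k_e = 1.0 / (4 * math.pi * eps0)  # Coulomb constant

a0 = hbar**2 / (m_e * k_e * e**2)      # unit of length (Bohr radius), m
E_h = m_e * (k_e * e**2)**2 / hbar**2  # unit of energy (Hartree), J
v0 = k_e * e**2 / hbar                 # unit of velocity, m/s

print(f"a0  = {a0:.4e} m")          # ~5.292e-11 m
print(f"E_h = {E_h:.4e} J")         # ~4.360e-18 J
print(f"c in a.u. = {c / v0:.3f}")  # ~137.036, i.e. 1/alpha
```

The last line shows the claim made above: in Hartree units the speed of light is numerically 1/α ≈ 137.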
30.
Hydrogen atom
–
A hydrogen atom is an atom of the chemical element hydrogen. The electrically neutral atom contains a single positively charged proton and a single negatively charged electron bound to the nucleus by the Coulomb force. Atomic hydrogen constitutes about 75% of the baryonic mass of the universe. In everyday life on Earth, isolated hydrogen atoms are extremely rare; instead, hydrogen tends to combine with other atoms in compounds, or with itself to form ordinary hydrogen gas, H2. "Atomic hydrogen" and "hydrogen atom" in ordinary English use have overlapping, yet distinct, meanings; for example, a water molecule contains two hydrogen atoms, but does not contain atomic hydrogen. Attempts to develop a theoretical understanding of the hydrogen atom have been important to the history of quantum mechanics. The most abundant isotope, hydrogen-1, protium, or light hydrogen, contains no neutrons; it is simply a proton. Protium is stable and makes up 99.9885% of naturally occurring hydrogen by absolute number. Deuterium contains one neutron and one proton; it is stable, makes up 0.0115% of naturally occurring hydrogen, and is used in industrial processes like nuclear reactors and nuclear magnetic resonance. Tritium contains two neutrons and one proton and is not stable, decaying with a half-life of 12.32 years; because of this short half-life, tritium does not exist in nature except in trace amounts. Heavier isotopes of hydrogen are only created in artificial accelerators and reactors and have half-lives on the order of 10⁻²² seconds. The formulas below are valid for all three isotopes of hydrogen, but slightly different values of the Rydberg constant must be used for each isotope. Hydrogen is not found without its electron in ordinary chemistry, as ionized hydrogen is highly chemically reactive. When ionized hydrogen is written as H⁺, as in the solvation of classical acids such as hydrochloric acid, the acid in fact transfers the proton to H2O to form H3O⁺. 
Ionized hydrogen without its electron, i.e. free protons, is common in the interstellar medium. Experiments by Ernest Rutherford in 1909 showed the structure of the atom to be a dense, positive nucleus with a light, negative charge orbiting around it. This immediately raised the question of how such a system could be stable: classical electromagnetism had shown that any accelerating charge radiates energy, as described by the Larmor formula. If this were true, all atoms would instantly collapse; however, atoms seem to be stable. Furthermore, the spiral inward would release a smear of electromagnetic frequencies as the orbit got smaller; instead, atoms were observed to emit only discrete frequencies of radiation. The resolution would lie in the development of quantum mechanics. In 1913, Niels Bohr obtained the energy levels and spectral frequencies of the hydrogen atom after making a number of simple assumptions in order to correct the failed classical model. The assumptions included that electrons can only be in certain discrete circular orbits, or stationary states, thereby having a discrete set of possible radii
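Bohr's assumptions lead to the familiar energy levels E_n = −13.6 eV/n² and, from them, the discrete emission frequencies that were observed. A short Python sketch (the constants and function names are illustrative inputs, not from the source):

```python
# Sketch: Bohr-model energy levels of hydrogen, E_n = -Ry/n^2, and the
# frequency of the photon emitted in a transition from n_upper to n_lower.
RY_EV = 13.605693        # Rydberg energy, eV
H_EVS = 4.135667696e-15  # Planck constant, eV*s
C = 2.99792458e8         # speed of light, m/s

def energy_level(n):
    """Bohr-model energy of level n, in eV."""
    return -RY_EV / n**2

def photon_frequency(n_lower, n_upper):
    """Frequency (Hz) of light emitted when the electron drops to n_lower."""
    dE = energy_level(n_upper) - energy_level(n_lower)  # eV released
    return dE / H_EVS

# Balmer-alpha line (3 -> 2), the red line of the hydrogen spectrum:
f = photon_frequency(2, 3)
print(f"{f:.3e} Hz, {C / f * 1e9:.1f} nm")  # ~4.569e14 Hz, ~656 nm
```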
31.
Kilogram
–
The kilogram or kilogramme is the base unit of mass in the International System of Units and is defined as being equal to the mass of the International Prototype of the Kilogram (IPK). The avoirdupois pound, used in both the imperial and US customary systems, is defined as exactly 0.45359237 kg, making one kilogram approximately equal to 2.2046 avoirdupois pounds. Other traditional units of weight and mass around the world are also defined in terms of the kilogram. The gram, 1/1000 of a kilogram, was provisionally defined in 1795 as the mass of one cubic centimeter of water at the melting point of ice. The final kilogram, manufactured as a prototype in 1799 and from which the IPK was derived in 1875, had a mass equal to that of 1 dm³ of water at its maximum density. The kilogram is the only SI base unit with an SI prefix as part of its name, and it is also the only SI unit that is still directly defined by an artifact rather than a fundamental physical property that can be reproduced in different laboratories. Three other base units and 17 derived units in the SI system are defined relative to the kilogram; only 8 other units do not require the kilogram in their definition: those of temperature, time and frequency, length, and angle. At its 2011 meeting, the CGPM agreed in principle that the kilogram should be redefined in terms of the Planck constant; the decision was originally deferred until 2014, and in 2014 it was deferred again until the next meeting. There are currently several different proposals for the redefinition; these are described in the Proposed Future Definitions section below. The International Prototype Kilogram is rarely used or handled. In the decree of 1795, the term gramme thus replaced gravet. The French spelling was adopted in the United Kingdom when the word was used for the first time in English in 1797, with the spelling kilogram being adopted in the United States. 
In the United Kingdom both spellings are used, with kilogram having become by far the more common; UK law regulating the units to be used when trading by weight or measure does not prevent the use of either spelling. In the 19th century the French word kilo, a shortening of kilogramme, was imported into the English language, where it has been used to mean both kilogram and kilometer. In 1935 the metre–kilogram–second system was adopted by the IEC as the Giorgi system, now known as the MKS system. In 1948 the CGPM commissioned the CIPM to make recommendations for a single practical system of units of measurement; this led to the launch of SI in 1960 and the subsequent publication of the SI Brochure. The kilogram is a unit of mass, a property which corresponds to the common perception of how heavy an object is. Mass is an inertial property; that is, it is related to the tendency of an object at rest to remain at rest, or if in motion to remain in motion at a constant velocity. Accordingly, for astronauts in microgravity, no effort is required to hold objects off the cabin floor; they are weightless. However, objects in microgravity still retain their mass and inertia, and the ratio of the force of gravity on two objects, as measured by a scale, is equal to the ratio of their masses. On April 7, 1795, the gram was decreed in France to be the weight of a volume of pure water equal to the cube of the hundredth part of the metre
32.
Second
–
The second is the base unit of time in the International System of Units. It is qualitatively defined as the second division of the hour by sixty, the first division being the minute. The SI definition of the second is "the duration of 9192631770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom". Seconds may be measured using a mechanical, electrical, or atomic clock. SI prefixes are combined with the word second to denote subdivisions of the second, e.g. the millisecond, the microsecond, and the nanosecond, though SI prefixes may also be used to form multiples of the second, such as the kilosecond. The second is also the base unit of time in other systems of measurement: the centimetre–gram–second, metre–kilogram–second, and metre–tonne–second systems. Absolute zero implies no movement, and therefore zero external radiation effects. The second thus defined is consistent with the ephemeris second, which was based on astronomical measurements. The realization of the second is described briefly in a special publication from the National Institute of Standards and Technology. One international second is equal to 1⁄60 minute, 1⁄3,600 hour, 1⁄86,400 day, and 1⁄31,557,600 Julian year. The Hellenistic astronomers Hipparchus and Ptolemy subdivided the day into sixty parts, and they also used simple fractions of an hour. No sexagesimal unit of the day was used as an independent unit of time. The modern second is subdivided using decimals, although the name third (for 1⁄60 of a second) remains in some languages. The earliest clocks to display seconds appeared during the last half of the 16th century. The second became accurately measurable with the development of mechanical clocks keeping mean time, as opposed to the apparent time displayed by sundials. The earliest spring-driven timepiece with a hand that marked seconds is an unsigned clock depicting Orpheus in the Fremersdorf collection. 
During the third quarter of the 16th century, Taqi al-Din built a clock with marks every 1/5 minute, and in 1579 Jost Bürgi built a clock for William of Hesse that marked seconds. In 1581, Tycho Brahe redesigned clocks that displayed minutes at his observatory so they also displayed seconds; however, they were not yet accurate enough to keep seconds reliably. In 1587, Tycho complained that his four clocks disagreed by plus or minus four seconds. In 1670, London clockmaker William Clement added the seconds pendulum to the original pendulum clock of Christiaan Huygens. From 1670 to 1680, Clement made many improvements to his clock; it used an anchor escapement mechanism with a seconds pendulum to display seconds in a small subdial
33.
Atomic clock
–
The principle of operation of an atomic clock is based not on nuclear physics but on atomic physics: it uses the microwave signal that electrons in atoms emit when they change energy levels. Early atomic clocks were based on masers at room temperature. Currently, the most accurate atomic clocks first cool the atoms to near absolute zero by slowing them with lasers and probing them in atomic fountains in a microwave-filled cavity. An example of this is the NIST-F1 atomic clock, one of the national primary time and frequency standards of the United States. The accuracy of an atomic clock depends on two factors. The first is the temperature of the sample atoms: colder atoms move much more slowly, allowing longer probe times. The second is the frequency and intrinsic linewidth of the electronic transition: higher frequencies and narrower lines increase the precision. National standards agencies in many countries maintain a network of atomic clocks which are intercompared; these clocks collectively define a continuous and stable time scale, International Atomic Time (TAI). For civil time, another time scale is disseminated, Coordinated Universal Time (UTC). UTC is derived from TAI, but approximately synchronised, by using leap seconds, to UT1, which is based on the actual rotation of the Earth with respect to solar time. The idea of using atomic transitions to measure time was suggested by Lord Kelvin in 1879; magnetic resonance, developed in the 1930s by Isidor Rabi, became the practical method for doing this. In 1945, Rabi first publicly suggested that atomic beam magnetic resonance might be used as the basis of a clock. The first atomic clock was an ammonia maser device built in 1949 at the U.S. National Bureau of Standards. It was less accurate than existing quartz clocks, but served to demonstrate the concept. Calibration of the caesium standard atomic clock was carried out by use of the astronomical time scale ephemeris time. 
This led to the internationally agreed definition of the SI second being based on atomic time. Equality of the ET second with the SI second has been verified to within 1 part in 10¹⁰; the SI second thus inherits the effect of decisions by the original designers of the ephemeris time scale in determining the length of the ET second. Since the beginning of development in the 1950s, atomic clocks have been based on the hyperfine transitions in hydrogen-1, caesium-133, and rubidium-87. The first commercial atomic clock was the Atomichron, manufactured by the National Company; more than 50 were sold between 1956 and 1960. This bulky and expensive instrument was subsequently replaced by much smaller rack-mountable devices, such as the Hewlett-Packard model 5060 caesium frequency standard. In August 2004, NIST scientists demonstrated a chip-scale atomic clock; according to the researchers, the clock was believed to be one-hundredth the size of any other. It requires no more than 125 mW, making it suitable for battery-driven applications, and this technology became available commercially in 2011. Ion-trap experimental optical clocks are more precise than the current caesium standard. As of March 2017, NASA planned to deploy the Deep Space Atomic Clock, a miniaturized, ultra-precise mercury-ion atomic clock, into outer space
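What a fractional accuracy like the 1 part in 10¹⁰ quoted above means in practice can be sketched in a few lines of Python (the function name is illustrative):

```python
# Sketch: maximum accumulated time error of a clock with a given
# fractional frequency accuracy, over a Julian year.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year, 31,557,600 s

def max_drift_seconds(fractional_accuracy, interval=SECONDS_PER_YEAR):
    """Worst-case accumulated error (s) over the given interval (s)."""
    return fractional_accuracy * interval

print(f"{max_drift_seconds(1e-10) * 1e3:.2f} ms per year")  # ~3.16 ms
```

A clock accurate to 1 part in 10¹⁰ can thus drift by only a few milliseconds per year; modern caesium fountains are several orders of magnitude better still.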
34.
SI units
–
The International System of Units (SI) is the modern form of the metric system, and is the most widely used system of measurement. It comprises a coherent system of units of measurement built on seven base units. The system also establishes a set of twenty prefixes to the unit names and unit symbols that may be used when specifying multiples and fractions of the units. The system was published in 1960 as the result of an initiative that began in 1948. It is based on the metre–kilogram–second system of units rather than any variant of the centimetre–gram–second system. The motivation for the development of the SI was the diversity of units that had sprung up within the CGS systems. The International System of Units has been adopted by most developed countries; however, the adoption has not been universal in all English-speaking countries. The metric system was first implemented during the French Revolution, with just the metre and kilogram as standards of length and mass. In the 1830s Carl Friedrich Gauss laid the foundations for a coherent system based on length, mass, and time. In the 1860s a group working under the auspices of the British Association for the Advancement of Science formulated the requirement for a coherent system of units with base units and derived units. Meanwhile, in 1875, the Treaty of the Metre passed responsibility for verification of the kilogram and the metre to international control; in 1921, the Treaty was extended to include all physical quantities, including the electrical units originally defined in 1893. The units associated with these quantities were the metre, kilogram, second, ampere, kelvin, and candela; in 1971, a seventh base quantity, amount of substance, represented by the mole, was added to the definition of SI. On 11 July 1792, the commission proposed the names metre, are, litre and grave for the units of length, area, capacity, and mass, respectively. 
The committee also proposed that multiples and submultiples of these units were to be denoted by decimal-based prefixes such as centi for a hundredth. On 10 December 1799, the law by which the metric system was to be definitively adopted in France was passed. Prior to Gauss's work, the strength of the Earth's magnetic field had only been described in relative terms. The technique used by Gauss was to equate the torque induced on a magnet of known mass by the Earth's magnetic field with the torque induced on an equivalent system under gravity. The resultant calculations enabled him to assign to the magnetic field dimensions based on mass, length, and time. A French-inspired initiative for international cooperation in metrology led to the signing in 1875 of the Metre Convention. Initially the convention only covered standards for the metre and the kilogram; one of each was selected at random to become the International Prototype Metre and International Prototype Kilogram that replaced the mètre des Archives and kilogramme des Archives respectively. Each member state was entitled to one of each of the prototypes to serve as the national prototype for that country. Initially the convention's prime purpose was the periodic recalibration of national prototype metres. The official language of the Metre Convention is French, and the authoritative version of all official documents published by or on behalf of the CGPM is the French-language version