A mathematical model is a description of a system using mathematical concepts and language. The process of developing a mathematical model is termed mathematical modeling. Mathematical models are used in the natural sciences and engineering disciplines, as well as in the social sciences. A model may help to explain a system, to study the effects of its different components, and to make predictions about its behaviour. Mathematical models can take many forms, including dynamical systems, statistical models, differential equations, or game-theoretic models; these and other types of models can overlap, with a given model involving a variety of abstract structures. In general, mathematical models may include logical models. In many cases, the quality of a scientific field depends on how well the mathematical models developed on the theoretical side agree with the results of repeatable experiments. Lack of agreement between theoretical mathematical models and experimental measurements often leads to important advances as better theories are developed.
In the physical sciences, a traditional mathematical model contains most of the following elements: governing equations; supplementary sub-models, such as defining equations and constitutive equations; and assumptions and constraints, such as initial and boundary conditions, classical constraints and kinematic equations. Mathematical models are composed of relationships and variables. Relationships can be described by operators, such as algebraic operators, differential operators, and so on. Variables are abstractions of system parameters of interest. Several classification criteria can be used for mathematical models according to their structure: Linear vs. nonlinear: If all the operators in a mathematical model exhibit linearity, the resulting mathematical model is defined as linear; the model is considered nonlinear otherwise. The definition of linearity and nonlinearity is dependent on context, and linear models may contain nonlinear expressions. For example, in a statistical linear model it is assumed that a relationship is linear in the parameters, but it may be nonlinear in the predictor variables.
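As an illustration of that last point, the following minimal sketch (in Python with numpy; the synthetic data and the quadratic form are invented purely for illustration) fits a model that is nonlinear in the predictor x yet linear in the parameters, so ordinary linear least squares applies:

```python
import numpy as np

# A model y = a + b*x + c*x**2 is nonlinear in the predictor x,
# but linear in the parameters (a, b, c), so it is a "linear model"
# in the statistical sense and can be fit by linear least squares.

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0, 0.1, x.size)  # synthetic data

# Design matrix with columns [1, x, x^2]: each column is a (possibly
# nonlinear) function of x, but the fit is linear in the coefficients.
X = np.column_stack([np.ones_like(x), x, x**2])
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coeffs)  # estimates of (a, b, c)
```

Because each coefficient enters the model linearly, the fit reduces to solving a linear system, no matter how nonlinear the columns of the design matrix are in x.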
A differential equation is said to be linear if it can be written with linear differential operators, but it can still contain nonlinear expressions. In a mathematical programming model, if the objective functions and constraints are represented entirely by linear equations, the model is regarded as a linear model; if one or more of the objective functions or constraints are represented with a nonlinear equation, the model is known as a nonlinear model. Nonlinearity, even in fairly simple systems, is often associated with phenomena such as chaos and irreversibility. Although there are exceptions, nonlinear systems and models tend to be more difficult to study than linear ones. A common approach to nonlinear problems is linearization, but this can be problematic if one is trying to study aspects such as irreversibility, which are strongly tied to nonlinearity. Static vs. dynamic: A dynamic model accounts for time-dependent changes in the state of the system, while a static model calculates the system in equilibrium and is thus time-invariant.
Dynamic models are typically represented by differential equations or difference equations. Explicit vs. implicit: If all of the input parameters of the overall model are known and the output parameters can be calculated by a finite series of computations, the model is said to be explicit. But sometimes it is the output parameters that are known, and the corresponding inputs must be solved for by an iterative procedure such as Newton's method or Broyden's method; in such a case the model is said to be implicit. For example, a jet engine's physical properties, such as turbine and nozzle throat areas, can be explicitly calculated given a design thermodynamic cycle at a specific flight condition and power setting, but the engine's operating cycles at other flight conditions and power settings cannot be explicitly calculated from the constant physical properties. Discrete vs. continuous: A discrete model treats objects as discrete, such as the particles in a molecular model or the states in a statistical model, while a continuous model represents the objects in a continuous manner, such as the velocity field of fluid in pipe flow or the temperature field in a solid.
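Returning to the explicit/implicit distinction above, the sketch below (a hypothetical one-dimensional model f, chosen only for illustration) shows the implicit case: the output is known, and Newton's method iterates to recover the input that produces it.

```python
def f(x):
    # Explicit model: given the input x, the output is a finite computation.
    return x**3 + 2.0 * x

def df(x):
    # Derivative of f, used by Newton's method.
    return 3.0 * x**2 + 2.0

def solve_input(target, x0=1.0, tol=1e-12, max_iter=50):
    """Implicit use of the model: find x such that f(x) == target."""
    x = x0
    for _ in range(max_iter):
        step = (f(x) - target) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

print(solve_input(10.0))     # the input that yields output 10
print(f(solve_input(10.0)))  # ~10.0, confirming the inversion
```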
Deterministic vs. probabilistic: A deterministic model is one in which every set of variable states is uniquely determined by parameters in the model and by sets of previous states of these variables. Conversely, in a stochastic model—usually called a "statistical model"—randomness is present, and variable states are not described by unique values, but rather by probability distributions. Deductive, inductive, or floating: A deductive model is a logical structure based on a theory. An inductive model arises from empirical findings and generalization from them. The floating model rests on neither theory nor observation, but is merely the invocation of expected structure. Application of mathematics in social sciences outside of economics has been criticized for unfounded models. Application of catastrophe theory in science has been characterized as a floating model. Mathematical models are of great importance in the natural sciences, particularly in physics. Physical theories are almost invariably expressed using mathematical models.
Physics is the natural science that studies matter, its motion and behavior through space and time, and the related entities of energy and force. Physics is one of the most fundamental scientific disciplines, and its main goal is to understand how the universe behaves. Physics is one of the oldest academic disciplines and, through its inclusion of astronomy, perhaps the oldest. Over much of the past two millennia, chemistry and certain branches of mathematics were a part of natural philosophy, but during the scientific revolution in the 17th century these natural sciences emerged as unique research endeavors in their own right. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the fundamental mechanisms studied by other sciences and suggest new avenues of research in academic disciplines such as mathematics and philosophy. Advances in physics often enable advances in new technologies.
For example, advances in the understanding of electromagnetism and nuclear physics led directly to the development of new products that have transformed modern-day society, such as television, domestic appliances and nuclear weapons. Astronomy is one of the oldest natural sciences. Early civilizations dating back to before 3000 BCE, such as the Sumerians, the ancient Egyptians and the Indus Valley Civilization, had a predictive knowledge and a basic understanding of the motions of the Sun and stars; the stars and planets, believed to represent gods, were often worshipped. While the explanations for the observed positions of the stars were often unscientific and lacking in evidence, these early observations laid the foundation for later astronomy, as the stars were found to traverse great circles across the sky, which however could not explain the positions of the planets. According to Asger Aaboe, the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy.
Egyptian astronomers left monuments showing knowledge of the constellations and the motions of the celestial bodies, while the Greek poet Homer wrote of various celestial objects in his Iliad and Odyssey. Natural philosophy has its origins in Greece during the Archaic period, when pre-Socratic philosophers like Thales rejected non-naturalistic explanations for natural phenomena and proclaimed that every event had a natural cause. They proposed ideas verified by reason and observation, and many of their hypotheses proved successful in experiment. The Western Roman Empire fell in the fifth century, and this resulted in a decline in intellectual pursuits in the western part of Europe. By contrast, the Eastern Roman Empire resisted the attacks from the barbarians and continued to advance various fields of learning, including physics. In the sixth century, Isidore of Miletus created an important compilation of Archimedes' works that are copied in the Archimedes Palimpsest. In sixth-century Europe, John Philoponus, a Byzantine scholar, questioned Aristotle's teaching of physics, noting its flaws.
He introduced the theory of impetus; Aristotle's physics was not scrutinized until John Philoponus appeared, and unlike Aristotle, who based his physics on verbal argument, Philoponus relied on observation. On Aristotle's physics John Philoponus wrote: “But this is erroneous, and our view may be corroborated by actual observation more effectively than by any sort of verbal argument. For if you let fall from the same height two weights of which one is many times as heavy as the other, you will see that the ratio of the times required for the motion does not depend on the ratio of the weights, but that the difference in time is a very small one. And so, if the difference in the weights is not considerable, that is, if one is, let us say, double the other, there will be no difference, or else an imperceptible difference, in time, though the difference in weight is by no means negligible, with one body weighing twice as much as the other.” John Philoponus' criticism of Aristotelian principles of physics served as an inspiration for Galileo Galilei ten centuries later, during the Scientific Revolution.
Galileo cited Philoponus in his works when arguing that Aristotelian physics was flawed. In the 1300s, Jean Buridan, a teacher in the faculty of arts at the University of Paris, developed the concept of impetus; it was a step toward the modern idea of momentum. Islamic scholarship inherited Aristotelian physics from the Greeks and during the Islamic Golden Age developed it further, placing particular emphasis on observation and a priori reasoning and developing early forms of the scientific method. The most notable innovations were in the field of optics and vision, which came from the works of many scientists like Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi and Avicenna. The most notable work was The Book of Optics, written by Ibn al-Haytham, in which he conclusively disproved the ancient Greek idea about vision and came up with a new theory. In the book, he presented a study of the phenomenon of the camera obscura (his thousand-year-old version of the pinhole camera).
Large Hadron Collider
The Large Hadron Collider (LHC) is the world's largest and most powerful particle collider and the largest machine in the world. It was built by the European Organization for Nuclear Research (CERN) between 1998 and 2008 in collaboration with over 10,000 scientists and hundreds of universities and laboratories from more than 100 countries. It lies in a tunnel 27 kilometres in circumference and as deep as 175 metres beneath the France–Switzerland border near Geneva. First collisions were achieved in 2010 at an energy of 3.5 teraelectronvolts (TeV) per beam, about four times the previous world record. After upgrades it reached 6.5 TeV per beam. At the end of 2018, it entered a two-year shutdown period for further upgrades. The collider has four crossing points, around which are positioned seven detectors, each designed for certain kinds of research. The LHC primarily collides proton beams, but it can also use beams of heavy ions: lead–lead collisions and proton–lead collisions are typically done for one month per year. The aim of the LHC's detectors is to allow physicists to test the predictions of different theories of particle physics, including measuring the properties of the Higgs boson, searching for the large family of new particles predicted by supersymmetric theories, and studying other unsolved questions of physics.
The term hadron refers to composite particles composed of quarks held together by the strong force. The best-known hadrons are the baryons, such as protons and neutrons. A collider is a type of particle accelerator with two directed beams of particles. In particle physics, colliders are used as a research tool: they accelerate particles to very high kinetic energies and let them impact other particles. Analysis of the byproducts of these collisions gives scientists good evidence of the structure of the subatomic world and the laws of nature governing it. Many of these byproducts are produced only by high-energy collisions, and they decay after very short periods of time; thus many of them are nearly impossible to study in other ways. Physicists hope that the Large Hadron Collider will help answer some of the fundamental open questions in physics, concerning the basic laws governing the interactions and forces among the elementary objects and the deep structure of space and time, in particular the interrelation between quantum mechanics and general relativity.
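The reason for colliding two beams head-on, rather than firing one beam at a stationary target, is the centre-of-mass energy available for creating new particles. The sketch below is a minimal comparison using standard relativistic kinematics (in Python; the 6.5 TeV per-beam figure is taken from the LHC discussion above, and proton–proton collisions are assumed):

```python
import math

M_PROTON = 0.938272  # proton rest energy in GeV

def sqrt_s_collider(e_beam_gev):
    # Two equal beams colliding head-on: all the energy is available.
    return 2.0 * e_beam_gev

def sqrt_s_fixed_target(e_beam_gev, m_target_gev=M_PROTON):
    # Beam on a stationary proton: most of the energy goes into
    # centre-of-mass motion. In natural units, with beam and target
    # both protons: s = 2*m^2 + 2*E_beam*m.
    s = 2.0 * m_target_gev**2 + 2.0 * e_beam_gev * m_target_gev
    return math.sqrt(s)

e = 6500.0  # 6.5 TeV per beam, in GeV
print(sqrt_s_collider(e))      # 13000 GeV = 13 TeV
print(sqrt_s_fixed_target(e))  # ~110 GeV: over a hundred times less
```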
Data are needed from high-energy particle experiments to suggest which versions of current scientific models are more likely to be correct – in particular to choose between the Standard Model and Higgsless models, and to validate their predictions and allow further theoretical development. Many theorists expect new physics beyond the Standard Model to emerge at the TeV energy level, as the Standard Model appears to be unsatisfactory. Issues explored by LHC collisions include: Is the mass of elementary particles generated by the Higgs mechanism via electroweak symmetry breaking? It was expected that the collider experiments would either demonstrate or rule out the existence of the elusive Higgs boson, thereby allowing physicists to consider whether the Standard Model or its Higgsless alternatives are more likely to be correct. Is supersymmetry, an extension of the Standard Model and Poincaré symmetry, realized in nature, implying that all known particles have supersymmetric partners? Are there extra dimensions, as predicted by various models based on string theory, and can we detect them?
What is the nature of the dark matter that appears to account for 27% of the mass–energy of the universe? Other open questions may also be explored using high-energy particle collisions: It is already known that electromagnetism and the weak nuclear force are different manifestations of a single force called the electroweak force; the LHC may clarify whether the electroweak force and the strong nuclear force are just different manifestations of one universal unified force, as predicted by various Grand Unification Theories. Why is gravity, the fourth fundamental force, so many orders of magnitude weaker than the other three fundamental forces? (See hierarchy problem.) Are there additional sources of quark flavour mixing, beyond those present within the Standard Model? Why are there apparent violations of the symmetry between matter and antimatter? (See CP violation.) What are the nature and properties of quark–gluon plasma, thought to have existed in the early universe and in certain compact and strange astronomical objects today?
This will be investigated by heavy ion collisions, primarily in ALICE, but also in CMS, ATLAS and LHCb. First observed in 2010, findings published in 2012 confirmed the phenomenon of jet quenching in heavy-ion collisions. The LHC is the world's largest and highest-energy particle accelerator. The collider is contained in a circular tunnel, with a circumference of 26.7 kilometres, at a depth ranging from 50 to 175 metres underground. The 3.8-metre-wide concrete-lined tunnel, constructed between 1983 and 1988, was formerly used to house the Large Electron–Positron Collider. It crosses the border between Switzerland and France, with most of it in France. Surface buildings hold ancillary equipment such as compressors, ventilation equipment, control electronics and refrigeration plants. The collider tunnel contains two adjacent parallel beamlines, each containing a beam, which travel in opposite directions around the ring. The beams intersect at four points around the ring, where the particle collisions take place.
Lisa Randall is an American theoretical physicist working in particle physics and cosmology. She is the Frank B. Baird, Jr. Professor of Science on the physics faculty of Harvard University, and her research includes fundamental forces and dimensions of space. She studies the Standard Model, possible solutions to the hierarchy problem concerning the relative weakness of gravity, the cosmology of extra dimensions, cosmological inflation and dark matter, and she contributed to the Randall–Sundrum model, first published in 1999 with Raman Sundrum. Randall was born in New York and is an alumna of Hampshire College Summer Studies in Mathematics. She won first place in the 1980 Westinghouse Science Talent Search at the age of 18. Randall researches particle physics and cosmology at Harvard, where she is a professor of theoretical physics. Her research concerns elementary particles and fundamental forces and has involved the study of a wide variety of models, the most recent involving extra dimensions. She has worked on supersymmetry, Standard Model observables, cosmological inflation, grand unified theories and general relativity.
After her graduate work at Harvard, Randall held professorships at MIT and Princeton University before returning to Harvard in 2001. She was the first tenured woman in the Princeton physics department and the first tenured female theoretical physicist at Harvard. Randall's books Warped Passages: Unraveling the Mysteries of the Universe's Hidden Dimensions and Knocking on Heaven’s Door: How Physics and Scientific Thinking Illuminate the Universe and the Modern World have both been on New York Times 100 notable books lists. Between the hardback and paperback releases of Knocking on Heaven's Door, the quest for the discovery of the Higgs boson, a subject discussed in the book, was completed: scientists at the Large Hadron Collider found a particle identified as the Higgs boson. She said about the discovery that, even if people don't understand everything about it, "what an exciting thing it is that people are excited that there is something fundamentally new, discovered." Randall also has an e-book entitled Higgs Discovery: The Power of Empty Space.
Before the Large Hadron Collider was operating, she wrote an article explaining the discoveries that were expected from using it. When asked about the misconception that the LHC could make black holes that could destroy the planet, she answered that it was "not conceivable unless space and gravity are different from what we thought." Randall wrote the libretto of the opera Hypermusic Prologue: A Projective Opera in Seven Planes at the invitation of the composer Hèctor Parra, who was inspired by her book Warped Passages. Randall is a member of the American Academy of Arts and Sciences and the National Academy of Sciences, and a fellow of the American Physical Society. She has helped organize numerous conferences and has been on the editorial boards of several major theoretical physics journals. In autumn 2004, she was the most cited theoretical physicist of the previous five years. Randall was featured in Seed magazine's "2005 Year in Science Icons" and in Newsweek's "Who's Next in 2006" as "one of the most promising theoretical physicists of her generation".
In 2007, Randall was named one of Time magazine's 100 Most Influential People under the section for "Scientists & Thinkers". Randall was given this honor for her work regarding the evidence of a higher dimension. Her other honors include: the J. J. Sakurai Prize for Theoretical Particle Physics (2019); Phi Beta Kappa; the Andrew Gemant Award (2012); the Lilienfeld Prize (2007); the E. A. Wood Science Writing Award (2007); the Klopsteg Memorial Award from the American Association of Physics Teachers (2006); the Premio Caterina Tomassoni e Felice Pietro Chisesi Award from the Sapienza University of Rome (2003); a National Science Foundation Young Investigator Award (1992); an Alfred P. Sloan Foundation Research Fellowship; and a DOE Outstanding Junior Investigator Award. In an interview she was asked about her religious views and answered: "... I don't believe in God. ... That's just not true; this might earn me some enemies, but in some ways they may be more moral. If you do something for a religious reason, you do it because you'll be rewarded in an afterlife or in this world. That's not quite as good as something you do for purely generous reasons."
Randall's sister, Dana Randall, is a professor of computer science at Georgia Tech. Randall's books include: Warped Passages: Unraveling the Mysteries of the Universe's Hidden Dimensions (Ecco Press, ISBN 0-06-053108-8); Knocking on Heaven’s Door: How Physics and Scientific Thinking Illuminate the Universe and the Modern World (Ecco, ISBN 0-06-172372-X); Higgs Discovery: The Power of Empty Space (Ecco, ISBN 978-0062300478); and Dark Matter and the Dinosaurs: The Astounding Interconnectedness of the Universe (Ecco, ISBN 978-0-06-232847-2).
In cosmology, the cosmological constant (denoted by the Greek capital letter lambda, Λ) is the energy density of space, or vacuum energy, that arises in Albert Einstein's field equations of general relativity. It is closely associated with the concepts of dark energy and quintessence. Einstein introduced the constant in 1917 to counterbalance the effects of gravity and achieve a static universe, which was the accepted view at the time. Einstein abandoned the constant in 1931 after Hubble's discovery of the expanding universe. From the 1930s until the late 1990s, most physicists assumed the cosmological constant to be equal to zero; that changed with the surprising discovery in 1998 that the expansion of the universe is accelerating, implying the possibility of a positive nonzero value for the cosmological constant. Since the 1990s, studies have shown that around 68% of the mass–energy density of the universe can be attributed to so-called dark energy. The cosmological constant Λ is the simplest possible explanation for dark energy and is used in the current standard model of cosmology, known as the ΛCDM model.
While dark energy is poorly understood at a fundamental level, the main required properties of dark energy are that it functions as a type of anti-gravity, it dilutes much more slowly than matter as the universe expands, and it clusters much more weakly than matter, or not at all. According to the quantum field theory that underlies modern particle physics, empty space is defined by the vacuum state, a collection of quantum fields. All these quantum fields exhibit fluctuations in their ground state arising from the zero-point energy present everywhere in space; these zero-point fluctuations should act as a contribution to the cosmological constant Λ, but when calculations are performed, these fluctuations give rise to an enormous vacuum energy. The discrepancy between the theorized vacuum energy from quantum field theory and the observed vacuum energy from cosmology is a source of major contention, with the values predicted exceeding observation by some 120 orders of magnitude, a discrepancy that has been called "the worst theoretical prediction in the history of physics".
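In rough numbers, the comparison looks as follows (an order-of-magnitude sketch assuming a Planck-scale cutoff for the zero-point energy; the exact exponent depends on the cutoff chosen):

```latex
% Observed dark-energy density vs. a naive QFT estimate with a Planck cutoff
\rho_{\Lambda}^{\mathrm{obs}} \sim 10^{-9}\ \mathrm{J/m^{3}},
\qquad
\rho_{\mathrm{vac}}^{\mathrm{QFT}} \sim \frac{E_{\mathrm{P}}}{\ell_{\mathrm{P}}^{3}} \sim 10^{113}\ \mathrm{J/m^{3}},
\qquad
\frac{\rho_{\mathrm{vac}}^{\mathrm{QFT}}}{\rho_{\Lambda}^{\mathrm{obs}}} \sim 10^{122},
```

where E_P and ℓ_P are the Planck energy and Planck length; this ratio is the "some 120 orders of magnitude" quoted above.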
This issue is called the cosmological constant problem, and it is one of the greatest unsolved mysteries in science, with many physicists believing that "the vacuum holds the key to a full understanding of nature". Einstein included the cosmological constant as a term in his field equations for general relativity because he was dissatisfied that otherwise his equations did not allow for a static universe: gravity would cause a universe that was initially at dynamic equilibrium to contract. To counteract this possibility, Einstein added the cosmological constant. However, soon after Einstein developed his static theory, observations by Edwin Hubble indicated that the universe appears to be expanding. Einstein referred to his failure to accept the validation of his equations—when they had predicted the expansion of the universe in theory, before it was demonstrated in observation of the cosmological redshift—as his "biggest blunder". In fact, adding the cosmological constant to Einstein's equations does not lead to a static universe at equilibrium, because the equilibrium is unstable: if the universe expands slightly, then the expansion releases vacuum energy, which causes yet more expansion.
Likewise, a universe that contracts slightly will continue contracting. However, the cosmological constant remained a subject of empirical interest. Empirically, the onslaught of cosmological data in the past decades suggests that our universe has a positive cosmological constant; the explanation of this small but positive value is an outstanding theoretical challenge, the so-called cosmological constant problem. Some early generalizations of Einstein's gravitational theory, known as classical unified field theories, either introduced a cosmological constant on theoretical grounds or found that it arose naturally from the mathematics. For example, Sir Arthur Stanley Eddington claimed that the cosmological constant version of the vacuum field equation expressed the "epistemological" property that the universe is "self-gauging", and Erwin Schrödinger's pure-affine theory using a simple variational principle produced the field equation with a cosmological term. The cosmological constant Λ appears in Einstein's field equation in the form

$$R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu},$$

where the Ricci tensor $R_{\mu\nu}$, the Ricci scalar $R$ and the metric tensor $g_{\mu\nu}$ describe the structure of spacetime, the stress–energy tensor $T_{\mu\nu}$ describes the energy and momentum density and flux of the matter at that point in spacetime, and the universal constants G and c are conversion factors that arise from using traditional units of measurement.
When Λ is zero, this reduces to the field equation of general relativity as commonly used in the mid-20th century. When $T_{\mu\nu}$ is zero, the field equation describes empty space. The cosmological constant has the same effect as an intrinsic energy density of the vacuum, ρ_vac. In this context, it is commonly moved onto the right-hand side of the equation and defined with a proportionality factor of 8π.
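Restoring conventional units, this correspondence can be written as follows (a standard rewriting obtained by moving the Λ term of the field equation above to the right-hand side and reading it as a vacuum stress-energy contribution):

```latex
% Cosmological constant expressed as a vacuum energy density
\rho_{\mathrm{vac}} = \frac{\Lambda c^{4}}{8 \pi G}
\qquad\Longleftrightarrow\qquad
\Lambda = \frac{8 \pi G}{c^{4}}\,\rho_{\mathrm{vac}},
```

so a positive cosmological constant corresponds to a positive vacuum energy density.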
Compact Muon Solenoid
The Compact Muon Solenoid (CMS) experiment is one of two large general-purpose particle physics detectors built on the Large Hadron Collider (LHC) at CERN in Switzerland and France. The goal of the CMS experiment is to investigate a wide range of physics, including the search for the Higgs boson, extra dimensions, and particles that could make up dark matter. CMS is 21.6 metres long, 15 m in diameter, and weighs about 14,000 tonnes. 3,800 people, representing 199 scientific institutes and 43 countries, form the CMS collaboration, which built and now operates the detector. It is located in an underground cavern at Cessy in France, just across the border from Geneva. In July 2012, along with ATLAS, CMS tentatively discovered the Higgs boson; by March 2013 its existence was confirmed. Recent collider experiments, such as the now-dismantled Large Electron–Positron Collider and the newly renovated Large Hadron Collider at CERN, as well as the closed Tevatron at Fermilab, have provided remarkable insights into, and precision tests of, the Standard Model of particle physics.
A principal achievement of these experiments is the discovery of a particle consistent with the Standard Model Higgs boson, the particle resulting from the Higgs mechanism, which provides an explanation for the masses of elementary particles. However, there are still many open questions; these include uncertainties in the mathematical behaviour of the Standard Model at high energies, tests of proposed theories of dark matter, and the reasons for the imbalance of matter and antimatter observed in the Universe. The main goals of the experiment are: to explore physics at the TeV scale; to further study the properties of the Higgs boson discovered by CMS and ATLAS; to look for evidence of physics beyond the Standard Model, such as supersymmetry or extra dimensions; and to study aspects of heavy ion collisions. The ATLAS experiment, at the other side of the LHC ring, is designed with similar goals in mind, and the two experiments are designed to complement each other, both to extend reach and to provide corroboration of findings.
CMS and ATLAS use different technical solutions and different designs of their detector magnet systems to achieve these goals. CMS is designed as a general-purpose detector, capable of studying many aspects of proton collisions at 0.9–13 TeV, the centre-of-mass energy range of the LHC particle accelerator. The CMS detector is built around a huge solenoid magnet; this takes the form of a cylindrical coil of superconducting cable that generates a magnetic field of 4 teslas, about 100,000 times that of the Earth. The magnetic field is confined by a steel 'yoke' that forms the bulk of the detector's weight of 12,500 tonnes. An unusual feature of the CMS detector is that, instead of being built in situ underground like the other giant detectors of the LHC experiments, it was constructed on the surface before being lowered underground in 15 sections and reassembled. It contains subsystems designed to measure the energy and momentum of photons, electrons and other products of the collisions. The innermost layer is a silicon-based tracker.
Surrounding it is a scintillating crystal electromagnetic calorimeter, which is itself surrounded by a sampling calorimeter for hadrons. The tracker and the calorimetry are compact enough to fit inside the CMS solenoid, which generates a powerful magnetic field of 3.8 T. Outside the magnet are the large muon detectors, which sit inside the return yoke of the magnet. (For full technical details about the CMS detector, see the Technical Design Report.) The interaction point is the point in the centre of the detector at which proton–proton collisions occur between the two counter-rotating beams of the LHC. At each end of the detector, magnets focus the beams into the interaction point. At collision, each beam has a radius of 17 μm and the crossing angle between the beams is 285 μrad. At full design luminosity, each of the two LHC beams will contain 2,808 bunches of 1.15×10^11 protons. The interval between crossings is 25 ns, although the number of collisions per second is only 31.6 million due to gaps in the beam as injector magnets are activated and deactivated.
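The 31.6 million figure can be sanity-checked from numbers already quoted: a back-of-the-envelope sketch in Python (assuming the protons travel at essentially the speed of light around the ~26.7 km ring; the exact circumference value used here is an assumption):

```python
# Rough check of the quoted ~31.6 million collisions per second.
C_LIGHT = 299_792_458.0    # m/s; LHC protons move at essentially the speed of light
CIRCUMFERENCE = 26_659.0   # m; the ~26.7 km tunnel circumference quoted above
BUNCHES_PER_BEAM = 2808    # design bunch count per beam

revolution_freq = C_LIGHT / CIRCUMFERENCE          # ~11,245 revolutions per second
crossings_per_second = revolution_freq * BUNCHES_PER_BEAM
print(f"{crossings_per_second:.3e}")               # ~3.16e7, i.e. ~31.6 million
```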
At full luminosity, each collision will produce an average of 20 proton–proton interactions. The collisions occur at a centre-of-mass energy of 8 TeV. It is worth noting, however, that for studies of physics at the electroweak scale, the scattering events are initiated by a single quark or gluon from each proton, so the actual energy involved in each collision will be lower, as the total centre-of-mass energy is shared by these quarks and gluons. The first test, which ran in September 2008, was expected to operate at a lower collision energy of 10 TeV, but this was prevented by the shutdown of 19 September 2008. When at this target level, the LHC will have a reduced luminosity, due to both fewer proton bunches in each beam and fewer protons per bunch. The reduced bunch frequency does, however, allow the crossing angle to be reduced to zero, as bunches are far enough spaced to prevent secondary collisions in the experimental beampipe. The momentum of particles is crucial in helping to build up a picture of events at the heart of the collision.
One method to calculate the momentum of a particle is to track its path through a magnetic field: the more curved the path, the less momentum the particle had. The CMS tracker records the paths taken by charged particles by finding their positions at a number of key points; the tracker can reconstruct the paths of high-energy muons and hadrons, as well as see tracks coming from the decay of very short-lived particles such as beauty or "b quarks".
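For the momentum component transverse to the solenoid's field, this tracking reduces to measuring the radius of curvature of the helical track. A minimal sketch (using the standard rule of thumb p_T ≈ 0.3·q·B·R, with p_T in GeV/c, B in tesla and R in metres; the 3.8 T field is quoted above, and the example radius is invented for illustration):

```python
def transverse_momentum_gev(radius_m, b_field_t, charge_e=1.0):
    """p_T [GeV/c] of a charged particle from its track curvature.

    Uses the standard rule of thumb p_T ~ 0.3 * q * B * R
    (q in units of the elementary charge, B in tesla, R in metres).
    """
    return 0.3 * charge_e * b_field_t * radius_m

B_CMS = 3.8  # tesla, the solenoid field quoted above
print(transverse_momentum_gev(1.0, B_CMS))  # a 1 m radius track -> ~1.14 GeV/c
```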
In classical physics and general chemistry, matter is any substance that has mass and takes up space by having volume. All everyday objects that can be touched are composed of atoms, which are made up of interacting subatomic particles, and in everyday as well as scientific usage, "matter" includes atoms and anything made up of them, and any particles that act as if they have both rest mass and volume. However, it does not include massless particles such as photons, or other energy phenomena or waves such as light or sound. Matter exists in various states; these include classical everyday phases such as solid, liquid and gas – for example water exists as ice, liquid water and gaseous steam – but other states are possible, including plasma, Bose–Einstein condensates, fermionic condensates and quark–gluon plasma. Atoms can be imagined as a nucleus of protons and neutrons and a surrounding "cloud" of orbiting electrons which "take up space". However, this is only somewhat correct, because subatomic particles and their properties are governed by their quantum nature, which means they do not act as everyday objects appear to act – they can behave like waves as well as particles, and they do not have well-defined sizes or positions.
In the Standard Model of particle physics, matter is not a fundamental concept, because the elementary constituents of atoms are quantum entities which do not have an inherent "size" or "volume" in any everyday sense of the word. Due to the exclusion principle and other fundamental interactions, some "point particles" known as fermions, and many composites and atoms, are effectively forced to keep a distance from other particles under everyday conditions. For much of the history of the natural sciences people have contemplated the exact nature of matter; the idea that matter was built of discrete building blocks, the so-called particulate theory of matter, was first put forward by the Greek philosophers Leucippus and Democritus. Matter should not be confused with mass. Matter is a general term describing any 'physical substance'. By contrast, mass is not a substance but rather a quantitative property of matter and other substances or systems. While there are different views on what should be considered matter, the mass of a substance has exact scientific definitions.
Another difference is that matter has an "opposite" called antimatter, but mass has no opposite—there is no such thing as "anti-mass" or negative mass, so far as is known, although scientists do discuss the concept. Antimatter has the same mass property as its normal matter counterpart. Different fields of science use the term matter in different, sometimes incompatible, ways; some of these ways are based on loose historical meanings, from a time when there was no reason to distinguish mass from a quantity of matter. As such, there is no single universally agreed scientific meaning of the word "matter". Scientifically, the term "mass" is well-defined. Sometimes in the field of physics "matter" is equated with particles that exhibit rest mass, such as quarks and leptons. However, in both physics and chemistry, matter exhibits both wave-like and particle-like properties, the so-called wave–particle duality. A definition of "matter" based on its physical and chemical structure is: matter is made up of atoms.
Such atomic matter is sometimes termed ordinary matter. As an example, deoxyribonucleic acid molecules are matter under this definition because they are made of atoms. This definition can be extended to include charged atoms and molecules, so as to include plasmas and electrolytes, which are not included in the atoms definition. Alternatively, one can adopt the protons, neutrons and electrons definition. A definition of "matter" more fine-scale than the atoms and molecules definition is: matter is made up of what atoms and molecules are made of, meaning anything made of positively charged protons, neutral neutrons and negatively charged electrons. This definition goes beyond atoms and molecules, however, to include substances made from these building blocks that are not simply atoms or molecules, for example electron beams in an old cathode ray tube television, or white dwarf matter—typically carbon and oxygen nuclei in a sea of degenerate electrons. At a microscopic level, the constituent "particles" of matter such as protons and electrons obey the laws of quantum mechanics and exhibit wave–particle duality.
At a deeper level, protons and neutrons are made up of quarks and the force fields (gluons) that bind them together, leading to the next definition. As seen in the above discussion, many early definitions of what can be called "ordinary matter" were based upon its structure or "building blocks". On the scale of elementary particles, a definition that follows this tradition can be stated as: "ordinary matter is everything that is composed of quarks and leptons", or "ordinary matter is everything that is composed of any elementary fermions except antiquarks and antileptons"; the connection between these formulations follows. Leptons and quarks combine to form atoms, which in turn form molecules; because atoms and molecules are said to be matter, it is natural to phrase the definition as: "ordinary matter is anything that is made of the same things that atoms are made of".