Theoretical chemistry

Theoretical chemistry is the branch of chemistry that develops the theoretical generalizations forming part of the conceptual arsenal of modern chemistry: for example, the concepts of chemical bonding, chemical reactions, potential energy surfaces, molecular orbitals, orbital interactions, and molecule activation. Theoretical chemistry unites concepts common to all branches of chemistry. Within its framework, chemical laws and rules are systematized, refined, and detailed, and a hierarchy among them is constructed; the central place in theoretical chemistry is occupied by the doctrine of the interconnection of the structure and properties of molecular systems. It uses mathematical and physical methods to explain the structures and dynamics of chemical systems and to correlate and predict their thermodynamic and kinetic properties. In the most general sense, it is the explanation of chemical phenomena by the methods of theoretical physics. In contrast to theoretical physics, because of the high complexity of chemical systems, theoretical chemistry uses semi-empirical and empirical methods in addition to approximate mathematical methods.

In recent years, it has consisted primarily of quantum chemistry, i.e. the application of quantum mechanics to problems in chemistry. Other major components include molecular dynamics, statistical thermodynamics, and theories of electrolyte solutions, reaction networks, catalysis, molecular magnetism, and spectroscopy. Modern theoretical chemistry may be divided into the study of chemical structure and the study of chemical dynamics; the former includes studies of electronic structure, potential energy surfaces, and force fields, while chemical dynamics includes bimolecular kinetics, the collision theory of reactions, and energy transfer. Its principal branches include the following.

Quantum chemistry: the application of quantum mechanics or fundamental interactions to chemical and physico-chemical problems. Spectroscopic and magnetic properties are among the most commonly modelled.

Computational chemistry: the application of computer codes to chemistry, involving approximation schemes such as Hartree–Fock, post-Hartree–Fock, density functional theory, semiempirical methods, or force field methods.

Molecular shape is the most frequently predicted property. Computers can predict vibrational spectra and vibronic couplings, and can also acquire and Fourier transform infrared data into frequency information; comparison with the predicted vibrations supports the predicted shape.

Molecular modelling: methods for modelling molecular structures without necessarily referring to quantum mechanics. Examples are protein–protein docking, drug design, and combinatorial chemistry; the fitting of shape and electric potential are the driving factors in this graphical approach.

Molecular dynamics: the application of classical mechanics to simulate the movement of the nuclei of an assembly of atoms and molecules; the rearrangement of molecules within an ensemble is controlled by van der Waals forces and promoted by temperature.

Molecular mechanics: modeling of the intra- and inter-molecular interaction potential energy surfaces via potentials; the latter are usually parameterized from ab initio calculations.

Mathematical chemistry: the discussion and prediction of molecular structure using mathematical methods without necessarily referring to quantum mechanics.
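As a concrete illustration of the kind of pair potential used in molecular mechanics and molecular dynamics, here is a minimal sketch of the Lennard-Jones 12-6 form often used to model van der Waals interactions between two neutral atoms. The function name and the argon-like parameter values are my own choices for illustration, not from the text.

```python
def lennard_jones(r, epsilon=0.997, sigma=3.40):
    """Pair energy V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6).

    epsilon: well depth (kJ/mol); sigma: zero-crossing distance (angstrom).
    The defaults are commonly quoted argon-like values, used here only as
    an example parameterization.
    """
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The minimum of the potential sits at r_min = 2**(1/6) * sigma,
# where V(r_min) = -epsilon; beyond r_min the weak attractive tail
# is what "controls the rearrangement of molecules" in an ensemble.
r_min = 2 ** (1 / 6) * 3.40
```

In a force field, sums of terms like this over all atom pairs (plus bonded terms) define the potential energy surface that the dynamics then samples.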

Topology is a branch of mathematics that allows researchers to predict properties of flexible finite-size bodies such as clusters.

Theoretical chemical kinetics: the theoretical study of the dynamical systems associated with reactive chemicals and the activated complex, and of their corresponding differential equations.

Cheminformatics: the use of computer and informational techniques to solve problems in the field of chemistry.

The major applications of theoretical chemistry have been in the following fields of research.

Atomic physics: the discipline dealing with electrons and atomic nuclei.

Molecular physics: the discipline dealing with the electrons surrounding the molecular nuclei and with the movement of the nuclei. This term usually refers to the study of molecules made of a few atoms in the gas phase, but some consider that molecular physics is also the study of bulk properties of chemicals in terms of molecules.

Physical chemistry and chemical physics: chemistry investigated via physical methods such as laser techniques, scanning tunneling microscopy, etc.
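The differential equations of chemical kinetics can be illustrated with the simplest case, a sketch of my own construction: first-order decay A → B obeys d[A]/dt = −k[A], whose exact solution is [A](t) = [A]₀·e^(−kt). The rate constant and initial concentration below are arbitrary example values.

```python
import math

def euler_first_order(a0, k, t_end, dt=1e-4):
    """Integrate d[A]/dt = -k*[A] with a simple forward-Euler step."""
    a, t = a0, 0.0
    while t < t_end:
        a += -k * a * dt   # Euler update of the rate law
        t += dt
    return a

a0, k, t_end = 1.0, 0.5, 2.0       # arbitrary illustrative values
numeric = euler_first_order(a0, k, t_end)
exact = a0 * math.exp(-k * t_end)  # analytic solution for comparison
```

For realistic reaction networks the same idea scales up to coupled systems of such equations, usually integrated with stiff ODE solvers rather than Euler steps.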

The formal distinction between the two fields is that physical chemistry is a branch of chemistry while chemical physics is a branch of physics; in practice this distinction is quite vague.

Many-body theory: the discipline studying the effects which appear in systems with a large number of constituents. It is based on quantum physics, notably the second quantization formalism, and on quantum electrodynamics.

Hence, theoretical chemistry has emerged as a branch of research. With the rise of density functional theory and other methods like molecular mechanics, the range of application has been extended to chemical systems which are relevant to other fields of chemistry and physics, including biochemistry, condensed matter physics, nanotechnology, and molecular biology.

Special relativity

In physics, special relativity is the accepted and experimentally well-confirmed physical theory regarding the relationship between space and time. In Albert Einstein's original pedagogical treatment, it is based on two postulates: that the laws of physics are invariant in all inertial systems, and that the speed of light in a vacuum is the same for all observers, regardless of the motion of the light source. Special relativity was proposed by Albert Einstein in a paper published on 26 September 1905 titled "On the Electrodynamics of Moving Bodies". The inconsistency of Newtonian mechanics with Maxwell's equations of electromagnetism and the lack of experimental confirmation for a hypothesized luminiferous aether led to the development of special relativity, which corrects mechanics to handle situations involving all motions, particularly those at a significant fraction of the speed of light. Today, special relativity is the most accurate model of motion at any speed when gravitational effects are negligible, while the Newtonian model remains valid as a simple and highly accurate approximation at velocities small relative to the speed of light.

Special relativity implies a wide range of consequences, which have been experimentally verified, including length contraction, time dilation, relativistic mass, mass–energy equivalence, a universal speed limit (the speed of causality), and the relativity of simultaneity. It has replaced the conventional notion of an absolute universal time with the notion of a time that depends on reference frame and spatial position. Rather than an invariant time interval between two events, there is an invariant spacetime interval. Combined with other laws of physics, the two postulates of special relativity predict the equivalence of mass and energy, as expressed in the mass–energy equivalence formula E = mc², where c is the speed of light in a vacuum. A defining feature of special relativity is the replacement of the Galilean transformations of Newtonian mechanics with the Lorentz transformations. Time and space cannot be defined separately from each other; rather, space and time are interwoven into a single continuum known as "spacetime".
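Two of these quantities can be made concrete in a few lines. The sketch below (my own, for illustration) computes the Lorentz factor γ = 1/√(1 − v²/c²) and checks that a Lorentz boost preserves the spacetime interval (ct)² − x², which every inertial frame agrees on even though t and x separately differ.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def gamma(v):
    """Lorentz factor; moving clocks are observed to tick slower by 1/gamma."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def boost(t, x, v):
    """Lorentz boost of event (t, x) into a frame moving at speed v along x."""
    g = gamma(v)
    return g * (t - v * x / C**2), g * (x - v * t)

# The interval (c*t)**2 - x**2 is invariant under the boost:
t, x, v = 1.0, 1.0e8, 0.6 * C       # arbitrary example event and frame speed
t2, x2 = boost(t, x, v)
interval_before = (C * t) ** 2 - x ** 2
interval_after = (C * t2) ** 2 - x2 ** 2
```

At v = 0.6c the factor γ is exactly 1.25, so a clock moving at that speed accumulates only 0.8 seconds per second of coordinate time.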

Events that occur at the same time for one observer can occur at different times for another. Not until Einstein developed general relativity, introducing a curved spacetime to incorporate gravity, was the phrase "special relativity" employed. A translation sometimes used is "restricted relativity"; the theory is "special" in that it only applies in the special case where the spacetime is flat, i.e. where the curvature of spacetime, described by the energy–momentum tensor and causing gravity, is negligible. In order to accommodate gravity, Einstein formulated general relativity in 1915. Special relativity, contrary to some outdated descriptions, is capable of handling accelerations as well as accelerated frames of reference. Just as Galilean relativity is now accepted to be an approximation of special relativity that is valid for low speeds, special relativity is considered an approximation of general relativity that is valid for weak gravitational fields, i.e. at a sufficiently small scale and in conditions of free fall. Whereas general relativity incorporates non-Euclidean geometry in order to represent gravitational effects as the geometric curvature of spacetime, special relativity is restricted to the flat spacetime known as Minkowski space.

As long as the universe can be modeled as a pseudo-Riemannian manifold, a Lorentz-invariant frame that abides by special relativity can be defined for a sufficiently small neighborhood of each point in this curved spacetime. Galileo Galilei had already postulated that there is no absolute and well-defined state of rest, a principle now called Galileo's principle of relativity. Einstein extended this principle so that it accounted for the constant speed of light, a phenomenon observed in the Michelson–Morley experiment, and postulated that it holds for all the laws of physics, including both the laws of mechanics and of electrodynamics. Einstein discerned two fundamental propositions that seemed to be the most assured, regardless of the exact validity of the known laws of either mechanics or electrodynamics: the constancy of the speed of light in a vacuum and the independence of physical laws from the choice of inertial system. In his initial presentation of special relativity in 1905 he expressed these postulates as: The Principle of Relativity – the laws by which the states of physical systems undergo change are not affected, whether these changes of state be referred to the one or the other of two systems in uniform translatory motion relative to each other.

The Principle of Invariant Light Speed – "... light is always propagated in empty space with a definite velocity c, independent of the state of motion of the emitting body". That is, light in vacuum propagates with the speed c in at least one system of inertial coordinates, regardless of the state of motion of the light source. The constancy of the speed of light was motivated by Maxwell's theory of electromagnetism and the lack of evidence for the luminiferous ether. There is conflicting evidence on the extent to which Einstein was influenced by the null result of the Michelson–Morley experiment. In any case, the null result of the Michelson–Morley experiment helped the notion of the constancy of the speed of light gain widespread and rapid acceptance.

Quantum electrodynamics

In particle physics, quantum electrodynamics (QED) is the relativistic quantum field theory of electrodynamics. In essence, it describes how light and matter interact, and it is the first theory in which full agreement between quantum mechanics and special relativity is achieved. QED mathematically describes all phenomena involving electrically charged particles interacting by means of exchange of photons, and it represents the quantum counterpart of classical electromagnetism, giving a complete account of the interaction of matter and light. In technical terms, QED can be described as a perturbation theory of the electromagnetic quantum vacuum. Richard Feynman called it "the jewel of physics" for its extremely accurate predictions of quantities like the anomalous magnetic moment of the electron and the Lamb shift of the energy levels of hydrogen. The first formulation of a quantum theory describing the interaction of radiation and matter is attributed to the British scientist Paul Dirac, who was able to compute the coefficient of spontaneous emission of an atom.
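The accuracy Feynman praised can be illustrated numerically. The leading-order QED prediction for the electron's anomalous magnetic moment is Schwinger's one-loop result a_e = α/2π; the sketch below (constants rounded from standard published values, used only for illustration) compares it with the measured value.

```python
import math

ALPHA = 1 / 137.035999          # fine-structure constant (rounded)
A_E_MEASURED = 0.00115965218    # measured anomalous moment (g - 2)/2, rounded

# Schwinger's one-loop QED prediction:
a_e_one_loop = ALPHA / (2 * math.pi)

# Even at first order in perturbation theory the prediction agrees with
# experiment to roughly 0.2%; higher orders in the series close most of
# the remaining gap.
relative_error = abs(a_e_one_loop - A_E_MEASURED) / A_E_MEASURED
```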

Dirac described the quantization of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. In the following years, with contributions from Wolfgang Pauli, Eugene Wigner, Pascual Jordan, and Werner Heisenberg, and with an elegant formulation of quantum electrodynamics due to Enrico Fermi, physicists came to believe that, in principle, it would be possible to perform any computation for any physical process involving photons and charged particles. However, further studies by Felix Bloch with Arnold Nordsieck, and by Victor Weisskopf, in 1937 and 1939, revealed that such computations were reliable only at first order of perturbation theory, a problem already pointed out by Robert Oppenheimer. At higher orders in the series, infinities emerged, making such computations meaningless and casting serious doubts on the internal consistency of the theory itself. With no solution for this problem known at the time, it appeared that a fundamental incompatibility existed between special relativity and quantum mechanics.

Difficulties with the theory increased through the end of the 1940s. Improvements in microwave technology made it possible to take more precise measurements of the shift of the levels of a hydrogen atom, now known as the Lamb shift, and of the magnetic moment of the electron; these experiments exposed discrepancies which the theory was unable to explain. A first indication of a possible way out was given by Hans Bethe in 1947, after attending the Shelter Island Conference. While he was traveling by train from the conference to Schenectady, he made the first non-relativistic computation of the shift of the lines of the hydrogen atom as measured by Lamb and Retherford. Despite the limitations of the computation, agreement was excellent. The idea was to attach infinities to corrections of mass and charge that were, in fact, fixed to a finite value by experiments. In this way, the infinities get absorbed in those constants and yield a finite result in good agreement with experiments; this procedure was named renormalization. Based on Bethe's intuition and fundamental papers on the subject by Shin'ichirō Tomonaga, Julian Schwinger, Richard Feynman, and Freeman Dyson, it became possible to obtain covariant formulations that were finite at any order in a perturbation series of quantum electrodynamics.

Shin'ichirō Tomonaga, Julian Schwinger, and Richard Feynman were jointly awarded the Nobel Prize in Physics in 1965 for their work in this area. Their contributions, and those of Freeman Dyson, were about covariant and gauge-invariant formulations of quantum electrodynamics that allow computations of observables at any order of perturbation theory. Feynman's mathematical technique, based on his diagrams, initially seemed very different from the field-theoretic, operator-based approach of Schwinger and Tomonaga, but Freeman Dyson later showed that the two approaches were equivalent. Renormalization, the need to attach a physical meaning to certain divergences appearing in the theory through integrals, has subsequently become one of the fundamental aspects of quantum field theory and has come to be seen as a criterion for a theory's general acceptability. Even though renormalization works very well in practice, Feynman was never comfortable with its mathematical validity, referring to renormalization as a "shell game" and "hocus pocus".

QED has served as the template for all subsequent quantum field theories. One such subsequent theory is quantum chromodynamics, which began in the early 1960s and attained its present form in the 1970s through the work of H. David Politzer, Sidney Coleman, David Gross, and Frank Wilczek. Building on the pioneering work of Schwinger, Gerald Guralnik, Dick Hagen, Tom Kibble, Peter Higgs, Jeffrey Goldstone, and others, Sheldon Lee Glashow, Steven Weinberg, and Abdus Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force. Near the end of his life, Richard P. Feynman gave a series of lectures on QED intended for the lay public; these lectures were transcribed and published as Feynman, QED: The Strange Theory of Light and Matter, a classic non-mathematical exposition of QED from the point of view articulated below. The key components of Feynman's presentation of QED are three basic actions. A photon goes from one place and time to another place and time. An electron goes from one place and time to another place and time.

An electron emits or absorbs a photon at a certain place and time. These actions are represented in the form of visual shorthand by the three basic elements of Feynman diagrams: a wavy line for the photon, a straight line for the electron, and a junction of two straight lines and a wavy one for a vertex representing emission or absorption of a photon by an electron.

Atomic orbital

In atomic theory and quantum mechanics, an atomic orbital is a mathematical function that describes the wave-like behavior of either one electron or a pair of electrons in an atom. This function can be used to calculate the probability of finding any electron of an atom in any specific region around the atom's nucleus; the term atomic orbital may also refer to the physical region or space where the electron can be calculated to be present, as defined by the particular mathematical form of the orbital. Each orbital in an atom is characterized by a unique set of values of the three quantum numbers n, ℓ, and m, which correspond respectively to the electron's energy, angular momentum, and an angular momentum vector component (the magnetic quantum number); each such orbital can be occupied by a maximum of two electrons, each with its own spin quantum number s. The simple names s orbital, p orbital, d orbital, and f orbital refer to orbitals with angular momentum quantum number ℓ = 0, 1, 2, and 3 respectively; these names, together with the value of n, are used to describe the electron configurations of atoms.

They are derived from the description by early spectroscopists of certain series of alkali metal spectroscopic lines as sharp, principal, diffuse, and fundamental. Orbitals for ℓ > 3 continue alphabetically, omitting j because some languages do not distinguish between the letters "i" and "j". Atomic orbitals are the basic building blocks of the atomic orbital model, a modern framework for visualizing the submicroscopic behavior of electrons in matter. In this model the electron cloud of a multi-electron atom may be seen as being built up in an electron configuration, a product of simpler hydrogen-like atomic orbitals. The repeating periodicity of the blocks of 2, 6, 10, and 14 elements within sections of the periodic table arises from the total number of electrons that occupy a complete set of s, p, d, and f atomic orbitals, although for higher values of the quantum number n, particularly when the atom in question bears a positive charge, the energies of certain sub-shells become very similar and so the order in which they are said to be populated by electrons can only be rationalized somewhat arbitrarily.
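The block sizes 2, 6, 10, 14 follow directly from counting quantum numbers: each ℓ allows 2ℓ + 1 values of m, and each (n, ℓ, m) orbital holds two electrons. A small sketch (the helper names are my own) makes the count explicit.

```python
def subshell_capacity(l):
    """Electrons that fit in a subshell with angular momentum quantum number l:
    2l + 1 orbitals (values of m), times two spin states."""
    return 2 * (2 * l + 1)

def orbitals_in_shell(n):
    """All (n, l, m) triples allowed for principal quantum number n:
    l runs from 0 to n - 1, and m from -l to +l."""
    return [(n, l, m) for l in range(n) for m in range(-l, l + 1)]

# s, p, d, f capacities: the periodic-table block widths mentioned above.
capacities = [subshell_capacity(l) for l in range(4)]
```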

With the development of quantum mechanics and experimental findings, it was found that the electrons orbiting a nucleus could not be fully described as particles, but needed to be explained by wave-particle duality. In this sense, the electrons have the following properties. Wave-like properties: the electrons do not orbit the nucleus in the manner of a planet orbiting the sun, but instead exist as standing waves; thus the lowest possible energy an electron can take is analogous to the fundamental frequency of a wave on a string, and higher energy states are analogous to harmonics of that fundamental frequency. The electrons are never in a single point location, although the probability of interacting with the electron at a single point can be found from the wave function of the electron. The charge on the electron acts as if it were smeared out in space in a continuous distribution, proportional at any point to the squared magnitude of the electron's wave function. Particle-like properties: the number of electrons orbiting the nucleus can only be an integer.
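The standing-wave picture can be made quantitative for hydrogen, where the allowed energies are the standard result E_n = −13.6 eV / n²: the ground state (n = 1) plays the role of the fundamental and the excited states (n = 2, 3, ...) that of the harmonics. A short illustrative sketch:

```python
RYDBERG_EV = 13.605693  # hydrogen ground-state binding energy, eV (rounded)

def hydrogen_level(n):
    """Energy of hydrogen level n in eV; bound states are negative."""
    return -RYDBERG_EV / n**2

# Energy released in the n=2 -> n=1 transition (the Lyman-alpha line,
# about 10.2 eV):
lyman_alpha = hydrogen_level(2) - hydrogen_level(1)
```

Note how the levels crowd toward zero energy as n grows, unlike the evenly spaced harmonics of a string; the analogy is to discreteness, not to the spacing.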

Electrons jump between orbitals like particles. For example, if a single photon strikes the electrons, only a single electron changes states in response to the photon. The electrons retain particle-like properties such as: each wave state has the same electrical charge as its electron particle, and each wave state has a single discrete spin depending on its superposition. Thus, despite the popular analogy to planets revolving around the Sun, electrons cannot be described simply as solid particles. In addition, atomic orbitals do not closely resemble a planet's elliptical path in ordinary atoms. A more accurate analogy might be that of a large and often oddly shaped "atmosphere", distributed around a relatively tiny planet. Atomic orbitals exactly describe the shape of this "atmosphere" only when a single electron is present in an atom; when more electrons are added to a single atom, the additional electrons tend to more evenly fill in a volume of space around the nucleus so that the resulting collection tends toward a generally spherical zone of probability describing the electrons' locations, because of the uncertainty principle.

Atomic orbitals may be defined more precisely in formal quantum mechanical language. In quantum mechanics, the state of an atom, i.e. an eigenstate of the atomic Hamiltonian, is approximated by an expansion into linear combinations of anti-symmetrized products of one-electron functions. The spatial components of these one-electron functions are called atomic orbitals. A state is actually a function of the coordinates of all the electrons, so that their motion is correlated, but this is often approximated by this independent-particle model of products of single-electron wave functions. In atomic physics, the atomic spectral lines correspond to transitions between quantum states of an atom; these states are labeled by a set of quantum numbers summarized in the term symbol and associated with particular electron configurations, i.e. by occupation schemes of atomic orbitals (for example, 1s² 2s² 2p⁶ for the ground state of neon).
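Occupation schemes like 1s² 2s² 2p⁶ can be generated programmatically by filling subshells in the usual Madelung (n + ℓ) order. The sketch below is my own illustrative helper; as noted earlier in the text, for heavier atoms and ions this simple ordering is only an approximation, and real configurations (e.g. chromium, copper) deviate from it.

```python
def electron_configuration(z):
    """Return e.g. '1s2 2s2 2p6' for atomic number z, filled by the
    Madelung rule: increasing n + l, ties broken by smaller n."""
    letters = "spdfghi"
    subshells = [(n, l) for n in range(1, 9) for l in range(n)]
    subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    parts, remaining = [], z
    for n, l in subshells:
        if remaining <= 0:
            break
        e = min(remaining, 2 * (2 * l + 1))  # subshell capacity 2(2l+1)
        parts.append(f"{n}{letters[l]}{e}")
        remaining -= e
    return " ".join(parts)
```

For example, z = 19 (potassium) yields the familiar 4s-before-3d filling.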

Molecular orbital

In chemistry, a molecular orbital is a mathematical function describing the wave-like behavior of an electron in a molecule. This function can be used to calculate chemical and physical properties such as the probability of finding an electron in any specific region. The term orbital was introduced by Robert S. Mulliken in 1932 as an abbreviation for one-electron orbital wave function. At an elementary level, it is used to describe the region of space in which the function has a significant amplitude. Molecular orbitals are usually constructed by combining atomic orbitals or hybrid orbitals from each atom of the molecule, or other molecular orbitals from groups of atoms; they can be quantitatively calculated using the Hartree–Fock or self-consistent field methods. A molecular orbital can be used to represent the regions in a molecule where an electron occupying that orbital is likely to be found. Molecular orbitals are obtained from the combination of atomic orbitals, which predict the location of an electron in an atom.

A molecular orbital can specify the electron configuration of a molecule: the spatial distribution and energy of one electron. Most commonly an MO is represented as a linear combination of atomic orbitals, especially in qualitative or approximate usage; such orbitals are invaluable in providing a simple model of bonding in molecules, understood through molecular orbital theory. Most present-day methods in computational chemistry begin by calculating the MOs of the system. A molecular orbital describes the behavior of one electron in the electric field generated by the nuclei and some average distribution of the other electrons. In the case of two electrons occupying the same orbital, the Pauli principle demands that they have opposite spin; this is necessarily an approximation, and highly accurate descriptions of the molecular electronic wave function do not have orbitals. Molecular orbitals are, in general, delocalized throughout the entire molecule. Moreover, if the molecule has symmetry elements, its nondegenerate molecular orbitals are either symmetric or antisymmetric with respect to any of these symmetries.

In other words, application of a symmetry operation S to a molecular orbital ψ results in the molecular orbital being unchanged or reversing its mathematical sign: Sψ = ±ψ. In planar molecules, for example, molecular orbitals are either symmetric or antisymmetric with respect to reflection in the molecular plane. If molecules with degenerate orbital energies are also considered, a more general statement holds: molecular orbitals form bases for the irreducible representations of the molecule's symmetry group. The symmetry properties of molecular orbitals mean that delocalization is an inherent feature of molecular orbital theory, and this makes it fundamentally different from valence bond theory, in which bonds are viewed as localized electron pairs, with allowance for resonance to account for delocalization. In contrast to these symmetry-adapted canonical molecular orbitals, localized molecular orbitals can be formed by applying certain mathematical transformations to the canonical orbitals; the advantage of this approach is that the orbitals will correspond more closely to the "bonds" of a molecule as depicted by a Lewis structure.

As a disadvantage, the energy levels of these localized orbitals no longer have physical meaning. Molecular orbitals arise from allowed interactions between atomic orbitals, which are allowed if the symmetries of the atomic orbitals are compatible with each other. The efficiency of atomic orbital interactions is determined from the overlap between the two atomic orbitals, which is significant if the atomic orbitals are close in energy. Finally, the number of molecular orbitals formed must be equal to the number of atomic orbitals in the atoms being combined to form the molecule. For an imprecise, but qualitatively useful, discussion of molecular structure, the molecular orbitals can be obtained from the linear combination of atomic orbitals (LCAO) molecular orbital method ansatz. Here, the molecular orbitals are expressed as linear combinations of atomic orbitals. Molecular orbitals were first introduced by Friedrich Hund and Robert S. Mulliken in 1927 and 1928; the linear combination of atomic orbitals or "LCAO" approximation for molecular orbitals was introduced in 1929 by Sir John Lennard-Jones.

His ground-breaking paper showed how to derive the electronic structure of the fluorine and oxygen molecules from quantum principles. This qualitative approach to molecular orbital theory is part of the start of modern quantum chemistry. Linear combinations of atomic orbitals can be used to estimate the molecular orbitals that are formed upon bonding between the molecule's constituent atoms. Similar to an atomic orbital, a Schrödinger equation, which describes the behavior of an electron, can be constructed for a molecular orbital as well. Linear combinations of atomic orbitals, or the sums and differences of the atomic wavefunctions, provide approximate solutions to the Hartree–Fock equations, which correspond to the independent-particle approximation of the molecular Schrödinger equation. For simple diatomic molecules, the wavefunctions obtained are represented mathematically by the equations Ψ = c_a ψ_a + c_b ψ_b and Ψ* = c_a ψ_a − c_b ψ_b, where ψ_a and ψ_b are the atomic wavefunctions of atoms a and b, c_a and c_b are adjustable coefficients, and Ψ and Ψ* are the bonding and antibonding molecular orbitals, respectively.
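For a homonuclear diatomic, the two-orbital LCAO problem has the standard closed-form energies E± = (α ± β)/(1 ± S), with Coulomb integral α, resonance integral β, and overlap S. The sketch below uses arbitrary Hückel-like parameter values of my own choosing, purely for illustration.

```python
def lcao_two_level(alpha, beta, S):
    """Bonding and antibonding MO energies for two identical atomic orbitals:
    E_bond = (alpha + beta)/(1 + S), E_anti = (alpha - beta)/(1 - S)."""
    e_bond = (alpha + beta) / (1 + S)
    e_anti = (alpha - beta) / (1 - S)
    return e_bond, e_anti

# Illustrative numbers in eV (alpha and beta negative by convention):
alpha, beta, S = -11.0, -2.0, 0.1
e_bond, e_anti = lcao_two_level(alpha, beta, S)

# With S > 0 the antibonding level is pushed above alpha by more than the
# bonding level is pushed below it:
destabilization = (e_anti - alpha) - (alpha - e_bond)
```

This asymmetry between bonding stabilization and antibonding destabilization is why, for example, filling both MOs (as in a hypothetical He₂) gives no net bond.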

Acoustics

Acoustics is the branch of physics that deals with the study of mechanical waves in gases, liquids, and solids, including topics such as vibration, sound, ultrasound, and infrasound. A scientist who works in the field of acoustics is an acoustician, while someone working in the field of acoustics technology may be called an acoustical engineer. The application of acoustics is present in all aspects of modern society, with the most obvious being the audio and noise control industries. Hearing is one of the most crucial means of survival in the animal world, and speech is one of the most distinctive characteristics of human development and culture. Accordingly, the science of acoustics spreads across many facets of human society—music, architecture, industrial production and more. Animal species such as songbirds and frogs use sound and hearing as a key element of mating rituals or marking territories. Art, craft and technology have provoked one another to advance the whole, as in many other fields of knowledge. Robert Bruce Lindsay's "Wheel of Acoustics" is a well-accepted overview of the various fields in acoustics.

The word "acoustic" is derived from the Greek word ἀκουστικός, meaning "of or for hearing, ready to hear", and that from ἀκουστός, "heard, audible", which in turn derives from the verb ἀκούω, "I hear". The Latin synonym is "sonic", after which the term sonics used to be a synonym for acoustics and later a branch of acoustics. Frequencies above and below the audible range are called "ultrasonic" and "infrasonic", respectively. In the 6th century BC, the ancient Greek philosopher Pythagoras wanted to know why some combinations of musical sounds seemed more beautiful than others, and he found answers in terms of numerical ratios representing the harmonic overtone series on a string. He is reputed to have observed that when the lengths of vibrating strings are expressible as ratios of integers, the tones produced will be harmonious, and the smaller the integers, the more harmonious the sounds. For example, a string of a certain length would sound particularly harmonious with a string of twice the length. In modern parlance, if a string sounds the note C when plucked, a string twice as long will sound a C an octave lower.

In one system of musical tuning, the tones in between are given by 16:9 for D, 8:5 for E, 3:2 for F, 4:3 for G, 6:5 for A, and 16:15 for B, in ascending order. Aristotle understood that sound consisted of compressions and rarefactions of air which "falls upon and strikes the air which is next to it...", a good expression of the nature of wave motion. In about 20 BC, the Roman architect and engineer Vitruvius wrote a treatise on the acoustic properties of theaters, including discussion of interference and reverberation—the beginnings of architectural acoustics. In Book V of his De architectura, Vitruvius describes sound as a wave comparable to a water wave extended to three dimensions which, when interrupted by obstructions, would flow back and break up following waves. He described the ascending seats in ancient theaters as designed to prevent this deterioration of sound, and he recommended that bronze vessels of appropriate sizes be placed in theaters to resonate with the fourth, fifth and so on, up to the double octave, in order to resonate with the more desirable, harmonious notes.
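One consistent reading of the ratios above (an interpretive assumption on my part, with helper names of my own) treats them as relative string lengths: since frequency is inversely proportional to length, each note's frequency relative to the lower C is 2 divided by its length ratio, which recovers familiar just-intonation intervals such as 9/8 for D and 3/2 for G.

```python
from fractions import Fraction

LENGTHS = {  # string length relative to the upper C string
    "D": Fraction(16, 9), "E": Fraction(8, 5), "F": Fraction(3, 2),
    "G": Fraction(4, 3), "A": Fraction(6, 5), "B": Fraction(16, 15),
}

def freq_ratio(note):
    """Frequency of `note` relative to the C an octave below the upper C:
    a string twice as long sounds an octave lower, so ratio = 2 / length."""
    return 2 / LENGTHS[note]

def frequency(note, c_hz=261.63):
    """Pitch in Hz, assuming (for illustration) C near 261.63 Hz."""
    return float(freq_ratio(note)) * c_hz
```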

During the Islamic golden age, Abū Rayhān al-Bīrūnī is believed to have postulated that the speed of sound was much slower than the speed of light. The physical understanding of acoustical processes advanced rapidly during and after the Scientific Revolution. Galileo Galilei and, especially, Marin Mersenne independently discovered the complete laws of vibrating strings. Galileo wrote "Waves are produced by the vibrations of a sonorous body, which spread through the air, bringing to the tympanum of the ear a stimulus which the mind interprets as sound", a remarkable statement that points to the beginnings of physiological and psychological acoustics. Experimental measurements of the speed of sound in air were carried out between 1630 and 1680 by a number of investigators, prominently Mersenne. Meanwhile, Newton derived the relationship for wave velocity in solids, a cornerstone of physical acoustics; the eighteenth century saw major advances in acoustics as mathematicians applied the new techniques of calculus to elaborate theories of sound wave propagation.

In the nineteenth century the major figures of mathematical acoustics were Helmholtz in Germany, who consolidated the field of physiological acoustics, and Lord Rayleigh in England, who combined the previous knowledge with his own copious contributions to the field in his monumental work The Theory of Sound. Also in the 19th century, Wheatstone and Henry developed the analogy between electricity and acoustics. The twentieth century saw a burgeoning of technological applications of the large body of scientific knowledge that was by then in place. The first such application was Sabine's groundbreaking work in architectural acoustics, and many others followed. Underwater acoustics was used for detecting submarines in the First World War. Sound recording and the telephone played important roles in a global transformation of society. Sound measurement and analysis reached new levels of accuracy and sophistication through the use of electronics and computing; the ultrasonic frequency range enabled wholly new kinds of application in industry.

New kinds of transducers were put to use.

Condensed matter physics

Condensed matter physics is the field of physics that deals with the macroscopic and microscopic physical properties of matter. In particular, it is concerned with the "condensed" phases that appear whenever the number of constituents in a system is large and the interactions between the constituents are strong. The most familiar examples of condensed phases are solids and liquids, which arise from the electromagnetic forces between atoms. Condensed matter physicists seek to understand the behavior of these phases by using physical laws, in particular the laws of quantum mechanics and statistical mechanics. More exotic condensed phases include the superconducting phase exhibited by certain materials at low temperature, the ferromagnetic and antiferromagnetic phases of spins on crystal lattices of atoms, and the Bose–Einstein condensate found in ultracold atomic systems. The study of condensed matter physics involves measuring various material properties via experimental probes, along with using methods of theoretical physics to develop mathematical models that help in understanding physical behavior.

The diversity of systems and phenomena available for study makes condensed matter physics the most active field of contemporary physics: one third of all American physicists self-identify as condensed matter physicists, and the Division of Condensed Matter Physics is the largest division of the American Physical Society. The field overlaps with chemistry, materials science, and nanotechnology, and relates closely to atomic physics and biophysics. The theoretical physics of condensed matter shares important concepts and methods with that of particle physics and nuclear physics. A variety of topics in physics, such as crystallography and elasticity, were treated as distinct areas until the 1940s, when they were grouped together as solid state physics. Around the 1960s, the study of the physical properties of liquids was added to this list, forming the basis for the new, related specialty of condensed matter physics. According to physicist Philip Warren Anderson, the term was coined by him and Volker Heine when they changed the name of their group at the Cavendish Laboratory, Cambridge, from "Solid State Theory" to "Theory of Condensed Matter" in 1967, as they felt the new name did not exclude their interests in the study of liquids, nuclear matter, and so on.

Although Anderson and Heine helped popularize the name "condensed matter", it had been in use in Europe for some years, most prominently in the journal Physics of Condensed Matter, published in English and German by Springer-Verlag and launched in 1963. The funding environment and Cold War politics of the 1960s and 1970s were factors that led some physicists to prefer the name "condensed matter physics", which emphasized the commonality of scientific problems encountered by physicists working on solids, liquids and other complex matter, over "solid state physics", which was associated with the industrial applications of metals and semiconductors. Bell Telephone Laboratories was one of the first institutes to conduct a research program in condensed matter physics. References to the "condensed" state can be traced to earlier sources. For example, in the introduction to his 1947 book Kinetic Theory of Liquids, Yakov Frenkel proposed that "The kinetic theory of liquids must accordingly be developed as a generalization and extension of the kinetic theory of solid bodies.

As a matter of fact, it would be more correct to unify them under the title of 'condensed bodies'". One of the first studies of condensed states of matter was by English chemist Humphry Davy, in the first decades of the nineteenth century. Davy observed that of the forty chemical elements known at the time, twenty-six had metallic properties such as lustre and high electrical and thermal conductivity. This indicated that the atoms in John Dalton's atomic theory were not indivisible, as Dalton claimed, but had inner structure. Davy further claimed that elements then believed to be gases, such as nitrogen and hydrogen, could be liquefied under the right conditions and would then behave as metals. In 1823, Michael Faraday, then an assistant in Davy's lab, liquefied chlorine and went on to liquefy all known gaseous elements except nitrogen and oxygen. In 1869, Irish chemist Thomas Andrews studied the phase transition from a liquid to a gas and coined the term critical point to describe the condition where a gas and a liquid were indistinguishable as phases, and Dutch physicist Johannes van der Waals supplied the theoretical framework that allowed the prediction of critical behavior based on measurements at much higher temperatures.
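Van der Waals's framework makes this prediction concrete: from the equation-of-state parameters a and b, the critical constants follow as T_c = 8a/(27Rb) and P_c = a/(27b²). A minimal sketch, using illustrative a and b values for carbon dioxide (assumed here, not taken from this article):

```python
# Van der Waals critical constants: T_c = 8a/(27 R b), P_c = a/(27 b^2).
# The a, b parameters below are illustrative literature values for CO2.
R = 8.314      # gas constant, J/(mol K)
a = 0.364      # J m^3 / mol^2
b = 4.267e-5   # m^3 / mol

T_c = 8 * a / (27 * R * b)
P_c = a / (27 * b ** 2)
print(f"T_c = {T_c:.0f} K, P_c = {P_c/1e6:.2f} MPa")
```

The computed critical temperature comes out near 304 K, close to the measured critical point of CO2, which illustrates how the theory predicts critical behavior from data taken far from the critical point itself.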

By 1908, James Dewar and Heike Kamerlingh Onnes were able to liquefy hydrogen and the newly discovered helium, respectively. In 1900, Paul Drude proposed the first theoretical model for a classical electron moving through a metallic solid. Drude's model described the properties of metals in terms of a gas of free electrons, and was the first microscopic model to explain empirical observations such as the Wiedemann–Franz law. However, despite its success, the free electron model had one notable problem: it was unable to explain the electronic contribution to the specific heat and the magnetic properties of metals, or the temperature dependence of resistivity at low temperatures. In 1911, three years after helium was first liquefied, Onnes, working at the University of Leiden, discovered superconductivity in mercury when he observed the electrical resistivity of mercury vanish at temperatures below a certain value. The phenomenon completely surprised the best theoretical physicists of the time, and it remained unexplained for several decades.
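The Wiedemann–Franz law mentioned above states that the ratio κ/(σT) of thermal to electrical conductivity in a metal is a near-universal constant, the Lorenz number. A short sketch comparing the classical free-electron estimate, (3/2)(k_B/e)², with Sommerfeld's later quantum result, (π²/3)(k_B/e)²; the constants are standard values, not taken from this article:

```python
import math

# Wiedemann-Franz law: kappa / (sigma * T) = L (the Lorenz number).
# Classical free-electron estimate: L = (3/2) (k_B / e)^2
# Quantum (Sommerfeld) result:      L = (pi^2 / 3) (k_B / e)^2
k_B = 1.380649e-23    # Boltzmann constant, J/K
e = 1.602176634e-19   # elementary charge, C

L_classical = 1.5 * (k_B / e) ** 2
L_quantum = (math.pi ** 2 / 3) * (k_B / e) ** 2
print(f"classical: {L_classical:.3e}, quantum: {L_quantum:.3e} W Ohm / K^2")
```

The quantum value, about 2.44e-8 W Ω/K², matches experiment; the gap between the two estimates is one face of the specific-heat puzzle that the classical Drude model could not resolve.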