1.
Classical physics
–
Classical physics refers to theories of physics that predate modern, more complete, or more widely applicable theories. As such, the definition of a classical theory depends on context; classical physical concepts are often used when modern theories are unnecessarily complex for a particular situation. Classical theory has at least two meanings in physics. In the context of quantum mechanics, classical theory refers to theories of physics that do not use the quantisation paradigm. Likewise, classical field theories, such as general relativity and classical electromagnetism, are those that do not use quantum mechanics. In the context of general and special relativity, classical theories are those that obey Galilean relativity. Modern physics includes quantum theory and relativity, when applicable. A physical system can be described by classical physics when it satisfies conditions such that the laws of classical physics are approximately valid. In practice, objects ranging from those larger than atoms and molecules up to the macroscopic and astronomical realm can be described classically; beginning at the atomic level and lower, the laws of classical physics break down. Electromagnetic fields and forces can be described well by classical electrodynamics at sufficiently large length scales. Unlike quantum physics, classical physics is generally characterized by the principle of complete determinism, although deterministic interpretations of quantum mechanics do exist. Mathematically, classical physics equations are those in which Planck's constant does not appear. According to the correspondence principle and Ehrenfest's theorem, as a system becomes larger or more massive the classical dynamics tends to emerge, with some exceptions, such as superfluidity. This is why we can usually ignore quantum mechanics when dealing with everyday objects. However, one of the most vigorous ongoing fields of research in physics is classical-quantum correspondence.
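As a rough numerical illustration of why quantum mechanics can usually be ignored for everyday objects (an aside not in the original article; the masses and speeds below are arbitrary examples), one can compare de Broglie wavelengths λ = h/(mv): quantum effects matter only when λ is comparable to the system's size.

```python
H = 6.62607015e-34  # Planck's constant in J*s (exact in the SI)

def de_broglie_wavelength(mass_kg, speed_m_s):
    """de Broglie wavelength: lambda = h / (m v)."""
    return H / (mass_kg * speed_m_s)

# Electron (9.109e-31 kg) at 1e6 m/s: ~7.3e-10 m, on the order of
# atomic sizes, so quantum mechanics is indispensable.
print(de_broglie_wavelength(9.109e-31, 1e6))

# A 0.1 kg ball at 10 m/s: ~6.6e-34 m, absurdly small compared with
# the ball itself, so classical physics is an excellent approximation.
print(de_broglie_wavelength(0.1, 10.0))
```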
This field of research is concerned with the discovery of how the laws of quantum physics give rise to classical physics in the limit of large scales. Computer modeling is essential for quantum and relativistic physics. Classical physics is considered the limit of quantum mechanics for a large number of particles. On the other hand, classical mechanics is derived from relativistic mechanics: for example, in many formulations of special relativity, a correction factor (v/c)² appears, where v is the velocity of the object and c is the speed of light. For velocities much smaller than that of light, one can neglect these correction terms, and the formulas then reduce to the standard definitions of Newtonian kinetic energy and momentum. This is as it should be, for special relativity must agree with Newtonian mechanics at low velocities. Computer modeling has to be as real as possible. Classical physics would introduce an error, as in the superfluidity case; in order to produce reliable models of the world, we cannot use classical physics alone. It is true that quantum theories consume time and computer resources, and the equations of classical physics could be resorted to in order to provide a quick solution.
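The low-velocity limit described here can be checked numerically. The sketch below (an illustrative aside, not part of the original article) compares the relativistic kinetic energy (γ − 1)mc², with γ = 1/√(1 − v²/c²), against the Newtonian ½mv²:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def relativistic_ke(m, v):
    """Relativistic kinetic energy: (gamma - 1) * m * c^2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * m * C ** 2

def newtonian_ke(m, v):
    """Classical kinetic energy: 0.5 * m * v^2."""
    return 0.5 * m * v ** 2

# At 3 km/s (rocket-like speeds) the two agree to about one part in 1e10.
print(relativistic_ke(1.0, 3_000.0) / newtonian_ke(1.0, 3_000.0))

# At half the speed of light the relativistic value is ~24% larger.
print(relativistic_ke(1.0, 0.5 * C) / newtonian_ke(1.0, 0.5 * C))
```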
Classical physics
–
A computer model would use quantum theory and relativistic theory only
Classical physics
–
The four major domains of modern physics
2.
Speed of light
–
The speed of light in vacuum, commonly denoted c, is a universal physical constant important in many areas of physics. Its exact value is 299792458 metres per second; the value is exact because the unit of length, the metre, is defined from this constant. According to special relativity, c is the maximum speed at which all matter and hence information in the universe can travel. It is the speed at which all massless particles and changes of the associated fields travel in vacuum. Such particles and waves travel at c regardless of the motion of the source or the reference frame of the observer. In the theory of relativity, c interrelates space and time. The speed at which light propagates through transparent materials, such as glass or air, is less than c; similarly, the speed of radio waves in wire cables is slower than c. The ratio between c and the speed v at which light travels in a material is called the refractive index n of the material. In communicating with distant space probes, it can take minutes to hours for a message to get from Earth to the spacecraft. The light seen from stars left them many years ago, allowing the study of the history of the universe by looking at distant objects. The finite speed of light also limits the theoretical maximum speed of computers. The speed of light can be used in time-of-flight measurements to measure large distances to high precision. Ole Rømer first demonstrated in 1676 that light travels at a finite speed by studying the apparent motion of Jupiter's moon Io. In 1865, James Clerk Maxwell proposed that light was an electromagnetic wave. In 1905, Albert Einstein postulated that the speed of light c with respect to any inertial frame is a constant and is independent of the motion of the light source. He explored the consequences of that postulate by deriving the theory of relativity, and in doing so showed that the parameter c had relevance outside of the context of light and electromagnetism.
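Two of the relationships just mentioned are easy to compute directly. This sketch (an illustrative aside, not part of the original article; the astronomical unit value is the standard IAU figure) derives the light travel time over a distance and the speed of light in a medium of refractive index n:

```python
C = 299_792_458.0  # m/s, exact: the metre is defined from this value

def light_travel_time(distance_m):
    """Time for light in vacuum to cross the given distance."""
    return distance_m / C

def speed_in_medium(n):
    """Speed of light in a material with refractive index n: v = c / n."""
    return C / n

AU = 1.495978707e11  # mean Earth-Sun distance in metres (IAU value)
t = light_travel_time(AU)
print(f"{t:.1f} s, i.e. about {t / 60:.1f} minutes")  # ~499 s, ~8.3 min

print(speed_in_medium(1.5))  # light in typical glass: ~2.0e8 m/s
```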
After centuries of increasingly precise measurements, in 1975 the speed of light was known to be 299792458 m/s with a measurement uncertainty of 4 parts per billion. In 1983, the metre was redefined in the International System of Units as the distance travelled by light in vacuum in 1/299792458 of a second; as a result, the numerical value of c in metres per second is now fixed exactly by the definition of the metre. The speed of light in vacuum is usually denoted by a lowercase c. Historically, the symbol V was used as an alternative symbol for the speed of light, introduced by James Clerk Maxwell in 1865. In 1856, Wilhelm Eduard Weber and Rudolf Kohlrausch had used c for a different constant, later shown to equal √2 times the speed of light in vacuum; in 1894, Paul Drude redefined c with its modern meaning. Einstein used V in his original German-language papers on special relativity in 1905, but in 1907 he switched to c. Sometimes c is used for the speed of waves in any material medium, and c0 for the speed of light in vacuum. This article uses c exclusively for the speed of light in vacuum.
Speed of light
–
One of the last and most accurate time-of-flight measurements, Michelson, Pease and Pearson's 1930–35 experiment, used a rotating mirror and a one-mile (1.6 km) long vacuum chamber which the light beam traversed 10 times. It achieved an accuracy of ±11 km/s.
Speed of light
–
Sunlight takes about 8 minutes 17 seconds to travel the average distance from the surface of the
Sun to the
Earth.
Speed of light
–
Diagram of the
Fizeau apparatus
Speed of light
–
Rømer's observations of the occultations of Io from Earth
3.
Physics
–
Physics is the natural science that involves the study of matter and its motion and behavior through space and time, along with related concepts such as energy and force. As one of the most fundamental scientific disciplines, the main goal of physics is to understand how the universe behaves. Physics is one of the oldest academic disciplines, perhaps the oldest through its inclusion of astronomy. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the mechanisms of other sciences while opening new avenues of research in areas such as mathematics. Physics also makes significant contributions through advances in new technologies that arise from theoretical breakthroughs; the United Nations named 2005 the World Year of Physics. Astronomy is the oldest of the natural sciences: the stars and planets, believed to represent gods, were often a target of worship, although the explanations for these phenomena were often unscientific and lacking in evidence. According to Asger Aaboe, the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy. The most notable innovations were in the field of optics and vision, which came from the works of many scientists like Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi and Avicenna. The most notable work was The Book of Optics, written by Ibn al-Haytham, in which he was not only the first to disprove the ancient Greek idea about vision, but also came up with a new theory. In the book, he was also the first to study the phenomenon of the pinhole camera. Many later European scholars and fellow polymaths, from Robert Grosseteste and Leonardo da Vinci to René Descartes, Johannes Kepler and Isaac Newton, were in his debt.
Indeed, the influence of Ibn al-Haytham's Optics ranks alongside that of Newton's work of the same title; the translation of The Book of Optics had a huge impact on Europe. From it, later European scholars were able to build the same devices that Ibn al-Haytham had built, and from this came such important things as eyeglasses, magnifying glasses and telescopes. Physics became a separate science when early modern Europeans used experimental and quantitative methods to discover what are now considered to be the laws of physics. Newton also developed calculus, the mathematical study of change, which provided new methods for solving physical problems. The discovery of new laws in thermodynamics, chemistry, and electromagnetics resulted from greater research efforts during the Industrial Revolution as energy needs increased. However, inaccuracies in classical mechanics for very small objects and very high velocities led to the development of modern physics in the 20th century. Modern physics began in the early 20th century with the work of Max Planck in quantum theory and Albert Einstein's theory of relativity; both of these theories came about due to inaccuracies in classical mechanics in certain situations. Quantum mechanics would come to be pioneered by Werner Heisenberg, Erwin Schrödinger and others; from this early work, and work in related fields, the Standard Model of particle physics was derived. Areas of mathematics in general are important to this field, such as the study of probabilities; in many ways, physics stems from ancient Greek philosophy.
Physics
–
Further information:
Outline of physics
Physics
–
Ancient
Egyptian astronomy is evident in monuments like the
ceiling of Senemut's tomb from the
Eighteenth Dynasty of Egypt.
Physics
–
Sir Isaac Newton (1643–1727), whose
laws of motion and
universal gravitation were major milestones in classical physics
Physics
–
Albert Einstein (1879–1955), whose work on the
photoelectric effect and the
theory of relativity led to a revolution in 20th century physics
4.
Theory of relativity
–
The theory of relativity usually encompasses two interrelated theories by Albert Einstein: special relativity and general relativity. Special relativity applies to elementary particles and their interactions, describing all their physical phenomena except gravity. General relativity explains the law of gravitation and its relation to other forces of nature, and it applies to the cosmological and astrophysical realm, including astronomy. The theory transformed theoretical physics and astronomy during the 20th century; it introduced concepts including spacetime as a unified entity of space and time, relativity of simultaneity, kinematic and gravitational time dilation, and length contraction. In the field of physics, relativity improved the science of elementary particles and their fundamental interactions; with relativity, cosmology and astrophysics predicted extraordinary astronomical phenomena such as neutron stars, black holes, and gravitational waves. Einstein introduced special relativity in 1905; Max Planck, Hermann Minkowski and others did subsequent work. Einstein developed general relativity between 1907 and 1915, with contributions by many others after 1915; the final form of general relativity was published in 1916. The term "theory of relativity" was based on the expression "relative theory" used in 1906 by Planck, who emphasized how the theory uses the principle of relativity. In the discussion section of the same paper, Alfred Bucherer used for the first time the expression "theory of relativity". By the 1920s, the physics community understood and accepted special relativity. It rapidly became a significant and necessary tool for theorists and experimentalists in the new fields of atomic and nuclear physics. By comparison, general relativity did not appear to be as useful; it seemed to offer little potential for experimental test, as most of its assertions were on an astronomical scale. The mathematics of general relativity seemed difficult and fully understandable only by a small number of people.
Around 1960, general relativity became central to physics and astronomy; new mathematical techniques applied to general relativity streamlined calculations and made its concepts more easily visualized. Special relativity is a theory of the structure of spacetime; it was introduced in Einstein's 1905 paper "On the Electrodynamics of Moving Bodies". Special relativity is based on two postulates which are contradictory in classical mechanics: (1) the laws of physics are the same for all observers in uniform motion relative to one another; (2) the speed of light in a vacuum is the same for all observers. The resultant theory copes with experiment better than classical mechanics; for instance, postulate 2 explains the results of the Michelson–Morley experiment. Moreover, the theory has many surprising and counterintuitive consequences. Some of these are: relativity of simultaneity (two events that are simultaneous for one observer may not be simultaneous for another observer in relative motion) and time dilation (moving clocks are measured to tick more slowly than an observer's stationary clock).
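The time dilation consequence listed above can be made quantitative. A minimal sketch (illustrative, not from the original article): a clock moving at speed v is measured to run slower by the Lorentz factor γ = 1/√(1 − v²/c²).

```python
import math

C = 299_792_458.0  # speed of light in m/s

def dilated_interval(proper_time_s, v):
    """Interval a stationary observer measures for a clock moving at
    speed v: delta_t = gamma * delta_t_proper."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * proper_time_s

# At 0.8c, one second of the moving clock's proper time is measured
# as 5/3 of a second (gamma = 1/0.6).
print(dilated_interval(1.0, 0.8 * C))
```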
Theory of relativity
–
USSR stamp dedicated to Albert Einstein
Theory of relativity
–
Key concepts
5.
Atom
–
An atom is the smallest constituent unit of ordinary matter that has the properties of a chemical element. Every solid, liquid, gas, and plasma is composed of neutral or ionized atoms. Atoms are very small; typical sizes are around 100 picometers. Atoms are small enough that attempting to predict their behavior using classical physics, as if they were billiard balls, gives noticeably incorrect results; through the development of physics, atomic models have incorporated quantum principles to better explain and predict this behavior. Every atom is composed of a nucleus and one or more electrons bound to the nucleus. The nucleus is made of one or more protons and typically a number of neutrons. Protons and neutrons are called nucleons; more than 99.94% of an atom's mass is in the nucleus. The protons have a positive electric charge, and the electrons have a negative electric charge. If the number of protons and electrons are equal, the atom is electrically neutral; if an atom has more or fewer electrons than protons, then it has an overall negative or positive charge, respectively, and it is called an ion. The electrons of an atom are attracted to the protons in the nucleus by the electromagnetic force. The number of protons in the nucleus defines to what chemical element the atom belongs, while the number of neutrons defines the isotope of the element. The number of electrons influences the magnetic properties of an atom. Atoms can attach to one or more other atoms by chemical bonds to form compounds such as molecules. The ability of atoms to associate and dissociate is responsible for most of the changes observed in nature. The idea that matter is made up of discrete units is a very old idea, appearing in many ancient cultures such as Greece. The word "atom" was coined by ancient Greek philosophers; however, these ideas were founded in philosophical and theological reasoning rather than evidence and experimentation. As a result, their views on what atoms look like and how they behave were incorrect.
They also could not convince everybody, so atomism was but one of a number of competing theories on the nature of matter. It was not until the 19th century that the idea was embraced and refined by scientists; in the early 1800s, John Dalton used the concept of atoms to explain why elements always react in ratios of small whole numbers.
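The proton–electron bookkeeping described above (neutral atom versus ion) amounts to simple arithmetic. A toy sketch (an illustrative aside; the proton counts are the standard atomic numbers of sodium and chlorine):

```python
def net_charge(protons, electrons):
    """Net charge in units of the elementary charge e: positive for a
    cation (fewer electrons than protons), negative for an anion."""
    return protons - electrons

print(net_charge(11, 11))  # neutral sodium atom: 0
print(net_charge(11, 10))  # Na+ cation: 1
print(net_charge(17, 18))  # Cl- anion: -1
```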
Atom
–
Scanning tunneling microscope image showing the individual atoms making up this
gold (
100) surface. The surface atoms deviate from the bulk
crystal structure and arrange in columns several atoms wide with pits between them (See
surface reconstruction).
Atom
–
Helium atom
6.
Max Planck
–
Max Karl Ernst Ludwig Planck, FRS, was a German theoretical physicist whose discovery of energy quanta won him the Nobel Prize in Physics in 1918. However, his name is known on a broader academic basis through the renaming in 1948 of the German scientific institution, the Kaiser Wilhelm Society, as the Max Planck Society (MPS). The MPS now includes 83 institutions representing a range of scientific directions. Planck came from a traditional, intellectual family: his paternal great-grandfather and grandfather were both theology professors in Göttingen, and his father was a law professor in Kiel and Munich. Planck was born in Kiel, Holstein, to Johann Julius Wilhelm Planck and his second wife. He was baptised with the name Karl Ernst Ludwig Marx Planck; of his given names, Marx was indicated as the primary name. However, by the age of ten he signed with the name Max. He was the 6th child in the family, though two of his siblings were from his father's first marriage. Among his earliest memories was the marching of Prussian and Austrian troops into Kiel during the Second Schleswig War in 1864. It was from his teacher Hermann Müller that Planck first learned the principle of conservation of energy; this is how Planck first came in contact with the field of physics. Planck graduated early, at age 17. Planck was gifted when it came to music: he took singing lessons and played piano, organ and cello, and composed songs and operas. However, instead of music he chose to study physics. The Munich physics professor Philipp von Jolly advised Planck against going into physics, saying, "in this field, almost everything is already discovered, and all that remains is to fill a few holes." Planck replied that he did not wish to discover new things, but only to understand the fundamentals of the field. Under Jolly's supervision, Planck performed the only experiments of his scientific career, studying the diffusion of hydrogen through heated platinum.
In 1877 he went to the Friedrich Wilhelms University in Berlin for a year of study with physicists Hermann von Helmholtz and Gustav Kirchhoff and mathematician Karl Weierstrass. He wrote that Helmholtz was never quite prepared, spoke slowly, and miscalculated endlessly; nevertheless, he soon became close friends with Helmholtz. While there he undertook a program of mostly self-study of Clausius's writings. In October 1878 Planck passed his qualifying exams, and in February 1879 he defended his dissertation, Über den zweiten Hauptsatz der mechanischen Wärmetheorie (On the second law of the mechanical theory of heat). He briefly taught mathematics and physics at his former school in Munich. In June 1880, he presented his habilitation thesis, Gleichgewichtszustände isotroper Körper in verschiedenen Temperaturen (Equilibrium states of isotropic bodies at different temperatures). With the completion of his thesis, Planck became an unpaid private lecturer in Munich.
Max Planck
–
Planck in 1933
Max Planck
–
Max Planck's signature at ten years of age.
Max Planck
–
Plaque at the
Humboldt University of Berlin: "Max Planck, discoverer of the elementary quantum of action h, taught in this building from 1889 to 1928."
Max Planck
–
Planck in 1918, the year he received the
Nobel Prize in Physics for his work on
quantum theory
7.
Pascual Jordan
–
Ernst Pascual Jordan was a theoretical and mathematical physicist who made significant contributions to quantum mechanics and quantum field theory. He contributed much to the mathematical form of matrix mechanics, while the Jordan algebra he devised is employed in studying the mathematical and conceptual foundations of quantum theory. An ancestor of Pascual Jordan named Pascual Jorda was a Spanish nobleman and cavalry officer who served with the British; Jorda eventually settled in Hanover, which in those days was a possession of the British royal family. The family name was changed to Jordan. A family tradition dictated that the son in each generation be named Pascual. Jordan enrolled in the Hanover Technical University in 1921, where he studied an eclectic mix of subjects including zoology and mathematics. As was typical for a German university student of the time, he shifted his studies to another university before obtaining a degree. Göttingen University, his destination in 1923, was then at the zenith of its prowess and fame in mathematics. At Göttingen Jordan became an assistant first to mathematician Richard Courant. Together with Max Born and Werner Heisenberg, Jordan was co-author of an important series of papers on quantum mechanics. He went on to early quantum field theory before largely switching his focus to cosmology before World War II. Jordan devised a type of non-associative algebras, now named Jordan algebras in his honor, in an attempt to create an algebra of observables for quantum mechanics; today, von Neumann algebras are also employed for this purpose. In 1966, Jordan published the 182-page work Die Expansion der Erde (The Expansion of the Earth), in which he argued that, the continents having to adapt to the ever flatter surface of the growing ball, the mountain ranges on the Earth's surface would have come into being as constricted folds. Despite the energy Jordan invested in the expanding Earth theory, his work was never taken seriously by either physicists or geologists.
In 1933, Jordan joined the Nazi party, like Philipp Lenard and Johannes Stark; at the same time, however, he remained a defender of Einstein and other Jewish scientists. Jordan enlisted in the Luftwaffe in 1939 and worked as a weather analyst at the Peenemünde rocket center. During the war he attempted to interest the Nazi party in various schemes for advanced weapons. His suggestions were ignored because he was considered unreliable, probably because of his past associations with Jews. Had Jordan not joined the Nazi party, it is conceivable that he could have won a Nobel Prize in Physics for his work with Max Born; Born would go on to win the 1954 Physics Prize with Walther Bothe. Jordan went against Pauli's advice, and reentered politics after the period of denazification came to an end under the pressures of the Cold War.
Pascual Jordan
–
Pascual Jordan in the 1920s
8.
Wolfgang Pauli
–
Wolfgang Ernst Pauli was an Austrian-born Swiss and American theoretical physicist and one of the pioneers of quantum physics. He is best known for his discovery of the exclusion principle, which won him the 1945 Nobel Prize in Physics; the discovery involved spin theory, which is the basis of a theory of the structure of matter. Pauli was born in Vienna to the chemist Wolfgang Joseph Pauli and his wife Bertha Camilla Schütz; his sister was Hertha Pauli, the writer and actress. Pauli's middle name was given in honor of his godfather, the physicist Ernst Mach. Pauli's paternal grandparents were from prominent Jewish families of Prague; his great-grandfather was the Jewish publisher Wolf Pascheles. Pauli's father converted from Judaism to Roman Catholicism shortly before his marriage in 1899. Pauli's mother, Bertha Schütz, was raised in her own mother's Roman Catholic religion; her father was the Jewish writer Friedrich Schütz. Pauli was raised as a Roman Catholic, although he eventually left the Church; he is considered to have been a deist and a mystic. Pauli attended the Döblinger-Gymnasium in Vienna, graduating with distinction in 1918. Only two months after graduation, he published his first paper, on Albert Einstein's theory of general relativity. He attended the Ludwig-Maximilians University in Munich, working under Arnold Sommerfeld, who asked Pauli to review the theory of relativity for the Encyklopädie der mathematischen Wissenschaften. Two months after receiving his doctorate, Pauli completed the article; it was praised by Einstein and, published as a monograph, it remains a standard reference on the subject to this day. From 1923 to 1928, he was a lecturer at the University of Hamburg. During this period, Pauli was instrumental in the development of the modern theory of quantum mechanics. In particular, he formulated the exclusion principle and the theory of nonrelativistic spin.
In 1928, he was appointed Professor of Theoretical Physics at ETH Zurich in Switzerland, where he made significant scientific progress, and he held visiting professorships at the University of Michigan in 1931 and the Institute for Advanced Study in Princeton in 1935. He was awarded the Lorentz Medal in 1931. At the end of 1930, shortly after his postulation of the neutrino and immediately following his divorce and the suicide of his mother, Pauli experienced a personal crisis. He consulted the psychiatrist and psychotherapist Carl Jung. Jung immediately began interpreting Pauli's deeply archetypal dreams, and Pauli became one of the depth psychologist's best students. He soon began to criticize the epistemology of Jung's theory scientifically; a great many of these discussions are documented in the Pauli/Jung letters, today published as Atom and Archetype. Jung's elaborate analysis of more than 400 of Pauli's dreams is documented in Psychology and Alchemy. The German annexation of Austria in 1938 made him a German citizen, which became a problem for him in 1939 after the outbreak of World War II. In 1940, he tried in vain to obtain Swiss citizenship, and he moved to the United States that year, where he was employed as a professor of theoretical physics at the Institute for Advanced Study. In 1946, after the war, he became a citizen of the United States and subsequently returned to Zurich. In 1949, he was granted Swiss citizenship, and in 1958 Pauli was awarded the Max Planck Medal.
Wolfgang Pauli
–
Wolfgang Pauli
Wolfgang Pauli
–
Wolfgang Pauli lecturing
Wolfgang Pauli
–
Niels Bohr,
Werner Heisenberg, and Wolfgang Pauli, ca. 1935
Wolfgang Pauli
–
Wolfgang Pauli, ca. 1945
9.
Ernest Rutherford
–
Ernest Rutherford, 1st Baron Rutherford of Nelson, OM, FRS, was a New Zealand-born British physicist who came to be known as the father of nuclear physics. Encyclopædia Britannica considers him to be the greatest experimentalist since Michael Faraday. His early work on radioactivity was done at McGill University in Canada, and Rutherford moved in 1907 to the Victoria University of Manchester in the UK. Rutherford performed his most famous work after he became a Nobel laureate: he conducted research that led to the first splitting of the atom in 1917, in a reaction between nitrogen and alpha particles, in which he also discovered the proton. Rutherford became Director of the Cavendish Laboratory at the University of Cambridge in 1919. After his death in 1937, he was honoured by being interred with the greatest scientists of the United Kingdom, near Sir Isaac Newton's tomb in Westminster Abbey. The chemical element rutherfordium was named after him in 1997. Ernest Rutherford was the son of James Rutherford, a farmer, and his wife Martha Thompson, originally from Hornchurch, Essex, England. James had emigrated to New Zealand from Perth, Scotland, to raise a little flax. Ernest was born at Brightwater, near Nelson, New Zealand; his first name was mistakenly spelled Earnest when his birth was registered. Rutherford's mother Martha Thompson was a schoolteacher. He studied at Havelock School and then Nelson College, and won a scholarship to study at Canterbury College, University of New Zealand. In 1898 Thomson recommended Rutherford for a position at McGill University in Montreal, Canada; he was to replace Hugh Longbourne Callendar, who held the chair of Macdonald Professor of Physics and was coming to Cambridge. In 1901 he gained a DSc from the University of New Zealand. In 1907 Rutherford returned to Britain to take the chair of physics at the Victoria University of Manchester. During World War I, he worked on a top secret project to solve the practical problems of submarine detection by sonar.
In 1916 he was awarded the Hector Memorial Medal, and in 1919 he returned to the Cavendish, succeeding J. J. Thomson as the Cavendish Professor and Director. Between 1925 and 1930 he served as President of the Royal Society. In 1933, Rutherford was one of the two inaugural recipients of the T. K. Sidey Medal, set up by the Royal Society of New Zealand as an award for outstanding scientific research. For some time before his death, Rutherford had a hernia, which he had neglected to have fixed. Despite an emergency operation in London, he died four days afterwards of what physicians termed intestinal paralysis. After cremation at Golders Green Crematorium, he was given the high honour of burial in Westminster Abbey, near Isaac Newton and other illustrious British scientists. At Cambridge, Rutherford started to work with J. J. Thomson on the effects of X-rays on gases. Hearing of Becquerel's experience with uranium, Rutherford started to explore its radioactivity. Continuing his research in Canada, he coined the terms alpha ray and beta ray in 1899 to describe the two distinct types of radiation. He then discovered that thorium gave off a gas which produced an emanation which was itself radioactive, and he found that a sample of this radioactive material of any size invariably took the same amount of time for half the sample to decay – its half-life.
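The half-life concept Rutherford discovered has a simple exponential form. A minimal sketch (illustrative, not from the original article): the fraction of a sample remaining after time t is (1/2)^(t / t_half).

```python
def remaining_fraction(elapsed, half_life):
    """Fraction of a radioactive sample left after `elapsed` time:
    N / N0 = (1/2) ** (elapsed / half_life). Any time unit works,
    as long as both arguments use the same one."""
    return 0.5 ** (elapsed / half_life)

print(remaining_fraction(1.0, 1.0))  # one half-life -> 0.5
print(remaining_fraction(3.0, 1.0))  # three half-lives -> 0.125
```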
Ernest Rutherford
–
The Right Honourable The Lord Rutherford of Nelson
OM FRS
Ernest Rutherford
–
Signature
Ernest Rutherford
–
Rutherford aged 21
Ernest Rutherford
–
A plaque commemorating Rutherford's presence at the
Victoria University, Manchester
10.
Satyendra Nath Bose
–
Satyendra Nath Bose, FRS, was an Indian physicist from Bengal specialising in theoretical physics. He is best known for his work on quantum mechanics in the early 1920s, providing the foundation for Bose–Einstein statistics. A Fellow of the Royal Society, he was awarded India's second highest civilian award, the Padma Vibhushan. The class of particles that obey Bose–Einstein statistics, bosons, was named after Bose by Paul Dirac. A self-taught scholar and a polymath, he had a wide range of interests in varied fields including physics, mathematics, chemistry, biology, mineralogy, philosophy, arts and literature. He served on many research and development committees in sovereign India. Bose was born in Calcutta, the eldest of seven children; he was the only son, with six sisters after him. His ancestral home was in the village of Bara Jagulia, in the district of Nadia. His schooling began at the age of five, near his home. When his family moved to Goabagan, he was admitted to the New Indian School; in the final year of school, he was admitted to the Hindu School. He passed his entrance examination in 1909 and stood fifth in the order of merit. Meghnad Saha, from Dacca, joined the college two years later, and Prasanta Chandra Mahalanobis and Sisir Kumar Mitra were a few years senior to Bose. Bose chose mixed mathematics for his BSc and passed the examinations standing first in 1913, and again stood first in the MSc mixed mathematics exam in 1915; it is said that his marks in the MSc examination created a new record in the annals of the University of Calcutta. After completing his MSc, Bose joined the University of Calcutta as a research scholar in 1916 and started his studies in the theory of relativity. It was an exciting era in the history of scientific progress: quantum theory had just appeared on the horizon and important results had started pouring in. His father, Surendranath Bose, worked in the Engineering Department of the East Indian Railway Company.
In 1914, at age 20, Satyendra Nath Bose married Ushabati Ghosh; they had nine children, two of whom died in early childhood. When he died in 1974, he left behind his wife and children. A polyglot, Bose was well versed in several languages such as Bengali, English, French, German and Sanskrit, as well as the poetry of Lord Tennyson, Rabindranath Tagore and Kalidasa. He could play the esraj, an instrument similar to a violin. He was actively involved in running night schools that came to be known as the Working Men's Institute, and he came in contact with teachers such as Jagadish Chandra Bose and Prafulla Chandra Ray, who provided inspiration to aim high in life.
Satyendra Nath Bose – Satyendra Nath Bose in 1925
Satyendra Nath Bose – Large Hadron Collider tunnel at CERN
Satyendra Nath Bose – Bose's letter to Einstein
11.
Matter
–
All the everyday objects that we can bump into, touch or squeeze are ultimately composed of atoms. This ordinary atomic matter is in turn made up of interacting subatomic particles, usually a nucleus of protons and neutrons surrounded by a cloud of orbiting electrons. Typically, science considers these composite particles matter because they have both rest mass and volume. By contrast, massless particles, such as photons, are not considered matter. However, not all particles with rest mass have a classical volume, since fundamental particles such as quarks and leptons are considered point particles with no effective size or volume. Nevertheless, quarks and leptons together make up ordinary matter. Matter exists in various states: the classical solid, liquid and gas, as well as the more exotic plasma, Bose–Einstein condensates, fermionic condensates, and quark–gluon plasma. For much of the history of the natural sciences people have contemplated the nature of matter. The idea that matter was built of discrete building blocks, the so-called particulate theory of matter, was first put forward by the Greek philosophers Leucippus and Democritus. Matter should not be confused with mass, as the two are not the same in modern physics. Matter is itself a physical substance of which systems may be composed, while mass is not a substance but rather a quantitative property. While there are different views on what should be considered matter, the mass of a substance or system is the same irrespective of any such definition of matter. Another difference is that matter has an opposite called antimatter; antimatter has the same mass property as its normal matter counterpart. Different fields of science use the term matter in different, and sometimes incompatible, ways. Some of these ways are based on loose historical meanings, from a time when there was no reason to distinguish mass from simply a quantity of matter. As such, there is no universally agreed scientific meaning of the word matter. Scientifically, the term mass is well-defined, but matter can be defined in several ways.
Sometimes, in the field of physics, matter is simply equated with particles that exhibit rest mass, such as quarks and leptons. However, in both physics and chemistry, matter exhibits both wave-like and particle-like properties, the so-called wave–particle duality. A definition of matter based on its physical and chemical structure is: matter is made up of atoms. Such atomic matter is sometimes termed ordinary matter. As an example, deoxyribonucleic acid molecules are matter under this definition because they are made of atoms. This definition can extend to include charged atoms and molecules, so as to include plasmas and electrolytes, which are not obviously included in the atoms definition. Alternatively, one can adopt the protons, neutrons, and electrons definition. At a microscopic level, the constituent particles of matter such as protons, neutrons, and electrons obey the laws of quantum mechanics and exhibit wave–particle duality.
12.
Work (physics)
–
In physics, a force is said to do work if, when acting, there is a displacement of the point of application in the direction of the force. For example, when a ball is held above the ground and then dropped, the work done by the gravitational force on the ball as it falls is equal to the weight of the ball multiplied by the distance to the ground. The SI unit of work is the joule (J), which is defined as the work expended by a force of one newton through a distance of one metre. The dimensionally equivalent newton-metre (N⋅m) is sometimes used as the unit for work, but this can be confused with the newton-metre as the unit of torque. Usage of N⋅m is discouraged by the SI authority, since it can lead to confusion as to whether the quantity expressed in newton-metres is a torque measurement or a measurement of energy. Non-SI units of work include the erg, the foot-pound, the foot-poundal and the litre-atmosphere. Because work has the same physical dimension as heat, measurement units typically reserved for heat or energy content, such as the therm and the BTU, are occasionally used as well. The work done by a constant force of magnitude F on a point that moves a distance s in a straight line in the direction of the force is the product W = F s. For example, if a force of 10 newtons acts on a point that travels 2 metres, the work done is W = (10 N)(2 m) = 20 J. This is approximately the work done lifting a 1 kg weight from ground level to over a person's head against the force of gravity. Notice that the work is doubled either by lifting twice the weight the same distance or by lifting the same weight twice the distance. Work is closely related to energy. The work-energy principle states that an increase in the kinetic energy of a rigid body is caused by an equal amount of positive work done on the body by the resultant force acting on that body. Conversely, a decrease in kinetic energy is caused by an equal amount of negative work done by the resultant force. From Newton's second law, it can be shown that work on a free, rigid body is equal to the change in the kinetic energy associated with the velocity and rotation of that body. The work of forces generated by a potential function is known as potential energy.
These formulas demonstrate that work is the energy associated with the action of a force, so work subsequently possesses the physical dimensions, and units, of energy. The work/energy principles discussed here are identical to electric work/energy principles. Constraint forces determine the movement of components in a system, constraining the object within a boundary. Constraint forces ensure the velocity in the direction of the constraint is zero, so that they do no work on the system; this only applies for a single-particle system. For example, in an Atwood machine, the rope does work on each body. There are, however, cases where this is not true.
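The relations above, W = F·s and the work-energy principle, can be sketched in a short program. This is a minimal illustration, not part of the original article; the function names are made up, and the 10 N and 2 m figures are taken from the example in the text.

```python
def work(force_newtons, distance_metres):
    """W = F * s for a constant force acting along the displacement."""
    return force_newtons * distance_metres

def kinetic_energy(mass_kg, speed_m_per_s):
    """Translational kinetic energy, KE = (1/2) m v^2."""
    return 0.5 * mass_kg * speed_m_per_s ** 2

# The article's example: a 10 N force over 2 m does 20 J of work.
w = work(10.0, 2.0)  # 20.0 J

# Work-energy principle: 20 J of net work on a 1 kg body initially at
# rest leaves it with 20 J of kinetic energy; solve KE = w for the speed.
v_final = (2 * w / 1.0) ** 0.5  # roughly 6.3 m/s
```

Doubling either the force or the distance in `work` doubles W, mirroring the "twice the weight or twice the distance" remark above.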
Work (physics) – A baseball pitcher does positive work on the ball by applying a force to it over the distance it moves while in his grip.
Work (physics) – A force of constant magnitude and perpendicular to the lever arm
Work (physics) – Gravity F = mg does work W = mgh along any descending path
Work (physics) – Lotus type 119B gravity racer at Lotus 60th celebration.
13.
Randomness
–
Randomness is the lack of pattern or predictability in events. A random sequence of events, symbols or steps has no order and does not follow an intelligible pattern. Individual random events are by definition unpredictable, but in many cases the frequency of different outcomes over a large number of events is predictable. For example, when throwing two dice, the outcome of any particular roll is unpredictable, but a sum of 7 will occur twice as often as a sum of 4. In this view, randomness is a measure of uncertainty of an outcome, rather than haphazardness, and applies to concepts of chance and probability. The fields of mathematics, probability, and statistics use formal definitions of randomness. In statistics, a random variable is an assignment of a numerical value to each possible outcome of an event space. This association facilitates the identification and the calculation of probabilities of the events. Random variables can appear in random sequences. A random process is a sequence of random variables whose outcomes do not follow a deterministic pattern. These and other constructs are extremely useful in probability theory and the various applications of randomness. Randomness is most often used in statistics to signify well-defined statistical properties. Monte Carlo methods, which rely on random input, are important techniques in science, as, for instance, in computational science. By analogy, quasi-Monte Carlo methods use quasirandom number generators. Random selection is a method of selecting items from a population in which the probability of choosing a specific item is the proportion of those items in the population. For example, with a bowl containing just 10 red marbles and 90 blue marbles, a random selection mechanism would choose a red marble with probability 1/10. Note that a random selection mechanism that selected 10 marbles from this bowl would not necessarily result in 1 red and 9 blue.
In situations where a population consists of items that are distinguishable, a random selection mechanism requires equal probabilities for any item to be chosen. That is, if the selection process is such that each member of a population, say research subjects, has the same probability of being chosen, then we can say the selection process is random. In ancient history, the concepts of chance and randomness were intertwined with that of fate. Many ancient peoples threw dice to determine fate, and this later evolved into games of chance. Most ancient cultures used various methods of divination to attempt to circumvent randomness and fate. The Chinese of 3,000 years ago were perhaps the earliest people to formalize odds and chance. The Greek philosophers discussed randomness at length, but only in non-quantitative forms, and it was only in the 16th century that Italian mathematicians began to formalize the odds associated with various games of chance. The invention of calculus had a positive impact on the formal study of randomness, and the early part of the 20th century saw a rapid growth in the formal analysis of randomness. In the mid- to late-20th century, ideas of information theory introduced new dimensions to the field via the concept of algorithmic randomness.
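The dice claim above can be checked with a small Monte Carlo simulation, in the spirit of the Monte Carlo methods the section mentions. This is an illustrative sketch; the function name, seed and number of throws are arbitrary choices, not from the article.

```python
import random
from collections import Counter

def simulate_two_dice(n_throws, seed=0):
    """Tally the sums of two fair dice over n_throws rolls."""
    rng = random.Random(seed)  # seeded so the run is reproducible
    return Counter(rng.randint(1, 6) + rng.randint(1, 6)
                   for _ in range(n_throws))

counts = simulate_two_dice(100_000)
# A sum of 7 covers 6 of the 36 equally likely outcomes; a sum of 4
# covers 3 of 36, so the ratio should approach 2 for many throws.
ratio = counts[7] / counts[4]
```

Each individual roll is unpredictable, yet the ratio of frequencies converges: precisely the contrast between single random events and long-run behaviour drawn in the paragraph above.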
Randomness – Ancient fresco of dice players in Pompei.
Randomness – A pseudorandomly generated bitmap.
Randomness – The ball in a roulette can be used as a source of apparent randomness, because its behavior is very sensitive to the initial conditions.
14.
Mind
–
The mind is a set of cognitive faculties including consciousness, perception, thinking, judgement, and memory. It is usually defined as the faculty of an entity's thoughts and consciousness. It holds the power of imagination, recognition, and appreciation, and is responsible for processing feelings and emotions, resulting in attitudes and actions. One open question regarding the nature of the mind is the mind–body problem, which investigates the relation of the mind to the physical brain and nervous system. Pre-scientific viewpoints included dualism and idealism, which considered the mind somehow non-physical. Modern views center around physicalism and functionalism, which hold that the mind is identical with the brain or reducible to physical phenomena such as neuronal activity. Another question concerns which types of beings are capable of having minds. The concept of mind is understood in many different ways by many different cultural and religious traditions: some see mind as a property exclusive to humans, whereas others ascribe properties of mind to non-living entities or to animals. Important philosophers of mind include Plato, Descartes, Leibniz, Searle, Dennett, Fodor, Nagel, and Chalmers, while psychologists such as Freud and James, and computer scientists such as Turing, have also developed influential theories about the nature of the mind. The original meaning of Old English gemynd was the faculty of memory, not of thought in general; hence "call to mind", "come to mind", "keep in mind", "to have mind of", etc. The word retains this sense in Scotland. Old English had other words to express "mind", such as hyge. The meaning of memory is shared with Old Norse, which has munr. The word is originally from a PIE verbal root *men-, meaning "to think, remember", whence also Latin mens "mind", Sanskrit manas "mind" and Greek μένος "mind, courage, anger". The generalization of mind to all mental faculties (thought, volition, feeling and memory) developed gradually. Which attributes make up the mind is debated; some psychologists argue that only the higher intellectual functions constitute mind, particularly reason and memory.
In this view the emotions, such as love, hate, fear and joy, are more primitive or subjective in nature and should be seen as different from the mind as such. Others argue that rational and emotional states cannot be so separated, that they are of the same nature and origin. In popular usage, mind is frequently synonymous with thought: the conversation with ourselves that we carry on inside our heads. Thus we "make up our minds", "change our minds" or are "of two minds" about something. One of the key attributes of the mind in this sense is that it is a private sphere to which no one but the owner has access. No one else can know our mind; they can only interpret what we consciously or unconsciously communicate. Broadly speaking, mental faculties are the various functions of the mind.
Mind – A phrenological mapping of the brain. Phrenology was among the first attempts to correlate mental functions with specific parts of the brain.
Mind – Simplified diagram of Spaun, a 2.5-million-neuron computational model of the brain. (A) The corresponding physical regions and connections of the human brain. (B) The mental architecture of Spaun.
15.
Light
–
Light is electromagnetic radiation within a certain portion of the electromagnetic spectrum. The word usually refers to visible light, which is visible to the human eye and is responsible for the sense of sight. Visible light is usually defined as having wavelengths in the range of 400–700 nanometres, or 4.00 × 10−7 to 7.00 × 10−7 m. In terms of frequency, this corresponds to a band of roughly 430–750 terahertz. The main source of light on Earth is the Sun. Sunlight provides the energy that green plants use to create sugars, mostly in the form of starches, which release energy into the living things that digest them. This process of photosynthesis provides virtually all the energy used by living things. Historically, another important source of light for humans has been fire; with the development of electric lights and power systems, electric lighting has effectively replaced firelight. Some species of animals generate their own light, a process called bioluminescence: for example, fireflies use light to locate mates, and vampire squids use it to hide themselves from prey. Visible light, like all types of electromagnetic radiation, is experimentally found to always move at a constant speed, the speed of light, in a vacuum. In physics, the term light sometimes refers to electromagnetic radiation of any wavelength, whether visible or not. In this sense, gamma rays, X-rays, microwaves and radio waves are also light. Like all types of light, visible light is emitted and absorbed in tiny "packets" called photons and exhibits properties of both waves and particles. This property is referred to as the wave–particle duality. The study of light, known as optics, is an important research area in modern physics. Generally, EM radiation, or EMR, is classified by wavelength into radio waves, microwaves, infrared, the visible spectrum, ultraviolet, X-rays and gamma rays. The behavior of EMR depends on its wavelength: higher frequencies have shorter wavelengths, and lower frequencies have longer wavelengths. When EMR interacts with single atoms and molecules, its behavior depends on the amount of energy per quantum it carries.
There exist animals that are sensitive to various types of infrared. Infrared sensing in snakes depends on a kind of natural thermal imaging, in which tiny packets of cellular water are raised in temperature by the infrared radiation; EMR in this range causes molecular vibration and heating effects, which is how these animals detect it. Above the range of visible light, ultraviolet light becomes invisible to humans, mostly because it is absorbed by the cornea below 360 nanometres and the internal lens below 400 nanometres. Furthermore, the rods and cones located in the retina of the eye cannot detect the very short ultraviolet wavelengths and are in fact damaged by ultraviolet. Many animals with eyes that do not require lenses are able to detect ultraviolet by quantum photon-absorption mechanisms. Various sources define visible light as narrowly as 420 to 680 nm or as broadly as 380 to 800 nm.
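The wavelength and frequency ranges quoted above are related by c = λν, with c the speed of light in vacuum. A minimal sketch (the function name is made up for illustration) converts the article's 400–700 nm visible band into frequencies:

```python
C = 299_792_458  # speed of light in vacuum, in m/s (exact SI value)

def wavelength_to_frequency_thz(wavelength_nm):
    """Return the frequency in terahertz for a wavelength in nanometres,
    using nu = c / lambda."""
    wavelength_m = wavelength_nm * 1e-9  # nm -> m
    return C / wavelength_m / 1e12       # Hz -> THz

# The article's visible range: 400-700 nm maps to roughly 430-750 THz.
f_red = wavelength_to_frequency_thz(700)     # long-wavelength (red) end
f_violet = wavelength_to_frequency_thz(400)  # short-wavelength (violet) end
```

The inverse relation is visible directly: the longer 700 nm wavelength gives the lower frequency, matching the "higher frequencies have shorter wavelengths" statement above.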
Light – An example of refraction of light. The straw appears bent, because of refraction of light as it enters liquid from air.
Light – A triangular prism dispersing a beam of white light. The longer wavelengths (red) and the shorter wavelengths (blue) get separated.
Light – A cloud illuminated by sunlight
Light – A city illuminated by artificial lighting
16.
Particle
–
A particle is a minute fragment or quantity of matter. In the physical sciences, a particle is a small localized object to which can be ascribed several physical or chemical properties such as volume or mass. Particles can also be used to create scientific models of even larger objects, depending on their density. The term particle is rather general in meaning, and is refined as needed by various scientific fields. Something that is composed of particles may be referred to as being particulate; however, the noun particulate is most frequently used to refer to pollutants in the Earth's atmosphere. The concept of particles is particularly useful when modelling nature, as the full treatment of many phenomena can be complex, and particles can be used to make simplifying assumptions concerning the processes involved. Francis Sears and Mark Zemansky, in University Physics, give the example of calculating the landing location and speed of a baseball thrown in the air. The treatment of large numbers of particles is the realm of statistical physics. The term particle is usually applied differently to three classes of sizes. The term macroscopic particle usually refers to particles much larger than atoms and molecules. These are usually abstracted as point-like particles, even though they have volumes, shapes and structures. Examples of macroscopic particles would include powder, dust, sand, pieces of debris during a car accident, or even objects as big as the stars of a galaxy. Another type, microscopic particles, usually refers to particles of sizes ranging from atoms to molecules, such as carbon dioxide molecules and nanoparticles. These particles are studied in chemistry, as well as atomic and molecular physics. The smallest of particles are the subatomic particles, which refer to particles smaller than atoms. These particles are studied in particle physics. Because of their extremely small size, the study of microscopic and subatomic particles falls in the realm of quantum mechanics.
Particles can also be classified according to composition. Composite particles refer to particles that have composition, that is, particles which are made of other particles. For example, a carbon-14 atom is made of six protons, eight neutrons, and six electrons. By contrast, elementary particles refer to particles that are not made of other particles. According to our current understanding of the world, only a very small number of these exist, such as the leptons, quarks and gluons. However, it is possible some of these might turn out to be composite particles after all. While composite particles can very often be considered point-like, elementary particles are truly punctual. Both elementary and composite particles are known to undergo particle decay.
Particle – Arc welders need to protect themselves from welding sparks, which are heated metal particles that fly off the welding surface. Different particles are formed at different temperatures.
Particle – Galaxies are so large that stars can be considered particles relative to them
17.
Wave
–
In physics, a wave is an oscillation accompanied by a transfer of energy that travels through a medium. Frequency refers to the number of oscillations per unit of time. Wave motion transfers energy from one point to another, often with little or no permanent displacement of the particles of the medium, that is, with little or no associated mass transport; waves consist instead of oscillations or vibrations around almost fixed locations. There are two main types of waves. Mechanical waves propagate through a medium, and the substance of this medium is deformed; restoring forces then reverse the deformation. For example, sound waves propagate via air molecules colliding with their neighbors. When the molecules collide, they also bounce away from each other, and this keeps the molecules from continuing to travel in the direction of the wave. The second main type, electromagnetic waves, do not require a medium. Instead, they consist of periodic oscillations of electric and magnetic fields generated by charged particles. These types vary in wavelength, and include radio waves, microwaves, infrared radiation, visible light, ultraviolet radiation, X-rays and gamma rays. Waves are described by a wave equation, which sets out how the disturbance proceeds over time; the mathematical form of this equation varies depending on the type of wave. Further, the behavior of particles in quantum mechanics is described by waves. In addition, gravitational waves also travel through space, as a result of a vibration or movement in gravitational fields. While mechanical waves can be both transverse and longitudinal, all electromagnetic waves are transverse in free space. A single, all-encompassing definition for the term wave is not straightforward. A vibration can be defined as a back-and-forth motion around a reference value; however, a vibration is not necessarily a wave. An attempt to define the necessary and sufficient characteristics that qualify a phenomenon as a wave results in a blurred line.
The term wave is often understood as referring to a transport of spatial disturbances that are generally not accompanied by a motion of the medium occupying this space as a whole. In a wave, the energy of a vibration is moving away from the source in the form of a disturbance within the surrounding medium. It may appear that the description of waves is closely related to their physical origin for each specific instance of a wave process. For example, acoustics is distinguished from optics in that sound waves are related to a mechanical rather than an electromagnetic wave transfer caused by vibration. Concepts such as mass, momentum, inertia and elasticity therefore become crucial in describing acoustic, as opposed to optic, wave processes. This difference in origin introduces certain wave characteristics particular to the properties of the medium involved.
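The wave equation mentioned above takes its simplest form in one dimension; for a disturbance u(x, t) propagating at speed v, it reads

```latex
\frac{\partial^2 u}{\partial t^2} = v^2 \, \frac{\partial^2 u}{\partial x^2}
```

Any function of the form u(x, t) = f(x − vt) + g(x + vt) solves this equation, describing disturbances travelling unchanged in either direction at speed v. This is only the simplest illustrative case; as the text notes, the mathematical form varies with the type of wave.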
Wave – Surface waves in water
Wave – Wavelength λ can be measured between any two corresponding points on a waveform
Wave – Light beam exhibiting reflection, refraction, transmission and dispersion when encountering a prism
18.
Applied physics
–
Applied physics is physics which is intended for a particular technological or practical use. It is usually considered as a bridge or a connection between physics and engineering; this approach is similar to that of applied mathematics. Applied physicists can also be interested in the use of physics for scientific research. For instance, the field of accelerator physics can contribute to research in theoretical physics by working with engineers to enable the design and construction of high-energy colliders.
Applied physics – Experiment using a laser
Applied physics – A magnetic resonance image
Applied physics – Computer modeling of the space shuttle during re-entry
19.
Mathematical physics
–
Mathematical physics refers to the development of mathematical methods for application to problems in physics. It is a branch of applied mathematics, but deals specifically with physical problems. There are several distinct branches of mathematical physics, and these roughly correspond to particular historical periods. One is the rigorous, abstract and advanced reformulation of Newtonian mechanics in terms of Lagrangian mechanics and Hamiltonian mechanics; both formulations are embodied in analytical mechanics. These approaches and ideas can be, and in fact have been, extended to other areas of physics such as statistical mechanics, continuum mechanics and classical field theory. Moreover, they have provided several examples and basic ideas in differential geometry. The theory of partial differential equations is perhaps most closely associated with mathematical physics; these equations were developed intensively from the second half of the eighteenth century until the 1930s. Physical applications of these developments include hydrodynamics, celestial mechanics, continuum mechanics, elasticity theory, acoustics, thermodynamics, electricity, magnetism, and aerodynamics. The theory of atomic spectra developed almost concurrently with the mathematical fields of linear algebra and spectral theory. Nonrelativistic quantum mechanics includes Schrödinger operators, and it has connections to atomic and molecular physics; quantum information theory is another subspecialty. The special and general theories of relativity require a rather different type of mathematics. An important example is group theory, which played a role in both quantum field theory and differential geometry. This was, however, gradually supplemented by topology and functional analysis in the mathematical description of cosmological as well as quantum field theory phenomena. In this area both homological algebra and category theory are important nowadays. Statistical mechanics forms a separate field, which includes the theory of phase transitions. It relies upon Hamiltonian mechanics and is related to the more mathematical ergodic theory.
There are increasing interactions between combinatorics and physics, in particular statistical physics. The usage of the term mathematical physics is sometimes idiosyncratic: certain parts of mathematics that arose from the development of physics are not, in fact, considered parts of mathematical physics. The term mathematical physics is sometimes used to denote research aimed at studying and solving problems inspired by physics or thought experiments within a mathematically rigorous framework.
Mathematical physics – An example of mathematical physics: solutions of Schrödinger's equation for quantum harmonic oscillators (left) with their amplitudes (right).
20.
Supersymmetry
–
Supersymmetry is a proposed symmetry of nature relating two basic classes of elementary particles: bosons, which have integer-valued spin, and fermions, which have half-integer spin. Each particle from one group is associated with a particle from the other, known as its superpartner, the spin of which differs by a half-integer. In a theory with perfectly unbroken supersymmetry, each pair of superpartners would share the same mass. For example, there would be a selectron, a bosonic version of the electron with the same mass as the electron, which would be easy to find in a laboratory. Thus, since no superpartners have been observed, if supersymmetry exists it must be a broken symmetry, so that superpartners may differ in mass. Spontaneously broken supersymmetry could solve many problems in particle physics, including the hierarchy problem. The simplest realization of spontaneously broken supersymmetry, the so-called Minimal Supersymmetric Standard Model, is one of the best studied candidates for physics beyond the Standard Model. There is only indirect evidence and motivation for the existence of supersymmetry; direct confirmation would entail production of superpartners in collider experiments, such as the Large Hadron Collider (LHC). The first run of the LHC found no evidence for supersymmetry, and thus set limits on superpartner masses in supersymmetric theories. While some remain enthusiastic about supersymmetry, this first run at the LHC led some physicists to explore other ideas; the LHC resumed its search for supersymmetry and other new physics in its second run. There are numerous phenomenological motivations for supersymmetry close to the electroweak scale. Supersymmetry close to the electroweak scale ameliorates the hierarchy problem that afflicts the Standard Model: in the Standard Model, the electroweak scale receives enormous Planck-scale quantum corrections, and the observed hierarchy between the electroweak scale and the Planck scale must be achieved with extraordinary fine-tuning. In a supersymmetric theory, on the other hand, Planck-scale quantum corrections cancel between partners and superpartners.
The hierarchy between the electroweak scale and the Planck scale is then achieved in a natural manner, without miraculous fine-tuning. The idea that the gauge symmetry groups unify at high energy is called grand unification theory. In the Standard Model, however, the weak, strong and electromagnetic couplings fail to unify at high energy; in a supersymmetric theory, the running of the gauge couplings is modified, and precise high-energy unification of the gauge couplings is achieved. The modified running also provides a mechanism for radiative electroweak symmetry breaking. TeV-scale supersymmetry typically provides a dark matter particle at a mass scale consistent with thermal relic abundance calculations. Supersymmetry is also motivated by solutions to several theoretical problems, and for generally providing many desirable mathematical properties. Supersymmetric quantum field theory is often much easier to analyze, as many more problems become exactly solvable. When supersymmetry is imposed as a local symmetry, Einstein's theory of general relativity is included automatically, and the result is said to be a theory of supergravity.
Supersymmetry – Simulated Large Hadron Collider CMS particle detector data depicting a Higgs boson produced by colliding protons decaying into hadron jets and electrons
21.
String theory
–
In physics, string theory is a theoretical framework in which the point-like particles of particle physics are replaced by one-dimensional objects called strings. It describes how these strings propagate through space and interact with each other. On distance scales larger than the string scale, a string looks just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string. In string theory, one of the vibrational states of the string corresponds to the graviton, a quantum mechanical particle that carries gravitational force; thus string theory is a theory of quantum gravity. String theory is a broad and varied subject that attempts to address a number of deep questions of fundamental physics. Despite much work on these problems, it is not known to what extent string theory describes the real world or how much freedom the theory allows in the choice of its details. String theory was first studied in the late 1960s as a theory of the strong nuclear force. Subsequently, it was realized that the very properties that made string theory unsuitable as a theory of nuclear physics made it a promising candidate for a quantum theory of gravity. The earliest version of string theory, bosonic string theory, incorporated only the class of particles known as bosons. It later developed into superstring theory, which posits a connection called supersymmetry between bosons and the class of particles called fermions. In late 1997, theorists discovered an important relationship called the AdS/CFT correspondence, which relates string theory to another type of physical theory called a quantum field theory. One of the challenges of string theory is that the full theory does not have a satisfactory definition in all circumstances. Another issue is that the theory is thought to describe an enormous landscape of possible universes. These issues have led some in the community to criticize these approaches to physics and to question the value of continued research on string theory unification.
In the twentieth century, two theoretical frameworks emerged for formulating the laws of physics. One of these frameworks was Albert Einstein's general theory of relativity, a theory that explains the force of gravity and the structure of space and time. The other was quantum mechanics, a completely different formalism for describing physical phenomena using probability. In spite of their successes, there are still many problems that remain to be solved. One of the deepest problems in modern physics is the problem of quantum gravity: the general theory of relativity is formulated within the framework of classical physics, whereas the other fundamental forces are described within the framework of quantum mechanics. In addition to the problem of developing a consistent theory of quantum gravity, there are many other fundamental problems in the physics of atomic nuclei, black holes, and the early universe. String theory is a theoretical framework that attempts to address these questions.
String theory – A cross section of a quintic Calabi–Yau manifold
String theory – A magnet levitating above a high-temperature superconductor. Today some physicists are working to understand high-temperature superconductivity using the AdS/CFT correspondence.
String theory – A graph of the j-function in the complex plane
22.
M-theory
–
M-theory is a theory in physics that unifies all consistent versions of superstring theory. The existence of such a theory was first conjectured by Edward Witten at a string theory conference at the University of Southern California in the spring of 1995. Witten's announcement initiated a flurry of research activity known as the second superstring revolution. Prior to Witten's announcement, string theorists had identified five versions of superstring theory. Although these theories appeared, at first, to be very different, work by several physicists showed that the theories were related in intricate and nontrivial ways. In particular, physicists found that apparently distinct theories could be unified by mathematical transformations called S-duality and T-duality. Witten's conjecture was based in part on the existence of these dualities and in part on the relationship of the string theories to a field theory called eleven-dimensional supergravity. Modern attempts to formulate M-theory are typically based on matrix theory or the AdS/CFT correspondence. Investigations of the structure of M-theory have spawned important theoretical results in physics and mathematics. More speculatively, M-theory may provide a framework for developing a unified theory of all of the fundamental forces of nature. One of the deepest problems in modern physics is the problem of quantum gravity. The current understanding of gravity is based on Albert Einstein's general theory of relativity; however, nongravitational forces are described within the framework of quantum mechanics, a radically different formalism for describing physical phenomena based on probability. String theory is a theoretical framework that attempts to reconcile gravity and quantum mechanics. In string theory, the point-like particles of particle physics are replaced by one-dimensional objects called strings.
String theory describes how strings propagate through space and interact with each other. In a given version of string theory, there is only one kind of string, which may look like a small loop or segment of ordinary string, and it can vibrate in different ways. On distance scales larger than the string scale, a string will look just like an ordinary particle, with its mass and charge determined by the vibrational state of the string. In this way, all of the different elementary particles may be viewed as vibrating strings; one of the vibrational states of a string gives rise to the graviton, a quantum mechanical particle that carries gravitational force. There are several versions of string theory: type I, type IIA, and type IIB. The different theories allow different types of strings, and the particles that arise at low energies exhibit different symmetries. For example, the type I theory includes both open strings and closed strings, while types IIA and IIB include only closed strings. Each of these five string theories arises as a special limiting case of M-theory. This theory, like its string theory predecessors, is an example of a quantum theory of gravity: it describes a force just like the familiar gravitational force, subject to the rules of quantum mechanics. In everyday life, there are three familiar dimensions of space: height, width and depth
M-theory
–
In the 1980s,
Edward Witten contributed to the understanding of
supergravity theories. In 1995, he introduced M-theory, sparking the
second superstring revolution.
23.
Standard model
–
The Standard Model of particle physics is a theory concerning the electromagnetic, weak, and strong interactions, as well as classifying all the known elementary particles. It was developed throughout the latter half of the 20th century. The current formulation was finalized in the mid-1970s upon experimental confirmation of the existence of quarks; since then, discoveries of the top quark, the tau neutrino, and the Higgs boson have given further credence to the Standard Model. Despite its success in explaining a wide variety of experimental results, the Standard Model is incomplete: it does not incorporate the full theory of gravitation as described by general relativity, or account for the accelerating expansion of the Universe. The model does not contain any viable dark matter particle that possesses all of the required properties deduced from observational cosmology. It also does not incorporate neutrino oscillations. The development of the Standard Model was driven by theoretical and experimental particle physicists alike. For theorists, the Standard Model is a paradigm of a quantum field theory. The first step towards the Standard Model was Sheldon Glashow's discovery in 1961 of a way to combine the electromagnetic and weak interactions. In 1967 Steven Weinberg and Abdus Salam incorporated the Higgs mechanism into Glashow's electroweak interaction, giving it its modern form. The Higgs mechanism is believed to give rise to the masses of all the elementary particles in the Standard Model. This includes the masses of the W and Z bosons; the W± and Z0 bosons were discovered experimentally in 1983, and the ratio of their masses was found to be as the Standard Model predicted. The theory of the strong interaction, to which many contributed, acquired its modern form around 1973–74. At present, matter and energy are best understood in terms of the kinematics and interactions of elementary particles. To date, physics has reduced the laws governing the behavior and interaction of all known forms of matter and energy to a small set of fundamental laws and theories.
The Standard Model includes members of several classes of elementary particles, which can be summarized as follows. The Standard Model includes 12 elementary particles of spin 1⁄2 known as fermions. According to the spin–statistics theorem, fermions respect the Pauli exclusion principle. Each fermion has a corresponding antiparticle. The fermions of the Standard Model are classified according to how they interact. There are six quarks and six leptons; pairs from each classification are grouped together to form a generation, with corresponding particles exhibiting similar physical behavior. The defining property of the quarks is that they carry color charge. A phenomenon called color confinement results in quarks being very strongly bound to one another, forming color-neutral composite particles containing either a quark and an antiquark (mesons) or three quarks (baryons).
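The fermion bookkeeping above can be sketched as a small data structure. This is an illustrative Python snippet, not part of any physics library; the grouping into three generations of quarks and leptons follows the text:

```python
# The Standard Model's 12 fermions, grouped into three generations
# (names only; masses, charges, and antiparticles omitted for brevity).
GENERATIONS = [
    {"quarks": ("up", "down"), "leptons": ("electron", "electron neutrino")},
    {"quarks": ("charm", "strange"), "leptons": ("muon", "muon neutrino")},
    {"quarks": ("top", "bottom"), "leptons": ("tau", "tau neutrino")},
]

quarks = [q for gen in GENERATIONS for q in gen["quarks"]]
leptons = [l for gen in GENERATIONS for l in gen["leptons"]]

# Six quarks plus six leptons give the 12 fermions named in the text.
assert len(quarks) == 6 and len(leptons) == 6
assert len(quarks) + len(leptons) == 12
```

Corresponding particles in each generation (up/charm/top, electron/muon/tau, and so on) exhibit similar physical behavior, differing mainly in mass.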
Standard model
–
Large Hadron Collider tunnel at
CERN
Standard model
–
The Standard Model of
elementary particles (more schematic depiction), with the three
generations of matter,
gauge bosons in the fourth column, and the
Higgs boson in the fifth.
24.
Quantum field theory
–
QFT treats particles as excited states of an underlying physical field, so these are called field quanta. In quantum field theory, quantum mechanical interactions among particles are described by interaction terms among the corresponding underlying quantum fields. These interactions are conveniently visualized by Feynman diagrams, which are a formal tool of relativistically covariant perturbation theory, serving to evaluate particle processes. The first achievement of quantum field theory, namely quantum electrodynamics, is still the paradigmatic example of a successful quantum field theory. Ordinarily, quantum mechanics cannot give an account of photons, which constitute the prime case of relativistic particles: since photons have rest mass zero, and correspondingly travel in the vacuum at the speed c, a non-relativistic theory such as ordinary QM cannot give even an approximate description. Photons are implicit only in the emission and absorption processes, which have to be postulated; the formalism of QFT is needed for an explicit description of photons. In fact most topics in the early development of quantum theory were related to the interaction of radiation and matter. However, quantum mechanics as formulated by Dirac, Heisenberg, and Schrödinger in 1926–27 started from atomic spectra. As soon as the conceptual framework of quantum mechanics was developed, a small group of theoreticians tried to extend quantum methods to electromagnetic fields. A good example is the paper by Born, Jordan and Heisenberg. The basic idea was that in QFT the electromagnetic field should be represented by matrices, in the same way that position and momentum are represented in QM. The ideas of QM were thus extended to systems having an infinite number of degrees of freedom. The inception of QFT is usually considered to be Dirac's famous 1927 paper on "The quantum theory of the emission and absorption of radiation"; here Dirac coined the name quantum electrodynamics for the part of QFT that was developed first.
Employing the theory of the quantum harmonic oscillator, Dirac gave a theoretical description of how photons appear in the quantization of the electromagnetic radiation field. Later, Dirac's procedure became a model for the quantization of other fields as well. These first approaches to QFT were further developed during the following three years. P. Jordan introduced creation and annihilation operators for fields obeying Fermi–Dirac statistics; these differ from the corresponding operators for Bose–Einstein statistics in that the former satisfy anti-commutation relations while the latter satisfy commutation relations. The methods of QFT could be applied to derive equations resulting from the field-like treatment of particles, e.g. the Dirac equation and the Klein–Gordon equation. Schweber points out that the idea and procedure of second quantization go back to Jordan, in a number of papers from 1927. Some difficult problems concerning commutation relations, statistics, and Lorentz invariance were eventually solved. The first comprehensive account of a theory of quantum fields, in particular the method of canonical quantization, was presented by Heisenberg and Pauli in 1929.
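Jordan's distinction between the two operator algebras can be checked numerically in a minimal sketch (NumPy assumed; the bosonic oscillator basis must be truncated to finitely many levels, so the commutation relation is verified away from the artificial top level):

```python
import numpy as np

# Bosonic annihilation operator a, truncated to the lowest N oscillator levels:
# a|n> = sqrt(n)|n-1>, so a has sqrt(1..N-1) on its first superdiagonal.
N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.T  # creation operator a† (real matrix, so transpose suffices)

# Bose–Einstein statistics: [a, a†] = 1, exact except at the truncated top level.
comm = a @ adag - adag @ a
assert np.allclose(comm[:-1, :-1], np.eye(N - 1))

# Fermionic annihilation operator c on a single mode (occupation 0 or 1).
c = np.array([[0.0, 1.0], [0.0, 0.0]])
cdag = c.T

# Fermi–Dirac statistics: the anti-commutation relation {c, c†} = 1 ...
assert np.allclose(c @ cdag + cdag @ c, np.eye(2))
# ... and c² = 0, encoding the Pauli exclusion principle.
assert np.allclose(c @ c, np.zeros((2, 2)))
```

The same matrices appear in any textbook treatment of second quantization; only the finite truncation of the bosonic ladder is an artifact of the numerical sketch.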
Quantum field theory
25.
Antimatter
–
In particle physics, antimatter is material composed of the antiparticle partners of the corresponding particles of ordinary matter. A particle and its antiparticle have the same mass as one another but opposite electric charges; for example, a proton has positive charge while an antiproton has negative charge. A collision between a particle and its antiparticle partner leads to their mutual annihilation. The consequence of annihilation is a release of energy available for heat or work, proportional to the total matter and antimatter mass, in accord with the mass–energy equivalence equation, E = mc2. Formally, antimatter particles can be defined by their negative baryon number or lepton number; these two classes of particles are the antiparticle partners of one another. Antimatter particles bind with one another to form antimatter, just as ordinary particles bind to form normal matter; for example, a positron and an antiproton can form an antihydrogen atom. Physical principles indicate that complex antimatter atomic nuclei are possible, as well as anti-atoms corresponding to the known chemical elements. There is considerable speculation as to why the observable universe is composed almost entirely of ordinary matter. This asymmetry of matter and antimatter in the universe is one of the great unsolved problems in physics; the process by which this inequality between matter and antimatter particles developed is called baryogenesis. Antimatter in the form of anti-atoms is one of the most difficult materials to produce. Individual antimatter particles, however, are commonly produced by particle accelerators. The nuclei of antihelium have been produced with difficulty; these are the most complex anti-nuclei so far observed. The idea of negative matter appears in past theories of matter that have now been abandoned. Using the once popular vortex theory of gravity, the possibility of matter with negative gravity was discussed by William Hicks in the 1880s. Between the 1880s and the 1890s, Karl Pearson proposed the existence of squirts and sinks of the flow of aether; the squirts represented normal matter and the sinks represented negative matter.
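The scale of the energy release follows directly from E = mc2. As a back-of-envelope sketch in Python (the one-gram figure is an illustrative choice, not from the text):

```python
# Energy released when 1 g of matter annihilates with 1 g of antimatter:
# E = m c^2, with the total converted mass m = 2 g.
c = 2.99792458e8   # speed of light in vacuum, m/s (exact by definition)
m = 2e-3           # total mass converted, kg

E = m * c**2       # energy in joules

# About 1.8e14 J, on the order of 43 kilotons of TNT equivalent.
assert abs(E - 1.797e14) < 1e11
```

The proportionality to the total annihilated mass is why even tiny quantities of antimatter correspond to enormous energies.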
Pearson's theory required a fourth dimension for the aether to flow from and into. The term antimatter was first used by Arthur Schuster in two rather whimsical letters to Nature in 1898, in which he coined the term. He hypothesized antiatoms, as well as whole antimatter solar systems. Schuster's ideas were not a serious theoretical proposal, merely speculation, and like the previous ideas, differed from the modern concept of antimatter in that it possessed negative gravity. The modern theory of antimatter began in 1928, with a paper by Paul Dirac. Dirac realised that his relativistic version of the Schrödinger wave equation for electrons predicted the possibility of antielectrons. These were discovered by Carl D. Anderson in 1932 and named positrons. Although Dirac did not himself use the term antimatter, its use follows on naturally enough from antielectrons, antiprotons, etc.
Antimatter
26.
Quantum electrodynamics
–
In particle physics, quantum electrodynamics (QED) is the relativistic quantum field theory of electrodynamics. In essence, it describes how light and matter interact, and it is the first theory in which full agreement between quantum mechanics and special relativity is achieved. In technical terms, QED can be described as a perturbation theory of the electromagnetic quantum vacuum. Dirac described the quantization of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. At higher orders in the perturbation series, infinities emerged, making such computations meaningless. With no solution for this problem known at the time, it appeared that a fundamental incompatibility existed between special relativity and quantum mechanics. Difficulties with the theory increased through the end of the 1940s. Improvements in microwave technology made it possible to take more precise measurements of the shift of the energy levels of a hydrogen atom, now known as the Lamb shift, and these experiments exposed discrepancies which the theory was unable to explain. A first indication of a possible way out was given by Hans Bethe in 1947, after attending the Shelter Island Conference. While he was traveling by train from the conference to Schenectady he made the first non-relativistic computation of the shift of the lines of the hydrogen atom as measured by Lamb. Despite the limitations of the computation, agreement was excellent. The idea was simply to attach infinities to corrections of mass and charge that were actually fixed to a finite value by experiments; in this way, the infinities get absorbed in those constants. Sin-Itiro Tomonaga, Julian Schwinger and Richard Feynman were jointly awarded the Nobel Prize in Physics in 1965 for their work in this area. Even though renormalization works very well in practice, Feynman was never comfortable with its mathematical validity, even referring to renormalization as a "shell game".
QED has served as the model and template for all subsequent quantum field theories. One such subsequent theory is quantum chromodynamics, which began in the early 1960s and attained its present form in the 1975 work by H. David Politzer, Sidney Coleman, David Gross and Frank Wilczek. Near the end of his life, Richard P. Feynman gave a series of lectures on QED intended for the lay public. These lectures were transcribed and published as Feynman (1985), QED: The Strange Theory of Light and Matter. The key components of Feynman's presentation of QED are three basic actions: a photon goes from one place and time to another place and time; an electron goes from one place and time to another place and time; an electron emits or absorbs a photon at a certain place and time. These can all be seen in the adjacent diagram. It is important not to over-interpret these diagrams: nothing is implied about how a particle gets from one point to another.
Quantum electrodynamics
–
Paul Dirac
Quantum electrodynamics
–
Hans Bethe
Quantum electrodynamics
–
Feynman (center) and
Oppenheimer (right) at
Los Alamos.
27.
Weak interaction
–
In particle physics, the weak interaction is one of the four known fundamental interactions of nature, alongside the strong interaction, electromagnetism, and gravitation. The weak interaction is responsible for radioactive decay, which plays an essential role in nuclear fission. The theory of the weak interaction is sometimes called quantum flavourdynamics (QFD), in analogy with the term quantum chromodynamics (QCD) dealing with the strong interaction. However, the term QFD is rarely used because the weak force is best understood in terms of electroweak theory. The Standard Model of particle physics, which does not address gravity, provides a uniform framework for understanding the electromagnetic, weak, and strong interactions. An interaction occurs when two particles, typically but not necessarily half-integer spin fermions, exchange integer-spin, force-carrying bosons. The fermions involved in such exchanges can be either elementary or composite, although at the deepest levels, all weak interactions ultimately are between elementary particles. In the case of the weak interaction, fermions can exchange three distinct types of force carriers known as the W+, W−, and Z bosons. The mass of each of these bosons is far greater than the mass of a proton or neutron; the force is in fact termed weak because its field strength over a given distance is typically several orders of magnitude less than that of the strong nuclear force or electromagnetic force. During the quark epoch of the early universe, the electroweak force separated into the electromagnetic and weak forces. Important examples of the weak interaction include beta decay, and the fusion of hydrogen into deuterium that powers the Sun's thermonuclear process. Most fermions will decay by a weak interaction over time. Such decay makes radiocarbon dating possible, as carbon-14 decays through the weak interaction to nitrogen-14.
It can also create radioluminescence, commonly used in tritium illumination. Quarks, which make up composite particles like neutrons and protons, come in six flavours – up, down, strange, charm, top and bottom – which give those composite particles their properties. The weak interaction is unique in that it allows quarks to swap their flavour for another; the swapping of those properties is mediated by the force carrier bosons. Also, the weak interaction is the only fundamental interaction that breaks parity-symmetry, and similarly, the only one to break charge–parity (CP) symmetry. In 1933, Enrico Fermi proposed the first theory of the weak interaction; he suggested that beta decay could be explained by a four-fermion interaction, involving a contact force with no range. However, it is now described as a non-contact force field having a finite, albeit very short, range. The existence of the W and Z bosons was not directly confirmed until 1983. The weak interaction is unique in a number of respects: it is the only interaction capable of changing the flavour of quarks, and it is the only interaction that violates P, or parity, symmetry.
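The radiocarbon dating mentioned above reduces to exponential decay. A small Python sketch (the 5730-year half-life is the standard figure for carbon-14; the input fraction is an illustrative measurement):

```python
import math

# Carbon-14 beta-decays to nitrogen-14 via the weak interaction, with a
# half-life of about 5730 years. Given the fraction of the original C-14
# still present in a sample, the elapsed time follows from N(t) = N0 * 2^(-t/T).
HALF_LIFE_C14 = 5730.0  # years

def age_from_fraction(remaining_fraction: float) -> float:
    """Years elapsed, given the fraction of original C-14 still present."""
    return -HALF_LIFE_C14 * math.log2(remaining_fraction)

assert abs(age_from_fraction(0.5) - 5730.0) < 1e-9    # one half-life
assert abs(age_from_fraction(0.25) - 11460.0) < 1e-9  # two half-lives
```

A sample retaining, say, 70% of its original carbon-14 would date to roughly 2900 years, which is the kind of inference archaeological radiocarbon dating performs.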
Weak interaction
–
The radioactive
beta decay is possible due to the weak interaction, which transforms a neutron into a proton, an electron, and an
electron antineutrino.
28.
Quantum chromodynamics
–
QCD is a type of quantum field theory called a non-abelian gauge theory, with symmetry group SU(3). The QCD analog of electric charge is a property called color. Gluons are the force carriers of the theory, just as photons are for the electromagnetic force in quantum electrodynamics. The theory is an important part of the Standard Model of particle physics, and a large body of experimental evidence for QCD has been gathered over the years. QCD enjoys two peculiar properties. Confinement means that the force between quarks does not diminish as they are separated; although analytically unproven, confinement is widely believed to be true because it explains the consistent failure of free quark searches. Asymptotic freedom means that in very high-energy reactions, quarks and gluons interact very weakly, creating a quark–gluon plasma. This prediction of QCD was first discovered in the early 1970s by David Politzer and by David Gross and Frank Wilczek; for this work they were awarded the 2004 Nobel Prize in Physics. The phase transition temperature between these two properties has been measured by the ALICE experiment to be well above 160 MeV; below this temperature, confinement is dominant, while above it, asymptotic freedom becomes dominant. American physicist Murray Gell-Mann coined the word quark in its present sense. It originally comes from the phrase "Three quarks for Muster Mark" in Finnegans Wake by James Joyce. Gell-Mann, however, wanted to pronounce the word to rhyme with "fork" rather than with "park", as Joyce seemed to indicate by rhyming words in the vicinity such as "Mark". Gell-Mann got around that by supposing that one ingredient of the line "Three quarks for Muster Mark" was a cry of "Three quarts for Mister Mark" heard in Earwicker's pub, a plausible suggestion given the complex punning in Joyce's novel.
The three kinds of charge in QCD are usually referred to as color charge, by loose analogy to the three kinds of color perceived by humans. Other than this nomenclature, the quantum parameter color is completely unrelated to the everyday, familiar phenomenon of color. Since the theory of electric charge is dubbed electrodynamics, the Greek word χρῶμα chroma, "color", is applied to the theory of color charge, "chromodynamics". With the invention of bubble chambers and spark chambers in the 1950s, experimental particle physics discovered a large and ever-growing number of particles, and it seemed that such a large number of particles could not all be fundamental. First, the particles were classified by charge and isospin by Eugene Wigner and Werner Heisenberg; then, in 1953, according to strangeness by Murray Gell-Mann and Kazuhiko Nishijima. To gain greater insight, the hadrons were sorted into groups having similar properties and masses using the eightfold way, invented in 1961 by Gell-Mann. In the beginning of 1965, Nikolay Bogolyubov, Boris Struminsky and Albert Tavkhelidze wrote a preprint with a more detailed discussion of an additional quark quantum degree of freedom; the problem considered in this preprint was suggested by Nikolay Bogolyubov. This work was presented by Albert Tavkhelidze, without obtaining the consent of his collaborators for doing so, at an international conference in Trieste. A similar mysterious situation arose with the Δ++ baryon, which in the quark model is composed of three up quarks with parallel spins. Han and Nambu noted that quarks might interact via an octet of vector gauge bosons: the gluons.
29.
Atomic physics
–
Atomic physics is the field of physics that studies atoms as an isolated system of electrons and an atomic nucleus. It is primarily concerned with the arrangement of electrons around the nucleus and the processes by which these arrangements change. This comprises ions as well as neutral atoms and, unless otherwise stated, it can be assumed that the term atom includes ions. The term atomic physics can be associated with nuclear power and nuclear weapons, due to the synonymous use of atomic and nuclear in standard English. Physicists distinguish between atomic physics — which deals with the atom as a system consisting of a nucleus and electrons — and nuclear physics, which considers atomic nuclei alone. As with many scientific fields, strict delineation can be highly contrived, and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. Physics research groups are usually so classified. Atomic physics primarily considers atoms in isolation. Atomic models consist of a nucleus that may be surrounded by one or more bound electrons. It is not concerned with the formation of molecules, nor does it examine atoms in a solid state as condensed matter. It is concerned with processes such as ionization and excitation by photons or collisions with atomic particles. This means that the individual atoms can be treated as if each were in isolation. By this consideration, atomic physics provides the underlying theory in plasma physics and atmospheric physics. Electrons form notional shells around the nucleus; these are normally in a ground state but can be excited by the absorption of energy from light, magnetic fields, or interaction with a colliding particle. Electrons that populate a shell are said to be in a bound state, and the energy necessary to remove an electron from its shell is called the binding energy. Any quantity of energy absorbed by the electron in excess of this amount is converted to kinetic energy according to the conservation of energy, and the atom is said to have undergone the process of ionization. If the electron absorbs a quantity of energy less than the binding energy, it will be transferred to an excited state.
After a certain time, an electron in an excited state will jump to a lower state. In a neutral atom, the system will emit a photon of the difference in energy, since energy is conserved. If an inner electron has absorbed more than the binding energy, then an electron from an outer shell may undergo a transition to fill the inner orbital. The Auger effect allows one to multiply ionize an atom with a single photon. There are rather strict selection rules as to the electronic configurations that can be reached by excitation by light — however there are no such rules for excitation by collision processes.
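The energy bookkeeping for ionization described above can be sketched as follows (illustrative Python; the 13.6 eV figure is the hydrogen ground-state binding energy, and the 21.2 eV photon is the familiar helium-lamp line used in photoelectron spectroscopy — both assumed values for the example):

```python
# Photoionization: any photon energy in excess of the binding energy
# becomes kinetic energy of the ejected electron (conservation of energy).
BINDING_ENERGY_H = 13.6  # hydrogen ground-state binding energy, eV

def ejected_kinetic_energy(photon_ev: float,
                           binding_ev: float = BINDING_ENERGY_H) -> float:
    """Kinetic energy (eV) of the electron ejected by a single photon."""
    if photon_ev < binding_ev:
        # Below the binding energy the atom can only be excited, not ionized.
        raise ValueError("photon energy below binding energy: no ionization")
    return photon_ev - binding_ev

# A 21.2 eV photon on ground-state hydrogen ejects a 7.6 eV electron.
assert abs(ejected_kinetic_energy(21.2) - 7.6) < 1e-9
```

The `ValueError` branch mirrors the text's last sentence: absorption below the binding energy leaves the atom in an excited bound state rather than ionizing it.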
Atomic physics
–
In the Bohr model, the transition of an electron from the shell n=3 to the shell n=2 is shown, emitting a photon. An electron from the shell n=2 must have been removed beforehand by ionization
30.
Particle physics
–
Particle physics is the branch of physics that studies the nature of the particles that constitute matter and radiation. By our current understanding, these particles are excitations of the quantum fields that also govern their interactions. The currently dominant theory explaining these fundamental particles and fields, along with their dynamics, is called the Standard Model. In more technical terms, the particles are described by quantum state vectors in a Hilbert space, which is also treated in quantum field theory. All particles and their interactions observed to date can be described almost entirely by the quantum field theory called the Standard Model. The Standard Model, as currently formulated, has 61 elementary particles. Those elementary particles can combine to form composite particles, accounting for the hundreds of other species of particles that have been discovered since the 1960s. The Standard Model has been found to agree with almost all the experimental tests conducted to date. However, most particle physicists believe that it is an incomplete description of nature. In recent years, measurements of neutrino mass have provided the first experimental deviations from the Standard Model. The idea that all matter is composed of elementary particles dates from at least the 6th century BC. In the 19th century, John Dalton, through his work on stoichiometry, concluded that each element of nature was composed of a single, unique type of particle. Throughout the 1950s and 1960s, a bewildering variety of particles were found in collisions of particles from increasingly high-energy beams; this collection was referred to informally as the particle zoo. The current state of the classification of all elementary particles is explained by the Standard Model, which describes the strong, weak, and electromagnetic fundamental interactions. The species of gauge bosons are the gluons, the W−, W+ and Z bosons, and the photon.
The Standard Model also contains 24 fundamental fermions, which are the constituents of all matter. Finally, the Standard Model also predicted the existence of a type of boson known as the Higgs boson. Early in the morning on 4 July 2012, physicists with the Large Hadron Collider at CERN announced they had found a new particle that behaves similarly to what is expected from the Higgs boson. The world's major particle physics laboratories include Brookhaven National Laboratory, whose main facility is the Relativistic Heavy Ion Collider, which collides heavy ions such as gold ions; it is the world's first heavy ion collider, and the world's only polarized proton collider. At the Budker Institute of Nuclear Physics, the main projects are now the electron-positron colliders, including VEPP-2000, operated since 2006. CERN's main project is now the Large Hadron Collider, which had its first beam circulation on 10 September 2008, and is now the world's most energetic collider of protons; it also became the most energetic collider of heavy ions after it began colliding lead ions. DESY's main facility is the Hadron Elektron Ring Anlage, which collides electrons and positrons with protons
31.
Nuclear physics
–
Nuclear physics is the field of physics that studies atomic nuclei and their constituents and interactions. Other forms of nuclear matter are also studied. Nuclear physics should not be confused with atomic physics, which studies the atom as a whole, including its electrons. Discoveries in nuclear physics have led to applications in many fields; such applications are studied in the field of nuclear engineering. Particle physics evolved out of nuclear physics and the two fields are typically taught in close association. Nuclear astrophysics, the application of nuclear physics to astrophysics, is crucial in explaining the inner workings of stars. The discovery of the electron by J. J. Thomson, a year after Becquerel's discovery of radioactivity, was an indication that the atom had internal structure. In the years that followed, radioactivity was extensively investigated, notably by Marie and Pierre Curie as well as by Ernest Rutherford and his collaborators. By the turn of the century, physicists had also discovered three types of radiation emanating from atoms, which they named alpha, beta, and gamma radiation. Experiments by Otto Hahn in 1911 and by James Chadwick in 1914 discovered that the beta decay spectrum was continuous rather than discrete. That is, electrons were ejected from the atom with a continuous range of energies, rather than the discrete amounts of energy that were observed in gamma and alpha decays. This was a problem for nuclear physics at the time, because it seemed to indicate that energy was not conserved in these decays. The 1903 Nobel Prize in Physics was awarded jointly to Becquerel, for his discovery, and to Marie and Pierre Curie for their subsequent research into radioactivity. Rutherford was awarded the Nobel Prize in Chemistry in 1908 for his investigations into the disintegration of the elements and the chemistry of radioactive substances. In 1905 Albert Einstein formulated the idea of mass–energy equivalence, and in 1906 Ernest Rutherford published "Retardation of the α Particle from Radium in passing through matter".
Hans Geiger expanded on this work in a communication to the Royal Society, with experiments he and Rutherford had done passing alpha particles through air, aluminum foil and gold leaf. More work was published in 1909 by Geiger and Ernest Marsden, and in 1911–1912 Rutherford went before the Royal Society to explain the experiments and propound the new theory of the atomic nucleus as we now understand it. The plum pudding model had predicted that the alpha particles should come out of the foil with their trajectories being at most slightly bent. But Rutherford instructed his team to look for something that shocked him to observe: a few particles were scattered through large angles, even completely backwards in some cases. He likened it to firing a bullet at tissue paper and having it bounce off. As an example, in this model nitrogen-14 consisted of a nucleus with 14 protons and 7 electrons. The Rutherford model worked well until studies of nuclear spin were carried out by Franco Rasetti at the California Institute of Technology in 1929.
32.
Atomic, molecular, and optical physics
–
Atomic, molecular, and optical physics (AMO) is the study of matter-matter and light-matter interactions, at the scale of one or a few atoms and energy scales around several electron volts. The three areas are closely interrelated, and AMO theory includes classical, semi-classical and quantum treatments. Atomic physics is the subfield of AMO that studies atoms as an isolated system of electrons and an atomic nucleus. The term atomic physics is often associated with nuclear power and nuclear bombs, due to the synonymous use of atomic and nuclear. However, physicists distinguish between atomic physics — which deals with the atom as a system consisting of a nucleus and electrons — and nuclear physics, which considers atomic nuclei alone. The important experimental techniques are the various types of spectroscopy. Molecular physics, while closely related to atomic physics, also overlaps greatly with theoretical chemistry and physical chemistry. Both subfields are primarily concerned with electronic structure and the dynamical processes by which these arrangements change. Generally this work involves using quantum mechanics; for molecular physics, this approach is known as quantum chemistry. One important aspect of molecular physics is that the essential atomic orbital theory in the field of atomic physics expands to the molecular orbital theory. Molecular physics is concerned with atomic processes in molecules, but it is additionally concerned with effects due to the molecular structure. In addition to the electronic states which are known from atoms, molecules are able to rotate and to vibrate. These rotations and vibrations are quantized; there are discrete energy levels. The smallest energy differences exist between different rotational states, therefore pure rotational spectra are in the far infrared region of the electromagnetic spectrum. Vibrational spectra are in the near infrared, and spectra resulting from electronic transitions are mostly in the visible and ultraviolet regions. From measuring rotational and vibrational spectra, properties of molecules like the distance between the nuclei can be calculated.
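The quantized rotational ladder mentioned above is captured by the rigid-rotor formula E_J = B·J(J+1). A short sketch (the rotational constant used for CO is an approximate textbook value, assumed here for illustration):

```python
# Rotational energy levels of a rigid diatomic molecule: E_J = B * J * (J + 1).
# The J -> J+1 transition energy is 2B(J+1), so spectral lines are evenly
# spaced by 2B — the pattern seen in far-infrared pure-rotational spectra.
B_CO = 2.39e-4  # approximate rotational constant of carbon monoxide, eV

def rotational_level(J: int, B: float = B_CO) -> float:
    """Energy (eV) of rotational state J in the rigid-rotor approximation."""
    return B * J * (J + 1)

# Transition energies for J -> J+1, J = 0..3.
lines = [rotational_level(J + 1) - rotational_level(J) for J in range(4)]

# Successive lines are separated by a constant 2B.
assert all(abs((b - a) - 2 * B_CO) < 1e-12 for a, b in zip(lines, lines[1:]))
```

The constant spacing 2B is what lets the internuclear distance be extracted from a measured rotational spectrum, as the text notes: B is inversely proportional to the molecule's moment of inertia.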
As with many scientific fields, strict delineation can be highly contrived, and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. Physics research groups are usually so classified. Optical physics differs from general optics and optical engineering in that it is focused on the discovery and application of new phenomena. Often the same people are involved in both the fundamental research and the applied technology development. Researchers in optical physics use and develop light sources that span the spectrum from microwaves to X-rays
Atomic, molecular, and optical physics
–
An
optical lattice formed by
laser interference. Optical lattices are used to simulate interacting
condensed matter systems.
33.
Condensed matter physics
–
Condensed matter physics is a branch of physics that deals with the physical properties of condensed phases of matter, where particles adhere to each other. Condensed matter physicists seek to understand the behavior of these phases by using physical laws; in particular, these include the laws of quantum mechanics, electromagnetism and statistical mechanics. The field overlaps with chemistry, materials science, and nanotechnology, and the theoretical physics of condensed matter shares important concepts and methods with that of particle physics and nuclear physics. A variety of topics in physics such as crystallography, metallurgy, elasticity, magnetism, etc. were treated as distinct areas until the 1940s, when they were grouped together as solid state physics. Around the 1960s, the study of physical properties of liquids was added to this list, forming the basis for the new specialty of condensed matter physics. The Bell Telephone Laboratories was one of the first institutes to conduct a research program in condensed matter physics. References to the condensed state can be traced to earlier sources; as a matter of fact, it would be more correct to unify them under the title of "condensed bodies". One of the first studies of condensed states of matter was by English chemist Humphry Davy. Davy observed that of the forty chemical elements known at the time, twenty-six had metallic properties such as lustre, ductility and high electrical and thermal conductivity. This indicated that the atoms in John Dalton's atomic theory were not indivisible as Dalton claimed. Davy further claimed that elements that were then believed to be gases, such as nitrogen and hydrogen, could be liquefied under the right conditions and would then behave as metals. In 1823, Michael Faraday, then an assistant in Davy's lab, successfully liquefied chlorine and went on to liquefy all known gaseous elements except for nitrogen and hydrogen. By 1908, James Dewar and Heike Kamerlingh Onnes were successfully able to liquefy hydrogen and then the newly discovered helium, respectively.
Paul Drude in 1900 proposed the first theoretical model for an electron moving through a metallic solid. Drude's model described the properties of metals in terms of a gas of free electrons. In 1911, three years after helium was first liquefied, Kamerlingh Onnes discovered superconductivity in mercury; the phenomenon completely surprised the best theoretical physicists of the time, and it remained unexplained for several decades. Drude's classical model was augmented by Wolfgang Pauli, Arnold Sommerfeld, and Felix Bloch. Pauli realized that the free electrons in a metal must obey Fermi–Dirac statistics. Using this idea, he developed the theory of paramagnetism in 1926. Shortly after, Sommerfeld incorporated Fermi–Dirac statistics into the free electron model and made it better able to explain the heat capacity of metals. Two years later, Bloch used quantum mechanics to describe the motion of an electron in a periodic lattice. Magnetism as a property of matter has been known in China since 4000 BC. Pierre Curie studied the dependence of magnetization on temperature and discovered the Curie point phase transition in ferromagnetic materials. In 1906, Pierre Weiss introduced the concept of magnetic domains to explain the properties of ferromagnets.
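The free-electron picture above leads to the well-known Drude formula for DC conductivity, sigma = n e^2 tau / m. A minimal sketch is shown below; the carrier density and relaxation time for copper are assumed, textbook-style round numbers, not values taken from the text:

```python
# Illustrative Drude-model calculation: sigma = n * e^2 * tau / m
# (n, tau values below are assumed round numbers for copper)

E_CHARGE = 1.602e-19   # electron charge, C
E_MASS = 9.109e-31     # electron mass, kg

def drude_conductivity(n, tau):
    """DC conductivity from carrier density n (m^-3) and relaxation time tau (s)."""
    return n * E_CHARGE**2 * tau / E_MASS

n_copper = 8.5e28      # conduction electrons per m^3 (assumed)
tau_copper = 2.5e-14   # mean time between collisions, s (assumed)

sigma = drude_conductivity(n_copper, tau_copper)
print(f"Drude conductivity: {sigma:.2e} S/m")  # order 1e7 S/m, same order as measured copper
```

Despite its classical assumptions, this simple estimate lands within the right order of magnitude for real metals, which is why the model survived until Sommerfeld's quantum correction.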
Condensed matter physics
–
Heike Kamerlingh Onnes and
Johannes van der Waals with the
helium "liquefactor" in
Leiden (1908)
Condensed matter physics
–
A replica of the first
point-contact transistor in
Bell labs
Condensed matter physics
–
Computer simulation of "nanogears" made of
fullerene molecules. It is hoped that advances in nanoscience will lead to machines working on the molecular scale.
34.
Quantum computation
–
Quantum computing studies theoretical computation systems that make direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from binary digital electronic computers based on transistors. A quantum Turing machine is a theoretical model of such a computer, and is also known as the universal quantum computer. The field of quantum computing was initiated by the work of Paul Benioff and Yuri Manin in 1980 and Richard Feynman in 1982. A quantum computer with spins as quantum bits was also formulated for use as a quantum space–time in 1968. There exist quantum algorithms, such as Simon's algorithm, that run faster than any possible probabilistic classical algorithm. A classical computer could in principle simulate a quantum algorithm, as quantum computation does not violate the Church–Turing thesis; on the other hand, quantum computers may be able to efficiently solve problems which are not practically feasible on classical computers. A classical computer has a memory made up of bits, where each bit is represented by either a one or a zero. A quantum computer instead maintains a sequence of qubits; in general, a quantum computer with n qubits can be in an arbitrary superposition of up to 2^n different states simultaneously. A quantum computer operates by setting the qubits in a controlled initial state that represents the problem at hand and by applying a fixed sequence of quantum logic gates; this sequence of gates is called a quantum algorithm. The calculation ends with a measurement, collapsing the system of qubits into one of the 2^n pure states, where each qubit is zero or one. The outcome can therefore be at most n classical bits of information. Quantum algorithms are often probabilistic, in that they provide the correct solution only with a certain known probability. Note that the term non-deterministic computing must not be used in this case to mean probabilistic.
An example of an implementation of qubits for a quantum computer could start with the use of particles with two spin states: down and up. This works because any such system can be mapped onto an effective spin-1/2 system. A quantum computer with a given number of qubits is fundamentally different from a classical computer composed of the same number of classical bits; this difference shows up when the state of the qubits is measured. To better understand this point, consider a classical computer that operates on a three-bit register. If there is no uncertainty over its state, then it is in exactly one of the eight possible states with probability 1. However, if it is a probabilistic computer, then there is a possibility of it being in any one of a number of different states, described by an eight-dimensional vector of probabilities. The state of a quantum computer is similarly described by an eight-dimensional vector. Here, however, the coefficients a_k are complex numbers, and it is the sum of the squares of their absolute values, ∑_k |a_k|^2, that must equal 1.
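To make the eight-dimensional state vector concrete, here is a minimal, library-free sketch; the gate-application helper below is an illustrative construction for this example, not any particular quantum SDK's API:

```python
import numpy as np

# A 3-qubit register is an 8-dimensional complex vector whose
# squared amplitudes must sum to 1.
n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                      # start in |000>

# Hadamard gate: puts a single qubit into an equal superposition
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def apply_single_qubit_gate(state, gate, target, n):
    """Apply a 2x2 gate to the `target` qubit of an n-qubit state vector."""
    op = np.eye(1)
    for q in range(n):
        op = np.kron(op, gate if q == target else np.eye(2))
    return op @ state

for q in range(n):                  # put every qubit in superposition
    state = apply_single_qubit_gate(state, H, q, n)

probs = np.abs(state)**2            # measurement probabilities |a_k|^2
print(probs)                        # eight equal probabilities of 1/8
print(probs.sum())                  # normalization: sum_k |a_k|^2 = 1
```

Measuring this register yields each of the eight three-bit outcomes with probability 1/8, illustrating both the 2^n-dimensional state space and the normalization constraint described above.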
Quantum computation
–
Photograph of a chip constructed by
D-Wave Systems Inc., mounted and wire-bonded in a sample holder. The D-Wave processor is designed to use 128
superconducting logic elements that exhibit controllable and tunable coupling to perform operations.
Quantum computation
–
The
Bloch sphere is a representation of a
qubit, the fundamental building block of quantum computers.
35.
Photonics
–
Photonics is the physical science of light generation, detection, and manipulation through emission, transmission, modulation, signal processing, switching, amplification, and detection/sensing. Though it covers all of light's technical applications over the whole spectrum, most photonic applications are in the range of visible and near-infrared light. The term photonics developed as an outgrowth of the first practical semiconductor light emitters invented in the early 1960s; photonics as a field began with the invention of the laser in 1960. Other developments followed: the laser diode in the 1970s and optical fibers for transmitting information. These inventions formed the basis for the telecommunications revolution of the late 20th century. Though coined earlier, the term came into common use in the 1980s as fiber-optic data transmission was adopted by telecommunications network operators. At that time, the term was used widely at Bell Laboratories, and its use was confirmed when the IEEE Lasers and Electro-Optics Society established an archival journal named Photonics Technology Letters at the end of the 1980s. During the period leading up to the dot-com crash circa 2001, photonics as a field focused largely on optical telecommunications; further growth of photonics is likely if current silicon photonics developments are successful. Photonics is closely related to optics. Classical optics long preceded the discovery that light is quantized, which Albert Einstein famously used to explain the photoelectric effect in 1905. Optics tools include the lens and the reflecting mirror. Photonics is related to optics, optomechanics, electro-optics, and optoelectronics; however, each area has different connotations in scientific and government communities. Quantum optics often connotes fundamental research, whereas photonics is used to connote applied research. The term optoelectronics connotes devices or circuits that comprise both electrical and optical functions, i.e. a thin-film semiconductor device. Photonics also relates to the science of quantum information and quantum optics.
Applications of photonics span all areas from everyday life to the most advanced science; just as applications of electronics have expanded dramatically since the first transistor was invented in 1948, the unique applications of photonics continue to emerge. The science of photonics includes the investigation of the emission, transmission, amplification, and detection of light. Light sources used in photonics are usually far more sophisticated than light bulbs. Photonics commonly uses semiconductor light sources such as light-emitting diodes and superluminescent diodes; other light sources include single-photon sources, fluorescent lamps, cathode ray tubes, and plasma screens. Characteristic of research on semiconductor light sources is the frequent use of III-V semiconductors instead of classical semiconductors like silicon; this is due to the special properties of III-V semiconductors that allow for the implementation of light-emitting devices.
Photonics
–
Dispersion of
light (photons) by a prism.
Photonics
–
A
sea mouse (Aphrodita aculeata), showing colorful spines, a remarkable example of photonic engineering by a living organism
36.
Plasma physics
–
Plasma is one of the four fundamental states of matter, the others being solid, liquid, and gas. Unlike these three states of matter, plasma does not naturally exist on the Earth under normal surface conditions. The term was first introduced by the chemist Irving Langmuir in the 1920s. True plasma production requires the separation of ions and electrons, which produces an electric field. Depending on the temperature and density of the environment, either partially ionised or fully ionised forms of plasma may be produced. The positive charge in ions is achieved by stripping away electrons from atomic nuclei; the number of electrons removed is related to either the increase in temperature or the local density of other ionised matter. Plasma may be the most abundant form of ordinary matter in the universe, although this claim is currently tentative. Plasma is mostly associated with the Sun and stars, extending to the rarefied intracluster medium. Plasma was first identified in a Crookes tube, and so described by Sir William Crookes in 1879. The nature of the Crookes tube cathode ray matter was identified by the British physicist Sir J. J. Thomson in 1897. The term plasma was coined by Irving Langmuir in 1928, perhaps because the glowing discharge molds itself to the shape of the Crookes tube: "we shall use the name plasma to describe this region containing balanced charges of ions and electrons." Plasma is an electrically neutral medium of unbound positive and negative particles. Although these particles are unbound, they are not 'free' in the sense of not experiencing forces; in turn this governs collective behavior with many degrees of variation. The average number of particles in the Debye sphere is given by the plasma parameter. Bulk interactions: the Debye screening length is short compared to the physical size of the plasma.
This criterion means that interactions in the bulk of the plasma are more important than those at its edges; when this criterion is satisfied, the plasma is quasineutral. Plasma frequency: the electron plasma frequency is large compared to the electron-neutral collision frequency. When this condition is valid, electrostatic interactions dominate over the processes of ordinary gas kinetics. For plasma to exist, ionization is necessary. The term plasma density by itself usually refers to the electron density, that is, the number of free electrons per unit volume. The degree of ionization of a plasma is the proportion of atoms that have lost or gained electrons; even a partially ionized gas in which as little as 1% of the particles are ionized can have the characteristics of a plasma. The degree of ionization, α, is defined as α = n_i / (n_i + n_n), where n_i is the number density of ions and n_n is the number density of neutral atoms.
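The degree-of-ionization formula above is easy to evaluate directly. A minimal sketch, with assumed illustrative number densities chosen to show the 1% case mentioned in the text:

```python
def degree_of_ionization(n_i, n_n):
    """alpha = n_i / (n_i + n_n), from the definition in the text."""
    return n_i / (n_i + n_n)

# Assumed illustrative densities (per m^3): one ion for every 99 neutrals,
# i.e. the "as little as 1% ionized" regime that can still behave as a plasma.
alpha = degree_of_ionization(n_i=1e18, n_n=99e18)
print(alpha)  # 0.01
```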
Plasma physics
–
Plasma
37.
Scale relativity
–
Scale relativity is a geometrical and fractal space-time theory. The idea of a fractal space-time theory was first introduced by Garnet Ord, and the proposal to combine fractal space-time theory with relativity principles was made by Nottale. The resulting scale relativity theory is an extension of the concept of relativity found in special relativity. In physics, relativity theories have shown that position, orientation, movement, and acceleration cannot be defined in an absolute way, but only relative to a system of reference. Noticing the relativity of scales, like noticing the other forms of relativity, is just a first step; scale relativity theory proposes to extend this insight by introducing an explicit state of scale in coordinate systems. Describing scale transformations requires the use of fractal geometries, which are concerned with scale changes. Scale relativity is thus an extension of relativity theory to the concept of scale. The construction of the theory is similar to that of previous relativity theories, with three different levels: Galilean, special, and general. The development of a general scale relativity is not finished yet. Richard Feynman developed a path integral formulation of quantum mechanics before 1966. Searching for the most important paths relevant for quantum particles, Feynman noticed that such paths were very irregular on small scales, i.e. infinite and non-differentiable. This means that between two points, a particle has not one path, but an infinity of potential paths. This can be illustrated with a concrete example: imagine that you are hiking in the mountains and are free to walk wherever you like; to go from point A to point B, there is not just one path, but infinitely many possible ones. Scale relativity hypothesizes that quantum behavior comes from the fractal nature of spacetime.
Indeed, fractal geometries allow one to study such non-differentiable paths. This fractal interpretation of quantum mechanics has been further specified by Abbott and Wise, showing that the paths have a fractal dimension of 2. Scale relativity goes one step further by asserting that the fractality of these paths is a consequence of the fractality of space-time. There are other pioneers who saw the fractal nature of quantum mechanical paths. Garnet Ord and Laurent Nottale both connected fractal space-time with quantum mechanics, and Nottale coined the term scale relativity in 1992. He developed the theory and its applications in more than one hundred scientific papers. The principle of relativity says that physical laws should be valid in all coordinate systems. This principle has been applied to states of position, as well as to the states of movement of coordinate systems. Such states are never defined in an absolute manner, but relative to one another.
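The statement that such paths have fractal dimension 2 can be illustrated numerically with a random walk, a standard stand-in for an irregular, non-differentiable path (this is an illustrative analogy, not the quantum path integral itself). Measuring the walk's length at coarser and coarser resolutions and fitting L(ε) ∝ ε^(1-D) recovers D ≈ 2:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2**16
# A 2D random walk: N unit steps in uniformly random directions
angles = rng.uniform(0, 2 * np.pi, N)
path = np.cumsum(np.stack([np.cos(angles), np.sin(angles)], axis=1), axis=0)

def measured_length(path, k):
    """Total length of the path when sampled at every k-th point."""
    pts = path[::k]
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

ks = [1, 4, 16, 64, 256]
lengths = [measured_length(path, k) for k in ks]
# typical segment size at each resolution (the measurement scale epsilon)
scales = [np.mean(np.linalg.norm(np.diff(path[::k], axis=0), axis=1)) for k in ks]

# L(eps) ~ eps^(1-D)  =>  D = 1 - slope of log L vs log eps
slope = np.polyfit(np.log(scales), np.log(lengths), 1)[0]
D = 1 - slope
print(f"estimated fractal dimension: {D:.2f}")  # typically close to 2
```

The measured length keeps growing as the resolution is refined, exactly the non-rectifiable behavior the text attributes to quantum paths.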
38.
Quantum chaos
–
Quantum chaos is a branch of physics which studies how chaotic classical dynamical systems can be described in terms of quantum theory. The primary question that quantum chaos seeks to answer is: what is the relationship between quantum mechanics and classical chaos? The correspondence principle states that classical mechanics is the classical limit of quantum mechanics. If this is true, then there must be quantum mechanisms underlying classical chaos. Approaches to this question include correlating statistical descriptions of eigenvalues with the classical behavior of the same Hamiltonian; semiclassical methods such as periodic-orbit theory, connecting the classical trajectories of the system with quantum features; and direct application of the correspondence principle. During the first half of the twentieth century, chaotic behavior in mechanics was recognized, but not well understood. The foundations of quantum mechanics were laid in that period. Other phenomena show up in the time evolution of a quantum system. In some contexts, such as acoustics or microwaves, wave patterns are directly observable. Quantum chaos typically deals with systems whose properties need to be calculated using either numerical techniques or approximation schemes. Simple and exact solutions are precluded by the fact that the system's constituents influence each other in a complex way, and finding constants of motion so that a separation of variables can be performed can be a difficult analytical task. Solving the classical problem can give insight into solving the quantum problem. If there are regular classical solutions of the same Hamiltonian, then there are constants of motion. Other approaches have been developed in recent years. One is to express the Hamiltonian in different coordinate systems in different regions of space; wavefunctions are obtained in these regions, and eigenvalues are obtained by matching boundary conditions.
Another approach is numerical matrix diagonalization: if the Hamiltonian matrix is computed in any complete basis, eigenvalues and eigenvectors are obtained by diagonalizing the matrix. However, all complete basis sets are infinite, so the basis must be truncated; these techniques boil down to choosing a truncated basis from which accurate wavefunctions can be constructed. A given Hamiltonian shares the same constants of motion for both classical and quantum dynamics, and quantum systems can also have additional quantum numbers corresponding to discrete symmetries. Nevertheless, learning how to solve such problems is an important part of answering the question of quantum chaos. Statistical measures of quantum chaos were born out of a desire to quantify spectral features of complex systems; random matrix theory was developed in an attempt to characterize the spectra of complex nuclei.
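The random-matrix idea mentioned above is easy to demonstrate numerically. The sketch below (an illustrative construction, not any specific published calculation) builds a real symmetric random matrix of the Gaussian-orthogonal-ensemble type and checks for level repulsion, the hallmark of chaotic spectra predicted by the Wigner surmise:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
A = rng.normal(size=(n, n))
H = (A + A.T) / 2               # GOE-type real symmetric random matrix

eig = np.sort(np.linalg.eigvalsh(H))
# Use spacings from the middle of the spectrum, where the level density is flattest
mid = eig[n // 4 : 3 * n // 4]
s = np.diff(mid)
s = s / s.mean()                # crude "unfolding": normalize mean spacing to 1

# Wigner surmise P(s) ~ s * exp(-pi s^2 / 4) implies strong level repulsion,
# so very few spacings should fall near zero (unlike a Poisson spectrum).
frac_small = (s < 0.1).mean()
print(f"fraction of spacings below 0.1: {frac_small:.3f}")
```

For an uncorrelated (Poisson) spectrum roughly 10% of normalized spacings fall below 0.1; the GOE spectrum gives an order of magnitude fewer, which is the statistical signature used to diagnose quantum chaos.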
Quantum chaos
–
Quantum chaos is the field of physics attempting to bridge the theories of
quantum mechanics and
classical mechanics. The figure shows the main ideas running in each direction.
Quantum chaos
–
Experimental recurrence
spectra of lithium in an electric field showing birth of quantum recurrences corresponding to
bifurcations of classical orbits.
Quantum chaos
–
Comparison of experimental and theoretical recurrence
spectra of lithium in an electric field at a scaled energy of.
Quantum chaos
–
Computed regular (non-chaotic)
Rydberg atom energy level spectra of hydrogen in an electric field near n=15. Note that energy levels can cross due to underlying symmetries of dynamical motion.
39.
Holographic principle
–
First proposed by Gerard 't Hooft, the holographic principle was given a precise string-theory interpretation by Leonard Susskind, who combined his ideas with previous ones of 't Hooft and Charles Thorn. As pointed out by Raphael Bousso, Thorn observed in 1978 that string theory admits a lower-dimensional description in which gravity emerges from it in what would now be called a holographic way. Cosmological holography has not been made mathematically precise, partly because the particle horizon has a non-zero area. The holographic principle was inspired by black hole thermodynamics, which conjectures that the maximal entropy in any region scales with the radius squared, not cubed as might be expected. In the case of a black hole, the insight was that the informational content of all the objects that have fallen into the hole might be entirely contained in surface fluctuations of the event horizon. The holographic principle resolves the black hole information paradox within the framework of string theory. However, there exist classical solutions to the Einstein equations that allow values of the entropy larger than those allowed by an area law, hence in principle larger than those of a black hole. These are the so-called Wheeler's "bags of gold". The existence of such solutions conflicts with the holographic interpretation, and their effects in a quantum theory of gravity including the holographic principle are not yet fully understood. An object with relatively high entropy is microscopically random, like a hot gas. A known configuration of classical fields has zero entropy: there is nothing random about electric and magnetic fields, or gravitational waves. Since black holes are exact solutions of Einstein's equations, they were thought not to have any entropy either. But Jacob Bekenstein noted that this leads to a violation of the second law of thermodynamics. If one throws a hot gas with entropy into a black hole, once it crosses the event horizon the entropy would disappear. The random properties of the gas would no longer be seen once the black hole had absorbed the gas and settled down.
One way of salvaging the second law is if black holes are in fact random objects, with an entropy that increases by an amount greater than the entropy of the consumed gas. Bekenstein assumed that black holes are maximum entropy objects, that is, that they have more entropy than anything else in the same volume. In a sphere of radius R, the entropy of a relativistic gas increases as the energy increases. The only known limit is gravitational: when there is too much energy, the gas collapses into a black hole. Bekenstein used this to put an upper bound on the entropy in a region of space, and the bound was proportional to the area of the region. He concluded that the black hole entropy is proportional to the area of the event horizon. Stephen Hawking had shown earlier that the total horizon area of a collection of black holes always increases with time. The horizon is a boundary defined by light-like geodesics; it is those light rays that are just barely unable to escape. If neighboring geodesics start moving toward each other, they eventually collide, at which point their extension is inside the black hole.
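The "entropy proportional to horizon area" statement above is the Bekenstein–Hawking formula, S = k_B c^3 A / (4 G ħ), with A the horizon area. A minimal numerical sketch for a Schwarzschild black hole of one solar mass (constants are standard SI values):

```python
import math

# Physical constants (SI, standard values)
G = 6.674e-11       # gravitational constant
c = 2.998e8         # speed of light
hbar = 1.055e-34    # reduced Planck constant
k_B = 1.381e-23     # Boltzmann constant
M_SUN = 1.989e30    # solar mass, kg

def bh_entropy(M):
    """Bekenstein-Hawking entropy S = k_B c^3 A / (4 G hbar),
    with Schwarzschild horizon area A = 16 pi G^2 M^2 / c^4."""
    A = 16 * math.pi * (G * M)**2 / c**4
    return k_B * c**3 * A / (4 * G * hbar)

S = bh_entropy(M_SUN)
print(f"entropy of a solar-mass black hole: {S:.2e} J/K")  # ~1.4e54 J/K
```

The result dwarfs the thermodynamic entropy of ordinary matter in the same volume, consistent with Bekenstein's assumption that black holes are maximum-entropy objects.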
Holographic principle
–
String theory
40.
Astrophysics
–
Astrophysics is the branch of astronomy that employs the principles of physics and chemistry to ascertain the nature of the heavenly bodies, rather than their positions or motions in space. Among the objects studied are the Sun, other stars, galaxies, extrasolar planets, and the interstellar medium. Their emissions are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. In practice, modern astronomical research often involves a substantial amount of work in the realms of theoretical and observational physics. Although astronomy is as ancient as recorded history itself, it was long separated from the study of terrestrial physics. The challenge was that the tools had not yet been invented with which to prove such assertions, and for much of the nineteenth century, astronomical research was focused on the routine work of measuring the positions and computing the motions of astronomical objects. Kirchhoff deduced that the dark lines in the solar spectrum are caused by absorption by chemical elements in the solar atmosphere; in this way it was shown that the chemical elements found in the Sun also exist on Earth. Among those who extended the study of solar and stellar spectra was Norman Lockyer, who observed a spectral line unknown on Earth; he claimed the line represented a new element, which was called helium, after the Greek Helios, the Sun personified. By 1890, a catalog of over 10,000 stars had been prepared that grouped them into thirteen spectral types. Most significantly, Cecilia Payne discovered that hydrogen and helium were the principal components of stars. This discovery was so unexpected that her dissertation readers convinced her to modify the conclusion before publication; however, later research confirmed her discovery.
By the end of the 20th century, studies of astronomical spectra had expanded to cover wavelengths extending from radio waves through optical and X-ray to gamma wavelengths. Observational astrophysics is the practice of observing celestial objects by using telescopes and other astronomical apparatus. The majority of observations are made using the electromagnetic spectrum. Radio astronomy studies radiation with a wavelength greater than a few millimeters; the study of these waves requires very large radio telescopes. Infrared astronomy studies radiation with a wavelength that is too long to be visible to the naked eye but shorter than radio waves; infrared observations are made with telescopes similar to the familiar optical telescopes. Objects colder than stars are studied at infrared frequencies. Optical astronomy is the oldest kind of astronomy; telescopes paired with a charge-coupled device or spectroscopes are the most common instruments used. The Earth's atmosphere interferes somewhat with optical observations, so adaptive optics are used to improve image quality. In this wavelength range, stars are highly visible, and many chemical spectra can be observed to study the chemical composition of stars, galaxies, and nebulae.
Astrophysics
–
Early 20th-century comparison of elemental, solar, and stellar spectra
Astrophysics
–
Supernova remnant LMC N 63A imaged in x-ray (blue), optical (green) and radio (red) wavelengths. The X-ray glow is from material heated to about ten million degrees Celsius by a shock wave generated by the supernova explosion.
Astrophysics
–
The stream lines on this simulation of a
supernova show the flow of matter behind the shock wave giving clues as to the origin of pulsars
41.
Big Bang
–
The Big Bang theory is the prevailing cosmological model for the universe from the earliest known periods through its subsequent large-scale evolution. If the known laws of physics are extrapolated to the highest density regime, the result is a singularity typically associated with the Big Bang. Detailed measurements of the expansion rate of the universe place this moment at approximately 13.8 billion years ago, which is thus considered the age of the universe. After the initial expansion, the universe cooled sufficiently to allow the formation of subatomic particles. Giant clouds of these primordial elements later coalesced through gravity in halos of dark matter, eventually forming the stars and galaxies visible today. Since Georges Lemaître first noted in 1927 that an expanding universe could be traced back in time to an originating single point, scientists have built on his idea of cosmic expansion. More recently, measurements of the redshifts of supernovae indicate that the expansion of the universe is accelerating. The known physical laws of nature can be used to calculate the characteristics of the universe in detail back in time to an initial state of extreme density and temperature. American astronomer Edwin Hubble observed that the distances to faraway galaxies were strongly correlated with their redshifts; assuming the Copernican principle, the only remaining interpretation is that all observable regions of the universe are receding from all others. Since we know that the distance between galaxies increases today, it must mean that in the past galaxies were closer together; the continuous expansion of the universe implies that the universe was denser and hotter in the past. Large particle accelerators can replicate the conditions that prevailed after the early moments of the universe, resulting in confirmation and refinement of the model; however, these accelerators can only probe so far into high energy regimes.
Consequently, the state of the universe in the earliest instants of the Big Bang expansion is still poorly understood. The first subatomic particles to be formed included protons, neutrons, and electrons. Though simple atomic nuclei formed within the first three minutes after the Big Bang, thousands of years passed before the first electrically neutral atoms formed. The majority of atoms produced by the Big Bang were hydrogen, along with helium and traces of lithium. Giant clouds of primordial elements later coalesced through gravity to form stars and galaxies. The framework for the Big Bang model relies on Albert Einstein's theory of general relativity and on simplifying assumptions such as the homogeneity and isotropy of space. The governing equations were formulated by Alexander Friedmann, and similar solutions were worked on by Willem de Sitter. Extrapolation of the expansion of the universe backwards in time using general relativity yields an infinite density and temperature at a finite time in the past. This singularity indicates that general relativity is not an adequate description of the laws of physics in this regime. How closely models based on general relativity alone can be used to extrapolate toward the singularity is debated; certainly they can be used no closer than the end of the Planck epoch. This primordial singularity is itself called the Big Bang, but the term can also refer to a more generic early hot, dense phase of the universe. The agreement of independent measurements of this age supports the model that describes in detail the characteristics of the universe. The earliest phases of the Big Bang are subject to much speculation; in the most common models the universe was filled homogeneously and isotropically with a very high energy density and huge temperatures and pressures, and was very rapidly expanding and cooling.
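The 13.8-billion-year age quoted above can be roughly anticipated from the expansion rate alone: the Hubble time 1/H0 sets the scale. A minimal sketch, using an assumed round value of H0 (the precise 13.8 Gyr figure comes from fitting the full Lambda-CDM model, not from 1/H0 alone):

```python
# Rough age estimate from the Hubble constant: t ~ 1/H0
H0_km_s_Mpc = 67.8          # assumed round value of the Hubble constant
KM_PER_MPC = 3.086e19       # kilometers in one megaparsec
SECONDS_PER_GYR = 3.156e16  # seconds in one billion years

H0 = H0_km_s_Mpc / KM_PER_MPC         # Hubble constant in 1/s
hubble_time_Gyr = 1 / H0 / SECONDS_PER_GYR
print(f"Hubble time: {hubble_time_Gyr:.1f} Gyr")  # ~14.4 Gyr, close to the 13.8 Gyr age
```

That the naive inverse expansion rate lands within a few percent of the measured age is a consistency check on the expanding-universe picture described above.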
Big Bang
–
Panoramic view of the entire
near-infrared sky reveals the distribution of galaxies beyond the Milky Way. Galaxies are color-coded by
redshift.
Big Bang
–
According to the Big Bang model, the
universe expanded from an extremely dense and hot state and continues to expand.
Big Bang
–
Abell 2744 galaxy cluster -
Hubble Frontier Fields view.
Big Bang
–
Lambda-CDM, accelerated expansion of the universe. The time-line in this schematic diagram extends from the big bang/inflation era 13.7 Gyr ago to the present cosmological time.
42.
Cosmology
–
Cosmology is the study of the origin, evolution, and eventual fate of the universe. The term cosmology was first used in English in 1656 in Thomas Blount's Glossographia. Religious or mythological cosmology is a body of beliefs based on mythological, religious, and esoteric literature and traditions of creation and eschatology. Physical cosmology is studied by scientists, such as astronomers and physicists, as well as philosophers, such as metaphysicians, philosophers of physics, and philosophers of space and time. Because of this shared scope with philosophy, theories in physical cosmology may include both scientific and non-scientific propositions, and may depend upon assumptions that cannot be tested. Cosmology differs from astronomy in that the former is concerned with the Universe as a whole while the latter deals with individual celestial objects. The theoretical astrophysicist David N. Spergel has described cosmology as a "historical science" because "when we look out in space, we look back in time". Physics and astrophysics have played a central role in shaping the understanding of the universe through scientific observation. Physical cosmology was shaped through both mathematics and observation in an analysis of the whole universe. Cosmogony studies the origin of the Universe, and cosmography maps the features of the Universe. In Diderot's Encyclopédie, cosmology is broken down into uranology, aerology, geology, and hydrology. Metaphysical cosmology has also been described as the placing of man in the universe in relationship to all other entities. Physical cosmology is the branch of physics and astrophysics that deals with the study of the physical origins and evolution of the Universe; it also includes the study of the nature of the Universe on a large scale. In its earliest form, it was what is now known as celestial mechanics. The Greek philosophers Aristarchus of Samos, Aristotle, and Ptolemy proposed different cosmological theories. The geocentric Ptolemaic system was the prevailing theory until the 16th century, when Nicolaus Copernicus proposed a heliocentric system.
This is one of the most famous examples of epistemological rupture in physical cosmology. When Isaac Newton published the Principia Mathematica in 1687, he finally figured out how the heavens moved. A fundamental difference between Newton's cosmology and those preceding it was the Copernican principle: that the bodies on Earth obey the same physical laws as all the celestial bodies. This was a crucial advance in physical cosmology. Physicists began changing the assumption that the Universe was static and unchanging; in 1922 Alexander Friedmann introduced the idea of an expanding universe that contained moving matter. In parallel to this approach to cosmology, one long-standing debate about the structure of the cosmos was coming to a climax. This difference of ideas came to a head with the organization of the Great Debate on 26 April 1920 at the meeting of the U.S. National Academy of Sciences in Washington, D.C.
Cosmology
–
The
Hubble eXtreme Deep Field (XDF) was completed in September 2012 and shows the farthest
galaxies ever photographed. Except for the few stars in the foreground (which are bright and easily recognizable because only they have
diffraction spikes), every speck of light in the photo is an individual galaxy, some of them as old as 13.2 billion years; the observable universe is estimated to contain more than 200 billion galaxies.
Cosmology
–
Evidence of
gravitational waves in the
infant universe may have been uncovered by the microscopic examination of the
focal plane of the
BICEP2 radio telescope.
43.
Loop quantum gravity
–
Loop quantum gravity is a theory that attempts to describe the quantum properties of the universe and gravity. It is also a theory of quantum spacetime because, according to general relativity, gravity is a manifestation of the geometry of spacetime. LQG is an attempt to merge quantum mechanics and general relativity. From the point of view of Einstein's theory, it comes as no surprise that all attempts to treat gravity simply as one more quantum force have failed: according to Einstein, gravity is not a force, it is a property of space-time itself. Loop quantum gravity is an attempt to develop a quantum theory of gravity based directly on Einstein's geometrical formulation. The main output of the theory is a physical picture of space where space is granular. The granularity is a consequence of the quantization. It has the same nature as the granularity of the photons in the quantum theory of electromagnetism. Here, it is space itself that is discrete; in other words, there is a minimum distance possible to travel through it. More precisely, space can be viewed as an extremely fine fabric or network woven of finite loops. These networks of loops are called spin networks, and the evolution of a spin network over time is called a spin foam. The predicted size of this structure is the Planck length, which is approximately 10^-35 meters. According to the theory, there is no meaning to distance at scales smaller than the Planck scale. Therefore, LQG predicts that not just matter, but space itself, has an atomic structure. Today LQG is a vast area of research, developing in several directions, which involves about 30 research groups worldwide. They all share the basic physical assumptions and the mathematical description of quantum space. Research into the physical consequences of the theory is proceeding in several directions. Among these, the most well-developed is the application of LQG to cosmology, called loop quantum cosmology (LQC). LQC applies LQG ideas to the study of the early universe and the physics of the Big Bang.
Its most spectacular consequence is that the evolution of the universe can be continued beyond the Big Bang; the Big Bang thus appears to be replaced by a sort of cosmic Big Bounce. In 1986, Abhay Ashtekar reformulated Einstein's general relativity in a language closer to that of the rest of fundamental physics. Carlo Rovelli and Lee Smolin then defined a nonperturbative and background-independent quantum theory of gravity in terms of loop solutions, and in 1994 Rovelli and Smolin showed that the quantum operators of the theory associated to area and volume have a discrete spectrum
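The Planck length quoted above follows from the fundamental constants alone; a minimal sketch, using approximate CODATA values:

```python
import math

# Approximate CODATA values for the fundamental constants (assumed here):
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # Newtonian gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

# Planck length: the scale at which, according to LQG, the granularity
# of space becomes relevant and distance loses its usual meaning.
planck_length = math.sqrt(hbar * G / c**3)

print(f"Planck length ~ {planck_length:.3e} m")  # ~ 1.616e-35 m
```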
Loop quantum gravity
–
Simulated Large Hadron Collider CMS particle detector data depicting a Higgs boson produced by colliding protons decaying into hadron jets and electrons
Loop quantum gravity
–
Graphical representation of the simplest non-trivial Mandelstam identity relating different Wilson loops.
Loop quantum gravity
–
The action of the Hamiltonian constraint translated to the path integral or so-called spin foam description. A single node splits into three nodes, creating a spin foam vertex.
Loop quantum gravity
–
An artist's depiction of two black holes merging, a process in which the laws of thermodynamics are upheld.
44.
Quantum gravity
–
Quantum gravity is a field of theoretical physics that seeks to describe gravity according to the principles of quantum mechanics, in regimes where quantum effects cannot be ignored. The current understanding of gravity is based on Albert Einstein's general theory of relativity. The necessity of a quantum mechanical description of gravity is sometimes said to follow from the fact that one cannot consistently couple a classical system to a quantum one. This is false, as is shown, for example, by Wald's explicit construction of a consistent semiclassical theory; the problem is that the theory one gets in this way is not renormalizable and therefore cannot be used to make meaningful physical predictions. As a result, theorists have taken up more radical approaches to the problem of quantum gravity. A theory of quantum gravity that is also a grand unification of all known interactions is sometimes referred to as The Theory of Everything. Because quantum-gravitational effects are expected to appear only at scales far beyond current experimental reach, quantum gravity is a mainly theoretical enterprise. Much of the difficulty in meshing these theories at all energy scales comes from the different assumptions that the theories make about how the universe works. Quantum field theory, if conceived of as a theory of particles, describes forces as mediated by particles propagating in a fixed spacetime background, while general relativity models gravity as a curvature within space-time that changes as a gravitational mass moves. Historically, the most obvious way of combining the two ran quickly into what is known as the renormalization problem. Another possibility is to focus on fields rather than on particles, which are just one way of characterizing certain fields in very special spacetimes. This solves worries about consistency, but does not appear to lead to a version of the full general theory of relativity. Quantum gravity can also be treated as an effective field theory. Effective quantum field theories come with some high-energy cutoff, beyond which we do not expect the theory to provide a description of nature.
The infinities then become large but finite quantities depending on this finite cutoff scale, and this same logic works just as well for the highly successful theory of low-energy pions as for quantum gravity. Indeed, the first quantum-mechanical corrections to graviton scattering and Newton's law of gravitation have been explicitly computed. In fact, gravity is in many ways a much better quantum field theory than the Standard Model. Specifically, the problem of combining quantum mechanics and gravity becomes an issue only at high energies. This problem must be put in context, however. While there is no proof of the existence of gravitons, the predicted discovery would result in the classification of the graviton as a force carrier similar to the photon of the electromagnetic field. Many of the notions of a unified theory of physics since the 1970s assume, and to some degree depend upon, the existence of the graviton
Quantum gravity
–
Gravity Probe B (GP-B) has measured spacetime curvature near Earth to test related models in application of Einstein's general theory of relativity.
Quantum gravity
–
Interaction in the subatomic world: world lines of point-like particles in the Standard Model or a world sheet swept up by closed strings in string theory
45.
Theory of everything
–
Finding a ToE is one of the major unsolved problems in physics. Over the past few centuries, two theoretical frameworks have been developed that, as a whole, most closely resemble a ToE; these two theories, upon which all modern physics rests, are general relativity and quantum field theory. GR is a framework that focuses on gravity for understanding the universe in regions of both large scale and high mass: stars, galaxies, clusters of galaxies, etc. QFT successfully implemented the Standard Model and unified the interactions between the three non-gravitational forces: the weak, strong, and electromagnetic force. Through years of research, physicists have experimentally confirmed with tremendous accuracy virtually every prediction made by the two theories when in their appropriate domains of applicability. In accordance with their findings, scientists learned that GR and QFT, as they are currently formulated, are mutually incompatible. Since the usual domains of applicability of GR and QFT are so different, most situations require that only one of the two theories be used; in pursuit of a framework that reconciles them, quantum gravity has become an area of active research. Eventually a single explanatory framework, called string theory, has emerged that intends to be the ultimate theory of the universe. String theory posits that at the beginning of the universe, the four forces were once a single fundamental force. According to string theory, every particle in the universe, at its most microscopic level, consists of a vibrating string, and it is through the specific oscillatory patterns of strings that a particle of unique mass and force charge is created. Initially, the term "theory of everything" was used with an ironic connotation to refer to various overgeneralized theories. For example, a grandfather of Ijon Tichy – a character from a cycle of Stanisław Lem's science fiction stories of the 1960s – was known to work on the General Theory of Everything. Physicist John Ellis claims to have introduced the term into the literature in an article in Nature in 1986.
Over time, the term stuck in popularizations of theoretical physics research. In ancient Greece, pre-Socratic philosophers speculated that the apparent diversity of observed phenomena was due to a single type of interaction, namely the motions and collisions of atoms. The concept of the atom, introduced by Democritus, was an early philosophical attempt to unify all phenomena observed in nature. Archimedes was possibly the first scientist known to have described nature with axioms, and he thus tried to describe everything starting from a few axioms. Any theory of everything is similarly expected to be based on axioms. In the late 17th century, Isaac Newton's description of the long-distance force of gravity implied that not all forces in nature result from things coming into contact. Laplace thus envisaged a combination of gravitation and mechanics as a theory of everything. Modern quantum mechanics implies that uncertainty is inescapable, and thus that Laplace's vision has to be amended: a theory of everything must include gravitation and quantum mechanics
Theory of everything
–
Simulated Large Hadron Collider CMS particle detector data depicting a Higgs boson produced by colliding protons decaying into hadron jets and electrons
46.
Henri Becquerel
–
Antoine Henri Becquerel was a French physicist, Nobel laureate, and the first person to discover evidence of radioactivity. For work in this field he, along with Marie Skłodowska-Curie and Pierre Curie, received the 1903 Nobel Prize in Physics; the SI unit for radioactivity, the becquerel, is named after him. Roentgen discovered X-rays on November 8, 1895; the press reported this on January 5, 1896, shortly before Becquerel did his famous experiment. Thus radioactivity was discovered within a year of the discovery of X-rays. Becquerel was born in Paris into a family which produced four generations of scientists: his grandfather, his father, and later his own son. He studied engineering at the École Polytechnique and the École des Ponts et Chaussées, and in 1890 he married Louise Désirée Lorieux. In 1892, he became the third in his family to occupy the chair at the Muséum National d'Histoire Naturelle. In 1894, he became chief engineer in the Department of Bridges. Becquerel's earliest works centered on the subject of his doctoral thesis: the plane polarization of light, the phenomenon of phosphorescence, and the absorption of light by crystals. Becquerel's discovery of radioactivity is a famous example of serendipity. Becquerel had long been interested in phosphorescence, the emission of light of one color following a body's exposure to light of another color. His first experiments appeared to confirm his hypothesis that phosphorescent substances emitted penetrating rays: "One places on the sheet of paper, on the outside, a slab of the phosphorescent substance, and one exposes the whole to the sun for several hours. When one then develops the photographic plate, one recognizes that the silhouette of the phosphorescent substance appears in black on the negative. If one places between the phosphorescent substance and the paper a coin or a metal screen pierced with a cut-out design ... one must conclude from these experiments that the phosphorescent substance in question emits rays which pass through the opaque paper and reduce silver salts."
But further experiments led him to doubt and then abandon this hypothesis: "Since the sun did not come out in the following days, I developed the photographic plates on the 1st of March, expecting to find the images very weak. Instead the silhouettes appeared with great intensity. ... However, the present experiments, without being contrary to this hypothesis, do not warrant this conclusion. I hope that the experiments which I am pursuing at the moment will be able to bring some clarification to this new class of phenomena." In 1903, Becquerel shared the Nobel Prize in Physics with Pierre and Marie Curie. By 1861, Niepce de Saint-Victor had realized that uranium salts produce a radiation that is invisible to our eyes. Niepce de Saint-Victor knew Edmond Becquerel, Henri Becquerel's father, and in 1868 Edmond Becquerel published a book, La lumière, ses causes et ses effets
Henri Becquerel
–
Henri Becquerel, French physicist
Henri Becquerel
–
Becquerel in the lab
Henri Becquerel
–
Image of Becquerel's photographic plate which has been fogged by exposure to radiation from a uranium salt. The shadow of a metal Maltese Cross placed between the plate and the uranium salt is clearly visible.
47.
Pierre Curie
–
Pierre Curie was a French physicist, a pioneer in crystallography, magnetism, piezoelectricity and radioactivity. Born in Paris on 15 May 1859, Pierre was the son of Eugène Curie. He was educated by his father, a doctor, and in his early teens showed a strong aptitude for mathematics and geometry. When he was 16, he earned his math degree, and by the age of 18 he had completed the equivalent of a higher degree, but did not proceed immediately to a doctorate due to lack of money; instead he worked as a laboratory instructor. In 1880, Pierre and his older brother Jacques demonstrated that an electric potential was generated when crystals were compressed, i.e. piezoelectricity. To provide the accurate measurements needed for their work, Pierre created a highly sensitive instrument called the Curie scale, using weights, microscopic meter readers, and pneumatic dampeners. Also, to aid their work, they invented the Piezoelectric Quartz Electrometer. Shortly afterwards, in 1881, they demonstrated the reverse effect: that crystals could be made to deform when subject to an electric field. Almost all digital electronic circuits now rely on this in the form of crystal oscillators. Pierre Curie was introduced to Maria Skłodowska by their friend, physicist Józef Wierusz-Kowalski. Pierre took Maria into his laboratory as his student, and his admiration for her grew when he realized that she would not inhibit his research. He began to regard her as his muse. She refused his initial proposal, but finally agreed to marry him on 26 July 1895. "It would be a beautiful thing, a thing I dare not hope, if we could spend our life near each other hypnotized by our dreams: your patriotic dream, our humanitarian dream." – Pierre Curie to Marie Skłodowska. Prior to his famous studies on magnetism, he designed an extremely sensitive torsion balance for measuring magnetic coefficients; variations on this equipment were used by future workers in that area.
Pierre Curie studied ferromagnetism, paramagnetism, and diamagnetism for his doctoral thesis; the material constant in Curie's law is known as the Curie constant. He also discovered that ferromagnetic substances exhibit a critical temperature transition, and this is now known as the Curie temperature. The Curie temperature is used to study plate tectonics, treat hypothermia, and measure caffeine. Pierre also formulated what is now known as the Curie Dissymmetry Principle: a physical effect cannot have a dissymmetry absent from its efficient cause. For example, a random mixture of sand in zero gravity has no dissymmetry. Introduce a gravitational field, and there is a dissymmetry because of the direction of the field; the sand grains can then self-sort, with the density increasing with depth. But this new arrangement, with the directional arrangement of sand grains, reflects the dissymmetry of the gravitational field that causes the separation.
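Curie's law mentioned above has a simple quantitative form; the sketch below (with a hypothetical Curie constant C and Curie temperature Tc chosen purely for illustration) also includes the Curie-Weiss form, in which the susceptibility grows without bound as the temperature approaches the Curie temperature from above:

```python
def curie_law(C, T):
    # Curie's law for a paramagnet: magnetic susceptibility chi = C / T
    return C / T

def curie_weiss(C, T, Tc):
    # Curie-Weiss law: above the Curie temperature Tc, a ferromagnet
    # behaves like a paramagnet with chi = C / (T - Tc)
    if T <= Tc:
        raise ValueError("Curie-Weiss form holds only above Tc")
    return C / (T - Tc)

# Illustrative (hypothetical) values: C = 1.0 K, Tc = 300 K
chi_far = curie_weiss(1.0, 400.0, 300.0)   # well above Tc
chi_near = curie_weiss(1.0, 301.0, 300.0)  # just above Tc: much larger
```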
Pierre Curie
–
Pierre Curie
Pierre Curie
–
Propriétés magnétiques des corps à diverses températures (Curie's dissertation, 1895)
Pierre Curie
–
The crypt at the Panthéon in Paris
48.
Wilhelm Wien
–
He also formulated an expression for black-body radiation which is correct in the photon-gas limit. His arguments were based on the notion of adiabatic invariance, and were instrumental for the formulation of quantum mechanics. Wien received the 1911 Nobel Prize for his work on heat radiation. He was a cousin of Max Wien, inventor of the Wien bridge. Wien was born at Gaffken near Fischhausen, Province of Prussia, as the son of landowner Carl Wien. In 1866, his family moved to Drachstein near Rastenburg. In 1879, Wien went to school in Rastenburg, and from 1880 to 1882 he attended the city school of Heidelberg. In 1882 he attended the University of Göttingen and the University of Berlin. From 1896 to 1899, Wien lectured at RWTH Aachen University. In 1900 he went to the University of Würzburg and became successor of Wilhelm Conrad Röntgen. In 1896 Wien empirically determined a distribution law of blackbody radiation, later named after him: Wien's law. However, Wien's law was valid only at high frequencies; Planck corrected the theory and proposed what is now called Planck's law. In 1900, Wien assumed that the entire mass of matter is of electromagnetic origin and proposed the formula m = E/c² for the relation between electromagnetic mass and electromagnetic energy. While studying streams of ionized gas in 1898, Wien identified a positive particle equal in mass to the hydrogen atom; with this work, he laid the foundation of mass spectrometry. J. J. Thomson refined Wien's apparatus and conducted experiments in 1913; then, after work by Ernest Rutherford in 1919, Wien's particle was accepted and named the proton. During April 1913, Wien was a lecturer at Columbia University. In 1911, Wien was awarded the Nobel Prize in Physics "for his discoveries regarding the laws governing the radiation of heat".
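The relationship between Wien's approximation and Planck's corrected law can be checked numerically; a minimal sketch (the temperature and frequencies below are chosen purely for illustration):

```python
import math

h = 6.62607015e-34  # Planck constant, J*s
c = 2.99792458e8    # speed of light, m/s
k = 1.380649e-23    # Boltzmann constant, J/K

def planck(nu, T):
    # Planck's law: spectral radiance of a black body at frequency nu
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

def wien(nu, T):
    # Wien's approximation: replaces the Bose-Einstein factor with a
    # simple exponential; accurate only when h*nu >> k*T
    return (2 * h * nu**3 / c**2) * math.exp(-h * nu / (k * T))

T = 5000.0  # K, illustrative temperature
# High frequency (h*nu/kT ~ 48): Wien agrees with Planck almost exactly.
ratio_high = wien(5e15, T) / planck(5e15, T)
# Low frequency (h*nu/kT ~ 0.1): Wien badly underestimates the radiance.
ratio_low = wien(1e13, T) / planck(1e13, T)
```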
Wilhelm Wien
–
Wilhelm Wien
49.
Marie Curie
–
Marie Skłodowska Curie, born Maria Salomea Skłodowska, was a Polish and naturalized-French physicist and chemist who conducted pioneering research on radioactivity. She was the first woman to become a professor at the University of Paris. She was born in Warsaw, in what was then the Kingdom of Poland, part of the Russian Empire, and studied at Warsaw's clandestine Floating University, beginning her scientific training in Warsaw. In 1891, aged 24, she followed her older sister Bronisława to study in Paris. She shared the 1903 Nobel Prize in Physics with her husband Pierre Curie and with physicist Henri Becquerel, and she won the 1911 Nobel Prize in Chemistry. Her achievements included the development of the theory of radioactivity, techniques for isolating radioactive isotopes, and the discovery of two elements, polonium and radium. Under her direction, the world's first studies into the treatment of neoplasms were conducted, and she founded the Curie Institutes in Paris and in Warsaw, which remain major centres of medical research today. During World War I, she developed mobile radiography units to provide X-ray services to field hospitals. While a French citizen, Marie Skłodowska Curie never lost her sense of Polish identity: she taught her daughters the Polish language and took them on visits to Poland, and she named the first chemical element that she discovered – polonium, which she isolated in 1898 – after her native country. Maria Skłodowska was born in Warsaw, in the Russian partition of Poland, on 7 November 1867, the fifth and youngest child of well-known teachers Bronisława, née Boguska, and Władysław Skłodowski. The elder siblings of Maria were Zofia, Józef, Bronisława and Helena. On both the paternal and maternal sides, the family had lost their property and fortunes through patriotic involvements in Polish national uprisings aimed at restoring Poland's independence. This condemned the subsequent generation, including Maria, her sisters and her brother, to a difficult struggle to get ahead in life.
Maria's paternal grandfather, Józef Skłodowski, had been a teacher in Lublin, where he taught the young Bolesław Prus. Her father, Władysław Skłodowski, taught mathematics and physics, subjects that Maria was to pursue. After Russian authorities eliminated laboratory instruction from the Polish schools, he brought much of the laboratory equipment home and instructed his children in its use. Maria's mother, Bronisława, operated a prestigious Warsaw boarding school for girls; she died of tuberculosis in May 1878, when Maria was ten years old. Less than three years earlier, Maria's oldest sibling, Zofia, had died of typhus contracted from a boarder. Maria's father was an atheist, her mother a devout Catholic; the deaths of Maria's mother and sister caused her to give up Catholicism and become agnostic. When she was ten years old, Maria began attending the school of J. Sikorska; next she attended a gymnasium for girls. After a collapse, possibly due to depression, she spent the following year in the countryside with relatives of her father, and the next year with her father in Warsaw
Marie Curie
–
Marie Skłodowska Curie, c. 1920
Marie Curie
–
Birthplace on ulica Freta in Warsaw's "New Town" – now home to the Maria Skłodowska-Curie Museum
Marie Curie
–
Władysław Skłodowski with daughters (from left) Maria, Bronisława, Helena, 1890
Marie Curie
–
At a Warsaw laboratory, in 1890–91, Maria Skłodowska did her first scientific work
50.
Arnold Sommerfeld
–
He served as PhD supervisor for many Nobel Prize winners in physics and chemistry. He introduced the 2nd quantum number and the 4th quantum number, and he also introduced the fine-structure constant and pioneered X-ray wave theory. Sommerfeld studied mathematics and the physical sciences at the Albertina University of his native city, Königsberg. His dissertation advisor was the mathematician Ferdinand von Lindemann, and he also benefited from classes with the mathematicians Adolf Hurwitz and David Hilbert. His participation in the student fraternity Deutsche Burschenschaft resulted in a dueling scar on his face. He received his Ph.D. on October 24, 1891. After receiving his doctorate, Sommerfeld remained at Königsberg to work on his teaching diploma. He passed the exam in 1892 and then began a year of military service, which he completed in September 1893; for the next eight years he continued voluntary eight-week military service. With his turned-up moustache, his stocky build, his Prussian bearing, and the dueling scar on his face, he gave the impression of a colonel of the hussars. In October 1893, Sommerfeld went to the University of Göttingen, which was the center of mathematics in Germany. Sommerfeld's Habilitationsschrift was completed under Felix Klein in 1895, which allowed Sommerfeld to become a Privatdozent at Göttingen. As a Privatdozent, Sommerfeld lectured on a wide range of mathematical and mathematical physics topics. Lectures by Klein in 1895 and 1896 on rotating bodies led Klein and Sommerfeld to write a four-volume text, Die Theorie des Kreisels – a 13-year collaboration; the first two volumes were on theory, and the latter two were on applications in geophysics, astronomy, and technology. The association with Klein influenced Sommerfeld's turn of mind toward applied mathematics. While at Göttingen, Sommerfeld met Johanna Höpfner, daughter of Ernst Höpfner, curator at Göttingen.
In October 1897, Sommerfeld began the appointment to the Chair of Mathematics at the Bergakademie in Clausthal-Zellerfeld; this appointment provided enough income for him to eventually marry Johanna. At Klein's request, Sommerfeld took on the position of editor of Volume V of the Enzyklopädie der mathematischen Wissenschaften. In 1900, Sommerfeld started his appointment to the Chair of Applied Mechanics at the Königliche Technische Hochschule Aachen as extraordinarius professor, which was arranged through Klein's efforts. At Aachen, he developed the theory of hydrodynamics, which would retain his interest for a long time; later, at the University of Munich, Sommerfeld's students Ludwig Hopf and Werner Heisenberg would write their Ph.D. theses on this topic. From 1906 Sommerfeld established himself as professor of physics and director of the new Theoretical Physics Institute at the University of Munich. He was selected for these positions by Wilhelm Röntgen, Director of the Physics Institute at Munich
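The fine-structure constant that Sommerfeld introduced is dimensionless and can be computed from the other fundamental constants; a minimal sketch, using approximate CODATA values:

```python
import math

e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

# Sommerfeld's fine-structure constant: alpha = e^2 / (4*pi*eps0*hbar*c)
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)

print(f"alpha ~ {alpha:.6f} ~ 1/{1 / alpha:.2f}")  # ~ 1/137.04
```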
Arnold Sommerfeld
–
Arnold Sommerfeld, Stuttgart 1935
Arnold Sommerfeld
–
Arnold Johannes Wilhelm Sommerfeld (1868–1951)
51.
Hermann Weyl
–
Hermann Klaus Hugo Weyl, ForMemRS, was a German mathematician, theoretical physicist and philosopher. His research has had significance for theoretical physics as well as for purely mathematical disciplines, including number theory. He was one of the most influential mathematicians of the twentieth century. Weyl published technical and some general works on space, time, matter, philosophy, logic and symmetry. He was one of the first to conceive of combining general relativity with the laws of electromagnetism. While no mathematician of his generation aspired to the universalism of Henri Poincaré or Hilbert, Weyl came as close as anyone. Michael Atiyah, in particular, has commented that whenever he examined a mathematical topic, he found that Weyl had preceded him. Weyl was born in Elmshorn, a small town near Hamburg, in Germany, and attended the gymnasium Christianeum in Altona. From 1904 to 1908 he studied mathematics and physics in both Göttingen and Munich. His doctorate was awarded at the University of Göttingen under the supervision of David Hilbert, whom he greatly admired. In September 1913 in Göttingen, Weyl married Friederike Bertha Helene Joseph, who went by the name Helene. Helene was a daughter of Dr. Bruno Joseph, a physician who held the position of Sanitätsrat in Ribnitz-Damgarten, Germany. Helene was a philosopher and also a translator of Spanish literature into German, and it was through Helene's close connection with Husserl that Hermann became familiar with Husserl's thought. Hermann and Helene had two sons, Fritz Joachim Weyl and Michael Weyl, both of whom were born in Zürich, Switzerland. Helene died in Princeton, New Jersey, on September 5, 1948. A memorial service in her honor was held in Princeton on September 9, 1948; speakers included her son Fritz Joachim Weyl and mathematicians Oswald Veblen and Richard Courant.
In 1950 Hermann married the sculptress Ellen Bär, who was the widow of professor Richard Josef Bär of Zürich. Einstein had a lasting influence on Weyl, who became fascinated by mathematical physics. In 1921 Weyl met Erwin Schrödinger, a theoretical physicist who at the time was a professor at the University of Zürich; they were to become close friends over time. Weyl left Zürich in 1930 to become Hilbert's successor at Göttingen, leaving when the Nazis assumed power in 1933, particularly as his wife was Jewish. He had been offered one of the first faculty positions at the new Institute for Advanced Study in Princeton, New Jersey, but had initially declined; as the political situation in Germany grew worse, he changed his mind and accepted when offered the position again. He remained there until his retirement in 1951. Together with his second wife Ellen, he spent his time in Princeton and Zürich, and died from a heart attack on December 8, 1955, while living in Zürich. Hermann Weyl was cremated in Zurich on December 12, 1955. His cremains remained in private hands until 1999, at which time they were interred in an outdoor columbarium vault in the Princeton Cemetery, located at 29 Greenview Avenue, Princeton, New Jersey. The remains of Hermann's son Michael Weyl are interred next to Hermann's ashes in the same columbarium vault in the Princeton Cemetery
Hermann Weyl
–
Hermann Weyl
Hermann Weyl
–
Hermann Weyl (left) and Ernst Peschl (right).
52.
Arthur Compton
–
Compton's discovery of the inelastic scattering of X-rays by electrons was a sensational discovery at the time: the wave nature of light had been well demonstrated, but the idea that light had both wave and particle properties was not easily accepted. He is also known for his leadership of the Manhattan Project's Metallurgical Laboratory. In 1919, Compton was awarded one of the first two National Research Council Fellowships that allowed students to study abroad. He chose to go to Cambridge University's Cavendish Laboratory in England. Further research along these lines led to the discovery of the Compton effect. During World War II, Compton was a key figure in the Manhattan Project that developed the first nuclear weapons, and his reports were important in launching the project. Compton oversaw Enrico Fermi's creation of Chicago Pile-1, the first nuclear reactor; the Metallurgical Laboratory was also responsible for the design and operation of the X-10 Graphite Reactor at Oak Ridge, Tennessee. Plutonium began being produced in the Hanford Site reactors in 1945. After the war, Compton became Chancellor of Washington University in St. Louis. Arthur Compton was born on September 10, 1892, in Wooster, Ohio, the son of Elias and Otelia Catherine Compton; Elias was dean of the University of Wooster, which Arthur also attended. Arthur's eldest brother, Karl, who also attended Wooster, earned a PhD in physics from Princeton University in 1912; all three brothers were members of the Alpha Tau Omega fraternity. Compton was initially interested in astronomy, and took a photograph of Halley's Comet in 1910. Around 1913, he described an experiment in which an examination of the motion of water in a circular tube demonstrated the rotation of the earth. That year, he graduated from Wooster with a Bachelor of Science degree and entered Princeton, where he received his Master of Arts degree in 1914. Compton then studied for his PhD in physics under the supervision of Hereward L.
Cooke, writing his dissertation on "The intensity of X-ray reflection, and the distribution of the electrons in atoms". When Arthur Compton earned his PhD in 1916, he, Karl, and their brother Wilson became the first group of three brothers to earn PhDs; later, they would become the first such trio to simultaneously head American colleges. Their sister Mary married a missionary, C. Herbert Rice. In June 1916, Compton married Betty Charity McCloskey, a Wooster classmate and fellow graduate. They had two sons, Arthur Alan and John Joseph Compton. During World War I he developed aircraft instrumentation for the Signal Corps. In 1919, Compton was awarded one of the first two National Research Council Fellowships that allowed students to study abroad, and he chose to go to Cambridge University's Cavendish Laboratory in England. Working with George Paget Thomson, the son of J. J. Thomson, Compton studied the scattering and absorption of gamma rays; he observed that the scattered rays were more easily absorbed than the original source. Compton was greatly impressed by the Cavendish scientists, especially Ernest Rutherford, Charles Galton Darwin and Arthur Eddington. For a time Compton was a deacon at a Baptist church. "Science can have no quarrel," he said, "with a religion which postulates a God to whom men are as His children."
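The Compton effect mentioned above has a simple closed form: a photon scattered off an electron at angle θ has its wavelength increased by Δλ = (h / mₑc)(1 − cos θ). A minimal sketch:

```python
import math

h = 6.62607015e-34   # Planck constant, J*s
m_e = 9.1093837e-31  # electron rest mass, kg (approximate)
c = 2.99792458e8     # speed of light, m/s

def compton_shift(theta_rad):
    # Wavelength increase of a photon scattered off a free electron
    # at angle theta: (h / (m_e * c)) * (1 - cos(theta))
    return (h / (m_e * c)) * (1 - math.cos(theta_rad))

# At 90 degrees the shift equals the Compton wavelength of the electron,
# roughly 2.43e-12 m; forward scattering (theta = 0) gives no shift.
shift_90 = compton_shift(math.pi / 2)
```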
Arthur Compton
–
Arthur Compton in 1927
Arthur Compton
–
Arthur Compton and Werner Heisenberg in 1929 in Chicago
Arthur Compton
–
Arthur Holly Compton on the cover of Time Magazine on January 13, 1936, holding his cosmic ray detector
Arthur Compton
–
Compton at the University of Chicago in 1933 with graduate student Luis Alvarez next to his cosmic ray telescope.
53.
Johannes Diderik van der Waals
–
Johannes Diderik van der Waals was a Dutch theoretical physicist and thermodynamicist, famous for his work on an equation of state for gases and liquids. His name is associated with the van der Waals equation of state that describes the behavior of gases, as well as with van der Waals forces and van der Waals molecules. As James Clerk Maxwell said about Van der Waals, "there can be no doubt that the name of Van der Waals will soon be among the foremost in molecular science." In his 1873 thesis, van der Waals noted the non-ideality of real gases. Spearheaded by Ernst Mach and Wilhelm Ostwald, a strong philosophical current that denied the existence of molecules arose towards the end of the 19th century; molecular existence was considered unproven and the molecular hypothesis unnecessary. At the time van der Waals's thesis was written, the molecular structure of fluids had not been accepted by most physicists, and liquid and vapor were often considered chemically distinct. But Van der Waals's work affirmed the reality of molecules and allowed an assessment of their size: by comparing his equation of state with experimental data, Van der Waals was able to obtain estimates for the actual size of molecules and the strength of their mutual attraction. The effect of Van der Waals's work on physics in the 20th century was direct, through the parameters characterizing molecular size and attraction that he introduced in constructing his equation of state. With the help of van der Waals's equation of state, the critical-point parameters of gases could be accurately predicted from thermodynamic measurements made at much higher temperatures. Nitrogen, oxygen, hydrogen, and helium subsequently succumbed to liquefaction. Heike Kamerlingh Onnes was significantly influenced by the pioneering work of van der Waals; in 1908, Onnes became the winner of the race to liquefy helium.
A largely self-taught man in mathematics and physics, van der Waals originally worked as a school teacher. He became the first physics professor of the University of Amsterdam when, in 1877, the old Athenaeum was upgraded to a Municipal University. Van der Waals won the 1910 Nobel Prize in Physics for his work on the equation of state for gases. Johannes Diderik van der Waals was born on 23 November 1837 in Leiden in the Netherlands, the eldest of ten children born to Jacobus van der Waals. His father was a carpenter in Leiden. As was usual for working-class children in the 19th century, he did not go to the kind of secondary school that would have given him the right to enter university; instead he went to a school of "advanced primary education", which he finished at the age of fifteen. He then became a teacher's apprentice in an elementary school, and between 1856 and 1861 he followed courses and gained the qualifications to become a primary school teacher. However, the University of Leiden had a provision that enabled students to take up to four courses a year
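The prediction of critical-point parameters mentioned above follows directly from the van der Waals equation (P + a/V²)(V − b) = RT in molar form: the critical point lies at V_c = 3b, T_c = 8a/(27Rb), P_c = a/(27b²). A minimal sketch, using approximate literature values of a and b for carbon dioxide (assumed here purely for illustration):

```python
R = 8.314  # molar gas constant, J mol^-1 K^-1

def critical_point(a, b):
    # Critical-point parameters implied by the van der Waals equation
    # of state (P + a/V^2)(V - b) = R*T, in molar form.
    Vc = 3 * b                 # critical molar volume, m^3/mol
    Tc = 8 * a / (27 * R * b)  # critical temperature, K
    Pc = a / (27 * b**2)       # critical pressure, Pa
    return Tc, Pc, Vc

# Approximate literature values for CO2: a ~ 0.364 J m^3 mol^-2, b ~ 4.27e-5 m^3/mol
Tc, Pc, Vc = critical_point(0.364, 4.27e-5)
# The measured critical point of CO2 (~304 K, ~7.38 MPa) comes out close.
```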
Johannes Diderik van der Waals
–
Johannes van der Waals
54.
Freeman Dyson
–
Freeman John Dyson FRS is an English-born American theoretical physicist and mathematician, known for his work in quantum electrodynamics, solid-state physics, astronomy and nuclear engineering. He is professor emeritus at the Institute for Advanced Study and a Visitor of Ralston College. Born on 15 December 1923, at Crowthorne in Berkshire, Dyson is the son of the English composer George Dyson, who was later knighted. His mother had a law degree, and after Dyson was born she worked as a social worker. At the age of five he calculated the number of atoms in the sun; as a child, he showed an interest in numbers and in the solar system. Politically, Dyson says he was brought up as a socialist. From 1936 to 1941, Dyson was a Scholar at Winchester College, where his father was Director of Music. After the war, Dyson was admitted to Trinity College, Cambridge; from 1946 to 1949, he was a Fellow of his college, occupying rooms just below those of the philosopher Ludwig Wittgenstein, who resigned his professorship in 1947. In 1947, he published two papers in number theory. That same year, Dyson moved to the United States as a Commonwealth Fellow to earn a physics doctorate with Hans Bethe at Cornell University. Within a week, however, he had made the acquaintance of Richard Feynman; the budding English physicist recognized the brilliance of the flamboyant American and attached himself as quickly as possible. He then moved to the Institute for Advanced Study, before returning to England. He was the first person after their creator to appreciate the power of Feynman diagrams, and his paper written in 1948 and published in 1949 was the first to make use of them. He argued in that paper that Feynman diagrams were not just a computational tool, but a physical theory.
Dyson's paper and also his lectures presented Feynman's theories of QED in a form that other physicists could understand; Robert Oppenheimer, in particular, was persuaded by Dyson that Feynman's new theory was as valid as Schwinger's and Tomonaga's. Oppenheimer rewarded Dyson with an appointment at the Institute for Advanced Study, "for proving me wrong," in Oppenheimer's words. Also in 1949, in related work, Dyson invented the Dyson series; it was this paper that inspired John Ward to derive his celebrated Ward identity. In 1957, he became a citizen of the United States. One reason he gave decades later is that his children born in the US had not been recognized as British subjects. From 1957 to 1961, he worked on the Orion Project, which explored the possibility of space flight using nuclear pulse propulsion. A prototype was demonstrated using conventional explosives, but the 1963 Partial Test Ban Treaty permitted only underground nuclear testing, so the project was abandoned. In 1958, he led the design team for the TRIGA research reactor. In condensed matter physics, Dyson analysed the phase transition of the Ising model in one dimension, and he also worked on a variety of topics in mathematics, such as topology, analysis, number theory and random matrices.
Freeman Dyson
–
At the
Long Now Seminar in San Francisco, 2005
55.
Henry Moseley
–
Henry Moseley's fame stems from his development of Moseley's law in X-ray spectra. Moseley's law justified many concepts in chemistry by sorting the chemical elements of the periodic table in a logical order based on their physics, and this remains the accepted model today. When World War I broke out in Western Europe, Moseley left his research work at the University of Oxford behind to volunteer for the Royal Engineers of the British Army. Moseley was assigned to the force of British Empire soldiers that invaded the region of Gallipoli, Turkey, in April 1915, and he was shot and killed during the Battle of Gallipoli on 10 August 1915, at the age of 27. Experts have speculated that Moseley could have been awarded the Nobel Prize in Physics in 1916, had he not been killed; as a consequence of his death, the British government instituted new policies for the eligibility of scientists for combat duty. Henry G. J. Moseley, known to his friends as Harry, was born in Weymouth in Dorset in 1887. Moseley's mother was Anabel Gwyn Jeffreys Moseley, the daughter of the Welsh biologist and conchologist John Gwyn Jeffreys. Henry Moseley had been a promising schoolboy at Summer Fields School, and in 1906 he won the chemistry and physics prizes at Eton. In 1906, Moseley entered Trinity College of the University of Oxford, where he earned his bachelor's degree. Immediately after graduation from Oxford in 1910, Moseley became a demonstrator in physics at the University of Manchester under the supervision of Sir Ernest Rutherford. He declined a fellowship offered by Rutherford, preferring to move back to Oxford in November 1913. In 1913, Moseley observed and measured the X-ray spectra of chemical elements, found by the method of diffraction through crystals; this was a pioneering use of X-ray spectroscopy in physics. Moseley discovered a systematic mathematical relationship between the wavelengths of the X-rays produced and the atomic numbers of the metals that were used as the targets in X-ray tubes.
This has become known as Moseley's law. In his invention of the periodic table of the elements, Mendeleev had interchanged the orders of a few pairs of elements in order to put them in more appropriate places. In addition, Moseley showed that there were gaps in the atomic number sequence at numbers 43, 61, 72 and 75. Nothing about these four elements was known in Moseley's lifetime; his experiments confirmed these predictions by showing exactly what the missing atomic numbers were. Moseley was also able to demonstrate that the lanthanide series, i.e. lanthanum through lutetium, must have exactly 15 members, no more and no fewer. The number of elements in the lanthanides had been a question that was far from settled by the chemists of the early 20th century.
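Moseley's relationship can be written √ν = k₁(Z − k₂); for Kα lines it is well approximated by ν ≈ (3/4)·Ry·(Z − 1)², where Ry ≈ 3.29 × 10¹⁵ Hz is the Rydberg frequency. A quick illustrative check (the element data are standard reference values, not figures from this text):

```python
# Moseley's law for K-alpha X-ray lines: the emitted frequency scales as
#   nu = (3/4) * Ry * (Z - 1)**2
# where Ry ~ 3.29e15 Hz (Rydberg frequency) and Z is the atomic number.

RY_HZ = 3.29e15   # Rydberg frequency, Hz
C = 2.998e8       # speed of light, m/s

def k_alpha_wavelength(Z):
    """Approximate K-alpha wavelength (metres) for atomic number Z."""
    nu = 0.75 * RY_HZ * (Z - 1) ** 2
    return C / nu

# Copper (Z = 29): the measured Cu K-alpha line is ~1.54 angstrom
print(f"Cu K-alpha: {k_alpha_wavelength(29) * 1e10:.2f} angstrom")
```

The monotonic dependence on Z is the point of the law: sorting elements by their K-line frequencies sorts them by atomic number, which is how Moseley located the gaps at 43, 61, 72 and 75.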
Henry Moseley
–
Henry G. J. Moseley in the
Balliol-Trinity Laboratories, Oxford (1910).
Henry Moseley
–
Blue plaque erected by the
Royal Society of Chemistry on the
Townsend Building of the
Clarendon Laboratory at Oxford in 2007, commemorating Moseley's early 20th-century research work on X-rays emitted by elements.
56.
David Hilbert
–
David Hilbert was a German mathematician, recognized as one of the most influential and universal mathematicians of the 19th and early 20th centuries. Hilbert discovered and developed a broad range of fundamental ideas in many areas, including invariant theory and the axiomatization of geometry. He also formulated the theory of Hilbert spaces, one of the foundations of functional analysis, and he adopted and warmly defended Georg Cantor's set theory and transfinite numbers. A famous example of his leadership in mathematics is his 1900 presentation of a collection of problems that set the course for much of the mathematical research of the 20th century. Hilbert and his students contributed significantly to establishing rigor and developed important tools used in mathematical physics; Hilbert is also known as one of the founders of proof theory and mathematical logic. In late 1872, Hilbert entered the Friedrichskolleg Gymnasium, but, after an unhappy period, he transferred to, and graduated from, the more science-oriented Wilhelm Gymnasium. Upon graduation, in autumn 1880, Hilbert enrolled at the University of Königsberg. In early 1882, Hermann Minkowski returned to Königsberg and entered the university; Hilbert knew his luck when he saw it, and in spite of his father's disapproval, he soon became friends with the shy, gifted Minkowski. In 1884, Adolf Hurwitz arrived from Göttingen as an Extraordinarius. Hilbert obtained his doctorate in 1885, with a dissertation, written under Ferdinand von Lindemann, titled Über invariante Eigenschaften spezieller binärer Formen, insbesondere der Kugelfunktionen. Hilbert remained at the University of Königsberg as a Privatdozent from 1886 to 1895; in 1895, as a result of intervention on his behalf by Felix Klein, he obtained the position of Professor of Mathematics at the University of Göttingen. During the Klein and Hilbert years, Göttingen became the preeminent institution in the mathematical world, and he remained there for the rest of his life.
Among Hilbert's students were Hermann Weyl, chess champion Emanuel Lasker and Ernst Zermelo; John von Neumann was his assistant. At the University of Göttingen, Hilbert was surrounded by a circle of some of the most important mathematicians of the 20th century, such as Emmy Noether. Between 1902 and 1939 Hilbert was editor of the Mathematische Annalen. Told that one of his students had dropped out of mathematics to study poetry, he is said to have replied, "Good, he did not have enough imagination to become a mathematician." Hilbert lived to see the Nazis purge many of the prominent faculty members at the University of Göttingen in 1933; those forced out included Hermann Weyl, Emmy Noether and Edmund Landau. One who had to leave Germany, Paul Bernays, had collaborated with Hilbert in mathematical logic and co-authored with him the Grundlagen der Mathematik, a sequel to the Hilbert–Ackermann book Principles of Mathematical Logic from 1928. Hermann Weyl's successor was Helmut Hasse. About a year later, Hilbert attended a banquet and was seated next to the new Minister of Education, Bernhard Rust.
David Hilbert
–
David Hilbert (1912)
David Hilbert
–
The Mathematical Institute in Göttingen. Its new building, constructed with funds from the
Rockefeller Foundation, was opened by Hilbert and Courant in 1930.
David Hilbert
–
Hilbert's tomb: Wir müssen wissen Wir werden wissen
57.
Roger Penrose
–
Sir Roger Penrose OM FRS is an English mathematical physicist, mathematician and philosopher of science. He is the Emeritus Rouse Ball Professor of Mathematics at the Mathematical Institute of the University of Oxford. Penrose is known for his work in mathematical physics, in particular for his contributions to general relativity and cosmology, and he has received numerous prizes and awards, including the 1988 Wolf Prize for physics. Penrose told a Russian audience that his grandmother had left St. Petersburg in the late 1880s. His uncle was the artist Roland Penrose, whose son with photographer Lee Miller is Antony Penrose; Penrose is also the brother of physicist Oliver Penrose and of chess Grandmaster Jonathan Penrose. Penrose attended University College School and University College, London, where he graduated with a first class degree in mathematics. In 1955, while still a student, Penrose reintroduced the E. H. Moore generalised matrix inverse, also known as the Moore–Penrose inverse, after it had been reinvented by Arne Bjerhammar in 1951. He devised and popularised the Penrose triangle in the 1950s, describing it as "impossibility in its purest form", and exchanged material with the artist M. C. Escher, whose earlier depictions of impossible objects partly inspired it. Escher's Waterfall, and Ascending and Descending, were in turn inspired by Penrose. As reviewer Manjit Kumar puts it, as a student in 1954 Penrose came across an exhibition of Escher's work, and soon he was trying to conjure up impossible figures of his own; he discovered the tribar, a triangle that looks like a real, solid three-dimensional object, but isn't. Together with his father, a physicist and mathematician, Penrose went on to design a staircase that simultaneously loops up and down. An article followed and a copy was sent to Escher; completing a cyclical flow of creativity, the Dutch master of illusions was inspired to produce his two masterpieces. One approach to this issue was by the use of perturbation theory.
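The Moore–Penrose inverse mentioned above is what standard numerical libraries compute for least-squares problems: A⁺b gives the minimum-norm least-squares solution of Ax = b. A minimal sketch using NumPy, with an arbitrary illustrative matrix:

```python
import numpy as np

# The Moore-Penrose pseudoinverse A+ generalises the matrix inverse to
# rectangular or singular matrices; x = A+ @ b is the minimum-norm
# least-squares solution of A x = b.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])     # 3x2: an overdetermined system
b = np.array([1.0, 2.0, 3.0])

A_pinv = np.linalg.pinv(A)     # 2x3 pseudoinverse
x = A_pinv @ b                 # least-squares solution

# One of the four Penrose conditions characterising A+: A A+ A = A
assert np.allclose(A @ A_pinv @ A, A)
print(x)  # -> [1. 2.]
```

Here A has full column rank, so A⁺A is the 2×2 identity and the solution [1, 2] is the ordinary least-squares fit.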
The importance of Penrose's epoch-making 1965 paper "Gravitational collapse and space-time singularities" lay not only in its result. Following up his weak cosmic censorship hypothesis, Penrose went on, in 1979, to formulate a stronger version called the strong censorship hypothesis; together with the BKL conjecture and issues of stability, settling the censorship conjectures is one of the most important outstanding problems in general relativity. Also from 1979 dates Penrose's influential Weyl curvature hypothesis on the initial conditions of the observable part of the universe. Penrose and James Terrell independently realised that objects travelling near the speed of light appear to undergo a peculiar skewing or rotation; this effect has come to be called the Terrell rotation or Penrose–Terrell rotation. In 1967, Penrose invented twistor theory, which maps geometric objects in Minkowski space into the 4-dimensional complex space with the metric signature (2,2). Penrose is also known for his discovery of Penrose tilings, which are formed from two tiles that can only tile the plane nonperiodically; he developed these ideas based on the article Deux types fondamentaux de distribution statistique by the Czech geographer and demographer Jaromír Korčák. In 1984, such patterns were observed in the arrangement of atoms in quasicrystals.
Roger Penrose
–
Roger Penrose, 2005
Roger Penrose
–
Predicted view from outside the horizon of a black hole lit by a thin accretion disc
Roger Penrose
–
Oil painting by Urs Schmid (1995) of a
Penrose tiling using fat and thin
rhombi.
Roger Penrose
–
Prof. Penrose at a conference.
58.
Richard Feynman
–
For his contributions to the development of quantum electrodynamics, Feynman, jointly with Julian Schwinger and Shin'ichirō Tomonaga, received the Nobel Prize in Physics in 1965. Feynman developed a widely used pictorial representation scheme for the mathematical expressions governing the behavior of subatomic particles. During his lifetime, Feynman became one of the best-known scientists in the world; in a 1999 poll of 130 leading physicists worldwide by the British journal Physics World he was ranked as one of the ten greatest physicists of all time. Along with his work in physics, Feynman has been credited with pioneering the field of quantum computing. He held the Richard C. Tolman professorship in physics at the California Institute of Technology. His parents were not religious, and by his youth Feynman described himself as an avowed atheist. Like Albert Einstein and Edward Teller, Feynman was a late talker, and by his third birthday had yet to utter a single word. He retained a Brooklyn accent as an adult, an accent thick enough to be perceived as an affectation or exaggeration, so much so that his good friends Wolfgang Pauli and Hans Bethe once commented that Feynman spoke like a bum. The young Feynman was heavily influenced by his father, who encouraged him to ask questions to challenge orthodox thinking; from his mother, he gained the sense of humor that he had throughout his life. As a child, he had a talent for engineering and maintained a laboratory in his home; while still in school, he created a home burglar alarm system while his parents were out for the day running errands. When Richard was five years old, his mother gave birth to a brother, Henry Philips. Four years later, Richard's sister Joan was born, and the family moved to Far Rockaway, Queens. Though separated by nine years, Joan and Richard were close, and they both shared a curiosity about the world; their mother, however, thought that women did not have the cranial capacity to comprehend such things.
Despite their mother's disapproval of Joan's desire to study astronomy, Richard encouraged his sister to explore the universe, and Joan eventually became an astrophysicist specializing in interactions between the Earth and the solar wind. Feynman attended Far Rockaway High School, a school in Far Rockaway, Queens; upon starting high school, he was quickly promoted into a higher math class. A high-school-administered IQ test estimated his IQ at 125, high but merely respectable according to biographer James Gleick; his sister Joan did better, allowing her to claim that she was smarter. Years later he declined to join Mensa International, saying that his IQ was too low. Physicist Steve Hsu stated of the test, "I suspect that this test emphasized verbal, as opposed to mathematical, ability." Feynman received the highest score in the United States by a wide margin on the notoriously difficult Putnam mathematics competition exam.
Richard Feynman
–
Richard Feynman
Richard Feynman
–
Feynman (center) with
Robert Oppenheimer (right) relaxing at a
Los Alamos social function during the
Manhattan Project
Richard Feynman
–
The Feynman section at the
Caltech bookstore
Richard Feynman
–
Mention of Feynman's prize on the monument at the
American Museum of Natural History in New York City. Because the monument is dedicated to American Laureates,
Tomonaga is not mentioned.
59.
Abdus Salam
–
Mohammad Abdus Salam NI, SPk, KBE, was a Pakistani theoretical physicist. A major figure in 20th-century theoretical physics, he shared the 1979 Nobel Prize in Physics with Sheldon Glashow and Steven Weinberg. He was the first Pakistani and first Muslim to receive a Nobel Prize in science, and the second from an Islamic country to receive any Nobel Prize. He was the founding director of the Space and Upper Atmosphere Research Commission. In 1998, following the country's nuclear tests, the Government of Pakistan issued a commemorative stamp as part of its "Scientists of Pakistan" series. Salam made important contributions to quantum field theory and to the advancement of mathematics at Imperial College London, and he contributed heavily to the rise of Pakistani physics within the international physics community; even until shortly before his death, Salam continued to contribute to physics. Abdus Salam was born to Chaudhry Muhammad Hussain and Hajira Hussain, into an Ahmadi Muslim Punjabi family. His grandfather, Gul Muhammad, was a religious scholar as well as a physician; Salam's father was an education officer in the Department of Education of Punjab State in a poor farming district. Salam very early established a reputation throughout the Punjab, and later at the University of Cambridge, for outstanding brilliance. At age 14, Salam scored the highest marks ever recorded for the matriculation examination at the Punjab University, and he won a scholarship to the Government College University of Lahore. Salam was a versatile scholar, interested in Urdu and English literature, in which he excelled, but he soon picked up mathematics as his concentration. He received his BA in Mathematics in 1944. His father wanted him to join the Indian Civil Service; in those days, the Indian Civil Service was the highest aspiration for young university graduates. Respecting his father's wish, Salam tried for the Indian Railways but did not qualify for the service, as he failed the medical optical tests because he had worn spectacles since an early age.
Indian Railways therefore rejected Abdus Salam's job application. While in Lahore, Abdus Salam went on to attend the graduate school of Government College University, receiving his MA in Mathematics from the Government College University in 1946. That same year, he was awarded a scholarship to St John's College, Cambridge, where he completed a BA degree with Double First-Class Honours in Mathematics and Physics in 1949. In 1950, he received the Smith's Prize from Cambridge University for the most outstanding pre-doctoral contribution to physics. Salam returned to Jhang, Punjab, renewed his scholarship, and went back to the United Kingdom to do his doctorate; he obtained a PhD degree in Theoretical Physics from the Cavendish Laboratory at Cambridge. His doctoral thesis, titled "Developments in quantum theory of fields", contained comprehensive and fundamental work in quantum electrodynamics; by the time it was published in 1951, it had already gained him an international reputation and the Adams Prize.
Abdus Salam
–
Abdus Salam in 1987
Abdus Salam
–
Abdus Salam lectures on G.U.T. at the University of Chicago's Oriental Institute
Abdus Salam
–
The defaced grave of Abdus Salam at Rabwah, Punjab
Abdus Salam
–
A commemorative stamp to honour the services of Dr. Abdus Salam.
60.
J. J. Thomson
–
J. J. Thomson was elected a fellow of the Royal Society of London and appointed to the Cavendish Professorship of Experimental Physics at the University of Cambridge's Cavendish Laboratory in 1884. Thomson is also credited with finding the first evidence for isotopes of an element in 1913. His experiments to determine the nature of positively charged particles, with Francis William Aston, were the first use of mass spectrometry. Thomson was awarded the 1906 Nobel Prize in Physics for his work on the conduction of electricity in gases. Seven of his students, including his son George Paget Thomson, went on to win Nobel Prizes; his record is comparable only to that of the German physicist Arnold Sommerfeld. Joseph John Thomson was born 18 December 1856 in Cheetham Hill, Manchester, Lancashire. His mother, Emma Swindells, came from a local textile family; his father, Joseph James Thomson, ran a bookshop founded by a great-grandfather. He had a brother, Frederick Vernon Thomson, two years younger than he was. J. J. Thomson was a devout Anglican. His early education was in small private schools, where he demonstrated outstanding talent and interest in science; in 1870 he was admitted to Owens College at the unusually young age of 14. His parents had planned to enroll him as an apprentice engineer to Sharp, Stewart & Co, a locomotive manufacturer. He moved on to Trinity College, Cambridge, in 1876, and in 1880 he obtained his BA in mathematics. He applied for and became a Fellow of Trinity College in 1881, and received his MA in 1883. Thomson was elected a Fellow of the Royal Society on 12 June 1884, and on 22 December 1884 he was chosen to become Cavendish Professor of Physics at the University of Cambridge. The appointment caused considerable surprise, given that candidates such as Richard Glazebrook were older; Thomson was known for his work as a mathematician, where he was recognized as an exceptional talent.
In 1890, Thomson married Rose Elisabeth Paget, daughter of Sir George Edward Paget, KCB. They had one son, George Paget Thomson, and one daughter, Joan Paget Thomson. He was awarded the Nobel Prize in 1906, in recognition of the merits of his theoretical and experimental investigations on the conduction of electricity by gases. He was knighted in 1908 and appointed to the Order of Merit in 1912, and in 1914 he gave the Romanes Lecture in Oxford on "The atomic theory".
J. J. Thomson
–
Sir Joseph John Thomson
J. J. Thomson
–
In the bottom right corner of this photographic plate are markings for the two isotopes of neon: neon-20 and neon-22.
J. J. Thomson
–
Plaque commemorating J. J. Thomson's discovery of the electron outside the old Cavendish Laboratory in Cambridge
61.
C. V. Raman
–
C. V. Raman discovered that when light traverses a transparent material, some of the deflected light changes in wavelength. This phenomenon, subsequently known as Raman scattering, results from the Raman effect. In 1954, India honoured him with its highest civilian award, the Bharat Ratna. Raman's father initially taught in a school in Thiruvanaikaval, then became a lecturer of mathematics and physics in Mrs. A. V. Narasimha Rao College, Visakhapatnam, in the Indian state of Andhra Pradesh, and later joined Presidency College in Madras. At an early age, Raman moved to the city of Visakhapatnam. Raman passed his matriculation examination at the age of 11, and he passed his F.A. examination with a scholarship at the age of 13. In 1902, Raman joined Presidency College in Madras, where his father was a lecturer in mathematics and physics. In 1904 he passed his Bachelor of Arts examination of the University of Madras; he stood first and won the gold medal in physics. In 1907 he gained his Master of Sciences degree with the highest distinctions from the University of Madras. In 1917, Raman resigned from his government service after he was appointed the first Palit Professor of Physics at the University of Calcutta. At the same time, he continued doing research at the Indian Association for the Cultivation of Science (IACS), Calcutta; Raman used to refer to this period as the golden era of his career, and many students gathered around him at the IACS and the University of Calcutta. On 28 February 1928, Raman led experiments at the IACS with collaborators, including K. S. Krishnan, on the scattering of light, when he discovered what now is called the Raman effect. A detailed account of this period is reported in the biography by G. Venkatraman. It was instantly clear that this discovery was of huge value: it gave further proof of the quantum nature of light. Raman had a complicated professional relationship with K. S. Krishnan, who surprisingly did not share the award.
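The wavelength change described above is conventionally quoted as a Raman shift in wavenumbers, Δν̃ = 1/λ_incident − 1/λ_scattered. A small illustrative sketch; the 532 nm laser line and the scattered wavelength are example values chosen here, not figures from the text:

```python
# Raman shift in wavenumbers (cm^-1) from incident and scattered wavelengths:
#   delta_nu = 1/lambda_incident - 1/lambda_scattered
# Positive values correspond to Stokes scattering (scattered light is
# shifted to longer wavelength, having lost energy to the material).

def raman_shift_cm1(lambda_in_nm, lambda_out_nm):
    """Raman shift in cm^-1 for wavelengths given in nanometres."""
    to_cm = 1e-7  # 1 nm = 1e-7 cm
    return 1.0 / (lambda_in_nm * to_cm) - 1.0 / (lambda_out_nm * to_cm)

# Example: a 532 nm laser line Stokes-shifted to 650 nm
print(f"{raman_shift_cm1(532.0, 650.0):.0f} cm^-1")
```

Working in wavenumbers makes the shift independent of the excitation line, which is why Raman spectra are reported this way.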
Raman spectroscopy came to be based on this phenomenon, and Ernest Rutherford referred to it in his address to the Royal Society in 1929. Raman was president of the 16th session of the Indian Science Congress in 1929; he was conferred a knighthood, and received medals and honorary doctorates from various universities. Raman was confident of winning the Nobel Prize in Physics as well, and he did eventually win the 1930 Nobel Prize in Physics for his work on the scattering of light and for the discovery of the Raman effect. He was the first Asian and first non-white person to receive any Nobel Prize in the sciences; before him, Rabindranath Tagore had received the Nobel Prize for Literature in 1913. Raman and Suri Bhagavantam discovered the quantum photon spin in 1932. He also held the position of permanent visiting professor at BHU. During his tenure at IISc, he recruited the talented electrical engineering student G. N. Ramachandran, and Raman also worked on the acoustics of musical instruments.
C. V. Raman
–
Sir Chandrasekhara Raman
FRS
C. V. Raman
–
Bust of Chandrasekhara Venkata Raman which is placed in the garden of Birla Industrial & Technological Museum.
62.
James Chadwick
–
Sir James Chadwick, CH, FRS was an English physicist who was awarded the 1935 Nobel Prize in Physics for his discovery of the neutron in 1932. In 1941, he wrote the final draft of the MAUD Report. He was the head of the British team that worked on the Manhattan Project during the Second World War, and he was knighted in England in 1945 for his achievements in physics. Chadwick graduated from the Victoria University of Manchester in 1911, where he studied under Ernest Rutherford; at Manchester, he continued to study under Rutherford until he was awarded his MSc in 1913. That same year, Chadwick was awarded an 1851 Research Fellowship from the Royal Commission for the Exhibition of 1851, and he elected to study beta radiation under Hans Geiger in Berlin. Using Geiger's recently developed Geiger counter, Chadwick was able to demonstrate that beta radiation produced a continuous spectrum. Still in Germany when the First World War broke out in Europe, he spent the next four years in the Ruhleben internment camp. Chadwick followed his discovery of the neutron by measuring its mass, and he anticipated that neutrons would become a major weapon in the fight against cancer. During the Second World War, Chadwick carried out research as part of the Tube Alloys project to build an atomic bomb, while his Liverpool laboratory and its environs were harassed by Luftwaffe bombing. When the Quebec Agreement merged his project with the American Manhattan Project, he became part of the British Mission, and he surprised everyone by earning the almost-complete trust of project director Leslie R. Groves. For his efforts, Chadwick received a knighthood in the New Year Honours on 1 January 1945. In July 1945, he viewed the Trinity nuclear test; after this, he served as the British scientific advisor to the United Nations Atomic Energy Commission.
Uncomfortable with the trend toward Big Science, Chadwick became the Master of Gonville and Caius College, Cambridge, in 1948. James Chadwick was born in Bollington, Cheshire, on 20 October 1891, the first child of John Joseph Chadwick, a cotton spinner, and Anne Mary Knowles, a domestic servant. He was named James after his paternal grandfather. In 1895, his parents moved to Manchester, leaving him in the care of his maternal grandparents. He attended the Central Grammar School for Boys in Manchester; he now had two younger brothers, Harry and Hubert, while a sister had died in infancy. At the age of 16, he sat two examinations for university scholarships and won both of them. Chadwick chose to attend the Victoria University of Manchester, which he entered in 1908; he meant to study mathematics, but enrolled in physics by mistake. Like most students, he lived at home, walking the 4 miles to the university and back each day. At the end of his first year, he was awarded a Heginbottom Scholarship to study physics. Rutherford set him the problem of comparing the radioactivity of two sources; the idea was that they could be measured in terms of the activity of 1 gram of radium, a unit of measurement which would become known as the curie. Rutherford's suggested approach was unworkable, something Chadwick knew but was afraid to tell Rutherford, so Chadwick pressed on. The results became Chadwick's first paper, which, co-authored with Rutherford, was published in 1912.
James Chadwick
–
Sir James Chadwick
James Chadwick
–
The Cavendish Laboratory was the home of some of the great discoveries in physics. It was founded in 1874 by the
Duke of Devonshire (Cavendish was his family name), and its first professor was
James Clerk Maxwell.
James Chadwick
–
Sir Ernest Rutherford's laboratory
James Chadwick
–
"
Red brick "
Victoria Building at the
University of Liverpool
63.
Paradigm shift
–
A paradigm shift, a concept identified by the American physicist and philosopher Thomas Kuhn, is a fundamental change in the basic concepts and experimental practices of a scientific discipline. Kuhn contrasted these shifts, which characterize a scientific revolution, with the activity of normal science; in this context, the word "paradigm" is used in its original meaning of "example". The nature of scientific revolutions has been studied by modern philosophy since Immanuel Kant used the phrase in the preface to his Critique of Pure Reason, referring to Greek mathematics and Newtonian physics. In the 20th century, new crises in the basic concepts of mathematics and physics arose, and it was against this background that Kuhn published his work. Kuhn presented his notion of a paradigm shift in his influential book The Structure of Scientific Revolutions. As one commentator summarizes, Kuhn acknowledges having used the term "paradigm" in two different meanings. In the second sense, the paradigm is a single element of a whole, say for instance Newton's Principia, which, acting as a common model or an example, stands for the rules and thus defines a coherent tradition of investigation. Thus the question for Kuhn is to investigate, by means of the paradigm, what makes possible the constitution of what he calls normal science, that is to say, the science which can decide whether a certain problem will be considered scientific or not. This is precisely the meaning of the term paradigm that Kuhn considered the most new and profound. Such an epistemological paradigm shift has been called a revolution by epistemologists. The paradigm, in Kuhn's view, is not simply the current theory, but the entire worldview in which it exists, and this is based on features of the landscape of knowledge that scientists can identify around them.
There are anomalies for all paradigms, Kuhn maintained, that are brushed away as acceptable levels of error, or simply ignored; rather, according to Kuhn, anomalies have various levels of significance to the practitioners of science at the time. When enough significant anomalies have accrued against a current paradigm, the discipline is thrown into a state of crisis. During this crisis, new ideas, perhaps ones previously discarded, are tried. Eventually a new paradigm is formed, which gains its own new followers, and an intellectual battle takes place between the followers of the new paradigm and the hold-outs of the old paradigm. Some found Arthur Eddington's photographs of light bending around the sun to be compelling, while some questioned their accuracy. After a given discipline has changed from one paradigm to another, this is called, in Kuhn's terminology, a scientific revolution or a paradigm shift. If this is correct, Kuhn's claims must be taken in a weaker sense than they often are. At the turn of the 20th century, physics seemed to be a discipline filling in the last few details of a largely worked-out system. In 1900, Lord Kelvin famously told an assemblage of physicists at the British Association for the Advancement of Science that all that remains is more and more precise measurement.
Paradigm shift
–
Kuhn used the duck-rabbit
optical illusion to demonstrate the way in which a paradigm shift could cause one to see the same information in an entirely different way.
64.
Absolute zero
–
Absolute zero is the lower limit of the thermodynamic temperature scale, a state at which the enthalpy and entropy of a cooled ideal gas reach their minimum value, taken as 0. The corresponding Kelvin and Rankine temperature scales set their zero points at absolute zero by definition. In the quantum-mechanical description, matter at absolute zero is in its ground state, the point of lowest internal energy; a system at absolute zero still possesses quantum mechanical zero-point energy, and the kinetic energy of the ground state cannot be removed. Scientists and technologists routinely achieve temperatures close to absolute zero, where matter exhibits quantum effects such as superconductivity and superfluidity. At temperatures near 0 K, nearly all molecular motion ceases and ΔS = 0 for any adiabatic process; in such circumstances, pure substances can form perfect crystals as T → 0. Max Planck's strong form of the third law of thermodynamics states that the entropy of a perfect crystal vanishes at absolute zero. The Nernst postulate identifies the isotherm T = 0 as coincident with the adiabat S = 0, although other isotherms and adiabats remain distinct. Since no two adiabats intersect, no other adiabat can intersect the T = 0 isotherm; consequently no adiabatic process initiated at nonzero temperature can lead to zero temperature. A perfect crystal is one in which the internal lattice structure extends uninterrupted in all directions. The perfect order can be represented by translational symmetry along three axes; every lattice element of the structure is in its proper place, whether it is a single atom or a molecular grouping. For substances that exist in two crystalline forms, such as diamond and graphite for carbon, there is a kind of chemical degeneracy. The question remains whether both can have zero entropy at T = 0 even though each is perfectly ordered. Using the Debye model, the specific heat and entropy of a pure crystal are proportional to T³, while the enthalpy and chemical potential are proportional to T⁴.
These quantities drop toward their T = 0 limiting values with zero slopes; for the specific heats at least, the limiting value itself is definitely zero, as borne out by experiments below 10 K. Even the less detailed Einstein model shows this curious drop in specific heats; in fact, all specific heats vanish at absolute zero, not just those of crystals. Likewise for the coefficient of thermal expansion, and Maxwell's relations show that various other quantities also vanish. Since the relation between changes in Gibbs free energy, enthalpy and entropy is ΔG = ΔH − TΔS, as T decreases, ΔG and ΔH approach each other. Experimentally, it is found that all spontaneous processes result in a decrease in G as they proceed toward equilibrium. If ΔS and/or T are small, the condition ΔG < 0 may imply that ΔH < 0, which would indicate an exothermic reaction. However, this is not required; endothermic reactions can proceed spontaneously if the TΔS term is large enough. Moreover, the slopes of the derivatives of ΔG and ΔH converge and are equal to zero at T = 0, i.e., near absolute zero an actual process is the most exothermic one. One model that estimates the properties of an electron gas at absolute zero in metals is the Fermi gas.
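The Debye-model behaviour described above can be sketched numerically. This is a purely illustrative check, with A a hypothetical material constant rather than a measured value: since the low-temperature specific heat goes as C_V ≈ A·T³, the entropy S(T) = ∫₀ᵀ C_V/T′ dT′ = (A/3)·T³, and both vanish at T = 0.

```python
import numpy as np

# A is an arbitrary (hypothetical) material constant for illustration only.
A = 1.0e-3

def c_v(T):
    # Debye low-temperature limit: C_V ~ A * T^3
    return A * T**3

def entropy(T, n=100_000):
    # S(T) = integral from 0 to T of C_V(T') / T' dT', via the midpoint rule
    dt = T / n
    t = (np.arange(n) + 0.5) * dt  # midpoints avoid dividing by T' = 0
    return float(np.sum(c_v(t) / t) * dt)

T = 10.0
print(entropy(T))      # numerical integral
print(A * T**3 / 3)    # closed form (A/3) T^3; both tend to 0 as T -> 0
```

Both printed values agree, and shrinking T drives both to zero, matching the statement that the entropy drops toward its T = 0 limit with zero slope.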
Absolute zero
–
Robert Boyle pioneered the idea of an absolute zero.
Absolute zero
–
Velocity-distribution data of a gas of
rubidium atoms at a temperature within a few billionths of a degree above absolute zero. Left: just before the appearance of a Bose–Einstein condensate. Center: just after the appearance of the condensate. Right: after further evaporation, leaving a sample of nearly pure condensate.
Absolute zero
–
The rapid expansion of gases leaving the
Boomerang Nebula causes the lowest observed temperature outside a laboratory: 1 K
65.
Limit (mathematics)
–
In mathematics, a limit is the value that a function or sequence approaches as the input or index approaches some value. Limits are essential to calculus and are used to define continuity, derivatives, and integrals. The concept of a limit of a sequence is further generalized to the concept of a limit of a topological net, and is closely related to limit and direct limit in category theory. In formulas, a limit is usually written as lim n → c f(n) = L and is read as "the limit of f of n as n approaches c equals L". Here lim indicates limit, and the fact that the function f approaches the limit L as n approaches c is represented by the right arrow. Suppose f is a real-valued function and c is a real number. Intuitively speaking, lim x → c f(x) = L means that f(x) can be made as close to L as desired by making x sufficiently close to c. In the formal definition, the first inequality, 0 < |x − c|, means that the distance between x and c is greater than 0 and that x ≠ c, while the second, |x − c| < δ, indicates that x is within distance δ of c. Note that the definition of a limit holds even if f(c) ≠ L; indeed, the function need not even be defined at c. For example, f(x) = (x² − 1)/(x − 1) is undefined at x = 1, but equals x + 1 everywhere else, and since x + 1 is continuous in x at 1, we can plug in 1 for x, giving the limit 2. In addition to limits at finite values, functions can also have limits at infinity. For example, for f(x) = (2x − 1)/x, the limit of f as x approaches infinity is 2; in mathematical notation, lim x → ∞ (2x − 1)/x = 2. Consider the sequence 1.79, 1.799, 1.7999, …; it can be observed that the numbers are approaching 1.8, the limit of the sequence. Formally, suppose a1, a2, … is a sequence of real numbers. The number L is the limit of the sequence if, for every ε > 0, there exists N such that |an − L| < ε for all n > N. Intuitively, this means that eventually all elements of the sequence get arbitrarily close to the limit, since the absolute value |an − L| is the distance between an and L. Not every sequence has a limit; if it does, it is called convergent, and one can show that a convergent sequence has only one limit.
The limit of a sequence and the limit of a function are closely related. On one hand, the limit as n goes to infinity of a sequence a(n) is simply the limit at infinity of a function defined on the natural numbers n. On the other hand, a limit L of a function f as x goes to infinity, if it exists, is the same as the limit of any sequence an that approaches L; note that one such sequence would be L + 1/n. In non-standard analysis, the limit of a sequence can be expressed as the standard part of the value aH of the natural extension of the sequence at an infinite hypernatural index n = H. Thus, lim n → ∞ an = st(aH), where the standard part function st rounds off each finite hyperreal number to the nearest real number. This formalizes the intuition that for very large values of the index, the terms in the sequence are extremely close to the limit value of the sequence. Conversely, the standard part of a hyperreal a = [an], represented in the ultrapower construction by a Cauchy sequence (an), is simply the limit of that sequence.
Limit (mathematics)
–
Whenever a point x is within δ units of c, f (x) is within ε units of L.
66.
Approximation theory
–
In mathematics, approximation theory is concerned with how functions can best be approximated with simpler functions, and with quantitatively characterizing the errors introduced thereby. What is meant by best and simpler will depend on the application. A closely related topic is the approximation of functions by generalized Fourier series, that is, approximations based upon summation of a series of terms based upon orthogonal polynomials. Approximation is typically done with polynomial or rational functions; the objective is to make the approximation as close as possible to the actual function, typically with an accuracy close to that of the underlying computer's floating-point arithmetic. This is accomplished by using a polynomial of higher degree, and/or narrowing the domain over which the polynomial has to approximate the function. Narrowing the domain can often be done through the use of various addition or scaling formulas for the function being approximated; modern mathematical libraries often reduce the domain into many tiny segments and use a low-degree polynomial for each segment. Once the domain and degree of the polynomial are chosen, the polynomial itself is chosen in such a way as to minimize the worst-case error. That is, the goal is to minimize the maximum value of |P(x) − f(x)|, where P is the approximating polynomial and f is the actual function. An Nth-degree polynomial can interpolate N+1 points on a curve; by the equioscillation theorem, such a polynomial is optimal when its error curve attains its maximum magnitude, with alternating sign, at N+2 points. It is possible to make contrived functions f for which no such polynomial exists. For example, the graphs shown to the right show the error in approximating log and exp for N = 4. The red curves, for the optimal polynomial, are level, oscillating between +ε and −ε. Note that, in each case, the number of extrema is N+2, two of which are at the end points of the interval. The red graph to the right shows what this error function might look like for N = 4. Suppose Q is another Nth-degree polynomial that is a better approximation to f than P.
In particular, Q is closer to f than P at each value xi where an extremum of P − f occurs. But (P − f) − (Q − f) reduces to P − Q, which is a polynomial of degree N. This function changes sign at least N+1 times, so, by the intermediate value theorem, it has N+1 zeroes, which is impossible for a nonzero polynomial of degree N. One can obtain polynomials very close to the optimal one by expanding the given function in terms of Chebyshev polynomials and then cutting off the expansion at the desired degree. This is similar to the Fourier analysis of the function, using the Chebyshev polynomials instead of the usual trigonometric functions. If one calculates the coefficients in the Chebyshev expansion for a function, f ∼ Σ i=0..∞ ci Ti, and then cuts off the series after the TN term, one gets an Nth-degree polynomial approximating f. The reason this polynomial is nearly optimal is that the first term after the cutoff dominates all later terms; if a Chebyshev expansion is cut off after TN, the error will take a form close to a multiple of TN+1.
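The truncated-Chebyshev-expansion recipe above can be sketched with NumPy's Chebyshev utilities. The choices of f = exp, degree N = 4, and the interval [−1, 1] are illustrative assumptions; interpolating at the N+1 Chebyshev nodes gives nearly the same polynomial as truncating the expansion, since the neglected higher-order terms are small.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = np.exp  # illustrative target function
N = 4       # illustrative degree

# Chebyshev nodes of the first kind on [-1, 1], then a degree-N fit through them.
nodes = np.cos(np.pi * (np.arange(N + 1) + 0.5) / (N + 1))
coeffs = C.chebfit(nodes, f(nodes), N)

# Worst-case error on a fine grid, approximating max |P(x) - f(x)|.
x = np.linspace(-1.0, 1.0, 2001)
max_err = float(np.max(np.abs(C.chebval(x, coeffs) - f(x))))
print(f"max |P(x) - exp(x)| = {max_err:.2e}")  # small; near the first neglected term
```

The resulting error curve oscillates with nearly equal ripples, close to a multiple of T₅, which is why this cheap construction lands so near the true minimax polynomial.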
Approximation theory
–
Error between optimal polynomial and log(x) (red), and Chebyshev approximation and log(x) (blue) over the interval [2, 4]. Vertical divisions are 10⁻⁵. Maximum error for the optimal polynomial is 6.07 × 10⁻⁵.
67.
Black body radiation
–
Black-body radiation is the thermal electromagnetic radiation within or surrounding a body in thermodynamic equilibrium with its environment, or emitted by a black body. It has a spectrum and intensity that depend only on the body's temperature. The thermal radiation emitted by many ordinary objects can be approximated as black-body radiation. A black body at room temperature appears black, as most of the energy it radiates is infra-red; when it becomes a little hotter, it appears dull red, and as its temperature increases further it eventually becomes blue-white. Although planets and stars are neither in thermal equilibrium with their surroundings nor perfect black bodies, black-body radiation is used as a first approximation for the energy they emit. Black holes are near-perfect black bodies, in the sense that they absorb all the radiation that falls on them, and it has been proposed that they emit black-body radiation, with a temperature that depends on the mass of the black hole. The term black body was introduced by Gustav Kirchhoff in 1860; black-body radiation is also called thermal radiation, cavity radiation, complete radiation or temperature radiation. Black-body radiation has a characteristic, continuous spectrum that depends only on the body's temperature. As the temperature increases past about 500 degrees Celsius, black bodies start to emit significant amounts of visible light. Viewed in the dark by the eye, the first faint glow appears as a ghostly grey. When the body appears white, it is emitting a substantial fraction of its energy as ultraviolet radiation. Black-body radiation provides insight into the thermodynamic equilibrium state of cavity radiation: classical physics predicts ever more energy at high frequencies, but in quantum theory the occupation numbers of the modes are quantized, cutting off the spectrum at high frequency in agreement with experimental observation.
The study of the laws of black bodies and the failure of classical physics to describe them helped establish the foundations of quantum mechanics. All normal matter emits electromagnetic radiation when it has a temperature above absolute zero. The radiation represents a conversion of a body's thermal energy into electromagnetic energy, and is a process of radiative distribution of entropy. Conversely, all normal matter absorbs electromagnetic radiation to some degree. An object that absorbs all radiation falling on it, at all wavelengths, is called a black body. When a black body is at a uniform temperature, its emission has a characteristic frequency distribution that depends on the temperature; its emission is called black-body radiation. The concept of the black body is an idealization, as perfect black bodies do not exist in nature.
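The temperature dependence of the spectrum can be sketched numerically with Planck's law, B_λ(T) = (2hc²/λ⁵) / (e^{hc/λkT} − 1); the peak it yields agrees with Wien's displacement law λ_max = b/T. The solar surface temperature used below is a rough illustrative value.

```python
import numpy as np

h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
k = 1.380649e-23     # Boltzmann constant, J/K

def planck(wl, T):
    """Spectral radiance at wavelength wl (m) and temperature T (K)."""
    return (2.0 * h * c**2 / wl**5) / np.expm1(h * c / (wl * k * T))

T = 5778.0  # K, roughly the Sun's surface temperature (assumed value)
wl = np.linspace(100e-9, 3000e-9, 20000)
peak = wl[np.argmax(planck(wl, T))]
print(f"spectral peak: {peak * 1e9:.0f} nm")           # ~500 nm, in the visible
print(f"Wien's law b/T: {2.898e-3 / T * 1e9:.0f} nm")  # b = 2.898e-3 m K
```

Lowering T in this sketch moves the peak to longer wavelengths and lowers the whole curve, matching the qualitative description of dull red through blue-white above.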
Black body radiation
–
The temperature of a
Pāhoehoe lava flow can be estimated by observing its color. The result agrees well with measured temperatures of lava flows at about 1,000 to 1,200 °C (1,830 to 2,190 °F).
Black body radiation
–
As the temperature decreases, the peak of the black-body radiation curve moves to lower intensities and longer wavelengths. The black-body radiation graph is also compared with the classical model of Rayleigh and Jeans.
Black body radiation
–
Much of a person's energy is radiated away in the form of
infrared light. Some materials are transparent in the infrared, but opaque to visible light, as is the plastic bag in this infrared image (bottom). Other materials are transparent to visible light, but opaque or reflective in the infrared, noticeable by the darkness of the man's glasses.
Black body radiation
68.
Ultraviolet catastrophe
–
The term ultraviolet catastrophe was first used in 1911 by Paul Ehrenfest, but the concept originated with the 1900 derivation of the Rayleigh–Jeans law. Since its first appearance, the term has also been used for other predictions of a similar nature, as in quantum electrodynamics. An example, from Mason's A History of the Sciences, illustrates multi-mode vibration via a piece of string: as a natural vibrator, the string will oscillate with specific modes, dependent on the length of the string. In classical physics, a radiator of energy will act as a natural vibrator, and, since each mode will have the same energy under equipartition, most of the energy in a natural vibrator will be in the smaller wavelengths and higher frequencies, where most of the modes are. According to classical electromagnetism, the number of modes in a 3-dimensional cavity, per unit frequency, is proportional to the square of the frequency; this implies that the radiated power per unit frequency should follow the Rayleigh–Jeans law, growing without bound as the frequency increases. Planck derived the correct form for the intensity spectral distribution function by making some strange assumptions. In particular, Planck assumed that electromagnetic radiation can only be emitted or absorbed in discrete packets, called quanta, of energy E quanta = hν = hc/λ, where h is Planck's constant. Planck's assumptions led to the correct form of the spectral distribution functions. Albert Einstein solved the problem by postulating that Planck's quanta were real physical particles, what we now call photons, and he modified statistical mechanics in the style of Boltzmann to an ensemble of photons. Einstein's photon had an energy proportional to its frequency and also explained an unpublished law of Stokes. Many popular histories of physics, as well as a number of physics textbooks, present an incorrect version of this history: in that version, the catastrophe was first noticed by Planck, who developed his formula in response. That Planck's proposal happened to provide a solution for it was realized only later.
Ultraviolet catastrophe
–
The ultraviolet catastrophe is the error at short wavelengths in the
Rayleigh–Jeans law (depicted as "classical theory" in the graph) for the energy emitted by an ideal black-body. The error, much more pronounced for short wavelengths, is the difference between the black curve (as classically predicted by the
Rayleigh–Jeans law) and the blue curve (the measured curve as predicted by
Planck's law).
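The divergence shown in the figure can be sketched numerically (an illustrative comparison, not a fit to measured data): dividing the Rayleigh–Jeans radiance 2ckT/λ⁴ by Planck's law gives the ratio (e^x − 1)/x with x = hc/(λkT), which tends to 1 at long wavelengths but blows up at short ones, which is the catastrophe.

```python
import numpy as np

h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
k = 1.380649e-23     # Boltzmann constant, J/K

def rj_over_planck(wl, T):
    # Ratio of the Rayleigh-Jeans prediction to Planck's law at wavelength wl (m).
    x = h * c / (wl * k * T)
    return np.expm1(x) / x

T = 5000.0  # K, an illustrative temperature
wavelengths = (50e-6, 2e-6, 100e-9)  # far infrared -> ultraviolet
ratios = [rj_over_planck(wl, T) for wl in wavelengths]
for wl, r in zip(wavelengths, ratios):
    print(f"{wl:8.1e} m: Rayleigh-Jeans / Planck = {r:.3g}")
```

In the far infrared the two laws nearly agree, while in the ultraviolet the classical prediction overshoots by many orders of magnitude.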
69.
Heinrich Hertz
–
Heinrich Rudolf Hertz was a German physicist who first conclusively proved the existence of the electromagnetic waves theorized by James Clerk Maxwell's electromagnetic theory of light. The unit of frequency, the cycle per second, was named the hertz in his honor. Heinrich Rudolf Hertz was born in 1857 in Hamburg, then a sovereign state of the German Confederation, into a prosperous and cultured Hanseatic family. His father Gustav Ferdinand Hertz was a barrister and later a senator, and his mother was Anna Elisabeth Pfefferkorn. Hertz's paternal grandfather, Heinrich David Hertz, was a businessman, and their first son, Wolff Hertz, was chairman of the Jewish community. Heinrich Rudolf Hertz's father and paternal grandparents had converted from Judaism to Christianity in 1834; his mother's family was a Lutheran pastor's family. While studying at the Gelehrtenschule des Johanneums in Hamburg, Heinrich Rudolf Hertz showed an aptitude for sciences as well as languages, learning Arabic and Sanskrit. He studied sciences and engineering in the German cities of Dresden, Munich and Berlin. In 1880, Hertz obtained his PhD from the University of Berlin, and for the next three years remained for post-doctoral study under Helmholtz, serving as his assistant. In 1883, Hertz took a post as a lecturer in physics at the University of Kiel, and in 1885 he became a professor at the University of Karlsruhe. In 1886, Hertz married Elisabeth Doll, the daughter of Dr. Max Doll. They had two daughters: Johanna, born on 20 October 1887, and Mathilde, born on 14 January 1891, who went on to become a notable biologist. During this time Hertz conducted his research into electromagnetic waves. Hertz took a position of Professor of Physics and Director of the Physics Institute in Bonn on 3 April 1889; during this time he worked on theoretical mechanics, with his work published in the book Die Prinzipien der Mechanik in neuem Zusammenhange dargestellt, published posthumously in 1894.
In 1892, Hertz was diagnosed with an infection and underwent operations to treat the illness. He died of granulomatosis with polyangiitis at the age of 36 in Bonn, Germany in 1894, and was buried in the Ohlsdorf Cemetery in Hamburg. Hertz's wife, Elisabeth Hertz née Doll, did not remarry. Hertz left two daughters, Johanna and Mathilde; his daughters never married, and he has no descendants. Hertz always had a deep interest in meteorology, probably derived from his contacts with Wilhelm von Bezold. In 1886–1889, Hertz published two articles on what was to become known as the field of contact mechanics. Joseph Valentin Boussinesq published some critically important observations on Hertz's work, nevertheless establishing this work on contact mechanics to be of immense importance. His work basically summarises how two objects placed in contact will behave under loading; he obtained results based upon the classical theory of elasticity, and it was natural to neglect adhesion in that age, as there were no methods of testing for it.
Heinrich Hertz
–
Heinrich Hertz
Heinrich Hertz
–
Memorial of Heinrich Hertz on the campus of the
Karlsruhe Institute of Technology, which translates as At this site, Heinrich Hertz discovered electromagnetic waves in the years 1885–1889.
Heinrich Hertz
–
Official English translation of Untersuchungen über die Ausbreitung der elektrischen Kraft published in 1893, a year before Hertz's death.
Heinrich Hertz
70.
Photoelectric effect
–
The photoelectric effect, or photoionization, is the emission of electrons or other free carriers when light is shone onto a material. Electrons emitted in this manner can be called photoelectrons. The phenomenon is commonly studied in electronic physics, as well as in fields of chemistry such as quantum chemistry or electrochemistry. According to classical theory, this effect can be attributed to the transfer of energy from the light to an electron. From this perspective, an alteration in the intensity of light would induce changes in the kinetic energy of the electrons emitted from the metal. Furthermore, according to classical theory, a sufficiently dim light would be expected to show a time lag between the initial shining of the light and the subsequent emission of an electron. However, the experimental results did not correlate with either of the two predictions made by classical theory. Instead, electrons are dislodged only by the impingement of photons when those photons reach or exceed a threshold frequency; below that threshold, no electrons are emitted from the metal regardless of the light intensity or the length of time of exposure to the light. This shed light on Max Planck's previous discovery of the Planck relation linking energy and frequency; the proportionality factor h is known as the Planck constant. In 1887, Heinrich Hertz discovered that electrodes illuminated with ultraviolet light create electric sparks more easily. In 1900, while studying black-body radiation, the German physicist Max Planck suggested that the energy carried by electromagnetic waves could only be released in packets of energy. In 1905, Albert Einstein published a paper advancing the hypothesis that light energy is carried in discrete quantized packets to explain experimental data from the photoelectric effect. This model contributed to the development of quantum mechanics, and in 1914, Robert Millikan's experiment supported Einstein's model of the photoelectric effect.
The photoelectric effect requires photons with energies ranging from a few electronvolts to over 1 MeV for core electrons in elements with a high atomic number. Emission of conduction electrons from typical metals usually requires a few electronvolts. Study of the photoelectric effect led to important steps in understanding the quantum nature of light and electrons, and influenced the formation of the concept of wave–particle duality. Other phenomena where light affects the movement of electric charges include the photoconductive effect and the photovoltaic effect. It is also usual to have the emitting surface in a vacuum, since gases impede the flow of photoelectrons; when the photoelectron is emitted into a solid rather than into a vacuum, the term internal photoemission is often used. The photons of a light beam have a characteristic energy proportional to the frequency of the light.
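The threshold behaviour described above follows from Einstein's relation K_max = hν − φ = hc/λ − φ, where φ is the material's work function. A minimal sketch; the sodium work function below is an illustrative textbook value, not a figure from this article:

```python
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
e = 1.602176634e-19  # J per eV

def k_max_ev(wavelength_m, work_function_ev):
    """Maximum photoelectron kinetic energy in eV; negative means no emission."""
    photon_ev = h * c / (wavelength_m * e)
    return photon_ev - work_function_ev

phi_na = 2.28  # eV, commonly quoted for sodium (assumed value)
print(round(k_max_ev(400e-9, phi_na), 2))  # 0.82  -> electrons are emitted
print(round(k_max_ev(600e-9, phi_na), 2))  # -0.21 -> below threshold: no emission,
                                           #          however intense the light
```

Note that raising the intensity at 600 nm changes nothing in this model; only raising the frequency past the threshold hν = φ produces photoelectrons, which is exactly where classical theory failed.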
Photoelectric effect
–
Work function and cut off frequency
Photoelectric effect
–
Light–matter interaction
Photoelectric effect
–
Heinrich Rudolf Hertz
Photoelectric effect
–
German physicist Philipp Lenard
71.
History of quantum mechanics
–
The history of quantum mechanics is a fundamental part of the history of modern physics. In the years following the theory's founding, its theoretical basis slowly began to be applied to chemical structure, reactivity, and bonding. Ludwig Boltzmann suggested in 1877 that the energy levels of a physical system, such as a molecule, could be discrete; he was a founder of the Austrian Mathematical Society, together with the mathematician Gustav von Escherich. In 1900, Max Planck proposed that energy is emitted and absorbed in discrete quanta of energy E = hν; the earlier Wien approximation may be derived from Planck's law by assuming hν ≫ kT. In 1905, Einstein proposed that light itself consists of such quanta; this statement has been called the most revolutionary sentence written by a physicist of the twentieth century, and these energy quanta later came to be called photons, a term introduced by Gilbert N. Lewis in 1926. In 1913, Bohr explained the spectral lines of the hydrogen atom, again by using quantization, in his paper of July 1913, On the Constitution of Atoms and Molecules. These theories are collectively known as the old quantum theory; the phrase quantum physics was first used in Johnston's Planck's Universe in Light of Modern Physics. In 1923, the French physicist Louis de Broglie put forward his theory of matter waves by stating that particles can exhibit wave characteristics; this theory applied to a single particle and was derived from special relativity theory. Building on de Broglie's approach, Schrödinger developed wave mechanics, while Heisenberg, Born and Jordan had formulated matrix mechanics; Schrödinger subsequently showed that the two approaches were equivalent. Heisenberg formulated his uncertainty principle in 1927, and the Copenhagen interpretation started to take shape at about the same time. Starting around 1927, Paul Dirac began the process of unifying quantum mechanics with special relativity by proposing the Dirac equation for the electron; the Dirac equation achieves the relativistic description of the wavefunction of an electron that Schrödinger failed to obtain. It predicts electron spin and led Dirac to predict the existence of the positron. He also pioneered the use of operator theory, including the influential bra–ket notation, as described in his famous 1930 textbook.
These, like other works from the founding period, still stand. The field of quantum chemistry was pioneered by physicists Walter Heitler and Fritz London. Beginning in 1927, researchers made attempts at applying quantum mechanics to fields instead of single particles; early workers in this area include P. A. M. Dirac, W. Pauli, V. Weisskopf, and P. Jordan, and this area of research culminated in the formulation of quantum electrodynamics by R. P. Feynman, F. Dyson, J. Schwinger, and S. I. Tomonaga during the 1940s. Quantum electrodynamics describes a quantum theory of electrons, positrons, and the electromagnetic field. The theory of quantum chromodynamics was formulated beginning in the early 1960s; the theory as we know it today was formulated by Politzer, Gross and Wilczek. Among the key experiments underlying quantum theory were Thomas Young's double-slit experiment demonstrating the wave nature of light, J. J. Thomson's cathode ray tube experiments, and the study of black-body radiation between 1850 and 1900, which could not be explained without quantum concepts.
History of quantum mechanics
–
Max Planck,
Albert Einstein,
Niels Bohr,
Louis de Broglie,
Max Born,
Paul Dirac,
Werner Heisenberg,
Wolfgang Pauli,
Erwin Schrödinger,
Richard Feynman.
History of quantum mechanics
–
Ludwig Boltzmann's diagram of the I₂ molecule proposed in 1898 showing the atomic "sensitive region" (α, β) of overlap.
History of quantum mechanics
72.
History of relativity
–
The history of special relativity consists of many theoretical results and empirical findings obtained by Albert A. Michelson, Hendrik Lorentz, Henri Poincaré and others. It culminated in the theory of special relativity proposed by Albert Einstein and subsequent work of Max Planck, Hermann Minkowski and others. Although Isaac Newton based his physics on absolute time and space, he also adhered to the principle of relativity of Galileo Galilei. Following the work of Thomas Young and Augustin-Jean Fresnel, it was believed that light propagates as a transverse wave within an elastic medium called luminiferous aether; according to Maxwell's theory, all optical and electrical phenomena propagate through that medium. However, a distinction was made between optical and electrodynamical phenomena, so it was necessary to create specific aether models for all phenomena. Maxwell first proposed that light was in fact undulations in the same aetherial medium that is the cause of electric and magnetic phenomena. After Heinrich Hertz in 1887 demonstrated the existence of electromagnetic waves, Maxwell's theory was widely accepted. In addition, Oliver Heaviside and Hertz further developed the theory and introduced modernized versions of Maxwell's equations. The Maxwell–Hertz or Heaviside–Hertz equations subsequently formed an important basis for the development of electrodynamics. Other important contributions to Maxwell's theory were made by George FitzGerald, Joseph John Thomson, John Henry Poynting, Hendrik Lorentz and others. Regarding the relative motion and the mutual influence of matter and aether, there were two controversial theories. One, due to Fresnel, supposed that light propagates as a wave and aether is partially dragged with a certain coefficient by matter; based on this assumption, Fresnel was able to explain the aberration of light. The other hypothesis was proposed by George Gabriel Stokes, who stated in 1845 that the aether was fully dragged by matter.
In this model the aether might be rigid for fast objects and fluid for slower ones; thus the Earth could move through it fairly freely, but it would be rigid enough to transport light. Fresnel's theory was preferred because his dragging coefficient was confirmed by the Fizeau experiment in 1851. Albert A. Michelson tried to measure the relative motion of the Earth and aether, as was expected in Fresnel's theory, by using an interferometer. He could not determine any relative motion, so he interpreted the result as a confirmation of the thesis of Stokes. However, Lorentz showed that Michelson's calculations were wrong and that he had overestimated the accuracy of the measurement; this, together with the large margin of error, made the result of Michelson's experiment inconclusive. In addition, Lorentz showed that Stokes's completely dragged aether led to contradictory consequences. To check Fresnel's theory again, Michelson and Edward W. Morley performed a repetition of the Fizeau experiment; Fresnel's dragging coefficient was confirmed very exactly on that occasion. To clarify the situation, Michelson and Morley repeated Michelson's 1881 experiment, and they substantially increased the accuracy of the measurement. However, this now famous Michelson–Morley experiment again yielded a negative result. Woldemar Voigt had derived in 1887 a set of transformations closely related to the later Lorentz transformations, but his work was completely ignored by his contemporaries. FitzGerald offered another explanation of the negative result of the Michelson–Morley experiment: contrary to Voigt, he speculated that the intermolecular forces are possibly of electrical origin, so that material bodies would contract in the line of motion.
History of relativity
–
A. A. Michelson
History of relativity
–
Hendrik Antoon Lorentz
History of relativity
–
Henri Poincaré
History of relativity
–
Albert Einstein, 1921
73.
Atomic theory
–
In chemistry and physics, atomic theory is a scientific theory of the nature of matter, which states that matter is composed of discrete units called atoms. The word atom comes from the Ancient Greek adjective atomos, meaning indivisible; 19th-century chemists began using the term in connection with the growing number of irreducible chemical elements. Since atoms were found to be divisible (and, in extreme environments such as neutron stars, cannot exist at all), physicists later invented the term elementary particles to describe the uncuttable, though not indestructible, parts of an atom. The field of science which studies subatomic particles is particle physics. The idea that matter is made up of discrete units is a very old one, appearing in many ancient cultures such as Greece and India. However, these ideas were founded in philosophical and theological reasoning rather than evidence; because of this, they could not convince everybody, so atomism was but one of a number of competing theories on the nature of matter. Near the end of the 18th century, two laws about chemical reactions emerged without referring to the notion of an atomic theory. The first was the law of conservation of mass, formulated by Antoine Lavoisier in 1789; the second was the law of definite proportions, first proven by Joseph Louis Proust. For example, Proust had studied tin oxides and found that their masses were either 88.1% tin and 11.9% oxygen or 78.7% tin and 21.3% oxygen. Dalton noted from these percentages that 100 g of tin will combine either with 13.5 g or 27 g of oxygen; 13.5 and 27 form a ratio of 1:2. Dalton found that an atomic theory of matter could elegantly explain this common pattern in chemistry: in the case of Proust's tin oxides, one tin atom will combine with one or two oxygen atoms. Dalton also believed atomic theory could explain why water absorbs different gases in different proportions; he hypothesized this was due to the differences in mass and complexity of the gases' respective particles. Indeed, carbon dioxide molecules are heavier and larger than nitrogen molecules.
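Dalton's arithmetic on Proust's percentages can be reproduced in a few lines: normalize each oxide to a fixed 100 g of tin and compare the grams of oxygen that combine with it.

```python
# (% tin, % oxygen) by mass for the two tin oxides reported by Proust
oxides = {"oxide A": (88.1, 11.9), "oxide B": (78.7, 21.3)}

per_100g_tin = {}
for name, (tin_pct, oxy_pct) in oxides.items():
    # grams of oxygen combining with 100 g of tin in this oxide
    per_100g_tin[name] = 100.0 * oxy_pct / tin_pct
    print(f"{name}: {per_100g_tin[name]:.1f} g oxygen per 100 g tin")

ratio = per_100g_tin["oxide B"] / per_100g_tin["oxide A"]
print(f"ratio = {ratio:.2f}")  # ~2.00, the small whole-number 1:2 ratio
```

The 13.5 g and roughly 27 g figures fall out directly, and the near-exact 1:2 ratio is the law of multiple proportions that Dalton's atomic hypothesis explains: one tin atom combining with one or with two oxygen atoms.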
This marked the first truly scientific theory of the atom, since Dalton reached his conclusions by experimentation and examination of the results in an empirical fashion. In 1803 Dalton orally presented his first list of relative atomic weights for a number of substances. This paper was published in 1805, but he did not discuss there exactly how he obtained these figures; the method was first revealed in 1807 by his acquaintance Thomas Thomson, in the third edition of Thomson's textbook, A System of Chemistry. Finally, Dalton published a full account in his own textbook. Dalton estimated the atomic weights according to the mass ratios in which they combined, with the hydrogen atom taken as unity. However, Dalton did not conceive that with some elements atoms exist in molecules, e.g. pure oxygen exists as O2. He also mistakenly believed that the simplest compound between any two elements is always one atom of each. This, in addition to the crudity of his equipment, flawed his results. Adopting better data, in 1806 he concluded that the atomic weight of oxygen must actually be 7 rather than 5.5, and he retained this weight for the rest of his life.
Atomic theory
–
The cathode rays (blue) were emitted from the cathode, sharpened to a beam by the slits, then deflected as they passed between the two electrified plates.
Atomic theory
–
The current theoretical model of the atom involves a dense nucleus surrounded by a probabilistic "cloud" of electrons
74.
Tests of general relativity
–
At its introduction in 1915, the general theory of relativity did not have a solid empirical foundation. Beginning in 1974, Hulse, Taylor and others have studied the behaviour of binary pulsars experiencing much stronger gravitational fields than those found in the Solar System. Both in the weak-field limit (as in the Solar System) and with the stronger fields present in systems of binary pulsars, the predictions of general relativity have been extremely well tested locally. As a consequence of the equivalence principle, Lorentz invariance holds locally in non-rotating, freely falling reference frames; experiments related to Lorentz invariance, and thus special relativity, are described in Tests of special relativity. In February 2016, the Advanced LIGO team announced that they had directly detected gravitational waves from a black hole merger; this discovery, along with a second detection announced in June 2016, tested general relativity in the strong-field limit. Einstein proposed three classical tests of the theory, with the comment: "The chief attraction of the theory lies in its logical completeness. If a single one of the conclusions drawn from it proves wrong, it must be given up." Under Newtonian physics, a two-body system consisting of a lone object orbiting a spherical mass would trace out an ellipse with the spherical mass at a focus; the point of closest approach, called the periapsis (for the Sun, the perihelion), is fixed. A number of effects in the Solar System nonetheless cause the perihelia of planets to precess around the Sun; the principal cause is the presence of other planets, which perturb one another's orbits. Mercury deviates from the precession predicted from these Newtonian effects, and this anomalous rate of precession of the perihelion of Mercury's orbit was first recognized in 1859 as a problem in celestial mechanics, by Urbain Le Verrier.
A number of ad hoc and ultimately unsuccessful solutions were proposed. In general relativity, this remaining precession, or change of orientation of the orbital ellipse within its orbital plane, is explained by gravitation being mediated by the curvature of spacetime. Einstein showed that general relativity agrees closely with the observed amount of perihelion shift, and this was a powerful factor motivating the adoption of general relativity. Although earlier measurements of planetary orbits were made using conventional telescopes, more accurate measurements are now made with radar. The total observed precession of Mercury is 574.10±0.65 arc-seconds per century relative to the inertial ICRF. Once the Newtonian contributions are subtracted, a correction of 42.98 arc-seconds per century remains; this is 3/2 times the classical prediction with PPN parameters γ = β = 0, so the effect can be fully explained by general relativity. More recent calculations based on more precise measurements have not materially changed the situation.
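The 42.98 arc-second figure can be checked directly from the general-relativistic formula for perihelion advance per orbit, Δφ = 6πGM / (a(1−e²)c²). The sketch below uses standard published values for the Sun's gravitational parameter and Mercury's orbital elements; it is an illustration, not a substitute for the precise ephemeris calculations referenced above.

```python
import math

# GR perihelion advance per orbit: dphi = 6*pi*G*M / (a*(1 - e^2)*c^2).
# Constants below are standard published values (assumed, not from the text).
GM_SUN = 1.32712440018e20   # m^3/s^2, heliocentric gravitational parameter
C = 2.99792458e8            # m/s, speed of light
A_MERCURY = 5.7909e10       # m, semi-major axis of Mercury's orbit
E_MERCURY = 0.2056          # orbital eccentricity
PERIOD_DAYS = 87.969        # orbital period in days

dphi = 6 * math.pi * GM_SUN / (A_MERCURY * (1 - E_MERCURY**2) * C**2)  # rad/orbit
orbits_per_century = 36525 / PERIOD_DAYS
arcsec_per_century = dphi * orbits_per_century * (180 / math.pi) * 3600
print(f"{arcsec_per_century:.2f} arcsec/century")  # close to the 42.98 quoted above
```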
Tests of general relativity
–
Transit of Mercury on November 8, 2006, with sunspots #921, 922, and 923
Tests of general relativity
–
One of Eddington's photographs of the 1919 solar eclipse experiment, presented in his 1920 paper announcing its success
Tests of general relativity
–
The LAGEOS-1 satellite (D = 60 cm)
Tests of general relativity
–
Artist's impression of the pulsar PSR J0348+0432 and its white dwarf companion.
75.
Ballistics
–
A ballistic body is a body with momentum which is free to move, subject to forces such as the pressure of gases in a gun or a propulsive nozzle, by rifling in a barrel, by gravity, or by air drag. The earliest known ballistic projectiles were stones and spears, and the throwing stick. The oldest evidence of stone-tipped projectiles, which may or may not have been propelled by a bow, dating to c. 64,000 years ago, was found in Sibudu Cave, present-day South Africa. The oldest evidence of the use of bows to shoot arrows dates to about 10,000 years ago; the arrows had shallow grooves on the base, indicating that they were shot from a bow. The oldest bow so far recovered is about 8,000 years old. Archery seems to have arrived in the Americas with the Arctic small tool tradition, about 4,500 years ago. The first devices identified as guns appeared in China around 1000 AD, and by the 12th century the technology was spreading through the rest of Asia. The word ballistics comes from the Greek βάλλειν (ballein), meaning "to throw". A projectile is any object projected into space by the exertion of a force; although any object in motion through space is a projectile, the term most commonly refers to the payload of a ranged weapon. Mathematical equations of motion are used to analyze projectile trajectories; examples of projectiles include balls, arrows, bullets, artillery shells, and rockets. Throwing is the launching of a projectile by hand. Although some other animals can throw, humans are unusually good throwers due to their high dexterity and good timing capabilities, and it is believed that this is an evolved trait. Evidence of human throwing dates back 2 million years; the 90 mph throwing speed found in many athletes far exceeds the speed at which chimpanzees can throw things, which is about 20 mph. This ability reflects the capacity of the shoulder muscles and tendons to store elastic energy until it is needed to propel an object.
A sling is a projectile weapon typically used to throw a blunt projectile such as a stone. A sling has a small cradle or pouch in the middle of two lengths of cord. The sling stone is placed in the pouch; the middle finger or thumb is placed through a loop on the end of one cord, and a tab at the end of the other cord is placed between the thumb and forefinger. The sling is swung in an arc, and the tab released at a precise moment; this frees the projectile to fly to the target. A bow is a flexible piece of material which shoots aerodynamic projectiles called arrows. A string joins the two ends, and when the string is drawn back, the ends of the bow are flexed; when the string is released, the potential energy of the flexed bow is transformed into the velocity of the arrow. Archery is the art or sport of shooting arrows from bows. A catapult is a device used to launch a projectile a great distance without the aid of explosive devices, particularly various types of ancient and medieval siege engines.
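The equations of motion mentioned above are easy to apply in the simplest, drag-free case. The sketch below numerically integrates a projectile launched from ground level and compares the result with the closed-form range formula v₀² sin(2θ)/g; the launch values are arbitrary illustrative numbers.

```python
import math

def trajectory_range(v0, angle_deg, g=9.81, dt=1e-4):
    """Numerically integrate a drag-free projectile launched from ground level.
    Returns the horizontal range in metres (v0 in m/s, angle in degrees)."""
    theta = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    while True:
        vy -= g * dt          # semi-implicit Euler: update velocity first
        x += vx * dt
        y += vy * dt
        if y <= 0.0 and vy < 0.0:   # projectile has come back to the ground
            return x

# Without drag the closed-form range is v0^2 * sin(2*theta) / g:
v0, angle = 40.0, 45.0
analytic = v0**2 * math.sin(math.radians(2 * angle)) / 9.81
numeric = trajectory_range(v0, angle)
print(numeric, analytic)  # the two agree closely
```

With air drag included (as in the trajectory figure for this article), no closed form exists in general, and numerical integration of this kind is the standard tool.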
Ballistics
–
Baseball throws can exceed 100 mph
Ballistics
–
Trajectories of three objects thrown at the same angle (70°). The black object doesn't experience any form of drag and moves along a parabola. The blue object experiences Stokes' drag, and the green object Newtonian drag.
Ballistics
–
Catapult 1 Mercato San Severino
Ballistics
–
SIG Pro semi-automatic pistol
76.
Statistical mechanics
–
Statistical mechanics is a branch of theoretical physics that uses probability theory to study the average behaviour of a mechanical system whose state is uncertain. A common use of statistical mechanics is in explaining the thermodynamic behaviour of large systems. The branch which treats and extends classical thermodynamics is known as statistical thermodynamics or equilibrium statistical mechanics. Statistical mechanics also finds use outside equilibrium: an important subbranch known as non-equilibrium statistical mechanics deals with the issue of microscopically modelling the speed of irreversible processes that are driven by imbalances, such as chemical reactions or flows of particles and heat. In physics, two types of mechanics are usually examined: classical mechanics and quantum mechanics. Both describe the exact evolution of a single, fully specified state, whereas in practice one has incomplete knowledge of the state; statistical mechanics fills this gap between the laws of mechanics and the experience of incomplete knowledge by adding some uncertainty about which state the system is in. The statistical ensemble is a probability distribution over all possible states of the system. In classical statistical mechanics, the ensemble is a probability distribution over phase points; in quantum statistical mechanics, the ensemble is a probability distribution over pure states, and can be compactly summarized as a density matrix. These two meanings are equivalent for many purposes and will be used interchangeably in this article. However the probability is interpreted, each state in the ensemble evolves over time according to the equation of motion; thus, the ensemble itself also evolves, as the systems in the ensemble continually leave one state and enter another. The ensemble evolution is given by the Liouville equation (classical mechanics) or the von Neumann equation (quantum mechanics). One special class of ensembles is those that do not evolve over time.
These ensembles are known as equilibrium ensembles, and their condition is known as statistical equilibrium. Statistical equilibrium occurs if, for each state in the ensemble, the ensemble also contains all of its future and past states with probabilities equal to the probability of being in that state. The study of equilibrium ensembles of isolated systems is the focus of statistical thermodynamics; non-equilibrium statistical mechanics addresses the more general case of ensembles that change over time, and/or ensembles of non-isolated systems. The primary goal of statistical thermodynamics is to derive the classical thermodynamics of materials in terms of the properties of their constituent particles and the interactions between them. Whereas statistical mechanics proper involves dynamics, here the attention is focussed on statistical equilibrium. Statistical equilibrium does not mean that the particles have stopped moving; rather, only that the ensemble is not evolving. A sufficient condition for statistical equilibrium of an isolated system is that the probability distribution is a function only of conserved properties (total energy, total particle numbers, and so on). There are many different equilibrium ensembles that can be considered, and additional postulates are necessary to motivate why the ensemble for a given system should have one form or another. A common approach found in many textbooks is to take the equal a priori probability postulate.
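The idea that an ensemble (a probability distribution over states) yields thermodynamic averages can be made concrete with the smallest possible example: a two-level system in the canonical ensemble. The energies and temperature below are arbitrary illustrative values in units where k_BT = 1; this is a sketch of the Boltzmann-weighting recipe, not a calculation from the article.

```python
import math

# Canonical-ensemble average for a two-level system.
energies = [0.0, 1.0]    # state energies, in units of k_B*T
beta = 1.0               # 1/(k_B*T) in matching units

weights = [math.exp(-beta * E) for E in energies]    # Boltzmann factors
Z = sum(weights)                                     # partition function
probs = [w / Z for w in weights]                     # ensemble probabilities
avg_E = sum(p * E for p, E in zip(probs, energies))  # ensemble average energy

print(Z, probs, avg_E)
```

The same three steps (weight each state, normalize, average) carry over unchanged to systems with astronomically many states; the hard part in practice is enumerating or sampling them, which is where Monte Carlo methods enter.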
Statistical mechanics
–
Statistical mechanics
77.
Solid mechanics
–
Solid mechanics is fundamental for civil, aerospace, nuclear, and mechanical engineering, for geology, and for many branches of physics such as materials science. It has specific applications in other areas as well, such as understanding the anatomy of living beings. One of the most common practical applications of solid mechanics is the Euler-Bernoulli beam equation. Solid mechanics extensively uses tensors to describe stresses, strains, and the relationship between them. As shown in the following table, solid mechanics inhabits a central place within continuum mechanics; the field of rheology presents an overlap between solid and fluid mechanics. A solid material has a rest shape, and its shape departs from the rest shape due to stress. The amount of departure from rest shape is called deformation, and the proportion of deformation to original size is called strain. If the applied stress is sufficiently low, almost all solid materials behave in such a way that the strain is directly proportional to the stress; this region of deformation is known as the linearly elastic region. It is most common for analysts in solid mechanics to use linear material models; however, real materials often exhibit non-linear behavior, and as new materials are used and old ones are pushed to their limits, non-linear material models are becoming more common. There are four basic models that describe how a solid responds to an applied stress. Elastically: when an applied stress is removed, the material returns to its undeformed state; linearly elastic materials are those that deform proportionally to the applied load. Viscoelastically: materials that behave elastically but also have damping; the material response has time-dependence. Plastically: materials that behave elastically generally do so when the applied stress is less than a yield value; when the stress is greater than the yield stress, the material behaves plastically, and deformation that occurs after yield is permanent. Thermoelastically: there is coupling of mechanical with thermal responses; in general, thermoelasticity is concerned with elastic solids under conditions that are neither isothermal nor adiabatic. The simplest theory involves Fourier's law of heat conduction, as opposed to advanced theories with physically more realistic models.
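As a concrete instance of the Euler-Bernoulli beam equation mentioned above, the tip deflection of an end-loaded cantilever has the closed form δ = PL³/(3EI). The numbers below are illustrative (a typical structural-steel modulus and an arbitrary rectangular section), not values from the text.

```python
# Sketch: tip deflection of an end-loaded cantilever beam,
# delta = P*L^3 / (3*E*I), from Euler-Bernoulli beam theory.
P = 500.0          # N, point load at the free end (illustrative)
L = 2.0            # m, beam length
E = 200e9          # Pa, Young's modulus (typical structural steel)
b, h = 0.05, 0.1   # m, rectangular cross-section width and height
I = b * h**3 / 12  # m^4, second moment of area of the rectangle

delta = P * L**3 / (3 * E * I)
print(f"tip deflection = {delta * 1000:.3f} mm")  # 1.6 mm for these inputs
```

Note how strongly the result depends on geometry: the L³ and h³ factors mean that doubling the length or halving the depth changes the deflection by a factor of eight.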
Notable historical milestones include Timoshenko's correction of the Euler-Bernoulli beam equation (1922) and Hardy Cross's publication of the moment distribution method (1936), an important innovation in the design of continuous frames. Related fields include applied mechanics, materials science, continuum mechanics, and fracture mechanics.
Solid mechanics
–
Continuum mechanics
78.
Fluid mechanics
–
Fluid mechanics is a branch of physics concerned with the mechanics of fluids (liquids, gases, and plasmas) and the forces on them. Fluid mechanics has a wide range of applications, including in mechanical engineering, civil engineering, chemical engineering, geophysics, and astrophysics. Fluid mechanics can be divided into fluid statics, the study of fluids at rest, and fluid dynamics, the study of fluids in motion. Fluid mechanics, especially fluid dynamics, is an active field of research with many problems that are partly or wholly unsolved. Fluid mechanics can be mathematically complex, and such problems can often best be solved by numerical methods; a modern discipline, called computational fluid dynamics, is devoted to this approach. Particle image velocimetry, an experimental method for visualizing and analyzing fluid flow, also takes advantage of the highly visual nature of fluid flow. Inviscid flow was further analyzed by various mathematicians, while viscous flow was explored by a multitude of engineers including Jean Léonard Marie Poiseuille. Fluid statics or hydrostatics is the branch of fluid mechanics that studies fluids at rest. It embraces the study of the conditions under which fluids are at rest in stable equilibrium, and is contrasted with fluid dynamics. Hydrostatics is fundamental to hydraulics, the engineering of equipment for storing, transporting and using fluids. It is also relevant to some aspects of geophysics and astrophysics, to meteorology, and to medicine. Fluid dynamics is a subdiscipline of fluid mechanics that deals with fluid flow, the science of liquids and gases in motion. The solution to a fluid dynamics problem typically involves calculating various properties of the fluid, such as velocity, pressure, and density, as functions of space and time. Fluid dynamics has several subdisciplines itself, including aerodynamics and hydrodynamics. Some fluid-dynamical principles are even used in traffic engineering and crowd dynamics.
Fluid mechanics is a subdiscipline of continuum mechanics, as illustrated in the following table. In a mechanical view, a fluid is a substance that does not support shear stress; that is why a fluid at rest has the shape of its containing vessel, and a fluid at rest has no shear stress. The assumptions inherent to a fluid mechanical treatment of a physical system can be expressed in terms of mathematical equations; fundamentally, every such system is assumed to obey conservation of mass, momentum, and energy, each of which can be expressed as an equation in integral form over a control volume. The continuum assumption is an idealization of continuum mechanics under which fluids can be treated as continuous, even though, on a microscopic scale, they are composed of molecules. Under this assumption, fluid properties can vary continuously from one volume element to another and are average values of the molecular properties. The continuum hypothesis can lead to inaccurate results in applications like supersonic flows or molecular flows on the nano scale; problems for which the continuum hypothesis fails can be solved using statistical mechanics. To determine whether or not the continuum hypothesis applies, the Knudsen number, defined as the ratio of the molecular mean free path to the characteristic length scale, is evaluated. Problems with Knudsen numbers below 0.1 can be evaluated using the continuum hypothesis. The Navier–Stokes equations are differential equations that describe the force balance at a given point within a fluid.
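The Knudsen-number criterion just described is simple enough to evaluate directly. The sketch below uses the commonly quoted mean free path of air at standard conditions (about 68 nm, an assumed reference value, not from the text) and applies the Kn < 0.1 rule of thumb to a few length scales.

```python
# Sketch: using the Knudsen number Kn = lambda / L to decide whether
# the continuum hypothesis applies (Kn < 0.1, per the rule of thumb above).
MEAN_FREE_PATH = 68e-9   # m, air at standard conditions (assumed reference value)

def knudsen(length_scale):
    """Knudsen number for a characteristic length scale in metres."""
    return MEAN_FREE_PATH / length_scale

# Illustrative scales: wing chord, pipe, microchannel, nanopore.
for L in (1.0, 1e-3, 1e-6, 100e-9):
    kn = knudsen(L)
    regime = "continuum OK" if kn < 0.1 else "continuum fails"
    print(f"L = {L:g} m   Kn = {kn:.3g}   {regime}")
```

For macroscopic flows Kn is vanishingly small and the Navier–Stokes description is safe; at the 100 nm scale Kn approaches unity and a statistical-mechanical (kinetic) treatment is needed.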
Fluid mechanics
–
Balance for some integrated fluid quantity in a control volume enclosed by a control surface.
79.
Field (physics)
–
In physics, a field is a physical quantity, typically a number or tensor, that has a value for each point in space and time. For example, on a weather map, the surface wind velocity is described by assigning a vector to each point on the map; each vector represents the speed and direction of the movement of air at that point. As another example, an electric field can be thought of as a condition in space emanating from an electric charge and extending throughout the whole of space; when a test electric charge is placed in this electric field, the charge experiences a force. Physicists have found the notion of a field to be of such practical utility for the analysis of forces that they have come to think of a force as due to a field. In the modern framework of the quantum theory of fields, even without referring to a test particle, a field occupies space and contains energy. This led physicists to consider electromagnetic fields to be a physical entity; the fact that the electromagnetic field can possess momentum and energy makes it very real. A particle makes a field, and a field acts on another particle. In practice, the strength of most fields has been found to diminish with distance to the point of being undetectable; one consequence is that the Earth's gravitational field quickly becomes undetectable on cosmic scales. A field has a unique tensorial character at every point where it is defined: i.e. a field cannot be a scalar field somewhere and a vector field somewhere else. For example, the Newtonian gravitational field is a vector field. Moreover, within each category, a field can be either a classical field or a quantum field, depending on whether it is characterized by numbers or quantum operators respectively; in quantum field theory an equivalent representation of a field is a field particle, for example a boson. To Isaac Newton, his law of universal gravitation simply expressed the gravitational force that acted between any pair of massive objects.
In the eighteenth century, a new quantity was devised to simplify the bookkeeping of all these gravitational forces. This quantity, the gravitational field, gave at each point in space the total gravitational force which would be felt by an object with unit mass at that point. The development of the independent concept of a field truly began in the nineteenth century with the development of the theory of electromagnetism. In the early stages, André-Marie Ampère and Charles-Augustin de Coulomb could manage with Newton-style laws that expressed the forces between pairs of electric charges or electric currents. However, it became much more natural to take the field approach and express these laws in terms of electric and magnetic fields. The independent nature of the field became more apparent with James Clerk Maxwell's discovery that waves in these fields propagated at a finite speed. Maxwell, at first, did not adopt the modern concept of a field as a fundamental quantity that could independently exist; instead, he supposed that the electromagnetic field expressed the deformation of some underlying medium, the luminiferous aether, much like the tension in a rubber membrane. If that were the case, the observed velocity of the electromagnetic waves should depend upon the velocity of the observer with respect to the aether. Despite much effort, no experimental evidence of such an effect was ever found.
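The gravitational field described above, as "force per unit mass at each point", is straightforward to write down for a point mass: g(r) = −GM r̂ / |r|². The sketch below evaluates this vector field at an arbitrary point, using standard values for G and Earth's mass and radius; the function name is ours, for illustration.

```python
import math

# Sketch: a field assigns a value to every point in space. Here: the Newtonian
# gravitational field g(r) = -G*M*rhat/|r|^2 of a point mass at the origin.
G = 6.674e-11        # m^3 kg^-1 s^-2, gravitational constant
M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m, mean radius

def grav_field(x, y, z, M=M_EARTH):
    """Gravitational field vector (m/s^2) of a point mass M at the origin."""
    r2 = x * x + y * y + z * z
    r = math.sqrt(r2)
    scale = -G * M / (r2 * r)      # -G*M/|r|^2, divided once more by |r| for rhat
    return (scale * x, scale * y, scale * z)

gx, gy, gz = grav_field(R_EARTH, 0.0, 0.0)
print(gx)   # about -9.8 m/s^2 at Earth's surface, directed toward the mass
```

This is a vector field in the sense of the article: the same rule returns a three-component value for every point where it is defined (everywhere except the origin).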
Field (physics)
–
Illustration of the electric field surrounding a positive (red) and a negative (blue) charge.
80.
Gravity
–
Gravity, or gravitation, is a natural phenomenon by which all things with mass are brought toward one another, including planets, stars and galaxies. Since energy and mass are equivalent, all forms of energy, including light, also cause gravitation and are under its influence. On Earth, gravity gives weight to physical objects and causes the ocean tides. Gravity has an infinite range, although its effects become increasingly weaker on farther objects. Gravity is most accurately described by the general theory of relativity, which describes gravity not as a force but as a consequence of the curvature of spacetime caused by the uneven distribution of mass. The most extreme example of this curvature of spacetime is a black hole, from which nothing, not even light, can escape once past its event horizon. Gravity also results in gravitational time dilation, where time lapses more slowly at a lower gravitational potential. Gravity is the weakest of the four fundamental interactions of nature: the gravitational attraction is approximately 10^38 times weaker than the strong force, 10^36 times weaker than the electromagnetic force and 10^29 times weaker than the weak force. As a consequence, gravity has a negligible influence on the behavior of subatomic particles. On the other hand, gravity is the dominant interaction at the macroscopic scale. For this reason, in part, the pursuit of a theory of everything, the merging of the general theory of relativity and quantum mechanics into quantum gravity, has become an area of active research. While the modern European thinkers are credited with the development of gravitational theory, some of the earliest descriptions came from early mathematician-astronomers such as Aryabhata, who identified the force of gravity to explain why objects do not fall out when the Earth rotates; later, the works of Brahmagupta referred to the presence of this force and described it as an attractive force. Modern work on gravitational theory began with the work of Galileo Galilei in the late 16th and early 17th centuries. This was a major departure from Aristotle's belief that heavier objects have a higher gravitational acceleration.
Galileo postulated air resistance as the reason that objects with less mass may fall more slowly in an atmosphere. Galileo's work set the stage for the formulation of Newton's theory of gravity. In 1687, English mathematician Sir Isaac Newton published Principia, which hypothesizes the inverse-square law of universal gravitation. Newton's theory enjoyed its greatest success when it was used to predict the existence of Neptune based on motions of Uranus that could not be accounted for by the actions of the other planets; calculations by both John Couch Adams and Urbain Le Verrier predicted the general position of the planet. However, a discrepancy in Mercury's orbit pointed out flaws in Newton's theory. The issue was resolved in 1915 by Albert Einstein's new theory of general relativity, which accounted for the small discrepancy in Mercury's orbit. The simplest way to test the weak equivalence principle is to drop two objects of different masses or compositions in a vacuum and see whether they hit the ground at the same time. Such experiments demonstrate that all objects fall at the same rate when other forces (such as air resistance) are negligible.
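The equivalence-principle drop test has a simple Newtonian explanation: combining F = GMm/r² with F = ma gives a = GM/r², in which the falling object's mass m cancels. The sketch below makes the cancellation explicit with standard values for Earth; the function name is ours, for illustration.

```python
# Sketch: with Newton's law F = G*M*m/r^2 and F = m*a, the acceleration
# a = G*M/r^2 is independent of the falling object's mass m -- the content
# of the equivalence-principle drop test described above.
G = 6.674e-11        # m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m

def acceleration(m_object):
    """Free-fall acceleration at Earth's surface for an object of mass m_object."""
    force = G * M_EARTH * m_object / R_EARTH**2
    return force / m_object          # the object's mass cancels out

print(acceleration(0.001), acceleration(1000.0))  # same value for a gram or a tonne
```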
Gravity
–
Sir Isaac Newton, an English physicist who lived from 1642 to 1727
Gravity
–
Two-dimensional analogy of spacetime distortion generated by the mass of an object. Matter changes the geometry of spacetime, this (curved) geometry being interpreted as gravity. White lines do not represent the curvature of space but instead represent the coordinate system imposed on the curved spacetime, which would be rectilinear in a flat spacetime.
Gravity
–
Ball falling freely under gravity. See text for description.
81.
Nonlinear optics
–
Nonlinear optics is the branch of optics that describes the behaviour of light in nonlinear media, that is, media in which the polarization responds nonlinearly to the electric field of the light. The nonlinearity is typically observed only at very high light intensities such as those provided by lasers; above the Schwinger limit, the vacuum itself is expected to become nonlinear. In nonlinear optics, the superposition principle no longer holds. However, some nonlinear effects were discovered before the development of the laser. The theoretical basis for many nonlinear processes was first described in Bloembergen's monograph Nonlinear Optics. Nonlinear optics explains the nonlinear response of properties such as frequency, polarization, phase or path of incident light. Third-harmonic generation: generation of light with a tripled frequency; three photons are destroyed, creating a single photon at three times the frequency. High-harmonic generation: generation of light with frequencies much greater than that of the original. Sum-frequency generation: generation of light with a frequency that is the sum of two other frequencies. Difference-frequency generation: generation of light with a frequency that is the difference between two other frequencies. Optical parametric amplification: amplification of a signal input in the presence of a higher-frequency pump wave. Optical parametric oscillation: generation of a signal and idler wave using a parametric amplifier in a resonator. Optical parametric generation: like parametric oscillation, but without a resonator. Spontaneous parametric down-conversion: the amplification of the vacuum fluctuations in the low-gain regime. Optical rectification: generation of quasi-static electric fields. There are also nonlinear light-matter interactions with free electrons and plasmas. Optical Kerr effect: intensity-dependent refractive index. Self-focusing: an effect due to the optical Kerr effect in which the spatial variation in the intensity creates a spatial variation in the refractive index.
Kerr-lens modelocking: the use of self-focusing as a mechanism to mode-lock lasers. Self-phase modulation: an effect due to the optical Kerr effect in which the temporal variation in the intensity creates a temporal variation in the refractive index. Optical solitons: a solution for either an optical pulse or a spatial mode that does not change during propagation due to a balance between dispersion and the Kerr effect. Cross-phase modulation: where one wavelength of light can affect the phase of another wavelength of light through the optical Kerr effect. Four-wave mixing: can also arise from other nonlinearities. Cross-polarized wave generation: a χ(3) effect in which a wave with polarization vector perpendicular to the input one is generated. Stimulated Brillouin scattering: interaction of photons with acoustic phonons. Multi-photon absorption: simultaneous absorption of two or more photons, transferring the energy to a single electron. Multiple photoionisation: near-simultaneous removal of many bound electrons by one photon.
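The frequency-mixing processes in the list above all follow photon-energy conservation: since E = hc/λ, adding frequencies means adding reciprocal wavelengths, so for sum-frequency generation 1/λ₃ = 1/λ₁ + 1/λ₂. A minimal sketch (the function name is ours, for illustration):

```python
# Sketch: photon-energy conservation in sum-frequency generation.
# With E = h*c/lambda, the generated wavelength obeys 1/l3 = 1/l1 + 1/l2.
def sum_frequency_wavelength(l1_nm, l2_nm):
    """Wavelength (nm) produced by summing the frequencies of two inputs."""
    return 1.0 / (1.0 / l1_nm + 1.0 / l2_nm)

# Second-harmonic generation is the degenerate case l1 == l2:
print(sum_frequency_wavelength(1064.0, 1064.0))  # 532.0 nm from 1064 nm input
```

Difference-frequency generation works the same way with a minus sign: 1/λ₃ = 1/λ₁ − 1/λ₂.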
Nonlinear optics
–
Reversal of linear momentum and angular momentum in a phase-conjugating mirror.
Nonlinear optics
–
Dark-Red Gallium Selenide in its bulk form.
82.
Nuclear astrophysics
–
In general terms, nuclear astrophysics aims to understand the origin of the chemical elements and the energy generation in stars. The basic tenets of nuclear astrophysics are that only isotopes of hydrogen and helium can be formed in a homogeneous big bang model, while all other elements are formed in stars. The conversion of nuclear mass to radiative energy is the source of energy which allows stars to shine for up to billions of years; the Sun itself has been shining for about 4.5 billion years. While impressive, these observations were used to formulate the theory, and the theory of stellar nucleosynthesis has itself been well tested by observation and experiment since it was first formulated. The radioactive isotope 26Al has a lifetime a bit less than one million years. The observable neutrino flux from nuclear reactors is much larger than that of the Sun, and thus Davis and others were primarily motivated to look for solar neutrinos for astronomical reasons. Although the foundations of the science are bona fide, there are many remaining open questions. See also: nuclear physics, astrophysics, nucleosynthesis, abundance of the chemical elements, Joint Institute for Nuclear Astrophysics.
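The mass-to-energy conversion that powers the Sun can be quantified with E = mc²: dividing the solar luminosity by c² gives the rate at which mass is converted to radiation. The luminosity value below is the standard nominal one, an assumed constant rather than a figure from the text, and the result is an order-of-magnitude illustration.

```python
# Sketch: the Sun's mass-conversion rate implied by E = m*c^2 and its luminosity.
C = 2.99792458e8     # m/s, speed of light
L_SUN = 3.828e26     # W, nominal solar luminosity (assumed standard value)

dm_dt = L_SUN / C**2   # kg/s of mass converted into radiation
print(f"{dm_dt:.3g} kg/s")  # roughly four billion kg (four million tonnes) per second
```

Even at this enormous rate, the Sun's total mass (about 2e30 kg) changes only at the level of a few parts in 10^14 per year, which is why it can shine for billions of years.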
Nuclear astrophysics
83.
Solar physics
–
Solar physics is the branch of astrophysics that specializes in the study of the Sun. It deals with detailed measurements that are possible only for our closest star. Because the Sun is uniquely situated for close-range observing, there is a split between the discipline of observational astrophysics (of distant stars) and observational solar physics. The study of solar physics is also important as it is believed that changes in the solar atmosphere and solar activity can have a major impact on Earth. The Sun also provides a natural physical laboratory for the study of plasma physics. The Babylonians were keeping a record of solar eclipses, with the oldest record originating from the ancient city of Ugarit, in modern-day Syria; this record dates to about 1300 BC. Ancient Chinese astronomers were also observing solar phenomena with the purpose of keeping track of calendars, which were based on lunar and solar cycles. Unfortunately, records kept before 720 BC are very vague and offer no useful information; however, after 720 BC, 37 solar eclipses were noted over the course of 240 years. Astronomical knowledge flourished in the Islamic world during medieval times; many observatories were built in cities from Damascus to Baghdad, where detailed astronomical observations were taken. Particularly, a few solar parameters were measured and detailed observations of the Sun were taken. Solar observations were taken for the purpose of navigation, but mostly for timekeeping: Islam requires its followers to pray five times a day, at times defined by the position of the Sun in the sky, so accurate observations of the Sun and its trajectory across the sky were needed. In the late 10th century, the Iranian astronomer Abu-Mahmud Khojandi built a massive observatory near Tehran, where he took accurate measurements of a series of meridian transits of the Sun.
Following the fall of the Western Roman Empire, Western Europe was cut off from many sources of ancient scientific knowledge. This, plus de-urbanisation and diseases such as the Black Death, led to a decline in scientific knowledge in Medieval Europe, especially in the early Middle Ages. During this period, observations of the Sun were taken either in relation to the zodiac, or to assist in building places of worship such as churches. In astronomy, the Renaissance period started with the work of Nicolaus Copernicus, who proposed that planets revolve around the Sun and not around the Earth; this model is known as the heliocentric model. His work was expanded by Johannes Kepler and Galileo Galilei. In particular, Galilei used his new telescope to look at the Sun and, in 1610, discovered sunspots on its surface. In the autumn of 1611, Johannes Fabricius wrote the first book on sunspots. Modern-day solar physics is focused towards understanding the many phenomena observed with the help of modern telescopes and satellites.
Solar physics
–
The SDO satellite
Solar physics
–
Internal structure
84.
Computational physics
–
Computational physics is the study and implementation of numerical analysis to solve problems in physics for which a quantitative theory already exists. Historically, computational physics was the first application of modern computers in science. In physics, different theories based on mathematical models provide very precise predictions of how systems behave. Unfortunately, it is often the case that solving the mathematical model for a particular system in order to produce a useful prediction is not feasible; this can occur, for instance, when the solution does not have a closed-form expression. In such cases, numerical approximations are required. There is a debate about the status of computation within the scientific method; note that while computers can be used in experiments for the measurement and recording of data, this clearly does not constitute a computational approach. Physics problems are in general very difficult to solve exactly, for several reasons: lack of algebraic and/or analytic solubility, complexity, and chaos. On the more advanced side, mathematical perturbation theory is also sometimes used. In addition, the computational cost and computational complexity for many-body problems tend to grow quickly: a macroscopic system typically has a size of the order of 10^23 constituent particles, so it is somewhat of a problem, and solving quantum mechanical problems is generally of exponential order in the size of the system. Because computational physics covers a broad class of problems, it is generally divided amongst the different mathematical problems it numerically solves, or the methods it applies. Furthermore, computational physics encompasses the tuning of the software and hardware structure to solve the problems. It is possible to find a corresponding computational branch for every major field in physics, for example computational mechanics, which consists of computational fluid dynamics (CFD) and computational solid mechanics.
One subfield at the confluence between CFD and electromagnetic modelling is computational magnetohydrodynamics. The quantum many-body problem leads naturally to the large and rapidly growing field of computational chemistry. Computational solid state physics is an important division of computational physics dealing directly with material science. A field related to computational condensed matter is computational statistical mechanics; computational statistical physics makes heavy use of Monte Carlo-like methods. More broadly, its techniques also find application in the social sciences, network theory, and mathematical models for the propagation of disease. Computational astrophysics is the application of these techniques and methods to astrophysical problems and phenomena.
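The core workflow described above, replacing an unsolvable (or merely inconvenient) analytic problem with a numerical approximation, can be shown on a problem where the exact answer is known, so the numerical error is visible. The sketch below integrates a unit-frequency harmonic oscillator, x'' = −x, with the semi-implicit Euler method; all values are illustrative.

```python
import math

# Sketch: numerically integrating an equation of motion that also has a known
# closed form (x(t) = cos(t) for x(0)=1, v(0)=0), so the error can be checked.
def integrate(x0, v0, t_end, dt=1e-4):
    """Semi-implicit Euler integration of the unit harmonic oscillator x'' = -x."""
    x, v = x0, v0
    for _ in range(int(t_end / dt)):
        v -= x * dt          # update velocity first (keeps energy error bounded)
        x += v * dt
    return x

t = 2 * math.pi              # one full period
numeric = integrate(1.0, 0.0, t)
print(numeric, math.cos(t))  # numeric result is close to the exact value 1.0
```

Shrinking `dt` trades computer time for accuracy, which is the basic cost/precision trade-off of all computational physics; for stiff or chaotic systems, the choice of integration scheme matters as much as the step size.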
85.
Soft matter
–
Soft matter comprises a variety of physical states that are easily deformed by thermal stresses or thermal fluctuations. They include liquids, colloids, polymers, foams, gels, granular materials, liquid crystals, and a number of biological materials. These materials share an important common feature in that predominant physical behaviors occur at an energy scale comparable with room-temperature thermal energy. At these temperatures, quantum aspects are generally unimportant. The physicist Pierre-Gilles de Gennes, who received the 1991 Nobel Prize in Physics for his work on soft matter, is especially noted for inventing the concept of reptation. Interesting behaviors arise from soft matter in ways that cannot be predicted, or are difficult to predict, directly from its atomic or molecular constituents; such materials often self-organize into mesoscopic structures, and the properties and interactions of these structures may determine the macroscopic behavior of the material. Soft materials are important in a wide range of technological applications. They may appear as structural and packaging materials, foams and adhesives, detergents and cosmetics, paints, food additives, lubricants and fuel additives, and rubber in tires. In addition, a number of biological materials are classifiable as soft matter. Liquid crystals, another category of soft matter, exhibit a responsivity to electric fields that makes them very important as materials in display devices. The small energy scales involved lead to large thermal fluctuations, a wide variety of forms, sensitivity of equilibrium structures to external conditions, and macroscopic softness. Soft matter, such as polymers and lipids, has found applications in nanotechnology as well. An important part of soft condensed matter research is biophysics; soft condensed matter biophysics may be diverging into two directions, a physical chemistry approach and a complex systems approach.
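The room-temperature thermal energy scale invoked above is easy to quantify. The snippet below (illustrative, not from the article) evaluates k_B·T at T = 300 K, the energy against which soft-matter structural energies are comparable.

```python
# Thermal energy scale at room temperature (T ~ 300 K).
k_B = 1.380649e-23                    # Boltzmann constant, J/K (exact SI value)
T = 300.0                             # room temperature, K

kT_joule = k_B * T                    # ~ 4.14e-21 J
kT_eV = kT_joule / 1.602176634e-19    # ~ 0.026 eV (about 1/40 eV)
```

Covalent bond energies are a few eV, roughly a hundred times larger, which is why ordinary solids are rigid at room temperature while soft-matter structures, held together by interactions of order k_B·T, fluctuate and deform readily.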
86.
Biomechanics
–
Biomechanics is the study of the structure and function of biological systems such as humans, animals, plants, organs, fungi, and cells by means of the methods of mechanics. Biomechanics is closely related to engineering, because it often uses traditional engineering sciences to analyze biological systems. Some simple applications of Newtonian mechanics and/or materials science can supply correct approximations to the mechanics of many biological systems, but usually biological systems are more complex than man-made systems, and numerical methods are applied in almost every biomechanical study. Research is done in an iterative process of hypothesis and verification, including several steps of modeling, computer simulation, and experimental measurement. Elements of mechanical engineering, electrical engineering, computer science, and gait analysis are commonly combined in biomechanical research. Biomechanics in sports can be stated as the muscular, joint, and skeletal actions of the body during the execution of a given task, skill, and/or technique. Proper understanding of biomechanics relating to sports skill has the greatest implications for sports performance, rehabilitation, and injury prevention; as noted by Dr. Michael Yessis, one could say that the best athlete is the one that executes his or her skill the best. The mechanical analysis of biomaterials and biofluids is usually carried out with the concepts of continuum mechanics. This assumption breaks down when the length scales of interest approach the order of the microstructural details of the material. One of the most remarkable characteristics of biomaterials is their hierarchical structure; in other words, the mechanical characteristics of these materials rely on physical phenomena occurring at multiple levels, from the molecular all the way up to the tissue and organ levels. Biomaterials are classified in two groups, hard and soft tissues. Mechanical deformation of hard tissues may be analysed with the theory of linear elasticity.
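As a minimal sketch of the linear-elasticity analysis mentioned for hard tissues, uniaxial Hooke's law relates stress and strain through Young's modulus. The function name and the modulus value used for cortical bone (about 18 GPa) are illustrative assumptions, order-of-magnitude figures rather than data from the article.

```python
def axial_stress(strain, youngs_modulus):
    """Uniaxial Hooke's law: stress = E * strain (valid for small strains)."""
    return youngs_modulus * strain

# Illustrative (assumed) value: cortical bone, E ~ 18 GPa.
stress = axial_stress(0.001, 18e9)  # 0.1% strain -> stress in Pa
```

A strain of 0.1% in this model gives a stress of 18 MPa, well inside the small-strain regime where linear elasticity applies; soft tissues routinely exceed that regime, which is why their analysis requires finite strain theory instead.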
On the other hand, soft tissues usually undergo large deformations, and thus their analysis relies on finite strain theory. The interest in continuum biomechanics is spurred by the need for realism in the development of medical simulation. Biological fluid mechanics, or biofluid mechanics, is the study of both gas and liquid fluid flows in or around biological organisms. An often-studied liquid biofluid problem is that of blood flow in the cardiovascular system. Under certain mathematical circumstances, blood flow can be modelled by the Navier–Stokes equations; in vivo, whole blood is assumed to be an incompressible Newtonian fluid. However, this assumption fails when considering forward flow within arterioles: at the microscopic scale, the effects of individual red blood cells become significant. When the diameter of the vessel is just slightly larger than the diameter of the red blood cell, the Fahraeus–Lindquist effect occurs. As the diameter of the vessel decreases further, the red blood cells must squeeze through the vessel and often can only pass in single file.
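Under the incompressible-Newtonian assumption described above, steady laminar flow in a rigid cylindrical vessel reduces the Navier–Stokes equations to Poiseuille's law, Q = πΔP·r⁴/(8μL). The sketch below evaluates it; every parameter value (pressure drop, vessel geometry, whole-blood viscosity) is an illustrative assumption, not data from the article.

```python
import math

def poiseuille_flow_rate(delta_p, radius, length, viscosity):
    """Volumetric flow rate Q = pi * dP * r^4 / (8 * mu * L) for steady,
    laminar flow of a Newtonian fluid in a rigid cylindrical tube."""
    return math.pi * delta_p * radius ** 4 / (8.0 * viscosity * length)

# Illustrative (assumed) values for a medium-sized vessel:
Q = poiseuille_flow_rate(
    delta_p=100.0,     # pressure drop along the segment, Pa
    radius=2e-3,       # vessel radius, m
    length=0.1,        # vessel length, m
    viscosity=3.5e-3,  # assumed whole-blood viscosity, Pa*s
)
```

The r⁴ dependence is the physically important point: halving the radius cuts the flow sixteenfold at the same pressure drop, which is why small changes in vessel diameter dominate cardiovascular resistance.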
Figure: Page of one of the first works of biomechanics, De Motu Animalium of Giovanni Alfonso Borelli, in the 17th century.
Figure: Red blood cells.
Figure: Chinstrap penguin leaping over water.
87.
Psychophysics
–
Psychophysics quantitatively investigates the relationship between physical stimuli and the sensations and perceptions they produce. Psychophysics also refers to a general class of methods that can be applied to study a perceptual system. Modern applications rely heavily on threshold measurement, ideal observer analysis, and signal detection theory. Psychophysics has widespread and important practical applications. For example, in the study of signal processing, psychophysics has informed the development of models and methods of lossy compression; these models explain why humans perceive very little loss of signal quality when audio and video signals are compressed using lossy methods. Many of the classical techniques and theories of psychophysics were formulated in 1860, when Gustav Theodor Fechner in Leipzig published Elemente der Psychophysik. He coined the term psychophysics, describing research intended to relate physical stimuli to the contents of consciousness, such as sensations. As a physicist and philosopher, Fechner aimed at developing a method that relates matter to the mind, connecting the publicly observable world and a person's privately experienced impression of it. From this, Fechner derived his well-known logarithmic scale, now known as the Fechner scale. Weber's and Fechner's work formed one of the bases of psychology as a science; Fechner's work systematised the introspectionist approach, which had to contend with the behaviorist approach, in which even verbal responses are as physical as the stimuli. Fechner's work was studied and extended by Charles S. Peirce, who was aided by his student Joseph Jastrow. Peirce and Jastrow largely confirmed Fechner's empirical findings, but not all. In particular, an experiment of Peirce and Jastrow rejected Fechner's estimation of a threshold of perception of weights. The Peirce–Jastrow experiments were conducted as part of Peirce's application of his program to human perception; other studies considered the perception of light. Jastrow wrote in summary: Mr. Peirce's courses in logic gave me my first real experience of intellectual muscle.
He borrowed the apparatus for me, which I took to my room and installed at my window, and the results were published over our joint names in the Proceedings of the National Academy of Sciences. This work clearly distinguishes observable cognitive performance from the expression of consciousness. One leading modern method is based on signal detection theory, developed for cases of very weak stimuli. However, the subjectivist approach persists among those in the tradition of Stanley Smith Stevens. Stevens revived the idea of a power law suggested by 19th-century researchers, in contrast with Fechner's log-linear function. He also advocated the assignment of numbers in ratio to the strengths of stimuli, and added techniques such as magnitude production and cross-modality matching. He opposed the assignment of stimulus strengths to points on a line that are labeled in order of strength; nevertheless, that sort of response has remained popular in applied psychophysics.
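The contrast between Fechner's logarithmic scale and Stevens' power law can be sketched directly. In the snippet below, the constant k, the reference intensity i0, and the exponent are free parameters chosen only for illustration; they are not values from the article.

```python
import math

def fechner(intensity, i0=1.0, k=1.0):
    """Fechner's law: sensation grows as the log of stimulus over threshold."""
    return k * math.log(intensity / i0)

def stevens(intensity, exponent, k=1.0):
    """Stevens' power law: sensation grows as a power of stimulus intensity."""
    return k * intensity ** exponent

# On Fechner's scale, doubling the stimulus always ADDS the same amount
# of sensation; on Stevens' scale, it MULTIPLIES sensation by 2**exponent.
```

This is the operational difference between the two traditions: Fechner's log-linear function predicts equal sensation increments for equal stimulus ratios, while Stevens' power law predicts equal sensation ratios, which is what his magnitude-estimation techniques were designed to measure.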
Figure: Diagram showing a specific staircase procedure, the transformed up/down method (1-up/2-down rule). Until the first reversal (which is neglected), the simple up/down rule and a larger step size are used.
88.
Cloud physics
–
Cloud physics is the study of the physical processes that lead to the formation, growth and precipitation of atmospheric clouds. Clouds consist of microscopic droplets of liquid water, tiny crystals of ice, or both. Cloud droplets initially form by the condensation of water vapor onto condensation nuclei when the supersaturation of the air exceeds a critical value, according to Köhler theory. Cloud condensation nuclei are necessary for cloud droplet formation because of the Kelvin effect: at small radii, the amount of supersaturation needed for condensation to occur is so large that it does not happen naturally. Raoult's law describes how the vapor pressure depends on the amount of solute in a solution; at high solute concentrations, when the cloud droplets are small, the supersaturation required is smaller than it would be without the presence of a nucleus. In warm clouds, larger cloud droplets fall at a higher terminal velocity, because at a given velocity the drag force per unit of droplet weight is greater on smaller droplets. The large droplets can then collide with small droplets and combine to form even larger drops. When the drops become large enough that their downward velocity is greater than the upward velocity of the surrounding air, the drops can fall to the earth as precipitation. Collision and coalescence are not as important in mixed-phase clouds, where the Bergeron process dominates. Other important processes that form precipitation are riming, when a supercooled liquid drop collides with a solid snowflake, and aggregation, when two solid snowflakes collide and combine. Advances in weather radar and satellite technology have allowed the precise study of clouds on a large scale. The history of cloud microphysics developed in the 19th century and is described in several publications. Otto von Guericke originated the idea that clouds were composed of water bubbles. In 1847, Augustus Waller used spider web to examine droplets under the microscope; these observations were confirmed by William Henry Dines in 1880 and Richard Assmann in 1884.
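The Kelvin effect described above can be quantified with the Kelvin equation, which gives the equilibrium saturation ratio S = exp(2σ/(ρ_w R_v T r)) over a curved pure-water surface of radius r. The sketch below uses illustrative round values for surface tension, temperature, and the water-vapor gas constant; it is a back-of-the-envelope estimate, not a calculation from the article.

```python
import math

def kelvin_saturation_ratio(radius, T=283.0, sigma=0.072,
                            rho_w=1000.0, R_v=461.5):
    """Equilibrium saturation ratio over a pure-water droplet (Kelvin equation).

    radius : droplet radius, m
    T      : temperature, K
    sigma  : surface tension of water, N/m (illustrative value)
    rho_w  : density of liquid water, kg/m^3
    R_v    : specific gas constant of water vapor, J/(kg*K)
    """
    return math.exp(2.0 * sigma / (rho_w * R_v * T * radius))
```

For a 0.01 µm droplet this gives a saturation ratio around 1.12, i.e. roughly 12% supersaturation, far beyond what occurs in the atmosphere; for a 1 µm droplet the required supersaturation drops to about 0.1%. This is exactly why condensation nuclei, which provide a large starting radius (and a solute effect via Raoult's law), are needed for droplets to form.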
As water evaporates from an area of the surface, the air over that area becomes moist. Moist air is lighter than the surrounding dry air, creating an unstable situation. When enough moist air has accumulated, all the moist air rises as a single packet; as more moist air forms along the surface, the process repeats, resulting in a series of discrete packets of moist air rising to form clouds. The main mechanism behind this process is adiabatic cooling: atmospheric pressure decreases with altitude, so the rising air expands in a process that expends energy and causes the air to cool, which makes water vapor condense into cloud. Water vapor in saturated air is normally attracted to condensation nuclei such as dust. The water droplets in a cloud have a normal radius of about 0.002 mm. The droplets may collide to form larger droplets, which remain aloft as long as the velocity of the rising air within the cloud is equal to or greater than the terminal velocity of the droplets.
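For droplets of the size quoted above (radius about 0.002 mm), the terminal velocity can be estimated from Stokes' law for a small sphere at low Reynolds number, v = 2r²g(ρ_p − ρ_f)/(9μ). The air viscosity and densities below are illustrative round values, not figures from the article.

```python
def stokes_terminal_velocity(radius, mu_air=1.81e-5,
                             rho_water=1000.0, rho_air=1.2, g=9.81):
    """Terminal fall speed of a small sphere in the Stokes (low-Reynolds) regime:
    v = 2 * r^2 * g * (rho_p - rho_f) / (9 * mu).

    radius in m; mu_air in Pa*s; densities in kg/m^3; returns speed in m/s.
    """
    return 2.0 * radius ** 2 * g * (rho_water - rho_air) / (9.0 * mu_air)

v = stokes_terminal_velocity(2e-6)  # typical cloud droplet, r ~ 0.002 mm
```

The result is on the order of half a millimetre per second, easily exceeded by updrafts inside a cloud, which is why such droplets stay aloft; a drizzle drop a hundred times larger falls (by this r² scaling) some ten thousand times faster, which is why growth by collision and coalescence is what turns cloud into precipitation.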
Figure: Late-summer rainstorm in Denmark. The nearly black color of the base indicates that the main cloud in the foreground is probably cumulonimbus.
Figure: Windy evening twilight enhanced by the Sun's angle, which can visually mimic a tornado resulting from orographic lift.
89.
Modern physics
–
Modern physics is the post-Newtonian conception of physics. In general, the term is used to refer to any branch of physics either developed in the early 20th century and onwards, or greatly influenced by the physics of that period. Small velocities and large distances are usually the realm of classical physics, although quantum and relativistic effects in principle exist across all scales. In a literal sense, the term modern physics means up-to-date physics, and in this sense a significant portion of so-called classical physics is modern. However, since roughly 1890, new discoveries have caused significant paradigm shifts, especially the advent of quantum mechanics (QM) and of Einsteinian relativity (ER). Physics that incorporates elements of either QM or ER is said to be modern physics, and it is in this latter sense that the term is generally used. Modern physics is often encountered when dealing with extreme conditions: quantum mechanical effects tend to appear when dealing with "lows" (low temperatures, small distances), while relativistic effects tend to appear when dealing with "highs" (high velocities, large distances), the middles being classical behaviour. For example, when analysing the behaviour of a gas at room temperature, the Maxwell–Boltzmann distribution may be used; however, near absolute zero, the Maxwell–Boltzmann distribution fails to account for the observed behaviour of the gas, and the Fermi–Dirac or Bose–Einstein distributions have to be used instead. Very often, it is possible to find – or retrieve – the classical behaviour from the modern description by analysing that description at low speeds. When doing so, the result is called the classical limit.
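The relationship between the classical Maxwell–Boltzmann distribution and the quantum Fermi–Dirac and Bose–Einstein distributions can be sketched numerically. Writing x = (E − μ)/kT for the dimensionless energy, all three mean occupation numbers agree in the classical limit x ≫ 1 (hot, dilute gas) and diverge from one another as x → 0 (cold, degenerate gas). This is a minimal illustration of the classical-limit idea, not material from the article.

```python
import math

def maxwell_boltzmann(x):
    """Classical mean occupation exp(-x), with x = (E - mu) / kT."""
    return math.exp(-x)

def bose_einstein(x):
    """Bose-Einstein mean occupation; requires x > 0."""
    return 1.0 / (math.exp(x) - 1.0)

def fermi_dirac(x):
    """Fermi-Dirac mean occupation."""
    return 1.0 / (math.exp(x) + 1.0)

# For x >> 1 the +/-1 in the denominators is negligible and all three
# reduce to exp(-x); for x near 0 the three predictions differ strongly.
```

At x = 10 the three occupations agree to better than one part in ten thousand, recovering the classical result; at x = 0.1 the Bose–Einstein occupation is roughly ten times the classical one, which is the regime where phenomena such as Bose–Einstein condensation appear.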
Figure: German physicist Max Planck, founder of quantum theory.