The Ising model, named after the physicist Ernst Ising, is a mathematical model of ferromagnetism in statistical mechanics. The model consists of discrete variables that represent magnetic dipole moments of atomic spins that can be in one of two states (+1 or −1); the spins are arranged in a graph, usually a lattice, allowing each spin to interact with its neighbors. The model allows the identification of phase transitions as a simplified model of reality; the two-dimensional square-lattice Ising model is one of the simplest statistical models to show a phase transition. The Ising model was invented by the physicist Wilhelm Lenz, who gave it as a problem to his student Ernst Ising; the one-dimensional Ising model has no phase transition and was solved by Ising himself in his 1924 thesis. The two-dimensional square-lattice Ising model is much harder and was given an analytic description much later, by Lars Onsager. It is usually solved by a transfer-matrix method, although other approaches exist, more related to quantum field theory.
In dimensions greater than four, the phase transition of the Ising model is described by mean field theory. Consider a set Λ of lattice sites, each with a set of adjacent sites forming a d-dimensional lattice. For each lattice site k ∈ Λ there is a discrete variable σk such that σk ∈ {+1, −1}, representing the site's spin. A spin configuration, σ = (σk)k∈Λ, is an assignment of a spin value to each lattice site. For any two adjacent sites i, j ∈ Λ there is an interaction Jij, and a site j ∈ Λ has an external magnetic field hj interacting with it. The energy of a configuration σ is given by the Hamiltonian function H(σ) = −∑⟨ij⟩ Jij σi σj − μ ∑j hj σj, where the first sum is over pairs of adjacent spins. The notation ⟨ij⟩ indicates that sites i and j are nearest neighbors, and the magnetic moment is given by μ. Note that the sign in the second term of the Hamiltonian above should actually be positive because the electron's magnetic moment is antiparallel to its spin, but the negative term is used conventionally. The configuration probability is given by the Boltzmann distribution with inverse temperature β ≥ 0: Pβ(σ) = e^(−βH(σ)) / Zβ, where β = (kB T)^(−1) and the normalization constant Zβ = ∑σ e^(−βH(σ)) is the partition function.
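As an illustration of the definitions above, here is a minimal Python sketch (not part of the source) that evaluates the Hamiltonian and the Boltzmann distribution by brute-force enumeration for a short one-dimensional chain with open ends; the uniform coupling J, field h and moment μ are illustrative parameter choices.

```python
import itertools
import math

def energy(spins, J=1.0, h=0.0, mu=1.0):
    """Ising energy H(sigma) = -J * sum over nearest-neighbour pairs of s_i s_j
    -  mu * h * sum of s_j, for a 1-D open chain (illustrative parameters)."""
    interaction = sum(spins[i] * spins[i + 1] for i in range(len(spins) - 1))
    field = sum(spins)
    return -J * interaction - mu * h * field

def boltzmann_distribution(n_sites=4, beta=1.0, J=1.0, h=0.0):
    """Enumerate every configuration of a small chain and return the partition
    function Z_beta = sum over sigma of exp(-beta H) and each state's probability."""
    configs = list(itertools.product([-1, +1], repeat=n_sites))
    weights = [math.exp(-beta * energy(s, J, h)) for s in configs]
    Z = sum(weights)
    probs = {s: w / Z for s, w in zip(configs, weights)}
    return Z, probs

Z, probs = boltzmann_distribution()
# For a ferromagnetic coupling J > 0 the two fully aligned configurations
# carry the largest probability.
print(Z, probs[(1, 1, 1, 1)], probs[(-1, -1, -1, -1)])
```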
For a function f of the spins, one denotes by ⟨f⟩β = ∑σ f(σ) Pβ(σ) the expectation value of f. The configuration probabilities Pβ(σ) represent the probability that the system is in a state with configuration σ. The minus sign on each term of the Hamiltonian function is conventional. Using this sign convention, Ising models can be classified according to the sign of the interaction: if Jij > 0 for all pairs i, j, the interaction is called ferromagnetic; if Jij < 0 for all pairs, the interaction is called antiferromagnetic; if Jij = 0, the spins are noninteracting; otherwise the system is called nonferromagnetic. In a ferromagnetic Ising model, spins desire to be aligned: the configurations in which adjacent spins are of the same sign have higher probability. In an antiferromagnetic model, adjacent spins tend to have opposite signs. The sign convention of H(σ) also explains how a spin site j interacts with the external field: namely, the spin site wants to line up with the external field. If hj > 0, the spin site j desires to line up in the positive direction; if hj < 0, the spin site j desires to line up in the negative direction; if hj = 0, there is no external influence on the spin site.
Ising models are often examined without an external field interacting with the lattice, that is, hj = 0 for all j in the lattice Λ. Using this simplification, the Hamiltonian becomes H(σ) = −∑⟨ij⟩ Jij σi σj. When the external field is everywhere zero, h = 0, the Ising model is symmetric under switching the value of the spin in all the lattice sites; a sketch of this symmetry check appears below. Another common simplification is to assume that all of the nearest neighbors ⟨ij⟩ have the same interaction strength, so that Jij = J for all pairs i, j in Λ.
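The spin-flip symmetry at zero field can be checked directly; the following short sketch (again assuming an illustrative one-dimensional open chain with uniform coupling J) verifies that flipping every spin leaves each product σiσj, and hence the energy, unchanged.

```python
import itertools

def energy_zero_field(spins, J=1.0):
    """H(sigma) = -J * sum of s_i s_j over nearest-neighbour pairs (1-D chain, h = 0)."""
    return -J * sum(spins[i] * spins[i + 1] for i in range(len(spins) - 1))

# With h = 0 the Hamiltonian is invariant under flipping every spin,
# since each product s_i * s_j is unchanged when both factors change sign.
for spins in itertools.product([-1, +1], repeat=5):
    flipped = tuple(-s for s in spins)
    assert energy_zero_field(spins) == energy_zero_field(flipped)
```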
In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and animals. Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of achieving its goals. Colloquially, the term "artificial intelligence" is used to describe machines that mimic "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving". As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect. A quip in Tesler's Theorem says "AI is whatever hasn't been done yet." For instance, optical character recognition is frequently excluded from things considered to be AI, having become a routine technology. Modern machine capabilities classified as AI include understanding human speech, competing at the highest level in strategic game systems, autonomously operating cars, intelligent routing in content delivery networks and military simulations.
Artificial intelligence can be classified into three different types of systems: analytical, human-inspired, and humanized artificial intelligence. Analytical AI has only characteristics consistent with cognitive intelligence. Human-inspired AI has elements from cognitive as well as emotional intelligence. Humanized AI shows characteristics of all types of competencies, is able to be self-conscious and is self-aware in interactions with others. Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism, followed by disappointment and the loss of funding, followed by new approaches and renewed funding. For most of its history, AI research has been divided into subfields that often fail to communicate with each other; these sub-fields are based on technical considerations, such as particular goals, the use of particular tools, or deep philosophical differences. Subfields have also been based on social factors. The traditional problems of AI research include reasoning, knowledge representation, learning, natural language processing and the ability to move and manipulate objects.
General intelligence is among the field's long-term goals. Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics and economics. The AI field draws upon computer science, information engineering, psychology, linguistics and many other fields. The field was founded on the claim that human intelligence "can be so precisely described that a machine can be made to simulate it"; this raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues that have been explored by myth and philosophy since antiquity. Some people consider AI to be a danger to humanity if it progresses unabated. Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment. In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding.
Thought-capable artificial beings appeared as storytelling devices in antiquity and have been common in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. These characters and their fates raised many of the same issues now discussed in the ethics of artificial intelligence. The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction; this insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis. Along with concurrent discoveries in neurobiology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain. Turing proposed that if a human could not distinguish between responses from a machine and a human, the machine could be considered "intelligent".
The first work now recognized as AI was McCulloch and Pitts' 1943 formal design for Turing-complete "artificial neurons". The field of AI research was born at a workshop at Dartmouth College in 1956. Attendees Allen Newell, Herbert Simon, John McCarthy, Marvin Minsky and Arthur Samuel became the founders and leaders of AI research. They and their students produced programs that the press described as "astonishing": computers were learning checkers strategies (and by 1959 were playing better than the average human), solving word problems in algebra, proving logical theorems and speaking English.
The term phase transition is most commonly used to describe transitions between solid, liquid and gaseous states of matter, as well as plasma in rare cases. A phase of a thermodynamic system and the states of matter have uniform physical properties. During a phase transition of a given medium, certain properties of the medium change, often discontinuously, as a result of the change of external conditions, such as temperature, pressure, or others. For example, a liquid may become gas upon heating to the boiling point, resulting in an abrupt change in volume; the measurement of the external conditions at which the transformation occurs is termed the phase transition. Phase transitions commonly occur in nature and are used today in many technologies. Examples of phase transitions include: the transitions between the solid, liquid and gaseous phases of a single component, due to the effects of temperature and/or pressure; a eutectic transformation, in which a two-component single-phase liquid is cooled and transforms into two solid phases.
The same process, but beginning with a solid instead of a liquid, is called a eutectoid transformation. A peritectic transformation, in which a two-component single-phase solid is heated and transforms into a solid phase and a liquid phase. A spinodal decomposition, in which a single phase is cooled and separates into two different compositions of that same phase. Transition to a mesophase between solid and liquid, such as one of the "liquid crystal" phases. The transition between the ferromagnetic and paramagnetic phases of magnetic materials at the Curie point. The transition between differently ordered, commensurate or incommensurate, magnetic structures, such as in cerium antimonide. The martensitic transformation, which occurs as one of the many phase transformations in carbon steel and stands as a model for displacive phase transformations. Changes in the crystallographic structure such as between ferrite and austenite of iron. Order-disorder transitions such as in alpha-titanium aluminides.
The dependence of the adsorption geometry on coverage and temperature, such as for hydrogen on iron. The emergence of superconductivity in certain metals and ceramics when cooled below a critical temperature. The transition between different molecular structures of solids, such as between an amorphous structure and a crystal structure, between two different crystal structures, or between two amorphous structures. Quantum condensation of bosonic fluids; the superfluid transition in liquid helium is an example of this. The breaking of symmetries in the laws of physics during the early history of the universe as its temperature cooled. Isotope fractionation occurs during a phase transition: the ratio of light to heavy isotopes in the involved molecules changes. When water vapor condenses, the heavier water isotopes become enriched in the liquid phase while the lighter isotopes tend toward the vapor phase. Phase transitions occur when the thermodynamic free energy of a system is non-analytic for some choice of thermodynamic variables.
This condition stems from the interactions of a large number of particles in a system, and does not appear in systems that are too small. It is important to note that phase transitions can occur and are defined for non-thermodynamic systems, where temperature is not a parameter. Examples include: quantum phase transitions, dynamic phase transitions, and topological phase transitions. In these types of systems other parameters take the place of temperature. For instance, connection probability replaces temperature for percolating networks. At the phase transition point (for instance, the boiling point) the two phases of a substance, liquid and vapor, have identical free energies and therefore are equally likely to exist. Below the boiling point, the liquid is the more stable state of the two, whereas above it the gaseous form is preferred. It is sometimes possible to change the state of a system diabatically in such a way that it can be brought past a phase transition point without undergoing a phase transition. The resulting state is metastable, i.e. less stable than the phase to which the transition would have occurred, but not unstable either.
This occurs in superheating and supersaturation, for example. Paul Ehrenfest classified phase transitions based on the behavior of the thermodynamic free energy as a function of other thermodynamic variables. Under this scheme, phase transitions were labeled by the lowest derivative of the free energy that is discontinuous at the transition. First-order phase transitions exhibit a discontinuity in the first derivative of the free energy with respect to some thermodynamic variable; the various solid/liquid/gas transitions are classified as first-order transitions because they involve a discontinuous change in density, which is the first derivative of the free energy with respect to pressure. Second-order phase transitions are continuous in the first derivative but exhibit a discontinuity in a second derivative of the free energy; these include the ferromagnetic phase transition in materials such as iron, where the magnetization, which is the first derivative of the free energy with respect to the applied magnetic field strength, increases continuously from zero as the temperature is lowered below the Curie temperature.
The magnetic susceptibility, the second derivative of the free energy with respect to the field, changes discontinuously. Under the Ehrenfest classification scheme, there could in principle be third, fourth, and higher-order phase transitions.
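To make the continuous (second-order) case concrete, the following Python sketch (not from the source) solves the standard mean-field self-consistency equation m = tanh(zJm / kBT) for an Ising ferromagnet; the parameter name zJ and the choice of units with kB = 1 (so that Tc = zJ) are assumptions made purely for illustration.

```python
import math

def mean_field_magnetization(T, zJ=1.0, tol=1e-10):
    """Solve the mean-field self-consistency equation m = tanh(zJ * m / T)
    by fixed-point iteration (units with k_B = 1). The result is the spontaneous
    magnetization, i.e. the first derivative of the free energy with respect
    to the applied field as the field goes to zero."""
    m = 1.0  # start from a fully ordered guess
    for _ in range(10000):
        m_new = math.tanh(zJ * m / T)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m_new

# Below the critical temperature T_c = zJ the magnetization rises continuously
# from zero; above it the only solution is m = 0 -- the hallmark of a
# second-order (continuous) transition.
for T in (0.5, 0.9, 0.99, 1.01, 1.5):
    print(T, round(mean_field_magnetization(T), 4))
```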
Probability theory is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms; these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of these outcomes is called an event. Central subjects in probability theory include discrete and continuous random variables, probability distributions, and stochastic processes, which provide mathematical abstractions of non-deterministic or uncertain processes or measured quantities that may either be single occurrences or evolve over time in a random fashion. Although it is not possible to perfectly predict random events, much can be said about their behavior. Two major results in probability theory describing such behaviour are the law of large numbers and the central limit theorem.
As a mathematical foundation for statistics, probability theory is essential to many human activities that involve quantitative analysis of data. Methods of probability theory also apply to descriptions of complex systems given only partial knowledge of their state, as in statistical mechanics. A great discovery of twentieth-century physics was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics. The mathematical theory of probability has its roots in attempts to analyze games of chance by Gerolamo Cardano in the sixteenth century, and by Pierre de Fermat and Blaise Pascal in the seventeenth century. Christiaan Huygens published a book on the subject in 1657, and in the 19th century Pierre Laplace completed what is today considered the classic interpretation. Initially, probability theory mainly considered discrete events, and its methods were mainly combinatorial. Eventually, analytical considerations compelled the incorporation of continuous variables into the theory; this culminated in modern probability theory, on foundations laid by Andrey Nikolaevich Kolmogorov.
Kolmogorov combined the notion of sample space, introduced by Richard von Mises, with measure theory and presented his axiom system for probability theory in 1933. This became the undisputed axiomatic basis for modern probability theory. Most introductions to probability theory treat discrete probability distributions and continuous probability distributions separately; the measure theory-based treatment of probability covers the discrete, the continuous, a mix of the two, and more. Consider an experiment that can produce a number of outcomes; the set of all outcomes is called the sample space of the experiment. The power set of the sample space is formed by considering all different collections of possible results. For example, rolling an honest die produces one of six possible results. One collection of possible results corresponds to getting an odd number. Thus, the subset {1, 3, 5} is an element of the power set of the sample space of die rolls; these collections are called events. In this case, {1, 3, 5} is the event that the die falls on some odd number.
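A short Python sketch (illustrative, not from the source) of the die example: the power set of the six-outcome sample space contains 2^6 = 64 collections of outcomes, one of which is the odd-number event {1, 3, 5}.

```python
from itertools import combinations

sample_space = {1, 2, 3, 4, 5, 6}

# The power set of the sample space: every possible collection of outcomes.
power_set = [set(c) for r in range(len(sample_space) + 1)
             for c in combinations(sorted(sample_space), r)]
assert len(power_set) == 2 ** 6   # 64 events, including the empty event

# The event "the die falls on some odd number" is one element of the power set.
odd = {1, 3, 5}
assert odd in power_set
```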
If the results that occur fall in a given event, that event is said to have occurred. Probability is a way of assigning every "event" a value between zero and one, with the requirement that the event made up of all possible results be assigned a value of one. To qualify as a probability distribution, the assignment of values must satisfy the requirement that if you look at a collection of mutually exclusive events, the probability that any of these events occurs is given by the sum of the probabilities of the events. For example, the probability that any one of the mutually exclusive events {1, 6}, {3}, or {2, 4} will occur is 5/6. This is the same as saying that the probability of the event {1, 2, 3, 4, 6} is 5/6; this event encompasses the possibility of any number except five being rolled. The mutually exclusive event {5} has a probability of 1/6, and the event {1, 2, 3, 4, 5, 6} has a probability of 1, that is, absolute certainty. When doing calculations using the outcomes of an experiment, it is necessary that all those elementary events have a number assigned to them. This is done using a random variable.
A random variable is a function that assigns to each elementary event in the sample space a real number. This function is usually denoted by a capital letter. In the case of a die, the assignment of a number to certain elementary events can be done using the identity function; this does not always work. For example, when flipping a coin the two possible outcomes are "heads" and "tails". In this example, the random variable X could assign to the outcome "heads" the number "0" and to the outcome "tails" the number "1". Discrete probability theory deals with events that occur in countable sample spaces. Examples: throwing dice, experiments with decks of cards, random walks, and tossing coins. Classical definition: initially the probability of an event to occur was defined as the number of cases favorable for the event, over the number of total outcomes possible in an equiprobable sample space: see Classical definition of probability. For example, if the event is "occurrence of an even number when a die is rolled", the probability is given by 3/6 = 1/2, since 3 faces out of the 6 have even numbers and each face has the same probability of appearing.
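The die and coin examples above can be checked in a few lines of Python (an illustrative sketch, not from the source; the helper name prob and the use of exact fractions are choices made for the example).

```python
from fractions import Fraction

die = {1, 2, 3, 4, 5, 6}

def prob(event):
    """Classical definition: favourable cases over total cases
    in an equiprobable sample space."""
    return Fraction(len(event & die), len(die))

# Additivity over mutually exclusive events: P({1,6} or {3} or {2,4}) = 5/6.
assert prob({1, 6} | {3} | {2, 4}) == prob({1, 6}) + prob({3}) + prob({2, 4}) == Fraction(5, 6)

# "Occurrence of an even number when a die is rolled": 3 favourable faces out of 6.
assert prob({2, 4, 6}) == Fraction(1, 2)

# A random variable is just a function from elementary outcomes to real numbers,
# e.g. for a coin flip: heads -> 0, tails -> 1.
X = {"heads": 0, "tails": 1}
```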
Degrees of freedom (physics and chemistry)
In physics, a degree of freedom is an independent physical parameter in the formal description of the state of a physical system. The set of all states of a system is known as the system's phase space, and the degrees of freedom of the system are the dimensions of the phase space. The location of a particle in three-dimensional space requires three position coordinates. Similarly, the direction and speed at which a particle moves can be described in terms of three velocity components, each in reference to the three dimensions of space. If the time evolution of the system is deterministic, where the state at one instant uniquely determines its past and future position and velocity as a function of time, such a system has six degrees of freedom. If the motion of the particle is constrained to a lower number of dimensions, for example, if the particle must move along a wire or on a fixed surface, then the system has fewer than six degrees of freedom. On the other hand, a system with an extended object that can rotate or vibrate can have more than six degrees of freedom.
In classical mechanics, the state of a point particle at any given time is described with position and velocity coordinates in the Lagrangian formalism, or with position and momentum coordinates in the Hamiltonian formalism. In statistical mechanics, a degree of freedom is a single scalar number describing the microstate of a system; the specification of all microstates of a system is a point in the system's phase space. In the 3D ideal chain model in chemistry, two angles are necessary to describe the orientation of each monomer. It is often useful to specify quadratic degrees of freedom; these are degrees of freedom that contribute in a quadratic way to the energy of the system. In three-dimensional space, three degrees of freedom are associated with the movement of a particle. A diatomic gas molecule has 6 degrees of freedom; this set may be decomposed in terms of translations, rotations and vibrations of the molecule. The center of mass motion of the entire molecule accounts for 3 degrees of freedom. In addition, the molecule has two rotational degrees of freedom and one vibrational mode.
The rotations occur around the two axes perpendicular to the line between the two atoms. The rotation around the atom–atom bond is not a physical rotation. This yields, for a diatomic molecule, a decomposition of 3N = 6 = 3 + 2 + 1. For a general, non-linear molecule, all 3 rotational degrees of freedom are considered, resulting in the decomposition 3N = 3 + 3 + (3N − 6), which means that an N-atom molecule has 3N − 6 vibrational degrees of freedom for N > 2. In special cases, such as adsorbed large molecules, the rotational degrees of freedom can be limited to only one. As defined above, one can also count degrees of freedom using the minimum number of coordinates required to specify a position. This is done as follows: for a single particle we need 2 coordinates in a 2-D plane to specify its position and 3 coordinates in 3-D space, thus its degrees of freedom in a 3-D space is 3. For a body consisting of 2 particles in a 3-D space with constant distance between them we can show its degrees of freedom to be 5.
Let's say one particle in this body has coordinates (x1, y1, z1) and the other has coordinates (x2, y2, z2) with z2 unknown. Application of the formula for the distance between two coordinates, d = √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²), results in one equation with one unknown, in which we can solve for z2. One of x1, x2, y1, y2, z1, or z2 can be unknown. Contrary to the classical equipartition theorem, at room temperature the vibrational motion of molecules makes negligible contributions to the heat capacity; this is because these degrees of freedom are frozen out, since the spacing between the energy eigenvalues exceeds the energy corresponding to ambient temperatures. In the following table such degrees of freedom are disregarded because of their low effect on total energy. Only the translational and rotational degrees of freedom contribute, in equal amounts, to the heat capacity ratio; this is why γ = 7/5 for diatomic gases at room temperature. However, at high temperatures, on the order of the vibrational temperature, vibrational motion cannot be neglected. Vibrational temperatures are between 10³ K and 10⁴ K.
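A small sketch of the heat-capacity arithmetic above (illustrative, not from the source; it assumes an ideal gas, for which γ = Cp/Cv = (f + 2)/f with f the number of active quadratic degrees of freedom, and the function name is hypothetical).

```python
def heat_capacity_ratio(translational=3, rotational=2, vibrational=0):
    """gamma = C_p / C_v = (f + 2) / f for an ideal gas, where f counts the
    active quadratic degrees of freedom; each unfrozen vibrational mode
    contributes two (kinetic plus potential energy)."""
    f = translational + rotational + 2 * vibrational
    return (f + 2) / f

# Diatomic gas at room temperature: vibration frozen out, gamma = 7/5.
print(heat_capacity_ratio())                   # 1.4
# Near the vibrational temperature the mode unfreezes and gamma drops:
print(heat_capacity_ratio(vibrational=1))      # 9/7 ~ 1.286
```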
The set of degrees of freedom X1, ..., XN of a system is independent if the energy associated with the set can be written in the form E = E1(X1) + E2(X2) + ⋯ + EN(XN), where Ei is a function of the sole variable Xi. For example, if X1 and X2 are two degrees of freedom and E is the associated energy: if E = X1⁴ + X2⁴, the two degrees of freedom are independent; if E contains a term coupling X1 and X2, such as X1³X2, the two degrees of freedom are not independent.
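Returning to the decomposition of molecular degrees of freedom discussed earlier, here is a minimal sketch (illustrative only; the function name degrees_of_freedom is hypothetical) that splits the 3N total degrees of freedom of an N-atom molecule into translational, rotational and vibrational parts for the linear and non-linear cases.

```python
def degrees_of_freedom(n_atoms, linear):
    """Split the 3N total degrees of freedom of an N-atom molecule into
    translational, rotational and vibrational parts (3N - 5 vibrations
    for a linear molecule, 3N - 6 for a non-linear one)."""
    translational = 3
    rotational = 2 if linear else 3
    vibrational = 3 * n_atoms - translational - rotational
    return translational, rotational, vibrational

# Diatomic molecule (linear): 3N = 6 = 3 + 2 + 1
print(degrees_of_freedom(2, linear=True))     # (3, 2, 1)
# Non-linear triatomic molecule: 3N = 9 = 3 + 3 + 3
print(degrees_of_freedom(3, linear=False))    # (3, 3, 3)
```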
Combinatorics is an area of mathematics primarily concerned with counting, both as a means and an end in obtaining results, and with certain properties of finite structures. It is related to many other areas of mathematics and has many applications ranging from logic to statistical physics, from evolutionary biology to computer science, etc. To understand the scope of combinatorics requires a great deal of further amplification, the details of which are not universally agreed upon. According to H. J. Ryser, a definition of the subject is difficult because it crosses so many mathematical subdivisions. Insofar as an area can be described by the types of problems it addresses, combinatorics is involved with: the enumeration of specified structures, sometimes referred to as arrangements or configurations in a general sense, associated with finite systems; the existence of such structures that satisfy certain given criteria; the construction of these structures, possibly in many ways; and optimization, finding the "best" structure or solution among several possibilities, be it the "largest", "smallest" or satisfying some other optimality criterion.
Leon Mirsky has said: "combinatorics is a range of linked studies which have something in common and yet diverge in their objectives, their methods, and the degree of coherence they have attained." One way to define combinatorics is to describe its subdivisions with their problems and techniques. This is the approach used below. However, there are also purely historical reasons for including or not including some topics under the combinatorics umbrella. Although primarily concerned with finite systems, some combinatorial questions and techniques can be extended to an infinite but discrete setting. Combinatorics is well known for the breadth of the problems it tackles. Combinatorial problems arise in many areas of pure mathematics, notably in algebra, probability theory and geometry, as well as in its many application areas. Many combinatorial questions have historically been considered in isolation, giving an ad hoc solution to a problem arising in some mathematical context. In the twentieth century, however, powerful and general theoretical methods were developed, making combinatorics into an independent branch of mathematics in its own right.
One of the oldest and most accessible parts of combinatorics is graph theory, which by itself has numerous natural connections to other areas. Combinatorics is used frequently in computer science to obtain formulas and estimates in the analysis of algorithms. A mathematician who studies combinatorics is called a combinatorialist. Basic combinatorial concepts and enumerative results appeared throughout the ancient world. In the 6th century BCE, the ancient Indian physician Sushruta asserts in Sushruta Samhita that 63 combinations can be made out of 6 different tastes, taken one at a time, two at a time, etc., thus computing all 2⁶ − 1 possibilities. The Greek historian Plutarch discusses an argument between Chrysippus and Hipparchus over a rather delicate enumerative problem, which was later shown to be related to Schröder–Hipparchus numbers. In the Ostomachion, Archimedes considers a tiling puzzle. In the Middle Ages, combinatorics continued to be studied, largely outside of the European civilization. The Indian mathematician Mahāvīra provided formulae for the number of permutations and combinations, and these formulas may have been familiar to Indian mathematicians as early as the 6th century CE.
The philosopher and astronomer Rabbi Abraham ibn Ezra established the symmetry of binomial coefficients, while a closed formula was obtained later by the talmudist and mathematician Levi ben Gerson, in 1321. The arithmetical triangle, a graphical diagram showing relationships among the binomial coefficients, was presented by mathematicians in treatises dating as far back as the 10th century, and would eventually become known as Pascal's triangle. In Medieval England, campanology provided examples of what is now known as Hamiltonian cycles in certain Cayley graphs on permutations. During the Renaissance, together with the rest of mathematics and the sciences, combinatorics enjoyed a rebirth. Works of Pascal, Jacob Bernoulli and Euler became foundational in the emerging field. In modern times, the works of J. J. Sylvester and Percy MacMahon helped lay the foundation for algebraic combinatorics. Graph theory also enjoyed an explosion of interest at the same time, in connection with the four color problem. In the second half of the 20th century, combinatorics enjoyed a rapid growth, which led to the establishment of dozens of new journals and conferences in the subject.
In part, the growth was spurred by new connections and applications to other fields, ranging from algebra to probability, from functional analysis to number theory, etc. These connections shed the boundaries between combinatorics and parts of mathematics and theoretical computer science, but at the same time led to a partial fragmentation of the field. Enumerative combinatorics is the most classical area of combinatorics and concentrates on counting the number of certain combinatorial objects. Although counting the number of elements in a set is a rather broad mathematical problem, many of the problems that arise in applications have a relatively simple combinatorial description. Fibonacci numbers are a basic example of a problem in enumerative combinatorics; the twelvefold way provides a unified framework for counting permutations, combinations and partitions. Analytic combinatorics concerns the enumeration of combinatorial structures using tools from complex analysis and probability theory.
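As a small illustration of enumerative counting (a sketch, not from the source): Sushruta's 2⁶ − 1 = 63 taste combinations can be checked directly, and Fibonacci numbers arise, for example, as the number of tilings of a 1 × n strip by squares and dominoes, a standard enumerative interpretation assumed here for the example.

```python
from functools import lru_cache
from itertools import combinations

# Sushruta's count: non-empty combinations of 6 tastes, 2**6 - 1 = 63.
tastes = range(6)
n_combinations = sum(1 for r in range(1, 7) for _ in combinations(tastes, r))
assert n_combinations == 2**6 - 1 == 63

@lru_cache(maxsize=None)
def tilings(n):
    """Number of ways to tile a 1 x n strip with 1 x 1 squares and 1 x 2
    dominoes -- a standard counting problem whose answer is a Fibonacci number."""
    if n <= 1:
        return 1
    return tilings(n - 1) + tilings(n - 2)

print([tilings(n) for n in range(8)])   # 1, 1, 2, 3, 5, 8, 13, 21
```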
In chemistry and materials science, the coordination number, also called ligancy, of a central atom in a molecule or crystal is the number of atoms, molecules or ions bonded to it. The ion/molecule/atom surrounding the central ion/molecule/atom is called a ligand. This number is determined somewhat differently for molecules than for crystals. For molecules and polyatomic ions the coordination number of an atom is determined by counting the other atoms to which it is bonded. For example, [Cr(NH3)6]3+ has Cr3+ as its central cation, which has a coordination number of 6 and is described as hexacoordinate. However, the solid-state structures of crystals often have less clearly defined bonds, and in these cases a count of neighboring atoms is employed. The simplest method is the one used in materials science. The usual value of the coordination number for a given structure refers to an atom in the interior of a crystal lattice, with neighbors in all directions. In contexts where crystal surfaces are important, such as materials science and heterogeneous catalysis, the number of neighbors of an interior atom is the bulk coordination number, while the number of surface neighbors of an atom at the surface of the crystal is the surface coordination number.
In chemistry, the coordination number, defined in 1893 by Alfred Werner, is the total number of neighbors of a central atom in a molecule or ion. Although a carbon atom has four chemical bonds in most stable molecules, the coordination number of each carbon is four in methane, three in ethylene and two in acetylene. In effect we count the first bond to each neighboring atom, but not the other bonds. In coordination complexes, only the first or sigma bond between each ligand and the central atom counts, but not any pi bonds to the same ligands. In tungsten hexacarbonyl, W(CO)6, the coordination number of tungsten is counted as six although pi as well as sigma bonding is important in such metal carbonyls. The most common coordination number for d-block transition metal complexes is 6, with an octahedral geometry. The observed range is 2 to 9. Metals in the f-block can accommodate higher coordination numbers due to their greater ionic radii and the availability of more orbitals for bonding. Coordination numbers of 8 to 12 are observed for f-block elements.
For example, with bidentate nitrate ions as ligands, CeIV and ThIV form the 12-coordinate ions [Ce(NO3)6]2− and [Th(NO3)6]2−. When the surrounding ligands are much smaller than the central atom, even higher coordination numbers may be possible. One computational chemistry study predicted a stable [PbHe15]2+ ion composed of a central lead ion coordinated with no fewer than 15 helium atoms. At the opposite extreme, steric shielding can give rise to unusually low coordination numbers. A rare instance of a metal adopting a coordination number of 1 occurs in the terphenyl-based arylthallium complex 2,6-Tipp2C6H3Tl, where Tipp is the 2,4,6-triisopropylphenyl group. For π-electron ligands such as the cyclopentadienide ion [C5H5]−, alkenes and the cyclooctatetraenide ion [C8H8]2−, the number of atoms in the π-electron system that bind to the central atom is termed the hapticity. In ferrocene, Fe(η5-C5H5)2, the hapticity, η, of each cyclopentadienide anion is five. There are various ways of assigning the contribution made to the coordination number of the central iron atom by each cyclopentadienide ligand.
The contribution could be assigned as one since there is one ligand, or as five since there are five neighbouring atoms, or as three since there are three electron pairs involved; normally the count of electron pairs is taken. In order to determine the coordination number of an atom in a crystal, the crystal structure has first to be determined; this is achieved using X-ray, neutron or electron diffraction. Once the positions of the atoms within the unit cell of the crystal are known, the coordination number of an atom can be determined. For molecular solids or coordination complexes the units of the polyatomic species can be detected and a count of the bonds can be performed. Solids with lattice structures, which include metals and many inorganic solids, can have regular structures where coordinating atoms are all at the same distance and form the vertices of a coordination polyhedron. However, there are many such solids where the structures are irregular. In materials science, the bulk coordination number of a given atom in the interior of a crystal lattice is the number of nearest neighbours to a given atom.
For an atom at a surface of a crystal, the surface coordination number is always less than the bulk coordination number. The surface coordination number is dependent on the Miller indices of the surface. In a body-centered cubic crystal, the bulk coordination number is 8, whereas for the surface, the surface coordination number is 4. α-Aluminium has a regular cubic close-packed structure, where each aluminium atom has 12 nearest neighbors, 6 in the same plane and 3 above and 3 below, and the coordination polyhedron is a cuboctahedron. α-Iron has a body-centered cubic structure where each iron atom has 8 nearest neighbors situated at the corners of a cube. The two most common allotropes of carbon have different coordination numbers. In diamond, each carbon atom is at the centre of a regular tetrahedron formed by four other carbon atoms, so the coordination number is four, as for methane. Graphite is made of two-dimensional layers in which each carbon is covalently bonded to three other carbons, giving a coordination number of 3.
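To illustrate how bulk coordination numbers follow from counting nearest neighbours, here is a small Python sketch (illustrative, not from the source; the lattices are built directly from the conventional cubic cells and the function name is hypothetical).

```python
import itertools
import math

def bulk_coordination_number(structure):
    """Count nearest neighbours of the atom at the origin by generating lattice
    points in a small block and keeping those at the minimum non-zero distance."""
    points = []
    rng = range(-2, 3)
    for i, j, k in itertools.product(rng, repeat=3):
        if structure == "bcc":
            points.append((i, j, k))                         # corner sites
            points.append((i + 0.5, j + 0.5, k + 0.5))       # body-centred sites
        elif structure == "fcc":
            points.append((i, j, k))                         # corner sites
            points.append((i + 0.5, j + 0.5, k))             # face-centred sites
            points.append((i + 0.5, j, k + 0.5))
            points.append((i, j + 0.5, k + 0.5))
    dists = sorted(math.dist(p, (0, 0, 0)) for p in points if p != (0, 0, 0))
    nearest = dists[0]
    return sum(1 for d in dists if abs(d - nearest) < 1e-9)

print(bulk_coordination_number("bcc"))   # 8, as for alpha-iron
print(bulk_coordination_number("fcc"))   # 12, as for alpha-aluminium
```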