The cell is the basic structural, functional, and biological unit of all known living organisms. A cell is the smallest unit of life. Cells are often called the "building blocks of life"; the study of cells is called cellular biology. Cells consist of cytoplasm enclosed within a membrane, which contains many biomolecules such as proteins and nucleic acids. Organisms can be classified as unicellular (consisting of a single cell, such as bacteria) or multicellular; the number of cells in plants and animals varies from species to species, and it has been estimated that humans contain somewhere around 40 trillion cells. Most plant and animal cells are visible only under a microscope, with dimensions between 1 and 100 micrometres. Cells were discovered by Robert Hooke in 1665, who named them for their resemblance to the cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells.
Cells emerged on Earth at least 3.5 billion years ago. Cells are of two types: eukaryotic, which contain a nucleus, and prokaryotic, which do not. Prokaryotes are single-celled organisms, while eukaryotes can be either single-celled or multicellular. Prokaryotes include two of the three domains of life, archaea and bacteria. Prokaryotic cells were the first form of life on Earth, characterised by having vital biological processes including cell signaling. They are simpler and smaller than eukaryotic cells, and lack membrane-bound organelles such as a nucleus. The DNA of a prokaryotic cell consists of a single chromosome in direct contact with the cytoplasm; the nuclear region in the cytoplasm is called the nucleoid. Most prokaryotes are the smallest of all organisms, ranging from 0.5 to 2.0 µm in diameter. A prokaryotic cell has three architectural regions. Enclosing the cell is the cell envelope, consisting of a plasma membrane covered by a cell wall which, for some bacteria, may be further covered by a third layer called a capsule.
Though most prokaryotes have both a cell membrane and a cell wall, there are exceptions such as Mycoplasma and Thermoplasma, which possess only the cell membrane layer. The envelope gives rigidity to the cell and separates its interior from the environment, serving as a protective filter; the cell wall, which consists of peptidoglycan in bacteria, acts as an additional barrier against exterior forces. It prevents the cell from expanding and bursting from the osmotic pressure of a hypotonic environment; some eukaryotic cells, such as plant and fungal cells, also have a cell wall. Inside the cell is the cytoplasmic region that contains the genome and various sorts of inclusions; the genetic material is found freely in the cytoplasm. Prokaryotes can carry extrachromosomal DNA elements called plasmids, which are usually circular. Linear bacterial plasmids have been identified in several species of spirochete bacteria, including members of the genus Borrelia, notably Borrelia burgdorferi, which causes Lyme disease. Though not forming a nucleus, the DNA is condensed in a nucleoid.
Plasmids encode additional genes, such as antibiotic resistance genes. On the outside, flagella and pili project from the cell's surface; these are structures made of proteins that facilitate movement and communication between cells. Plants, animals, fungi, slime moulds and algae are all eukaryotic; these cells are about fifteen times wider than a typical prokaryote and can be as much as a thousand times greater in volume. The main distinguishing feature of eukaryotes as compared to prokaryotes is compartmentalization: the presence of membrane-bound organelles in which specific activities take place. Most important among these is the cell nucleus, an organelle that houses the cell's DNA; this nucleus gives the eukaryote its name, which means "true kernel". Other differences include the following. The plasma membrane resembles that of prokaryotes in function, with minor differences in the setup. Cell walls may or may not be present; the eukaryotic DNA is organized in one or more linear molecules, called chromosomes, which are associated with histone proteins.
All chromosomal DNA is stored in the cell nucleus, separated from the cytoplasm by a membrane. Some eukaryotic organelles, such as mitochondria, also contain some DNA. Many eukaryotic cells are ciliated with primary cilia, which play important roles in chemosensation and thermosensation. Cilia may thus be "viewed as a sensory cellular antennae that coordinates a large number of cellular signaling pathways, sometimes coupling the signaling to ciliary motility or alternatively to cell division and differentiation." Motile eukaryotes can move using motile flagella; motile cells are absent in flowering plants. Eukaryotic flagella are more complex than those of prokaryotes. All cells, whether prokaryotic or eukaryotic, have a membrane that envelops the cell, regulates what moves in and out, and maintains the electric potential of the cell. Inside the membrane, the cytoplasm takes up most of the cell's volume. All cells possess DNA, the hereditary material of genes, and RNA, containing the information necessary to build various proteins such as enzymes, the cell's primary machinery.
There are other kinds of biomolecules in cells. This article lists these primary cellular components briefly.
Electrical impedance is the measure of the opposition that a circuit presents to a current when a voltage is applied; the term complex impedance may be used interchangeably. Quantitatively, the impedance of a two-terminal circuit element is the ratio of the complex representation of a sinusoidal voltage between its terminals to the complex representation of the current flowing through it. In general, it depends upon the frequency of the sinusoidal voltage. Impedance extends the concept of resistance to AC circuits and possesses both magnitude and phase, unlike resistance, which has only magnitude; when a circuit is driven with direct current, there is no distinction between impedance and resistance. The notion of impedance is useful for performing AC analysis of electrical networks, because it allows relating sinusoidal voltages and currents by a simple linear law. In multiple-port networks, the two-terminal definition of impedance is inadequate, but the complex voltages at the ports and the currents flowing through them are still linearly related by the impedance matrix.
Impedance is a complex number, with the same units as resistance. Its symbol is Z, and it may be represented by writing its magnitude and phase in the form |Z|∠θ. However, the Cartesian complex number representation is often more powerful for circuit analysis purposes; the reciprocal of impedance is admittance, whose SI unit is the siemens, formerly called the mho. Instruments used to measure electrical impedance are called impedance analyzers. The term impedance was coined by Oliver Heaviside in July 1886, and Arthur Kennelly was the first to represent impedance with complex numbers, in 1893. In addition to resistance as seen in DC circuits, impedance in AC circuits includes the effects of the induction of voltages in conductors by magnetic fields and the electrostatic storage of charge induced by voltages between conductors; the impedance caused by these two effects is collectively referred to as reactance and forms the imaginary part of complex impedance, whereas resistance forms the real part. Impedance is defined as the frequency-domain ratio of the voltage to the current.
In other words, it is the voltage–current ratio for a single complex exponential at a particular frequency ω. For a sinusoidal current or voltage input, the polar form of the complex impedance relates the amplitude and phase of the voltage and current. In particular, the magnitude of the complex impedance is the ratio of the voltage amplitude to the current amplitude; the impedance of a two-terminal circuit element is represented as a complex quantity Z. The polar form conveniently captures both magnitude and phase characteristics as Z = |Z| e^(j arg(Z)), where the magnitude |Z| represents the ratio of the voltage amplitude to the current amplitude, while the argument arg(Z) gives the phase difference between voltage and current. Here j is the imaginary unit, used instead of i in this context to avoid confusion with the symbol for electric current. In Cartesian form, impedance is defined as Z = R + jX, where the real part of impedance is the resistance R and the imaginary part is the reactance X.
Where it is needed to add or subtract impedances, the Cartesian form is more convenient; where quantities are multiplied or divided, the polar form is simpler. A circuit calculation, such as finding the total impedance of two impedances in parallel, may require conversion between forms several times during the calculation. Conversion between the forms follows the normal conversion rules of complex numbers. To simplify calculations, sinusoidal voltage and current waves are represented as complex-valued functions of time, denoted V and I: V = |V| e^(j(ωt + φ_V)) and I = |I| e^(j(ωt + φ_I)). The impedance of a bipolar circuit is defined as the ratio of these quantities: Z = V/I = (|V|/|I|) e^(j(φ_V − φ_I)).
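These relations can be sketched in a few lines of Python with the standard-library `cmath` module. The component formulas used here (Z_R = R, Z_C = 1/(jωC), Z_L = jωL) are standard textbook results rather than anything specific to this article, and the component values are purely illustrative:

```python
import cmath

def impedance_resistor(R):
    """Ideal resistor: purely real impedance."""
    return complex(R, 0.0)

def impedance_capacitor(C, omega):
    """Ideal capacitor: Z = 1/(j*omega*C), purely reactive."""
    return 1.0 / (1j * omega * C)

def impedance_inductor(L, omega):
    """Ideal inductor: Z = j*omega*L, purely reactive."""
    return 1j * omega * L

def parallel(Z1, Z2):
    """Two impedances in parallel, combined in Cartesian form."""
    return Z1 * Z2 / (Z1 + Z2)

# Illustrative example: 100-ohm resistor in parallel with a 1 uF
# capacitor at omega = 1e4 rad/s (so Z_C = -100j ohms).
Z_R = impedance_resistor(100.0)
Z_C = impedance_capacitor(1e-6, 1e4)
Z = parallel(Z_R, Z_C)          # Cartesian form: 50 - 50j ohms
mag, phase = cmath.polar(Z)     # polar form |Z| e^(j*arg(Z))
```

Note that the parallel combination is done in Cartesian form (where addition is easy) while `cmath.polar` converts the result to magnitude and phase, mirroring the back-and-forth conversion described above.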
Frequency is the number of occurrences of a repeating event per unit of time. It is also referred to as temporal frequency, which emphasizes the contrast to spatial frequency and angular frequency; the period is the duration of time of one cycle in a repeating event, so the period is the reciprocal of the frequency. For example, if a newborn baby's heart beats at a frequency of 120 times a minute, its period (the time interval between beats) is half a second. Frequency is an important parameter used in science and engineering to specify the rate of oscillatory and vibratory phenomena, such as mechanical vibrations, audio signals, radio waves, and light. For cyclical processes, such as rotation, oscillations, or waves, frequency is defined as a number of cycles per unit time. In physics and engineering disciplines, such as optics and radio, frequency is denoted by the Latin letter f or by the Greek letter ν (nu); the relation between the frequency and the period T of a repeating event or oscillation is given by f = 1/T.
The SI derived unit of frequency is the hertz, named after the German physicist Heinrich Hertz. One hertz means that an event repeats once per second; if a TV has a refresh rate of 1 hertz, the TV's screen will change its picture once a second. A previous name for this unit was cycles per second; the SI unit for period is the second. A traditional unit of measure used with rotating mechanical devices is revolutions per minute, abbreviated r/min or rpm; 60 rpm equals one hertz. As a matter of convenience, longer and slower waves, such as ocean surface waves, tend to be described by wave period rather than frequency, while short and fast waves, like audio and radio, are usually described by their frequency instead of period. Angular frequency, denoted by the Greek letter ω, is defined as the rate of change of angular displacement, θ, or the rate of change of the phase of a sinusoidal waveform, that is, the rate of change of the argument of the sine function: y(t) = sin(θ(t)) = sin(ωt) = sin(2πft), so dθ/dt = ω = 2πf. Angular frequency is commonly measured in radians per second but, for discrete-time signals, can be expressed as radians per sampling interval, a dimensionless quantity.
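The unit conversions above can be written out as a minimal Python sketch (the function names are my own, chosen for illustration):

```python
import math

def period(f):
    """Period T = 1/f, in seconds when f is in hertz."""
    return 1.0 / f

def rpm_to_hz(rpm):
    """Revolutions per minute to hertz: 60 rpm = 1 Hz."""
    return rpm / 60.0

def angular_frequency(f):
    """Angular frequency omega = 2*pi*f, in radians per second."""
    return 2.0 * math.pi * f

# 120 beats per minute -> 2 Hz -> period of half a second,
# matching the newborn-heartbeat example above.
T = period(120 / 60.0)          # 0.5 s
hz = rpm_to_hz(60)              # 1.0 Hz
omega = angular_frequency(1.0)  # 2*pi rad/s
```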
Angular frequency is larger than regular frequency by a factor of 2π. Spatial frequency is analogous to temporal frequency, but the time axis is replaced by one or more spatial displacement axes, e.g. y(x) = sin(θ(x)) = sin(kx), so dθ/dx = k. The wavenumber, k, is the spatial frequency analogue of angular temporal frequency and is measured in radians per meter. In the case of more than one spatial dimension, wavenumber is a vector quantity. For periodic waves in nondispersive media, frequency has an inverse relationship to the wavelength, λ. Even in dispersive media, the frequency f of a sinusoidal wave is equal to the phase velocity v of the wave divided by the wavelength λ of the wave: f = v/λ. In the special case of electromagnetic waves moving through a vacuum, v = c, where c is the speed of light in a vacuum, and this expression becomes f = c/λ. When waves from a monochromatic source travel from one medium to another, their frequency remains the same; only their wavelength and speed change. Measurement of frequency can be done in the following ways. Calculating the frequency of a repeating event is accomplished by counting the number of times that event occurs within a specific time period, then dividing the count by the length of the time period.
For example, if 71 events occur within 15 seconds, the frequency is f = 71 / (15 s) ≈ 4.73 Hz. If the number of counts is not large, it is more accurate to measure the time interval for a predetermined number of occurrences, rather than the number of occurrences within a specified time. The latter method introduces a random error into the count of between zero and one count, so on average half a count; this is called gating error and causes an average error in the calculated frequency of Δf = 1/(2T), where T is the timing interval.
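The counting example and the gating-error formula can be reproduced directly; a small sketch:

```python
def frequency_by_counting(count, interval_s):
    """f = (number of events) / (measurement interval in seconds)."""
    return count / interval_s

def gating_error(interval_s):
    """Average frequency error from the zero-to-one count ambiguity:
    delta_f = 1 / (2 * T), where T is the timing interval."""
    return 1.0 / (2.0 * interval_s)

# 71 events in 15 seconds, as in the example above.
f = frequency_by_counting(71, 15.0)   # about 4.73 Hz
df = gating_error(15.0)               # about 0.033 Hz average error
```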
Oscillation is the repetitive variation, typically in time, of some measure about a central value or between two or more different states. The term vibration is used to describe mechanical oscillation. Familiar examples of oscillation include a swinging pendulum and alternating current. Oscillations occur not only in mechanical systems but in dynamic systems in every area of science: for example the beating of the human heart, business cycles in economics, predator–prey population cycles in ecology, geothermal geysers in geology, vibration of strings in guitar and other string instruments, periodic firing of nerve cells in the brain, and the periodic swelling of Cepheid variable stars in astronomy. The simplest mechanical oscillating system is a weight attached to a linear spring, subject only to weight and tension. Such a system may be approximated on an air table or ice surface. The system is in an equilibrium state when the spring is static. If the system is displaced from the equilibrium, there is a net restoring force on the mass, tending to bring it back to equilibrium.
However, in moving the mass back to the equilibrium position, it has acquired momentum which keeps it moving beyond that position, establishing a new restoring force in the opposite sense. If a constant force such as gravity is added to the system, the point of equilibrium is shifted; the time taken for an oscillation to occur is referred to as the oscillatory period. Systems where the restoring force on a body is directly proportional to its displacement, such as the dynamics of the spring-mass system, are described mathematically by the simple harmonic oscillator, and the regular periodic motion is known as simple harmonic motion. In the spring-mass system, oscillations occur because, at the static equilibrium displacement, the mass has kinetic energy, which is converted into potential energy stored in the spring at the extremes of its path. The spring-mass system illustrates some common features of oscillation, namely the existence of an equilibrium and the presence of a restoring force which grows stronger the further the system deviates from equilibrium.
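The spring-mass case can be sketched with the standard closed-form solution. This is a minimal illustration under stated assumptions (mass released from rest at displacement A, no damping, illustrative parameter values); it also checks the energy exchange described above, since kinetic plus potential energy should stay constant at ½kA²:

```python
import math

def shm_position(A, k, m, t):
    """x(t) = A*cos(omega*t), with omega = sqrt(k/m);
    the mass is released from rest at displacement A."""
    omega = math.sqrt(k / m)
    return A * math.cos(omega * t)

def total_energy(A, k, m, t):
    """Kinetic + potential energy; constant for the undamped oscillator."""
    omega = math.sqrt(k / m)
    x = A * math.cos(omega * t)
    v = -A * omega * math.sin(omega * t)
    return 0.5 * m * v**2 + 0.5 * k * x**2

A, k, m = 0.1, 4.0, 1.0              # metres, N/m, kg (illustrative)
T = 2 * math.pi * math.sqrt(m / k)   # oscillation period
x0 = shm_position(A, k, m, 0.0)      # starts at A
xT = shm_position(A, k, m, T)        # returns to A after one period
```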
All real-world oscillator systems are thermodynamically irreversible. This means there are dissipative processes such as friction or electrical resistance which continually convert some of the energy stored in the oscillator into heat in the environment; this is called damping. Thus, oscillations tend to decay with time unless there is some net source of energy into the system; the simplest description of this decay process can be illustrated by oscillation decay of the harmonic oscillator. In addition, an oscillating system may be subject to some external force, as when an AC circuit is connected to an outside power source. In this case the oscillation is said to be driven; some systems can be excited by energy transfer from the environment. This transfer occurs where systems are embedded in some fluid flow. For example, the phenomenon of flutter in aerodynamics occurs when an arbitrarily small displacement of an aircraft wing results in an increase in the angle of attack of the wing on the air flow and a consequential increase in lift coefficient, leading to a still greater displacement.
At sufficiently large displacements, the stiffness of the wing dominates to provide the restoring force that enables an oscillation. The harmonic oscillator and the systems it models have a single degree of freedom. More complicated systems have more degrees of freedom, for example two masses and three springs. In such cases, the behavior of each variable influences that of the others; this leads to a coupling of the oscillations of the individual degrees of freedom. For example, two pendulum clocks mounted on a common wall will tend to synchronise; this phenomenon was first observed by Christiaan Huygens in 1665. The apparent motion of the compound oscillations appears complicated, but a more economic, computationally simpler and conceptually deeper description is given by resolving the motion into normal modes. More special cases are the coupled oscillators where energy alternates between two forms of oscillation. Well known is the Wilberforce pendulum, where the oscillation alternates between an elongation of a vertical spring and the rotation of an object at the end of that spring.
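As an illustration of resolving coupled motion into normal modes, consider the textbook case of two equal masses joined to two walls and to each other by three equal springs (an assumed example, not taken from this article). The sketch below finds the two normal-mode frequencies as the square roots of the eigenvalues of the 2×2 coupling matrix:

```python
import math

def normal_modes_two_masses(k, m):
    """Two equal masses m coupled by three equal springs k (wall-mass-mass-wall).
    The equations of motion give the symmetric matrix
        M = [[2k/m, -k/m], [-k/m, 2k/m]];
    its eigenvalues are the squared angular frequencies of the normal modes.
    For a matrix [[a, b], [b, a]] the eigenvalues are a + b and a - b."""
    a, b = 2 * k / m, -k / m
    lam_sym, lam_anti = a + b, a - b   # k/m (in phase) and 3k/m (out of phase)
    return math.sqrt(lam_sym), math.sqrt(lam_anti)

# Illustrative parameters: k = 2 N/m, m = 0.5 kg.
w_sym, w_anti = normal_modes_two_masses(k=2.0, m=0.5)
```

The slower mode has both masses moving together (the middle spring is never stretched), while the faster mode has them moving in opposition; any motion of the system is a superposition of these two modes.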
As the number of degrees of freedom becomes arbitrarily large, a system approaches continuity; such systems have an infinite number of normal modes and their oscillations occur in the form of waves that can characteristically propagate. The mathematics of oscillation deals with the quantification of the amount that a sequence or function tends to move between extremes. There are several related notions: oscillation of a sequence of real numbers, oscillation of a real-valued function at a point, and oscillation of a function on an interval. See also: crystal oscillator, neutron stars, cyclic model, neutral particle oscillation (e.g. neutrino oscillations), quantum harmonic oscillator, cellular automata oscillator.
Molecular mechanics uses classical mechanics to model molecular systems. The Born–Oppenheimer approximation is assumed valid and the potential energy of all systems is calculated as a function of the nuclear coordinates using force fields. Molecular mechanics can be used to study molecular systems ranging in size and complexity from small molecules to large biological systems or material assemblies with many thousands to millions of atoms. All-atomistic molecular mechanics methods have the following properties: each atom is simulated as one particle; each particle is assigned a radius and a constant net charge; and bonded interactions are treated as springs with an equilibrium distance equal to the experimental or calculated bond length. Variants on this theme are possible. For example, many simulations have used a united-atom representation in which each terminal methyl group or intermediate methylene unit was considered one particle, and large protein systems are commonly simulated using a bead model that assigns two to four particles per amino acid.
The following functional abstraction, termed an interatomic potential function or force field in chemistry, calculates the molecular system's potential energy in a given conformation as a sum of individual energy terms: E = E_covalent + E_noncovalent, where the components of the covalent and noncovalent contributions are given by the following summations: E_covalent = E_bond + E_angle + E_dihedral and E_noncovalent = E_electrostatic + E_vanderWaals. The exact functional form of the potential function, or force field, depends on the particular simulation program being used. The bond and angle terms are modeled as harmonic potentials centered around equilibrium bond-length values derived from experiment or from theoretical calculations of electronic structure performed with software which does ab initio type calculations, such as Gaussian. For accurate reproduction of vibrational spectra, the Morse potential can be used instead, at additional computational cost. The dihedral or torsional terms typically have multiple minima and thus cannot be modeled as harmonic oscillators, though their specific functional form varies with the implementation.
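The bond-stretch terms just described can be sketched as follows, comparing the harmonic form with the Morse potential. The parameter values are hypothetical, chosen only for illustration, and are not taken from any published force field; note also that force-field conventions differ on whether a factor of 1/2 appears in the harmonic term:

```python
import math

def harmonic_bond(r, r0, kb):
    """Harmonic bond-stretch term: E = kb * (r - r0)**2.
    Accurate near equilibrium, but never dissociates."""
    return kb * (r - r0) ** 2

def morse_bond(r, r0, De, a):
    """Morse potential: E = De * (1 - exp(-a*(r - r0)))**2.
    Anharmonic; E approaches the well depth De as r grows,
    at extra computational cost."""
    return De * (1.0 - math.exp(-a * (r - r0))) ** 2

# Hypothetical parameters for illustration only.
r0, kb = 1.09, 300.0
e_eq = harmonic_bond(r0, r0, kb)         # zero at the equilibrium length
e_stretch = harmonic_bond(1.19, r0, kb)  # penalty for a 0.1 stretch
e_morse_far = morse_bond(10.0, r0, De=100.0, a=2.0)  # tends to De
```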
This class of terms may include improper dihedral terms, which function as correction factors for out-of-plane deviations. The non-bonded terms are much more computationally costly to calculate in full, since a typical atom is bonded to only a few of its neighbors but interacts with every other atom in the molecule. Fortunately, the van der Waals term falls off rapidly. It is typically modeled using a 6–12 Lennard-Jones potential, which means that attractive forces fall off with distance as r^−6 and repulsive forces as r^−12, where r represents the distance between two atoms. The repulsive part r^−12 is, however, unphysical, and the description of van der Waals forces by the Lennard-Jones 6–12 potential introduces inaccuracies which become significant at short distances. A cutoff radius is generally used to speed up the calculation, so that atom pairs whose distances are greater than the cutoff have a van der Waals interaction energy of zero. The electrostatic terms are notoriously difficult to calculate well because they do not fall off rapidly with distance, and long-range electrostatic interactions are often important features of the system under study.
The basic functional form is the Coulomb potential, which only falls off as r^−1. A variety of methods are used to address this problem, the simplest being a cutoff radius similar to that used for the van der Waals terms. However, this introduces a sharp discontinuity between atoms inside and atoms outside the radius. Switching or scaling functions that modulate the apparent electrostatic energy are somewhat more accurate methods that multiply the calculated energy by a smoothly varying scaling factor that goes from 1 to 0 between the inner and outer cutoff radii. Other more sophisticated but computationally intensive methods are particle mesh Ewald and the multipole algorithm. In addition to the functional form of each energy term, a useful energy function must be assigned parameters for force constants, van der Waals multipliers, and other constant terms; these terms, together with the equilibrium bond and dihedral values, partial charge values, atomic masses and radii, and energy function definitions, are collectively termed a force field. Parameterization is done through agreement with experimental values and theoretical calculation results.
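The non-bonded terms and the switching idea can be sketched as follows. The switching polynomial below is one common choice (a CHARMM-style form), not necessarily the one used by any particular program; the Coulomb prefactor assumes charges in elementary charges and distances in ångströms, and all numeric values are illustrative:

```python
import math

def lennard_jones(r, epsilon, sigma):
    """6-12 Lennard-Jones term: E = 4*eps*((sigma/r)**12 - (sigma/r)**6).
    Attractive r^-6 tail, steep (unphysical) r^-12 repulsion."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def coulomb(r, q1, q2, ke=332.0636):
    """Coulomb term E = ke*q1*q2/r; this ke gives kcal/mol for charges
    in e and distances in angstroms (a common force-field convention)."""
    return ke * q1 * q2 / r

def switch(r, r_on, r_off):
    """Smooth switching factor: 1 below r_on, 0 above r_off, with a
    smooth polynomial ramp in between (one common functional choice)."""
    if r <= r_on:
        return 1.0
    if r >= r_off:
        return 0.0
    num = (r_off**2 - r**2) ** 2 * (r_off**2 + 2 * r**2 - 3 * r_on**2)
    return num / (r_off**2 - r_on**2) ** 3

# Illustrative pair energy with switching between 8 and 10 angstroms.
e_elec = switch(9.0, 8.0, 10.0) * coulomb(9.0, 0.4, -0.4)
```

Beyond `r_off` the pair contributes nothing, which is what makes cutoff schemes fast; the smooth ramp avoids the energy discontinuity of a hard cutoff.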
Allinger's force field, in its latest MM4 version, calculates heats of formation for hydrocarbons with an RMS error of 0.35 kcal/mol. The main use of molecular mechanics is in the field of molecular dynamics.
Biomechanics is the study of the structure and motion of the mechanical aspects of biological systems, at any level from whole organisms to organs and cell organelles, using the methods of mechanics. The word "biomechanics" and the related "biomechanical" come from the Ancient Greek βίος bios "life" and μηχανική, mēchanikē "mechanics", and refer to the study of the mechanical principles of living organisms, particularly their movement and structure. Biological fluid mechanics, or biofluid mechanics, is the study of both gas and liquid fluid flows in or around biological organisms. An often studied liquid biofluid problem is that of blood flow in the human cardiovascular system. Under certain mathematical circumstances, blood flow can be modelled by the Navier–Stokes equations. In vivo, whole blood is often assumed to be an incompressible Newtonian fluid. However, this assumption fails at small length scales: at the microscopic scale, the effects of individual red blood cells become significant, and whole blood can no longer be modelled as a continuum.
When the diameter of the blood vessel is just slightly larger than the diameter of the red blood cell, the Fahraeus–Lindquist effect occurs and there is a decrease in wall shear stress. However, as the diameter of the blood vessel decreases further, the red blood cells have to squeeze through the vessel and can only pass in single file. In this case, the inverse Fahraeus–Lindquist effect occurs and the wall shear stress increases. An example of a gaseous biofluids problem is that of human respiration. Respiratory systems in insects have been studied for bioinspiration for designing improved microfluidic devices. The main aspects of contact mechanics and tribology are related to friction and lubrication: when two surfaces come in contact during motion, i.e. rub against each other, friction and lubrication effects are important to analyze in order to determine the performance of the material. Biotribology is the study of friction and lubrication of biological systems, especially human joints such as hips and knees. For example, the femoral and tibial components of a knee implant rub against each other during daily activity such as walking or stair climbing.
If the performance of the tibial component needs to be analyzed, the principles of biotribology are used to determine the wear performance of the implant and the lubrication effects of synovial fluid. In addition, the theory of contact mechanics becomes important for wear analysis. Additional aspects of biotribology include the analysis of subsurface damage resulting from two surfaces coming in contact during motion, i.e. rubbing against each other, such as in the evaluation of tissue-engineered cartilage. Comparative biomechanics is the application of biomechanics to non-human organisms, whether used to gain greater insights into humans or into the functions and adaptations of the organisms themselves. Common areas of investigation are animal locomotion and feeding, as these have strong connections to the organism's fitness and impose high mechanical demands. Animal locomotion has many manifestations, including running and flying. Locomotion requires energy to overcome friction, drag and gravity, though which factor predominates varies with environment.
Comparative biomechanics overlaps with many other fields, including ecology, developmental biology and paleontology, to the extent of publishing papers in the journals of these other fields. Comparative biomechanics is applied in medicine as well as in biomimetics, which looks to nature for solutions to engineering problems. Computational biomechanics is the application of engineering computational tools, such as the finite element method, to study the mechanics of biological systems. Computational models and simulations are used to predict the relationship between parameters that are otherwise challenging to test experimentally, or to design more relevant experiments, reducing the time and costs of experiments. Mechanical modeling using finite element analysis has been used to interpret the experimental observation of plant cell growth to understand how cells differentiate, for instance. In medicine, over the past decade, the finite element method has become an established alternative to in vivo surgical assessment.
One of the main advantages of computational biomechanics lies in its ability to determine the endo-anatomical response of an anatomy without being subject to ethical restrictions. This has led FE modeling to the point of becoming ubiquitous in several fields of biomechanics, while several projects have adopted an open-source philosophy. The mechanical analysis of biomaterials and biofluids is usually carried forth with the concepts of continuum mechanics. This assumption breaks down when the length scales of interest approach the order of the microstructural details of the material. One of the most remarkable characteristics of biomaterials is their hierarchical structure. In other words, the mechanical characteristics of these materials rely on physical phenomena occurring at multiple levels, from the molecular all the way up to the tissue and organ levels. Biomaterials are classified in two groups, hard and soft tissues. Mechanical deformation of hard tissues may be analysed with the theory of linear elasticity.
On the other hand, soft tissues undergo large deformations, and thus their analysis relies on the finite strain theory and computer simulations. The interest in continuum biomechanics is spurred by the need for realism in the development of medical simulation.
The interdisciplinary field of materials science, commonly termed materials science and engineering, is the design and discovery of new materials, particularly solids. The intellectual origins of materials science stem from the Enlightenment, when researchers began to use analytical thinking from chemistry and engineering to understand ancient, phenomenological observations in metallurgy and mineralogy. Materials science still incorporates elements of physics and engineering; as such, the field was long considered by academic institutions as a sub-field of these related fields. Beginning in the 1940s, materials science began to be more widely recognized as a specific and distinct field of science and engineering, and major technical universities around the world created dedicated schools for its study, within either the Science or Engineering schools, hence the naming. Materials science is a syncretic discipline hybridizing metallurgy, solid-state physics and chemistry; it is the first example of a new academic discipline emerging by fusion rather than fission.
Many of the most pressing scientific problems humans face are due to the limits of the materials that are available and how they are used. Thus, breakthroughs in materials science are likely to affect the future of technology significantly. Materials scientists emphasize understanding how the history of a material influences its structure, and thus the material's properties and performance. The understanding of processing-structure-properties relationships is called the materials paradigm. This paradigm is used to advance understanding in a variety of research areas, including nanotechnology and metallurgy. Materials science is also an important part of forensic engineering and failure analysis – investigating materials, structures or components which fail or do not function as intended, causing personal injury or damage to property; such investigations are key to understanding, for example, the causes of various aviation accidents and incidents. The material of choice of a given era is often a defining point. Phrases such as Stone Age, Bronze Age, Iron Age and Steel Age are historic, if arbitrary, examples.
Deriving from the manufacture of ceramics and its putative derivative metallurgy, materials science is one of the oldest forms of engineering and applied science. Modern materials science evolved directly from metallurgy, which itself evolved from mining and ceramics, and earlier from the use of fire. A major breakthrough in the understanding of materials occurred in the late 19th century, when the American scientist Josiah Willard Gibbs demonstrated that the thermodynamic properties related to atomic structure in various phases are related to the physical properties of a material. Important elements of modern materials science are a product of the space race: the understanding and engineering of the metallic alloys, and of silica and carbon materials, used in building space vehicles enabled the exploration of space. Materials science has driven, and been driven by, the development of revolutionary technologies such as rubbers, plastics and biomaterials. Before the 1960s, many eventual materials science departments were metallurgy or ceramics engineering departments, reflecting the 19th and early 20th century emphasis on metals and ceramics.
The growth of materials science in the United States was catalyzed in part by the Advanced Research Projects Agency, which funded a series of university-hosted laboratories in the early 1960s "to expand the national program of basic research and training in the materials sciences." The field has since broadened to include every class of materials, including ceramics, polymers, semiconductors, magnetic materials and nanomaterials, generally classified into three distinct groups: ceramics, metals and polymers. The prominent change in materials science during recent decades is the active use of computer simulations to find new materials, predict properties and understand phenomena. A material is defined as a substance intended to be used for certain applications. There are a myriad of materials around us; they can be found in anything from buildings to spacecraft. Materials can be further divided into two classes: crystalline and non-crystalline. The traditional examples of materials are metals, semiconductors, ceramics and polymers.
New and advanced materials that are being developed include nanomaterials and energy materials, to name a few. The basis of materials science involves studying the structure of materials and relating it to their properties. Once a materials scientist knows about this structure-property correlation, they can go on to study the relative performance of a material in a given application. The major determinants of the structure of a material, and thus of its properties, are its constituent chemical elements and the way in which it has been processed into its final form. These characteristics, taken together and related through the laws of thermodynamics and kinetics, govern a material's microstructure, and thus its properties. As mentioned above, structure is one of the most important components of the field of materials science. Materials science examines the structure of materials from the atomic scale all the way up to the macro scale. Characterization is the way materials scientists examine this structure; it involves methods such as diffraction with X-rays, electrons or neutrons, and various forms of spectroscopy and chemical analysis such as Raman spectroscopy, energy-dispersive spectroscopy, thermal analysis, electron microscope analysis, etc.