In physics, a plasmon is a quantum of plasma oscillation. Just as light consists of photons, a plasma oscillation consists of plasmons; the plasmon can be considered a quasiparticle, since it arises from the quantization of plasma oscillations, just as phonons are quantizations of mechanical vibrations. Plasmons are thus collective oscillations of the free electron gas density. At optical frequencies, a plasmon can couple with a photon to create another quasiparticle, the plasmon polariton. The plasmon was proposed in 1952 by David Pines and David Bohm, who showed that it arises from a Hamiltonian for the long-range electron-electron correlations. Since plasmons are the quantization of classical plasma oscillations, most of their properties can be derived directly from Maxwell's equations. In the classical picture, a plasmon can be described as an oscillation of the electron density with respect to the fixed positive ions in a metal. To visualize a plasma oscillation, imagine a cube of metal placed in an external electric field pointing to the right.
Electrons will move to the left side. If the electric field is removed, the electrons move back to the right, repelled by each other and attracted to the positive ions left bare on the right side. They oscillate back and forth at the plasma frequency until the energy is lost to some kind of resistance or damping. Plasmons are a quantization of this kind of oscillation. Plasmons play a large role in the optical properties of metals and semiconductors. Light of frequency below the plasma frequency is reflected by a material because the electrons in the material screen the electric field of the light. Light of frequency above the plasma frequency is transmitted because the electrons cannot respond fast enough to screen it. In most metals, the plasma frequency is in the ultraviolet; some metals, such as copper and gold, have electronic interband transitions in the visible range, whereby specific light energies are absorbed, yielding their distinct color. In semiconductors, the valence electron plasmon frequency is in the deep ultraviolet, while their electronic interband transitions are in the visible range; specific light energies are absorbed, yielding their distinct color, which is also why they are reflective.
It has been shown that the plasmon frequency may occur in the mid-infrared and near-infrared region when semiconductors are in the form of heavily doped nanoparticles. In the free electron model, the plasmon energy can be estimated as E_p = ℏ√(n e² / (m ε₀)) = ℏω_p, where n is the conduction electron density, e is the elementary charge, m is the electron mass, ε₀ the permittivity of free space, ℏ the reduced Planck constant and ω_p the plasma frequency. Surface plasmons are plasmons that are confined to surfaces and that interact with light, resulting in a polariton. They occur at the interface between a material with a positive real part of its relative permittivity (i.e. a dielectric) and a material whose real part of the permittivity is negative at the given frequency of light, typically a metal or a heavily doped semiconductor. In addition to the opposite signs of the real parts of the permittivities, the magnitude of the real part of the permittivity in the negative-permittivity region must be larger than the magnitude of the permittivity in the positive-permittivity region; otherwise the light is not bound to the surface, as shown in the well-known book by Raether.
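The free-electron estimate above can be evaluated numerically. The sketch below computes ℏω_p in electronvolts; the conduction-electron density used is an illustrative value roughly appropriate for aluminium, not a quantity from the text.

```python
import math

# Physical constants (SI units, CODATA values)
E_CHARGE = 1.602176634e-19     # elementary charge e, C
M_ELECTRON = 9.1093837015e-31  # electron mass m, kg
EPS0 = 8.8541878128e-12        # vacuum permittivity eps_0, F/m
HBAR = 1.054571817e-34         # reduced Planck constant, J*s

def plasmon_energy_ev(n):
    """Free-electron-model plasmon energy E_p = hbar*sqrt(n e^2 / (m eps_0)), in eV."""
    omega_p = math.sqrt(n * E_CHARGE**2 / (M_ELECTRON * EPS0))  # plasma frequency, rad/s
    return HBAR * omega_p / E_CHARGE  # convert joules to eV

# Illustrative density, roughly that of aluminium's conduction electrons
n_al = 1.8e29  # m^-3
print(f"E_p ~ {plasmon_energy_ev(n_al):.1f} eV")  # on the order of 15 eV, i.e. ultraviolet
```

This is consistent with the statement that most metals have their plasma frequency in the ultraviolet.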
At visible wavelengths of light, e.g. the 632.8 nm wavelength provided by a He-Ne laser, interfaces supporting surface plasmons are often formed by metals like silver or gold in contact with dielectrics such as air or silicon dioxide. The particular choice of materials can have a drastic effect on the degree of light confinement and the propagation distance, due to losses. Surface plasmons can exist on interfaces other than flat surfaces, such as particles, rectangular strips, v-grooves and other structures. Many structures have been investigated due to the capability of surface plasmons to confine light below the diffraction limit. Surface plasmons can play a role in surface-enhanced Raman spectroscopy and in explaining anomalies in diffraction from metal gratings, among other things. Surface plasmon resonance is used by biochemists to study the mechanisms and kinetics of ligands binding to receptors. Multi-parametric surface plasmon resonance can be used not only to measure molecular interactions but also nanolayer properties or structural changes in the adsorbed molecules, polymer layers or graphene, for instance.
Surface plasmons may be observed in the X-ray emission spectra of metals. A dispersion relation for surface plasmons in the X-ray emission spectra of metals has been derived. More recently, surface plasmons have been used to contro
Scanning transmission electron microscopy
A scanning transmission electron microscope (STEM) is a type of transmission electron microscope. As with a conventional transmission electron microscope (CTEM), images are formed by electrons passing through a sufficiently thin specimen. However, unlike CTEM, in STEM the electron beam is focused to a fine spot and scanned over the sample in a raster illumination system, constructed so that at each point the sample is illuminated with the beam parallel to the optical axis. The rastering of the beam across the sample makes STEM suitable for analytical techniques such as Z-contrast annular dark-field imaging and spectroscopic mapping by energy-dispersive X-ray spectroscopy or electron energy loss spectroscopy. These signals can be obtained simultaneously, allowing direct correlation of images and spectroscopic data. A typical STEM is a conventional transmission electron microscope equipped with additional scanning coils and the necessary circuitry, which allows it to switch between operating as a STEM or as a CTEM.
High-resolution scanning transmission electron microscopes require exceptionally stable room environments. To obtain atomic-resolution images in STEM, the levels of vibration, temperature fluctuation, electromagnetic waves and acoustic waves must be limited in the room housing the microscope. In 1925, Louis de Broglie first theorized the wave-like properties of an electron, with a wavelength much smaller than that of visible light; this would allow the use of electrons to image objects far smaller than the previous diffraction limit set by visible light. The first STEM was built in 1938 by Baron Manfred von Ardenne. However, at the time the results were inferior to those of transmission electron microscopy, and von Ardenne spent only two years working on the problem. The microscope was destroyed in an air raid in 1944, and von Ardenne did not return to his work after World War II. The technique was not developed further until the 1970s, when Albert Crewe at the University of Chicago developed the field emission gun and added a high-quality objective lens to create a modern STEM.
He demonstrated the ability to image atoms using an annular dark field detector. Crewe and coworkers at the University of Chicago developed the cold field emission electron source and built a STEM able to visualize single heavy atoms on thin carbon substrates. By the late 1980s and early 1990s, improvements in STEM technology allowed for samples to be imaged with better than 2 Å resolution, meaning that atomic structure could be imaged in some materials; the addition of an aberration corrector to STEMs enables electron probes to be focused to sub-ångström diameters, allowing images with sub-ångström resolution to be acquired. This has made it possible to identify individual atomic columns with unprecedented clarity. Aberration-corrected STEM was demonstrated with 1.9 Å resolution in 1997 and soon after in 2000 with 1.36 Å resolution. Advanced aberration-corrected STEMs have since been developed with sub-50 pm resolution. Aberration-corrected STEM provides the added resolution and beam current critical to the implementation of atomic resolution chemical and elemental spectroscopic mapping.
Scanning transmission electron microscopes are used to characterize the nanoscale and atomic-scale structure of specimens, providing important insights into the properties and behaviour of materials and biological cells. Scanning transmission electron microscopy has been applied to characterize the structure of a wide range of material specimens, including solar cells, semiconductor devices, complex oxides, batteries, fuel cells, catalysts and 2D materials. The first application of STEM to the imaging of biological molecules was demonstrated in 1971. The advantage of STEM imaging of biological samples is the high contrast of annular dark-field images, which can allow imaging of biological samples without the need for staining. STEM has been used to solve a number of structural problems in molecular biology. In annular dark-field (ADF) mode, images are formed by scattered electrons incident on an annular detector, which lies outside of the path of the directly transmitted beam. By using a high-angle ADF detector, it is possible to form atomic-resolution images in which the contrast of an atomic column is directly related to the atomic number.
Directly interpretable Z-contrast imaging makes STEM imaging with a high-angle detector an appealing technique, in contrast to conventional high-resolution electron microscopy, in which phase-contrast effects mean that atomic-resolution images must be compared to simulations to aid interpretation. In STEM, bright-field detectors are located in the path of the transmitted electron beam. Axial bright-field detectors are located in the centre of the cone of illumination of the transmitted beam and are used to provide images complementary to those obtained by ADF imaging. Annular bright-field detectors, located within the cone of illumination of the transmitted beam, have been used to obtain atomic-resolution images in which the atomic columns of light elements such as oxygen are visible. Detectors have also been developed for STEM that can record a complete convergent-beam electron diffraction pattern of all scattered and unscattered electrons at every pixel in a scan of the sample, yielding a large dataset.
The data can be analyzed to reconstruct images equivalent to those of any conventional detector geometry and can be used to map fields in the sample at high spatial resolution, including information about strain and electric fields. This type of '4D' data can be used
Carbon is a chemical element with symbol C and atomic number 6. It is nonmetallic and tetravalent, making four electrons available to form covalent chemical bonds, and it belongs to group 14 of the periodic table. Three isotopes occur naturally, 12C and 13C being stable, while 14C is a radionuclide, decaying with a half-life of about 5,730 years. Carbon is one of the few elements known since antiquity. Carbon is the 15th most abundant element in the Earth's crust, and the fourth most abundant element in the universe by mass after hydrogen, helium and oxygen. Carbon's abundance, its unique diversity of organic compounds, and its unusual ability to form polymers at the temperatures commonly encountered on Earth enable this element to serve as a common element of all known life. It is the second most abundant element in the human body by mass after oxygen. The atoms of carbon can bond together in different ways, termed allotropes of carbon; the best known are graphite, diamond and amorphous carbon. The physical properties of carbon vary widely with the allotropic form.
For example, graphite is opaque and black, while diamond is transparent. Graphite is soft enough to form a streak on paper, while diamond is the hardest naturally occurring material known. Graphite is a good electrical conductor. Under normal conditions, diamond, carbon nanotubes and graphene have the highest thermal conductivities of all known materials. All carbon allotropes are solids under normal conditions, with graphite being the most thermodynamically stable form at standard temperature and pressure. They are chemically resistant and require high temperature to react even with oxygen. The most common oxidation state of carbon in inorganic compounds is +4, while +2 is found in carbon monoxide and transition metal carbonyl complexes. The largest sources of inorganic carbon are limestones and carbon dioxide, but significant quantities occur in organic deposits of coal, peat and methane clathrates. Carbon forms a vast number of compounds, more than any other element, with ten million compounds described to date, yet that number is but a fraction of the number of theoretically possible compounds under standard conditions.
For this reason, carbon has been referred to as the "king of the elements". The allotropes of carbon include graphite, one of the softest known substances, and diamond, the hardest naturally occurring substance. Carbon bonds readily with other small atoms, including other carbon atoms, and is capable of forming multiple stable covalent bonds with suitable multivalent atoms. Carbon is known to form nearly ten million different compounds, a large majority of all chemical compounds. Carbon also has the highest sublimation point of all elements. At atmospheric pressure it has no melting point, as its triple point is at 10.8 ± 0.2 MPa and 4,600 ± 300 K, so it sublimes at about 3,900 K. Graphite is much more reactive than diamond at standard conditions, despite being more thermodynamically stable, as its delocalised pi system is much more vulnerable to attack. For example, graphite can be oxidised by hot concentrated nitric acid at standard conditions to mellitic acid, C6(CO2H)6, which preserves the hexagonal units of graphite while breaking up the larger structure.
Carbon sublimes in a carbon arc, which has a temperature of about 5800 K. Thus, irrespective of its allotropic form, carbon remains solid at higher temperatures than the highest-melting-point metals such as tungsten or rhenium. Although thermodynamically prone to oxidation, carbon resists oxidation more effectively than elements such as iron and copper, which are weaker reducing agents at room temperature. Carbon is the sixth element, with a ground-state electron configuration of 1s²2s²2p², of which the four outer electrons are valence electrons. Its first four ionisation energies, 1086.5, 2352.6, 4620.5 and 6222.7 kJ/mol, are much higher than those of the heavier group-14 elements. The electronegativity of carbon is 2.5, significantly higher than that of the heavier group-14 elements, but close to most of the nearby nonmetals, as well as some of the second- and third-row transition metals. Carbon's covalent radii are normally taken as 77.2 pm (C−C), 66.7 pm (C=C) and 60.3 pm (C≡C), although these may vary depending on coordination number and what the carbon is bonded to.
In general, covalent radius decreases with higher bond order. Carbon compounds form the basis of all known life on Earth, and the carbon–nitrogen cycle provides some of the energy produced by the Sun and other stars. Although it forms an extraordinary variety of compounds, most forms of carbon are comparatively unreactive under normal conditions. At standard temperature and pressure, it resists all but the strongest oxidizers; it does not react with hydrochloric acid, chlorine or any alkalis. At elevated temperatures, carbon reacts with oxygen to form carbon oxides and will rob oxygen from metal oxides to leave the elemental metal. This exothermic reaction is used in the iron and steel industry to smelt iron and to control the carbon content of steel: Fe3O4 + 4 C → 3 Fe + 4 CO. Carbon monoxide can be recycled to smelt even more iron: Fe3O4 + 4 CO → 3 Fe + 4 CO2. Carbon also combines with sulfur to form carbon disulfide, and with steam in the coal-gas reaction: C + H2O → CO + H2. Carbon combines with some metals at high temperatures to form metallic carbides, such as the iron carbide cementite in steel and tungsten carbide, widely used as an abrasive and for making hard tips for cutting tools.
The system of carbon allotropes spans a range of extremes: Atomic carbon is a ver
Transmission electron microscopy
Transmission electron microscopy is a microscopy technique in which a beam of electrons is transmitted through a specimen to form an image. The specimen is most often an ultrathin section less than 100 nm thick or a suspension on a grid. An image is formed from the interaction of the electrons with the sample as the beam is transmitted through the specimen; the image is magnified and focused onto an imaging device, such as a fluorescent screen, a layer of photographic film, or a sensor such as a scintillator attached to a charge-coupled device. Transmission electron microscopes are capable of imaging at a higher resolution than light microscopes, owing to the smaller de Broglie wavelength of electrons; this enables the instrument to capture fine detail—even as small as a single column of atoms, thousands of times smaller than a resolvable object seen in a light microscope. Transmission electron microscopy is a major analytical method in the physical and biological sciences. TEMs find application in cancer research and materials science as well as pollution and semiconductor research.
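The resolution advantage follows from the electron's small de Broglie wavelength. A short sketch, using the standard relativistically corrected wavelength formula (the 200 kV accelerating voltage is an illustrative value typical of TEMs, not one stated in the text):

```python
import math

H = 6.62607015e-34       # Planck constant, J*s
M_E = 9.1093837015e-31   # electron rest mass, kg
E = 1.602176634e-19      # elementary charge, C
C = 2.99792458e8         # speed of light, m/s

def electron_wavelength(volts):
    """Relativistic de Broglie wavelength of an electron accelerated through `volts`."""
    ke = E * volts  # kinetic energy in joules
    # lambda = h / sqrt(2 m ke (1 + ke / (2 m c^2)))
    return H / math.sqrt(2 * M_E * ke * (1 + ke / (2 * M_E * C**2)))

lam = electron_wavelength(200e3)  # 200 kV accelerating voltage
print(f"{lam * 1e12:.2f} pm")  # ~2.5 pm, vastly smaller than visible light (~500 nm)
```

The picometre-scale wavelength is what allows TEMs to resolve single columns of atoms, far below the diffraction limit of light microscopes.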
TEM instruments boast an enormous array of operating modes including conventional imaging, scanning TEM imaging, diffraction and combinations of these. Within conventional imaging, there are many fundamentally different ways that contrast is produced, called "image contrast mechanisms." Contrast can arise from position-to-position differences in the thickness or density, atomic number, crystal structure or orientation, the slight quantum-mechanical phase shifts that individual atoms produce in electrons that pass through them, the energy lost by electrons on passing through the sample and more. Each mechanism tells the user a different kind of information, depending not only on the contrast mechanism but on how the microscope is used—the settings of lenses and detectors. What this means is that a TEM is capable of returning an extraordinary variety of nanometer- and atomic-resolution information, in ideal cases revealing not only where all the atoms are but what kinds of atoms they are and how they are bonded to each other.
For this reason TEM is regarded as an essential tool for nanoscience in both biological and materials fields. The first TEM was demonstrated by Max Knoll and Ernst Ruska in 1931, with this group developing the first TEM with resolution greater than that of light in 1933 and the first commercial TEM in 1939. In 1986, Ruska was awarded the Nobel Prize in physics for the development of transmission electron microscopy. In 1873, Ernst Abbe proposed that the ability to resolve detail in an object was limited by the wavelength of the light used in imaging or a few hundred nanometers for visible light microscopes. Developments in ultraviolet microscopes, led by Köhler and Rohr, increased resolving power by a factor of two; however this required expensive quartz optics, due to the absorption of UV by glass. It was believed that obtaining an image with sub-micrometer information was not possible due to this wavelength constraint. In 1858 Plücker observed the deflection of "cathode rays" with the use of magnetic fields.
This effect was used by Ferdinand Braun in 1897 to build simple cathode-ray oscilloscope (CRO) measuring devices. In 1891, Riecke noticed that cathode rays could be focused by magnetic fields, allowing for simple electromagnetic lens designs. In 1926, Hans Busch published work extending this theory and showed that the lens maker's equation could, with appropriate assumptions, be applied to electrons. In 1928, at the Technical University of Berlin, Adolf Matthias, Professor of High Voltage Technology and Electrical Installations, appointed Max Knoll to lead a team of researchers to advance the CRO design; the team consisted of several PhD students, including Bodo von Borries. The research team worked on lens design and CRO column placement, optimizing parameters to construct better CROs and making electron-optical components to generate low-magnification images. In 1931, the group generated magnified images of mesh grids placed over the anode aperture. The device used two magnetic lenses to achieve higher magnifications, arguably creating the first electron microscope.
In that same year, Reinhold Rudenberg, the scientific director of the Siemens company, patented an electrostatic-lens electron microscope. At the time, electrons were understood to be charged particles of matter, and their wave nature was not widely appreciated. The research group did not make the connection until 1932, when they realized that the de Broglie wavelength of electrons is many orders of magnitude smaller than that of light, theoretically allowing for imaging at atomic scales. In April 1932, Ruska suggested the construction of a new electron microscope for direct imaging of specimens inserted into the microscope, rather than simple mesh grids or images of apertures. With this device, successful diffraction and normal imaging of an aluminium sheet was achieved; however, the magnification achievable was lower than with light microscopy. Magnifications higher than those available with a light microscope were achieved in September 1933 with images of cotton fibers, acquired quickly before they were damaged by the electron beam. By this time, interest in the electron microscope had increased, with other groups, such as that of Paul Anderson and Kenneth Fitzsimmons of Washington State Univ
In physics, the electronvolt (eV) is a unit of energy equal to about 1.6×10−19 joules in SI units. The electronvolt was devised as a standard unit of measure through its usefulness in electrostatic particle accelerator sciences, because a particle with electric charge q gains an energy E = qV after passing through a potential difference V. Like the elementary charge on which it is based, it is not an independent quantity but is equal to 1 J/C × √(2hα/μ0c0). It is a common unit of energy within physics, used in solid state, atomic and particle physics. It is commonly used with the metric prefixes milli-, kilo-, mega-, giga-, tera-, peta- or exa-. In some older documents, and in the name Bevatron, the symbol BeV is used, which stands for billion electronvolts. An electronvolt is the amount of kinetic energy gained or lost by a single electron accelerating from rest through an electric potential difference of one volt in vacuum. Hence, it has a value of one volt, 1 J/C, multiplied by the electron's elementary charge e, 1.6021766208×10−19 C.
Therefore, one electronvolt is equal to 1.6021766208×10−19 J. The electronvolt, as opposed to the volt, is not an SI unit. Its derivation is empirical, which means its value in SI units must be obtained by experiment and is therefore not known exactly, unlike the litre, the light-year and other such non-SI units. The electronvolt is a unit of energy; the SI unit for energy is the joule, and 1 eV is equal to 1.6021766208×10−19 J. By mass–energy equivalence, the electronvolt is also a unit of mass. It is common in particle physics, where units of mass and energy are often interchanged, to express mass in units of eV/c2, where c is the speed of light in vacuum. It is also common to express mass in terms of "eV" as a unit of mass, using a system of natural units with c set to 1. The mass equivalent of 1 eV/c2 is 1 eV/c2 = (1.602×10−19 C × 1 V) / (2.998×108 m/s)2 = 1.783×10−36 kg. For example, an electron and a positron, each with a mass of 0.511 MeV/c2, can annihilate to yield 1.022 MeV of energy. The proton has a mass of 0.938 GeV/c2. In general, the masses of all hadrons are of the order of 1 GeV/c2, which makes the GeV/c2 a convenient unit of mass for particle physics: 1 GeV/c2 = 1.783×10−27 kg.
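The mass equivalents quoted above follow directly from m = E/c². A minimal numeric check:

```python
E_CHARGE = 1.602176634e-19  # elementary charge, C; 1 eV equals this many joules
C = 2.99792458e8            # speed of light, m/s

# Mass equivalent of 1 eV: m = E / c^2
mass_1ev = E_CHARGE / C**2
print(f"1 eV/c^2 = {mass_1ev:.3e} kg")  # ~1.783e-36 kg

# 0.511 MeV/c^2 recovers the electron rest mass
print(f"electron mass = {0.511e6 * mass_1ev:.2e} kg")  # ~9.11e-31 kg
```

Scaling by 10⁹ likewise reproduces the quoted 1 GeV/c² = 1.783×10−27 kg.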
The unified atomic mass unit, 1 gram divided by the Avogadro number, is approximately the mass of a hydrogen atom, which is mostly the mass of the proton. To convert to megaelectronvolts, use the formula: 1 u = 931.4941 MeV/c2 = 0.9314941 GeV/c2. In high-energy physics, the electronvolt is often used as a unit of momentum. A potential difference of 1 volt causes an electron to gain an amount of energy; this gives rise to the usage of eV as a unit of momentum, for the energy supplied results in acceleration of the particle. The dimensions of momentum units are LMT−1, while the dimensions of energy units are L2MT−2. Dividing units of energy by a fundamental constant that has units of velocity facilitates the conversion from energy units to units of momentum. In the field of high-energy particle physics, the fundamental velocity unit is the speed of light in vacuum c. By dividing energy in eV by the speed of light, one can describe the momentum of an electron in units of eV/c. The fundamental velocity constant c is often dropped from the units of momentum by way of defining units of length such that the value of c is unity.
For example, if the momentum p of an electron is said to be 1 GeV/c, the conversion to MKS units can be achieved by: p = 1 GeV/c = (109 × 1.602177×10−19 J) / (2.998×108 m/s) = 5.344286×10−19 kg·m/s. In particle physics, a system of "natural units" in which the speed of light in vacuum c and the reduced Planck constant ħ are dimensionless and equal to unity is widely used: c = ħ = 1. In these units, both distances and times are expressed in inverse energy units (while energy and mass are expressed in the same units, see mas
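The GeV/c-to-SI conversion worked through above can be reproduced in a few lines (a sketch using CODATA constant values; the function name is my own):

```python
E_CHARGE = 1.602176634e-19  # elementary charge, C; converts eV to joules
C = 2.99792458e8            # speed of light, m/s

def momentum_si(p_gev_over_c):
    """Convert a momentum given in GeV/c to SI units (kg*m/s)."""
    energy_joules = p_gev_over_c * 1e9 * E_CHARGE  # GeV -> J
    return energy_joules / C                        # divide by c to get momentum

print(f"{momentum_si(1.0):.6e} kg*m/s")  # ~5.344286e-19
```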
X-ray absorption spectroscopy
X-ray absorption spectroscopy (XAS) is a widely used technique for determining the local geometric and/or electronic structure of matter. The experiment is usually performed at synchrotron radiation facilities, which provide intense and tunable X-ray beams. Samples can be in the gas phase, in solution, or solids. XAS data are obtained by tuning the photon energy, using a crystalline monochromator, to a range where core electrons can be excited. The edges are, in part, named by which core electron is excited: the principal quantum numbers n = 1, 2 and 3 correspond to the K-, L- and M-edges, respectively. For instance, excitation of a 1s electron occurs at the K-edge, while excitation of a 2s or 2p electron occurs at an L-edge. There are three main regions found on a spectrum generated by XAS data. The first is the absorption threshold, determined by the transition to the lowest unoccupied states. Next is the X-ray absorption near-edge structure (XANES), introduced in 1980 and in 1983 also called NEXAFS, which is dominated by core transitions to quasi-bound states for photoelectrons with kinetic energy in the range from 10 to 150 eV above the chemical potential; these features are called "shape resonances" in molecular spectra, since they are due to short-lived final states degenerate with the continuum, with a Fano line shape.
In this range, multi-electron excitations and many-body final states in correlated systems are relevant. After it was shown in 1985 that multiple scattering theory can interpret both XANES and EXAFS, experimental analysis focusing on both regions has been called XAFS. XAS is a type of absorption spectroscopy from a core initial state with a well-defined symmetry; therefore, the quantum-mechanical selection rules select the symmetry of the final states in the continuum, which are usually a mixture of multiple components. The most intense features are due to electric-dipole-allowed transitions to unoccupied final states. For example, the most intense features of a K-edge are due to core transitions from 1s → p-like final states, while the most intense features of the L3-edge are due to 2p → d-like final states. XAS methodology can be broadly divided into four experimental categories that can give complementary results: metal K-edge, metal L-edge, ligand K-edge, and EXAFS. XAS is a technique used in many scientific fields, including molecular and condensed matter physics, materials science and engineering, earth science, and biology.
In particular, its unique sensitivity to local structure, as compared to X-ray diffraction, has been exploited for studying amorphous solids and liquid systems, solid solutions, doping and ion implantation in materials for electronics, local distortions of crystal lattices, organometallic compounds, metalloproteins, metal clusters, catalysis, vibrational dynamics, ions in solution, speciation of elements, and liquid water and aqueous solutions. It has also been used to detect bone fractures and to determine the concentration of liquids in tanks.
International Standard Serial Number
An International Standard Serial Number (ISSN) is an eight-digit serial number used to uniquely identify a serial publication, such as a magazine. The ISSN is especially helpful in distinguishing between serials with the same title. ISSNs are used in ordering, interlibrary loans, and other practices in connection with serial literature. The ISSN system was first drafted as an International Organization for Standardization (ISO) international standard in 1971 and published as ISO 3297 in 1975. ISO subcommittee TC 46/SC 9 is responsible for maintaining the standard. When a serial with the same content is published in more than one media type, a different ISSN is assigned to each media type. For example, many serials are published both in print and electronic media; the ISSN system refers to these types as print ISSN and electronic ISSN, respectively. Additionally, as defined in ISO 3297:2007, every serial in the ISSN system is assigned a linking ISSN (ISSN-L), typically the same as the ISSN assigned to the serial in its first published medium, which links together all ISSNs assigned to the serial in every medium.
The format of the ISSN is an eight-digit code, divided by a hyphen into two four-digit numbers. As an integer number, it can be represented by the first seven digits; the last code digit, which may be 0–9 or an X, is a check digit. Formally, the general form of the ISSN code can be expressed as follows: NNNN-NNNC, where N is in the set {0, 1, 2, ..., 9}, a digit character, and C is in {0, 1, 2, ..., 9, X}. The ISSN of the journal Hearing Research, for example, is 0378-5955, where the final 5 is the check digit, C = 5. To calculate the check digit, the following algorithm may be used: Calculate the sum of the first seven digits of the ISSN multiplied by their position in the number, counting from the right—that is, 8, 7, 6, 5, 4, 3 and 2, respectively: 0·8 + 3·7 + 7·6 + 8·5 + 5·4 + 9·3 + 5·2 = 0 + 21 + 42 + 40 + 20 + 27 + 10 = 160. The modulus 11 of this sum is then calculated: 160 mod 11 = 6. If the modulus is 0, the check digit is 0; otherwise, the modulus is subtracted from 11 to give the check digit: 11 − 6 = 5. An upper case X in the check digit position indicates a check digit of 10. To confirm the check digit, calculate the sum of all eight digits of the ISSN multiplied by their position in the number, counting from the right; the modulus 11 of this sum must be 0. There is an online ISSN checker. ISSN codes are assigned by a network of ISSN National Centres located at national libraries and coordinated by the ISSN International Centre based in Paris. The International Centre is an intergovernmental organization created in 1974 through an agreement between UNESCO and the French government. The International Centre maintains a database of all ISSNs assigned worldwide, the ISDS Register, otherwise known as the ISSN Register. At the end of 2016, the ISSN Register contained records for 1,943,572 items. ISSN and ISBN codes are similar in concept. An ISBN might be assigned for particular issues of a serial, in addition to the ISSN code for the serial as a whole. An ISSN, unlike the ISBN code, is an anonymous identifier associated with a serial title, containing no information as to the publisher or its location. For this reason a new ISSN is assigned to a serial each time it undergoes a major title change. Since the ISSN applies to an entire serial, a new identifier, the Serial Item and Contribution Identifier, was built on top of it to allow references to specific volumes, articles, or other identifiable components.
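The check-digit algorithm described earlier can be sketched compactly (the function name is my own):

```python
def issn_check_digit(first_seven):
    """Compute the ISSN check digit from the first seven digits, given as a string."""
    # Weights 8..2, i.e. each digit's position counted from the right of the full ISSN
    total = sum(int(d) * w for d, w in zip(first_seven, range(8, 1, -1)))
    remainder = total % 11
    if remainder == 0:
        return "0"
    check = 11 - remainder
    return "X" if check == 10 else str(check)

# The journal Hearing Research: ISSN 0378-5955
print(issn_check_digit("0378595"))  # -> 5
```

Running it on the worked example from the text (sum 160, 160 mod 11 = 6, 11 − 6 = 5) reproduces the published check digit.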
Separate ISSNs are needed for serials in different media. Thus, the print and electronic media versions of a serial need separate ISSNs, and a CD-ROM version and a web version of a serial require different ISSNs, since two different media are involved. However, the same ISSN can be used for different file formats of the same online serial. This "media-oriented identification" of serials made sense in the 1970s. In the 1990s and onward, with personal computers, better screens and the Web, it makes sense to consider only content, independent of media. This "content-oriented identification" of serials was a repressed demand for a decade, but no ISSN update or initiative occurred. A natural extension of the ISSN, the unique identification of the articles in serials, was the main demanded application. An alternative model for serials' contents arrived with the indecs Content Model and its application, the digital object identifier (DOI), an ISSN-independent initiative consolidated in the 2000s. Only in 2007 was the ISSN-L defined in the