Portugal, officially the Portuguese Republic, is a country located on the Iberian Peninsula in southwestern Europe. It is the westernmost sovereign state of mainland Europe, bordered to the west and south by the Atlantic Ocean and to the north and east by Spain. Its territory also includes the Atlantic archipelagos of the Azores and Madeira, both autonomous regions with their own regional governments. Portugal is the oldest state on the Iberian Peninsula and one of the oldest in Europe, its territory having been continuously settled and fought over since prehistoric times; the pre-Celtic peoples, Celts and Romans were followed by the invasions of the Visigoths and the Suebi, Germanic peoples. Portugal as a country was established during the Christian Reconquista against the Moors, who had invaded the Iberian Peninsula in 711 AD. Founded in 868, the County of Portugal gained prominence after the Battle of São Mamede in 1128. The Kingdom of Portugal was proclaimed following the Battle of Ourique in 1139, and independence from León was recognised by the Treaty of Zamora in 1143.
In the 15th and 16th centuries, Portugal established the first global empire, becoming one of the world's major economic and military powers. During this period, today referred to as the Age of Discovery, Portuguese explorers pioneered maritime exploration, notably under the royal patronage of Prince Henry the Navigator and King John II, with such notable voyages as Bartolomeu Dias's sailing beyond the Cape of Good Hope, Vasco da Gama's discovery of the sea route to India and the European discovery of Brazil. During this time Portugal monopolized the spice trade, divided the world into hemispheres of dominion with Castile, and expanded the empire with military campaigns in Asia. However, events such as the 1755 Lisbon earthquake, the country's occupation during the Napoleonic Wars, the independence of Brazil and a late industrialization compared to other European powers erased to a great extent Portugal's prior opulence. After the 1910 revolution deposed the monarchy, the democratic but unstable Portuguese First Republic was established, later being superseded by the Estado Novo, a right-wing authoritarian regime.
Democracy was restored after the Carnation Revolution in 1974. Shortly afterwards, independence was granted to almost all its overseas territories; the handover of Macau to China in 1999 marked the end of what can be considered the longest-lived colonial empire. Portugal has left a profound cultural and architectural influence across the globe and a legacy of around 250 million Portuguese speakers and many Portuguese-based creoles. It is a developed country with a high-income advanced economy and high living standards, and it ranks highly in measures of moral freedom, democracy, press freedom, social progress and LGBT rights. A member of the United Nations and the European Union, Portugal was also one of the founding members of NATO, the eurozone, the OECD and the Community of Portuguese Language Countries. The word Portugal derives from the Romano-Celtic place name Portus Cale. Portus is the Latin word for port or harbour, while Cale or Cailleach was the name of a Celtic goddess – in Scotland she is known as Beira – and of an early settlement located at the mouth of the Douro River, which flows into the Atlantic Ocean in the north of what is now Portugal.
At the time, the land of a specific people was named after its deity; such names are the origin of the -gal in Galicia. Incidentally, the meaning of Cale or Calle may be a derivation of the Celtic word for port, which would confirm old links to pre-Roman Celtic languages and compares to today's Irish caladh or Scottish Gaelic cala, both meaning port; some French scholars believe it may have come from 'Portus Gallus', the port of the Celts. Around 200 BC, the Romans took the Iberian Peninsula from the Carthaginians during the Second Punic War, and in the process conquered Cale and renamed it Portus Cale, incorporating it into the province of Gallaecia with its capital at Bracara Augusta. During the Middle Ages, the region around Portus Cale became known by the Suebi and Visigoths as Portucale. The name Portucale evolved into Portugale during the 7th and 8th centuries, and by the 9th century that term was used extensively to refer to the region between the rivers Douro and Minho. By the 11th and 12th centuries, the forms Portugallia or Portvgalliae were used to refer to Portugal.
The early history of Portugal is shared with the rest of the Iberian Peninsula in southwestern Europe. The name of Portugal derives from the joined Romano-Celtic name Portus Cale. The region was settled by pre-Celts and Celts, giving origin to peoples like the Gallaeci, Lusitanians and Cynetes; it was visited by Phoenicians, Ancient Greeks and Carthaginians, and was incorporated into the dominions of the Roman Republic as Lusitania and part of Gallaecia after 45 BC until 298 AD. The region of present-day Portugal was inhabited by Neanderthals and later by Homo sapiens, who roamed the border-less region of the northern Iberian Peninsula; these were subsistence societies that, although they did not establish prosperous settlements, did form organized societies. Neolithic Portugal experimented with the domestication of herding animals, the raising of some cereal crops and fluvial or marine fishing. It is believed by some scholars that early in the first millennium BC, several waves of Celts invaded Portugal from Central Europe and inter-married with the local populations, forming different ethnic groups.
In chemistry, spectrophotometry is the quantitative measurement of the reflection or transmission properties of a material as a function of wavelength. It is more specific than the general term electromagnetic spectroscopy in that spectrophotometry deals with visible light, near-ultraviolet and near-infrared, but does not cover time-resolved spectroscopic techniques. Spectrophotometry is a tool that hinges on the quantitative analysis of molecules depending on how much light is absorbed by colored compounds. Spectrophotometry uses photometers, known as spectrophotometers, that can measure a light beam's intensity as a function of its color. Important features of spectrophotometers are the spectral bandwidth, the percentage of sample transmission, the logarithmic range of sample absorption and sometimes a percentage-of-reflectance measurement. A spectrophotometer is used for the measurement of transmittance or reflectance of solutions, transparent or opaque solids, such as polished glass, or gases. Although many biochemicals are colored, that is, they absorb visible light and therefore can be measured by colorimetric procedures, colorless biochemicals can often be converted to colored compounds by chromogenic color-forming reactions, yielding compounds suitable for colorimetric analysis.
However, they can also be designed to measure the diffusivity over any of the listed light ranges, which cover around 200 nm to 2500 nm, using different controls and calibrations. Within these ranges of light, calibrations are needed on the machine using standards that vary in type depending on the wavelength of the photometric determination. An example of an experiment in which spectrophotometry is used is the determination of the equilibrium constant of a solution. A certain chemical reaction within a solution may occur in a forward and a reverse direction, where reactants form products and products break down into reactants. At some point, this chemical reaction will reach a point of balance called an equilibrium point. In order to determine the respective concentrations of reactants and products at this point, the light transmittance of the solution can be tested using spectrophotometry. The amount of light that passes through the solution is indicative of the concentration of certain chemicals that do not allow light to pass through.
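The link between transmitted light and concentration described above is usually quantified with the Beer–Lambert law. The following is a minimal sketch; the molar absorptivity and transmittance values are made up for the example, not taken from the text:

```python
import math

# A minimal sketch (illustrative numbers, not data from the text): the
# Beer-Lambert law, A = -log10(T) = epsilon * c * l, lets a measured
# transmittance T be converted into a concentration c.

def concentration_from_transmittance(T, epsilon, path_length_cm):
    """epsilon: molar absorptivity in L mol^-1 cm^-1 (assumed known);
    returns the molar concentration in mol/L."""
    absorbance = -math.log10(T)
    return absorbance / (epsilon * path_length_cm)

# If 25% of the light passes through a 1 cm cuvette and the absorbing
# species has a hypothetical epsilon of 12000:
c = concentration_from_transmittance(T=0.25, epsilon=12000.0, path_length_cm=1.0)
```

Repeating such a measurement for each absorbing species at equilibrium gives the concentrations needed to evaluate an equilibrium constant.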
The absorption of light is due to the interaction of light with the electronic and vibrational modes of molecules. Each type of molecule has an individual set of energy levels associated with the makeup of its chemical bonds and nuclei, and thus will absorb light of specific wavelengths, or energies, resulting in unique spectral properties based upon its distinct makeup. The use of spectrophotometers spans various scientific fields, such as physics, materials science, biochemistry, chemical engineering and molecular biology. They are used in many industries, including semiconductor and optical manufacturing and forensic examination, as well as in laboratories for the study of chemical substances. Spectrophotometry is used in measurements of enzyme activities, determinations of protein concentrations, determinations of enzymatic kinetic constants and measurements of ligand binding reactions. A spectrophotometer is able to determine, depending on the control or calibration, what substances are present in a target and in what quantity through calculations of observed wavelengths.
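The correspondence between absorbed wavelengths and energies can be made concrete with E = hc/λ. The sketch below (the 520 nm wavelength is an arbitrary example) converts a wavelength to a photon energy in electronvolts:

```python
# Converting an absorbed wavelength to the corresponding photon energy,
# E = h*c/lambda, expressed in electronvolts. Constants are CODATA values.

PLANCK_H = 6.62607015e-34    # Planck constant, J s
LIGHT_C = 2.99792458e8       # speed of light in vacuum, m/s
EV_IN_J = 1.602176634e-19    # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV for a wavelength given in nanometres."""
    return PLANCK_H * LIGHT_C / (wavelength_nm * 1e-9) / EV_IN_J

# Green light near 520 nm corresponds to roughly 2.4 eV:
E = photon_energy_ev(520.0)
```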
In astronomy, the term spectrophotometry refers to the measurement of the spectrum of a celestial object in which the flux scale of the spectrum is calibrated as a function of wavelength by comparison with an observation of a spectrophotometric standard star, corrected for the absorption of light by the Earth's atmosphere. The spectrophotometer was invented in 1940 by Arnold O. Beckman, with the aid of his colleagues at his company National Technical Laboratories, founded in 1935, which would become Beckman Instrument Company and later Beckman Coulter. It came as a solution to earlier spectrophotometers, which could not measure absorbance in the ultraviolet correctly. He started with the invention of Model A; when this was found not to give satisfactory results, Model B shifted from a glass to a quartz prism, which allowed for better absorbance results. From there, Model C was born with an adjustment to the wavelength resolution; three units of it were produced.
The last and most popular model became Model D, better recognized now as the DU spectrophotometer, which contained the instrument case, a hydrogen lamp with ultraviolet continuum and a better monochromator. It was produced from 1941 to 1976; its price in 1941 was US$723. In the words of Nobel chemistry laureate Bruce Merrifield, it was "probably the most important instrument developed towards the advancement of bioscience." Once it was discontinued in 1976, Hewlett-Packard created the first commercially available diode-array spectrophotometer in 1979, known as the HP 8450A. Diode-array spectrophotometers differed from the original spectrophotometer created by Beckman in that the HP 8450A was the first single-beam microprocessor-controlled spectrophotometer, able to scan multiple wavelengths at a time in seconds. It irradiates the sample with polychromatic light, which the sample absorbs depending on its properties; the transmitted light is then dispersed by a grating onto the photodiode array, which detects the wavelength region of the spectrum.
Since its creation, the implementation of spectrophotometry devices has increased immensely, and the spectrophotometer has become one of the most widely used analytical instruments.
Johann Heinrich Lambert
Johann Heinrich Lambert was a Swiss polymath who made important contributions to the subjects of mathematics, philosophy and map projections. Edward Tufte calls him and William Playfair "The two great inventors of modern graphical designs". Lambert was born in 1728 into a Huguenot family in the city of Mulhouse, at that time an exclave of Switzerland. Leaving school at 12, he continued to study in his free time whilst undertaking a series of jobs; these included assistant to his father, clerk at a nearby iron works, private tutor, secretary to the editor of Basler Zeitung and, at the age of 20, private tutor to the sons of Count Salis in Chur. Travelling Europe with his charges allowed him to meet established mathematicians in the German states, the Netherlands and the Italian states. On his return to Chur he began to seek an academic post. After a few short posts he was rewarded with an invitation to a position at the Prussian Academy of Sciences in Berlin, where he gained the sponsorship of Frederick II of Prussia and became a friend of Euler.
In this stimulating and financially stable environment, he worked prodigiously until his death in 1777. Lambert was the first to introduce hyperbolic functions into trigonometry, and he made conjectures regarding non-Euclidean space. Lambert is credited with the first proof that π is irrational, using a generalized continued fraction for the function tan x. Euler believed the conjecture but could not prove that π was irrational; it is speculated that Aryabhata also believed this, in 500 CE. Lambert devised theorems regarding conic sections that made the calculation of the orbits of comets simpler. Lambert also devised a formula for the relationship between the angles and the area of hyperbolic triangles; these are triangles drawn on a concave surface, as on a saddle, instead of the usual flat Euclidean surface. Lambert showed that the angles add up to less than π, or 180°; the amount of shortfall, called the defect, increases with the area. The larger the triangle's area, the smaller the sum of the angles and hence the larger the defect: CΔ = π − (α + β + γ).
That is, the area of a hyperbolic triangle is equal to π, or 180°, minus the sum of the angles α, β and γ. Here C denotes, in the present sense, the negative of the curvature of the surface. As the triangle gets larger or smaller, the angles change in a way that forbids the existence of similar hyperbolic triangles, as only triangles that have the same angles will have the same area. Hence, instead of expressing the area of the triangle in terms of the lengths of its sides, as in Euclid's geometry, the area of Lambert's hyperbolic triangle can be expressed in terms of its angles. Lambert was the first mathematician to address the general properties of map projections. In particular he was the first to discuss the properties of conformality and equal-area preservation and to point out that they were mutually exclusive. In 1772, Lambert published seven new map projections under the title Anmerkungen und Zusätze zur Entwerfung der Land- und Himmelscharten. Lambert did not give names to any of his projections, but they are now known as: the Lambert conformal conic, transverse Mercator, Lambert azimuthal equal-area, Lagrange projection, Lambert cylindrical equal-area, transverse cylindrical equal-area and Lambert conical equal-area. The first three of these are of great importance.
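Returning to the hyperbolic triangles above, Lambert's defect relation can be sketched as a small calculation. The function below assumes a surface of constant curvature −C, with C = 1 chosen purely for illustration:

```python
import math

def hyperbolic_triangle_area(alpha, beta, gamma, C=1.0):
    """Area of a hyperbolic triangle from its angle defect.

    Lambert's relation: C * area = pi - (alpha + beta + gamma).
    Angles are in radians and must sum to less than pi; C > 0 is the
    negative of the surface curvature (C = 1 is an assumption here).
    """
    defect = math.pi - (alpha + beta + gamma)
    if defect <= 0:
        raise ValueError("angles of a hyperbolic triangle must sum to less than pi")
    return defect / C

# Three 45-degree angles -- impossible for a flat triangle, where the
# angles must sum to exactly 180 degrees:
area = hyperbolic_triangle_area(math.pi / 4, math.pi / 4, math.pi / 4)
```

The example triangle has an angle sum of 135°, so its defect, and hence (with C = 1) its area, is π/4.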
Further details may be found in several texts. Lambert invented the first practical hygrometer. In 1760, he published Photometria, a book on the measurement of light. From the assumption that light travels in straight lines, he showed that illumination was proportional to the strength of the source, inversely proportional to the square of the distance of the illuminated surface, and proportional to the sine of the angle of inclination of the light's direction to that of the surface. These results were supported by experiments involving the visual comparison of illuminations and were used for the calculation of illumination. In Photometria Lambert also formulated the law of light absorption (the Beer–Lambert law) and introduced the term albedo. Lambertian reflectance is named after Johann Heinrich Lambert, who introduced the concept of perfect diffusion in his 1760 book Photometria; he also contributed to geometrical optics. The photometric unit lambert is named in recognition of his work in establishing the study of photometry. Lambert was a pioneer in the development of three-dimensional colour models.
Late in life, he published a description of a triangular colour pyramid, which shows a total of 107 colours on six different levels, variously combining red and blue pigments with an increasing amount of white to provide the vertical component. His investigations built on the earlier theoretical proposals of Tobias Mayer, extending these early ideas. Lambert was assisted in this project by the court painter Benjamin Calau. In his main philosophical work, Neues Organon, Lambert studied the rules for distinguishing subjective from objective appearances; this connects with his work in the science of optics. In 1765 he began corresponding with Immanuel Kant, who intended to dedicate the Critique of Pure Reason to him, but Lambert died before its publication.
Polymer degradation is a change in the properties—tensile strength, shape, etc.—of a polymer or polymer-based product under the influence of one or more environmental factors, such as heat, light or chemicals such as acids and some salts. These changes may be undesirable, such as cracking and chemical disintegration of products, or desirable, as in biodegradation or deliberately lowering the molecular weight of a polymer for recycling. The changes in properties are termed "aging". In a finished product such a change is to be prevented or delayed. Degradation can be useful for recycling/reusing the polymer waste to prevent or reduce environmental pollution, and can also be induced deliberately to assist structure determination. Polymeric molecules are large, and their unique and useful properties are a result of their size. Any loss in chain length is a primary cause of premature cracking. Today there are seven commodity polymers in use: polyethylene, polypropylene, polyvinyl chloride, polyethylene terephthalate, polystyrene, polycarbonate and poly(methyl methacrylate).
These make up nearly 98% of all plastics encountered in daily life. Each of these polymers has its own characteristic modes of degradation and resistances to heat and chemicals. Polyethylene and polypropylene are sensitive to oxidation and UV radiation, while PVC may discolor at high temperatures due to loss of hydrogen chloride gas and become brittle. PET is sensitive to hydrolysis and attack by strong acids, while polycarbonate depolymerizes when exposed to strong alkalis. For example, polyethylene degrades by random scission—that is, by a random breakage of the linkages that hold the atoms of the polymer together; when this polymer is heated above 450 °C it becomes a complex mixture of molecules of various sizes that resemble gasoline. Other polymers—like poly(alpha-methylstyrene)—undergo 'specific' chain scission, with breakage occurring only at the chain ends. Most polymers can be degraded by photolysis to give lower-molecular-weight molecules. Electromagnetic waves with the energy of visible light or higher, such as ultraviolet light, X-rays and gamma rays, are involved in such reactions.
Chain-growth polymers like poly(methyl methacrylate) can be degraded by thermolysis at high temperatures to give monomers, oils and water. Step-growth polymers like polyesters and polycarbonates can be degraded by solvolysis and hydrolysis to give lower-molecular-weight molecules; the hydrolysis takes place in the presence of water containing a base as catalyst. Polyamide is sensitive to degradation by acids, and polyamide mouldings will crack when attacked by strong acids. For example, the fracture surface of a fuel connector showed the progressive growth of a crack from acid attack to the final cusp of polymer. The problem is known as stress corrosion cracking, and in this case was caused by hydrolysis of the polymer—the reverse reaction of the synthesis of the polymer. Cracks can be formed in many different elastomers by ozone attack. Tiny traces of the gas in the air will attack double bonds in rubber chains, with natural rubber, styrene-butadiene rubber and NBR being most sensitive to degradation.
Ozone cracks form in products under tension, but the critical strain is small. The cracks are always oriented at right angles to the strain axis, so they will form around the circumference in a bent rubber tube. Such cracks are dangerous when they occur in fuel pipes because the cracks will grow from the outside exposed surfaces into the bore of the pipe, and fuel leakage and fire may follow. The problem of ozone cracking can be prevented by adding anti-ozonants to the rubber before vulcanization. Ozone cracks were commonly seen in automobile tire sidewalls, but are now rarely seen thanks to these additives; on the other hand, the problem does recur in unprotected products such as seals. Polymers are also susceptible to attack by atmospheric oxygen at the elevated temperatures encountered during processing to shape. Many process methods such as extrusion and injection moulding involve pumping molten polymer into tools, and the high temperatures needed for melting may result in oxidation unless precautions are taken. For example, a forearm crutch snapped and the user was injured in the resulting fall.
The crutch had fractured across a polypropylene insert within the aluminium tube of the device, and infra-red spectroscopy of the material showed that it had oxidized, possibly as a result of poor moulding. Oxidation is relatively easy to detect owing to the strong absorption by the carbonyl group in the spectrum of polyolefins. Polypropylene has a simple spectrum with few peaks at the carbonyl position. Oxidation tends to start at tertiary carbon atoms because the free radicals formed there are more stable and longer lasting, making them more susceptible to attack by oxygen. The carbonyl group can be further oxidised to break the chain; this weakens the material by lowering its molecular weight, and cracks start to grow in the regions affected. Polymer degradation by galvanic action was first described in the technical literature in 1990; this was the discovery that "plastics can corrode", i.e. polymer degradation may occur through galvanic action similar to that of metals under certain conditions, and has been referred to as the "Faudree Effect".
In the aerospace field, this finding has contributed to the safety of aircraft, particularly those aircraft that use C
An aerosol is a suspension of fine solid particles or liquid droplets in air or another gas. Aerosols can be natural or anthropogenic. Examples of natural aerosols are fog, forest exudates and geyser steam. Examples of anthropogenic aerosols are haze, particulate air pollutants and smoke; the liquid or solid particles have diameters <1 μm. In general conversation, aerosol refers to an aerosol spray that delivers a consumer product from a can or similar container. Other technological applications of aerosols include dispersal of pesticides, medical treatment of respiratory illnesses and combustion technology. Diseases can also spread by means of small droplets in the breath, called aerosols. Aerosol science covers generation and removal of aerosols, technological applications of aerosols, effects of aerosols on the environment and people, and other topics. An aerosol is defined as a suspension system of solid or liquid particles in a gas; it includes both the particles and the suspending gas, usually air. Frederick G. Donnan first used the term aerosol during World War I to describe an aero-solution, clouds of microscopic particles in air.
This term developed analogously to the term hydrosol, a colloid system with water as the dispersed medium. Primary aerosols contain particles introduced directly into the gas; secondary aerosols form through gas-to-particle conversion. Various types of aerosol, classified according to physical form and how they were generated, include dust, mist and fog. There are several measures of aerosol concentration. Environmental science and health use the mass concentration, defined as the mass of particulate matter per unit volume, with units such as μg/m3. Also used is the number concentration, the number of particles per unit volume, with units such as number/m3 or number/cm3. The size of particles has a major influence on their properties, and the aerosol particle radius or diameter is a key property used to characterise aerosols. Aerosols vary in their dispersity. A monodisperse aerosol, producible in the laboratory, contains particles of uniform size. Most aerosols, however, as polydisperse colloidal systems, exhibit a range of particle sizes. Liquid droplets are always nearly spherical, but scientists use an equivalent diameter to characterize the properties of solid particles of various shapes, some irregular.
The equivalent diameter is the diameter of a spherical particle with the same value of some physical property as the irregular particle. The equivalent volume diameter is defined as the diameter of a sphere of the same volume as that of the irregular particle. Also used is the aerodynamic diameter. For a monodisperse aerosol, a single number—the particle diameter—suffices to describe the size of the particles. However, more complicated particle-size distributions are needed to describe the sizes of the particles in a polydisperse aerosol; this distribution defines the relative amounts of particles, sorted according to size. One approach to defining the particle size distribution uses a list of the sizes of every particle in a sample. However, this approach proves tedious to ascertain in aerosols with millions of particles and awkward to use. Another approach splits the complete size range into intervals and finds the number of particles in each interval. One can then visualize these data in a histogram in which the height of each bar is the number of particles in its size bin divided by the width of the interval, so that the area of each bar is proportional to the number of particles in the size range it represents.
If the width of the bins tends to zero, one obtains the frequency function f(d_p), defined by df = f(d_p) d(d_p), where d_p is the diameter of the particles and df is the fraction of particles having diameters between d_p and d_p + d(d_p). The area under the frequency curve between two sizes a and b therefore represents the total fraction of the particles in that size range: f_ab = ∫_a^b f(d_p) d(d_p). It can also be formulated in terms of the total number density N: dN = N(d_p) d(d_p). Assuming spherical aerosol particles, the aerosol surface area per unit volume is given by the second moment, S = π ∫_0^∞ N(d_p) d_p² d(d_p), and the third moment gives the total volume concentration of the particles, V = (π/6) ∫_0^∞ N(d_p) d_p³ d(d_p).
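The moment formulas above can be sketched numerically. The example below assumes a lognormal number distribution (a common model for aerosols); N_TOTAL, D_MEDIAN and SIGMA_G are illustrative values, not data from the text:

```python
import math

# Numerical sketch of the second and third moments of an aerosol size
# distribution, using an assumed lognormal frequency function.

N_TOTAL = 1.0e10     # total number concentration, particles per m^3 (assumed)
D_MEDIAN = 0.2e-6    # count median diameter in m (assumed)
SIGMA_G = 1.5        # geometric standard deviation (assumed)

def frequency(dp):
    """Lognormal frequency function f(dp): fraction of particles per unit diameter."""
    ln_sg = math.log(SIGMA_G)
    return (math.exp(-math.log(dp / D_MEDIAN) ** 2 / (2.0 * ln_sg ** 2))
            / (dp * ln_sg * math.sqrt(2.0 * math.pi)))

def moment(k, lo=1e-9, hi=1e-5, steps=20000):
    """k-th moment of the number distribution, integrated on a log grid
    by the midpoint rule (the distribution spans several decades)."""
    log_lo, log_hi = math.log(lo), math.log(hi)
    h = (log_hi - log_lo) / steps
    total = 0.0
    for i in range(steps):
        dp = math.exp(log_lo + (i + 0.5) * h)
        total += dp ** k * frequency(dp) * dp * h   # d(dp) = dp * d(ln dp)
    return N_TOTAL * total

surface = math.pi * moment(2)        # S: surface area per unit volume, m^2/m^3
volume = math.pi / 6.0 * moment(3)   # V: particle volume per unit volume, m^3/m^3
```

The zeroth moment recovers the total number concentration, which is a quick sanity check on the integration.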
A molecule is an electrically neutral group of two or more atoms held together by chemical bonds. Molecules are distinguished from ions by their lack of electrical charge. However, in quantum physics, organic chemistry and biochemistry, the term molecule is often used less strictly, also being applied to polyatomic ions. In the kinetic theory of gases, the term molecule is used for any gaseous particle regardless of its composition. According to this definition, noble gas atoms are considered molecules, as they are monatomic molecules. A molecule may be homonuclear, that is, consisting of atoms of one chemical element, as with oxygen, or heteronuclear, a chemical compound composed of more than one element, as with water. Atoms and complexes connected by non-covalent interactions, such as hydrogen bonds or ionic bonds, are not considered single molecules. Molecules as components of matter are common in organic substances, and they make up most of the oceans and atmosphere. However, the majority of familiar solid substances on Earth, including most of the minerals that make up the crust and core of the Earth, contain many chemical bonds but are not made of identifiable molecules.
No typical molecule can be defined for ionic crystals and covalent crystals, although these are composed of repeating unit cells that extend either in a plane or three-dimensionally. The theme of repeated unit-cellular structure also holds for most condensed phases with metallic bonding, which means that solid metals are not made of molecules. In glasses, atoms may be held together by chemical bonds with no presence of any definable molecule, nor any of the regularity of repeating units that characterizes crystals. The science of molecules is called molecular chemistry or molecular physics, depending on whether the focus is on chemistry or physics. Molecular chemistry deals with the laws governing the interaction between molecules that results in the formation and breakage of chemical bonds, while molecular physics deals with the laws governing their structure and properties. In practice, this distinction is vague. In the molecular sciences, a molecule consists of a stable system composed of two or more atoms.
Polyatomic ions may sometimes be usefully thought of as electrically charged molecules. The term unstable molecule is used for reactive species, i.e. short-lived assemblies of electrons and nuclei, such as radicals, molecular ions, Rydberg molecules, transition states, van der Waals complexes, or systems of colliding atoms as in a Bose–Einstein condensate. According to Merriam-Webster and the Online Etymology Dictionary, the word "molecule" derives from the Latin moles, or small unit of mass: molecule – "extremely minute particle", from French molécule, from New Latin molecula, diminutive of Latin moles "mass, barrier". The word had a vague meaning at first, and the definition of the molecule has since evolved. Earlier definitions were less precise, defining molecules as the smallest particles of pure chemical substances that still retain their composition and chemical properties. This definition breaks down, since many substances in ordinary experience, such as rocks and metals, are composed of large crystalline networks of chemically bonded atoms or ions but are not made of discrete molecules.
Molecules are held together by either covalent or ionic bonding. Several types of non-metal elements exist only as molecules in the environment; for example, hydrogen only exists as the hydrogen molecule. A molecule of a compound is made out of two or more elements. A covalent bond is a chemical bond that involves the sharing of electron pairs between atoms; these electron pairs are termed shared pairs or bonding pairs, and the stable balance of attractive and repulsive forces between atoms, when they share electrons, is termed covalent bonding. Ionic bonding is a type of chemical bond that involves the electrostatic attraction between oppositely charged ions and is the primary interaction occurring in ionic compounds; the ions are atoms that have lost one or more electrons and atoms that have gained one or more electrons. This transfer of electrons is termed electrovalence, in contrast to covalence. In the simplest case, the cation is a metal atom and the anion is a nonmetal atom, but these ions can be of a more complicated nature, e.g. molecular ions like NH4+ or SO42−. An ionic bond involves the transfer of electrons from a metal to a non-metal so that both atoms obtain a full valence shell.
Most molecules are far too small to be seen with the naked eye, although DNA, a macromolecule, can reach macroscopic sizes, as can molecules of many polymers. Molecules used as building blocks for organic synthesis have a dimension of a few angstroms to several dozen Å, or around one billionth of a meter. Single molecules cannot be observed by light, but small molecules and the outlines of individual atoms may be traced in some circumstances by use of an atomic force microscope. Some of the largest molecules are supermolecules. The smallest molecule is diatomic hydrogen, with a bond length of 0.74 Å. Effective molecular radius is the size a molecule displays in solution; the table of permselectivity for different substances contains examples. The chemical formula for a molecule uses one line of chemical element symbols, numbers, and sometimes also other symbols.
Radiation protection, also known as radiological protection, is defined by the International Atomic Energy Agency as "The protection of people from harmful effects of exposure to ionizing radiation, and the means for achieving this". The IAEA states "The accepted understanding of the term radiation protection is restricted to protection of people. Suggestions to extend the definition to include the protection of non-human species or the protection of the environment are controversial". Exposure can be from a radiation source external to the human body or due to the bodily intake of a radioactive material. Ionizing radiation is widely used in industry and medicine, but can present a significant health hazard by causing microscopic damage to living tissue. This can result in skin burns and radiation sickness at high exposures, known as "tissue" or "deterministic" effects, and statistically elevated risks of cancer at low exposures, known as "stochastic" effects. Fundamental to radiation protection is the reduction of expected dose and the measurement of dose uptake.
For radiation protection and dosimetry assessment, the International Commission on Radiological Protection (ICRP) and the International Commission on Radiation Units and Measurements (ICRU) publish recommendations and data used to calculate the biological effects of given levels of radiation on the human body, and thereby advise on acceptable dose uptake limits. Supporting these are preventive dose-reduction techniques such as radiation shielding, exposure planning and avoidance of the ingestion of radioactive substances. Radiation protection instruments are used to indicate radiation hazards, while personal dosimeters and bioassay techniques are used to measure personal dose uptake. The ICRP recommends and maintains the International System of Radiological Protection, based on evaluation of the large body of available scientific studies that equate risk to received dose levels. The system's health objectives are "to manage and control exposures to ionising radiation so that deterministic effects are prevented, the risks of stochastic effects are reduced to the extent reasonably achievable".
The ICRP's recommendations flow down to national and regional regulators, which can incorporate them into their own law. In most countries a national regulatory authority works towards ensuring a secure radiation environment in society by setting dose-limitation requirements based on the recommendations of the ICRP. The ICRP recognises planned, emergency and existing exposure situations, as described below. Planned exposure – situations such as occupational exposure, where it is necessary for personnel to work in a known radiation environment. Emergency exposure – defined as "...unexpected situations that may require urgent protective actions", such as an emergency nuclear event. Existing exposure – defined as "...being those that exist when a decision on control has to be taken", such as from naturally occurring radioactive materials in the environment. The ICRP applies the following overall principles to all controllable exposure situations. Justification: no unnecessary use of radiation is permitted, which means that the advantages must outweigh the disadvantages.
Limitation: each individual must be protected against risks that are too great, through the application of individual radiation dose limits. Optimization: this process is intended for application to situations that have been deemed justified; it means that "the likelihood of incurring exposures, the number of people exposed, the magnitude of their individual doses" should all be kept As Low As Reasonably Achievable (ALARA), taking societal factors into account. Three factors control the dose of radiation received from a source, and exposure can be managed by a combination of them. Time: reducing the time of an exposure reduces the effective dose proportionally. An example of reducing radiation doses by reducing the time of exposure is improving operator training so that handling a radioactive source takes less time. Distance: increasing distance reduces dose due to the inverse square law. Distance control can be as simple as handling a source with forceps rather than fingers.
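The time and distance factors above can be sketched numerically. The following is a minimal illustration (not from any cited standard), assuming a point source characterised by a hypothetical dose rate at one metre; the numbers and units are illustrative assumptions only.

```python
def dose(dose_rate_at_1m_uSv_per_h: float, distance_m: float, time_h: float) -> float:
    """Estimated dose (microsieverts) from a point source.

    Dose scales linearly with exposure time, and falls off with the
    square of distance from the source (inverse square law).
    """
    return dose_rate_at_1m_uSv_per_h * time_h / distance_m ** 2

# Halving the exposure time halves the dose:
assert dose(100.0, 1.0, 0.5) == 0.5 * dose(100.0, 1.0, 1.0)
# Doubling the distance quarters the dose:
assert dose(100.0, 2.0, 1.0) == dose(100.0, 1.0, 1.0) / 4
```

The two assertions restate the article's points: time reduction is proportional, while the benefit of added distance grows quadratically, which is why even a pair of forceps meaningfully reduces dose to the fingers.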
Shielding: sources of radiation can be shielded with solid or liquid material, which absorbs the energy of the radiation. The term 'biological shield' is used for absorbing material placed around a nuclear reactor or other source of radiation to reduce the radiation to a level safe for humans. Internal dose, due to the inhalation or ingestion of radioactive substances, can result in stochastic or deterministic effects, depending on the amount of radioactive material ingested and other biokinetic factors. The risk from a low-level internal source is represented by the dose quantity committed dose, which carries the same risk as the same amount of external effective dose. The intake of radioactive material can occur through four pathways: inhalation of airborne contaminants such as radon gas and radioactive particles; ingestion of radioactive contamination in food or liquids; absorption of vapours such as tritium oxide through the skin; and injection of medical radioisotopes such as technetium-99m. The occupational hazards from airborne radioactive particles in nuclear and radio-chemical applications are reduced by the extensive use of gloveboxes to contain such material.
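Shielding of penetrating radiation is commonly modelled (for a narrow gamma beam) by exponential attenuation, I = I0 · exp(−μx). The article does not give this relation; it is standard physics added here as an illustrative sketch, and the attenuation coefficient μ used below is a made-up value, since real values depend on the shield material and photon energy.

```python
import math

def attenuated_intensity(i0: float, mu_per_cm: float, thickness_cm: float) -> float:
    """Intensity remaining after a beam passes through a shield.

    Narrow-beam exponential attenuation: I = I0 * exp(-mu * x),
    where mu is the linear attenuation coefficient of the material.
    """
    return i0 * math.exp(-mu_per_cm * thickness_cm)

def half_value_layer(mu_per_cm: float) -> float:
    """Shield thickness that halves the intensity: x = ln(2) / mu."""
    return math.log(2) / mu_per_cm
```

For example, with an assumed μ of 0.5 per cm, `half_value_layer(0.5)` gives roughly 1.39 cm, and passing a beam through that thickness leaves half the original intensity; each further half-value layer halves it again, which is why shield effectiveness grows exponentially with thickness.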
To protect against breat