Shack–Hartmann wavefront sensor
A Shack–Hartmann wavefront sensor is an optical instrument used to characterize an imaging system. It is a wavefront sensor commonly used in adaptive optics systems. It consists of an array of lenses (lenslets) of the same focal length, each focused onto a photon sensor; the local tilt of the wavefront across each lenslet can then be calculated from the position of the focal spot on the sensor. Any phase aberration can be approximated by a set of discrete tilts, so by sampling the wavefront with an array of lenslets, all of these tilts can be measured and the whole wavefront approximated. Since only tilts are measured, the Shack–Hartmann cannot detect discontinuous steps in the wavefront. The design of this sensor is based on an aperture array developed in 1900 by Johannes Franz Hartmann as a means of tracing individual rays of light through the optical system of a large telescope, thereby testing the quality of the image. In the late 1960s, Roland Shack and Ben Platt modified the Hartmann screen by replacing the apertures in an opaque screen with an array of lenslets.
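The spot-displacement measurement described above can be sketched in a few lines. This is a minimal illustration, not a real sensor pipeline: the focal length and displacement values are hypothetical, and the usual small-angle relation between spot shift and tilt is assumed.

```python
# Sketch: recovering local wavefront slopes from Shack-Hartmann spot
# displacements (hypothetical values). Each lenslet of focal length f
# focuses light to a spot; a wavefront tilted by angle theta across a
# lenslet shifts that spot by approximately dx = f * tan(theta).

import math

f = 5e-3  # lenslet focal length: 5 mm (assumed)

# Measured spot displacements from each lenslet's reference position (metres)
displacements = [0.0, 2e-6, 5e-6, -3e-6]

# Local wavefront tilt (radians) under each lenslet
tilts = [math.atan2(dx, f) for dx in displacements]
```

Stitching the per-lenslet tilts back into a continuous surface (e.g. by least-squares integration) is the reconstruction step an adaptive-optics system performs next.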
The terminology as proposed by Shack and Platt was Hartmann screen. The fundamental principle appears to have been documented before Huygens by the Jesuit philosopher Christopher Scheiner in Austria. Shack–Hartmann sensors are used to characterize eyes for corneal treatment of complex refractive errors. Pamplona et al. developed an inverse of the Shack–Hartmann system to measure the aberrations of the eye's lens. Whereas Shack–Hartmann sensors measure the localized slope of the wavefront error using spot displacement in the sensor plane, Pamplona et al. have the user shift the spots until they are aligned. Knowledge of this shift provides the data needed to estimate first-order parameters such as the radius of curvature, and hence the error due to defocus and spherical aberration.
Unix is a family of multitasking, multiuser computer operating systems that derive from the original AT&T Unix, whose development started in the 1970s at the Bell Labs research center by Ken Thompson, Dennis Ritchie, and others. Initially intended for use inside the Bell System, Unix was licensed by AT&T to outside parties in the late 1970s, leading to a variety of both academic and commercial Unix variants from vendors including the University of California, Microsoft, IBM, and Sun Microsystems. In the early 1990s, AT&T sold its rights in Unix to Novell, which sold its Unix business to the Santa Cruz Operation in 1995; the UNIX trademark passed to The Open Group, a neutral industry consortium, which allows use of the mark for certified operating systems that comply with the Single UNIX Specification. As of 2014, the Unix version with the largest installed base is Apple's macOS. Unix systems are characterized by a modular design, sometimes called the "Unix philosophy": the operating system provides a set of simple tools, each of which performs a limited, well-defined function, with a unified filesystem as the main means of communication and a shell scripting and command language to combine the tools into complex workflows.
Unix distinguishes itself from its predecessors as the first portable operating system: almost the entire operating system is written in the C programming language, allowing Unix to reach numerous platforms. Unix was meant to be a convenient platform for programmers developing software to be run on it and on other systems, rather than for non-programmers. The system grew larger as it spread in academic circles and users added their own tools and shared them with colleagues. At first, Unix was not designed to be multi-tasking; it gained portability, multi-tasking, and multi-user capabilities in a time-sharing configuration. Unix systems are characterized by various concepts: the use of plain text for storing data, a hierarchical file system, and the treatment of devices as files. These concepts are collectively known as the "Unix philosophy". Brian Kernighan and Rob Pike summarize this in The Unix Programming Environment as "the idea that the power of a system comes more from the relationships among programs than from the programs themselves".
In an era when a standard computer consisted of a hard disk for storage and a data terminal for input and output, the Unix file model worked quite well, as I/O was linear. In the 1980s, non-blocking I/O and the set of inter-process communication mechanisms were augmented with Unix domain sockets, shared memory, message queues, and semaphores, and network sockets were added to support communication with other hosts. As graphical user interfaces developed, the file model proved inadequate for handling asynchronous events such as those generated by a mouse. By the early 1980s, users began seeing Unix as a potential universal operating system, suitable for computers of all sizes. The Unix environment and the client–server program model were essential elements in the development of the Internet and the reshaping of computing as centered in networks rather than in individual computers. Both Unix and the C programming language were developed by AT&T and distributed to government and academic institutions, which led to both being ported to a wider variety of machine families than any other operating system.
Under Unix, the operating system consists of many libraries and utilities along with the master control program, the kernel. The kernel provides services to start and stop programs, handles the file system and other common "low-level" tasks that most programs share, and schedules access to avoid conflicts when programs try to access the same resource or device simultaneously. To mediate such access, the kernel has special rights, reflected in the division between user space and kernel space, although in microkernel implementations such as MINIX or Redox, functions like network protocols may run in user space. The origins of Unix date back to the mid-1960s, when the Massachusetts Institute of Technology, Bell Labs, and General Electric were developing Multics, a time-sharing operating system for the GE-645 mainframe computer. Multics featured several innovations but presented severe problems. Frustrated by the size and complexity of Multics, though not by its goals, individual researchers at Bell Labs started withdrawing from the project.
The last to leave were Ken Thompson, Dennis Ritchie, Douglas McIlroy, and Joe Ossanna, who decided to reimplement their experiences in a new project of smaller scale. This new operating system initially had neither organizational backing nor a name, and it was a single-tasking system. In 1970, the group coined the name Unics, for Uniplexed Information and Computing Service, as a pun on Multics, which stood for Multiplexed Information and Computer Services. Brian Kernighan takes credit for the idea, but adds that "no one can remember" the origin of the final spelling Unix. Dennis Ritchie, Doug McIlroy, and Peter G. Neumann also credit Kernighan. The operating system was originally written in assembly language, but in 1973, Version 4 Unix was rewritten in C. Version 4 Unix, however, still had much PDP-11-dependent code and was not suitable for porting; the first port to another platform was made five years later.
In physics, refraction is the change in direction of a wave passing from one medium to another, or resulting from a gradual change in the medium. Refraction of light is the most commonly observed example, but other waves, such as sound waves and water waves, also experience refraction. How much a wave is refracted is determined by the change in wave speed and the initial direction of wave propagation relative to the direction of change in speed. For light, refraction follows Snell's law, which states that, for a given pair of media, the ratio of the sines of the angle of incidence θ₁ and the angle of refraction θ₂ is equal to the ratio of phase velocities in the two media, or equivalently, to the inverse ratio of the indices of refraction:

sin θ₁ / sin θ₂ = v₁ / v₂ = n₂ / n₁

Optical prisms and lenses use refraction to redirect light, as does the human eye. The refractive index of materials varies with the wavelength of light, so the angle of refraction varies correspondingly. This is called dispersion, and it causes prisms and rainbows to divide white light into its constituent spectral colors.
Consider a wave going from one material into another in which its speed is slower, as in the figure. If it reaches the interface between the materials at an angle, one side of the wave will reach the second material first and therefore slow down earlier. With one side moving more slowly, the whole wave pivots towards that side; this is why a wave bends away from the surface, toward the normal, when entering a slower material. In the opposite case, when a wave reaches a material where the speed is higher, one side of the wave speeds up and the wave pivots away from that side. Another way of understanding the same thing is to consider the change in wavelength at the interface: when the wave passes into a material where it has a different speed v, its frequency f stays the same, but the distance between wavefronts, the wavelength λ = v/f, changes. If the speed decreases, as in the figure to the right, the wavelength decreases. Since there is an angle between the wavefronts and the interface, and the spacing between wavefronts changes, the direction of propagation must change across the interface to keep the wavefronts intact.
From these considerations, the relationship between the angle of incidence θ₁, the angle of transmission θ₂, and the wave speeds v₁ and v₂ in the two materials can be derived. This is the law of refraction, or Snell's law, and can be written as

sin θ₁ / sin θ₂ = v₁ / v₂

The phenomenon of refraction can be derived in a more fundamental way from the two- or three-dimensional wave equation. The boundary condition at the interface requires the tangential component of the wave vector to be identical on the two sides of the interface. Since the magnitude of the wave vector depends on the wave speed, this requires a change in the direction of the wave vector. The relevant wave speed in the discussion above is the phase velocity of the wave. This is often close to the group velocity, which can be seen as the truer speed of the wave, but when they differ it is important to use the phase velocity in all calculations relating to refraction. A wave traveling perpendicular to a boundary, i.e. having its wavefronts parallel to the boundary, will not change direction even if the speed of the wave changes.
Refraction of light can be seen in many places in everyday life. It makes objects under a water surface appear closer than they are, and it is what optical lenses are based on, allowing for instruments such as glasses, binoculars, and the human eye. Refraction is also responsible for some natural optical phenomena, including rainbows and mirages. For light, the refractive index n of a material is used more often than the wave's phase speed v in the material; they are directly related through the speed of light in vacuum c as n = c/v. In optics, the law of refraction is therefore written as n₁ sin θ₁ = n₂ sin θ₂. Refraction occurs when light passes through a water surface, since water has a refractive index of 1.33 and air has a refractive index of about 1. Looking at a straight object, such as the pencil in the figure here, placed at a slant in the water, the object appears to bend at the water's surface; this is due to the bending of light rays. Once the rays reach the eye, the eye traces them back as straight lines.
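The form n₁ sin θ₁ = n₂ sin θ₂ is easy to check numerically. The sketch below uses the air and water indices quoted above; the 30° incidence angle is an arbitrary example, and the helper also flags total internal reflection, which occurs when the equation has no real solution for θ₂.

```python
import math

def refraction_angle(n1, n2, theta1_deg):
    """Angle of refraction from Snell's law n1*sin(t1) = n2*sin(t2).
    Returns None when there is no refracted ray (total internal reflection)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None  # total internal reflection
    return math.degrees(math.asin(s))

# Light entering water (n = 1.33) from air (n = 1.00) at 30 degrees
theta2 = refraction_angle(1.00, 1.33, 30.0)
# bends toward the normal: theta2 is about 22.1 degrees
```

Running the same function the other way (from water into air at a steep angle) returns `None`, the total-internal-reflection case.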
The lines of sight intersect at a higher position than where the actual rays originated. This causes the pencil to appear higher and the water to appear shallower than it really is. The depth that the water appears to have when viewed from above is known as the apparent depth. This is an important consideration for spearfishing from the surface, because it makes the target fish appear to be in a different place, so the fisher must aim lower to catch the fish. Conversely, an object above the water has a higher apparent height when viewed from below the water.
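For viewing from nearly straight above, the apparent depth reduces to a one-line formula, apparent depth = real depth / n (a standard paraxial result). The depth below is a hypothetical value; the water index is the 1.33 quoted earlier.

```python
# Sketch: apparent depth of a submerged object viewed from nearly straight
# above. In the small-angle (paraxial) approximation, apparent = real / n.
# The depth value is hypothetical.

n_water = 1.33      # refractive index of water (from the text)
real_depth = 2.0    # metres

apparent_depth = real_depth / n_water
# about 1.50 m: the object looks roughly 25% shallower than it is
```

This is why the spearfisher must aim below where the fish appears to be.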
Spherical aberration is a type of aberration found in optical systems that use elements with spherical surfaces. Lenses and curved mirrors are most often made with spherical surfaces, because this shape is easier to form than non-spherical curved surfaces. Light rays that strike a spherical surface off-centre are refracted or reflected more or less than those that strike close to the centre; this deviation reduces the quality of images produced by optical systems. A spherical lens has an aplanatic point only at a radius that equals the radius of the sphere divided by the index of refraction of the lens material. A typical value of refractive index for crown glass is 1.5, which indicates that only about 43% of the area of a spherical lens is useful. Spherical aberration is considered an imperfection of telescopes and other instruments that makes their focusing less than ideal, due to the spherical shape of lenses and mirrors; it is an important effect because spherical shapes are much easier to produce than aspherical ones.
In many cases, it is cheaper to use multiple spherical elements to compensate for spherical aberration than to use a single aspheric lens. "Positive" spherical aberration means peripheral rays are bent too much; "negative" spherical aberration means peripheral rays are not bent enough. The effect is proportional to the fourth power of the diameter and inversely proportional to the third power of the focal length, so it is much more pronounced at short focal ratios, i.e. "fast" lenses. In lens systems, the effect can be minimized using special combinations of convex and concave lenses, as well as by using aspheric or aplanatic lenses. For simple designs, one can sometimes calculate parameters that minimize spherical aberration. For example, in a design consisting of a single lens with spherical surfaces and a given object distance o, image distance i, and refractive index n, one can minimize spherical aberration by adjusting the radii of curvature R₁ and R₂ of the front and back surfaces of the lens such that

(R₁ + R₂) / (R₁ − R₂) = (2(n² − 1) / (n + 2)) · ((i + o) / (i − o))

For small telescopes using spherical mirrors with focal ratios shorter than f/10, light from a distant point source is not all focused at the same point.
Light striking the inner part of the mirror focuses farther from the mirror than light striking the outer part. As a result, the image cannot be focused as sharply as it could be if the aberration were not present. Because of spherical aberration, telescopes with a focal ratio less than f/10 are usually made with non-spherical mirrors or with correcting lenses. Many ways to estimate the diameter of the focused spot due to spherical aberration are based on ray optics. Ray optics, however, does not consider that light is an electromagnetic wave; therefore, the results can be wrong due to interference effects. A rather simple formalism based on ray optics, which holds for thin lenses only, is the Coddington notation. In the following, n is the lens's refractive index, o is the object distance, i is the image distance, h is the distance from the optical axis at which the outermost ray enters the lens, R₁ is the first lens radius, R₂ is the second lens radius, and f is the lens's focal length. The distance h can be understood as half of the clear aperture.
By using the Coddington factors for shape, s, and position, p,

s = (1 − R₁/R₂) / (1 + R₁/R₂),  p = (1 − o/i) / (1 + o/i),

one can write the longitudinal spherical aberration as

LSA = (1/(8n)) · (h²i²/f³) · ( ((n + 2)/(n − 1))·s² + 2sp + 2p² +
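The two Coddington factors themselves are straightforward to compute from the lens prescription. A small sketch, using the definitions of s and p as given above; the radii and distances are hypothetical, and the sign convention assumed is the usual one for a biconvex lens (R₁ > 0, R₂ < 0).

```python
# Sketch: Coddington shape factor s and position factor p for a thin lens,
# using the definitions given in the text. All numeric values are hypothetical.

def coddington_factors(R1, R2, o, i):
    """s = (1 - R1/R2) / (1 + R1/R2), p = (1 - o/i) / (1 + o/i)."""
    s = (1 - R1 / R2) / (1 + R1 / R2)
    p = (1 - o / i) / (1 + o / i)
    return s, p

# A lens with unequal radii (R1 = 100 mm, R2 = -300 mm) imaging an object
# at o = 500 mm to an image at i = 250 mm (distances in metres)
s, p = coddington_factors(0.100, -0.300, 0.500, 0.250)
```

With s and p in hand, the longitudinal spherical aberration follows from the series expression above.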
International Standard Serial Number
An International Standard Serial Number (ISSN) is an eight-digit serial number used to uniquely identify a serial publication, such as a magazine. The ISSN is helpful in distinguishing between serials with the same title. ISSNs are used in ordering, interlibrary loans, and other practices connected with serial literature. The ISSN system was first drafted as an International Organization for Standardization (ISO) international standard in 1971 and published as ISO 3297 in 1975. ISO subcommittee TC 46/SC 9 is responsible for maintaining the standard. When a serial with the same content is published in more than one media type, a different ISSN is assigned to each media type. For example, many serials are published both in print and in electronic media; the ISSN system refers to these types as print ISSN (p-ISSN) and electronic ISSN (e-ISSN), respectively. Additionally, as defined in ISO 3297:2007, every serial in the ISSN system is also assigned a linking ISSN (ISSN-L), typically the same as the ISSN assigned to the serial in its first published medium, which links together all ISSNs assigned to the serial in every medium.
The format of the ISSN is an eight-digit code, divided by a hyphen into two four-digit numbers. As an integer, it can be represented by the first seven digits; the last digit, which may be 0–9 or an X, is a check digit. Formally, the general form of the ISSN code can be expressed as NNNN-NNNC, where N is a digit in {0, 1, …, 9} and C is in {0, 1, …, 9, X}. The ISSN of the journal Hearing Research, for example, is 0378-5955, where the final 5 is the check digit, C = 5. To calculate the check digit, the following algorithm may be used: calculate the sum of the first seven digits of the ISSN, each multiplied by its position in the number counting from the right, that is, by 8, 7, 6, 5, 4, 3, 2, respectively:

0·8 + 3·7 + 7·6 + 8·5 + 5·4 + 9·3 + 5·2 = 0 + 21 + 42 + 40 + 20 + 27 + 10 = 160

The modulus 11 of this sum is then calculated: 160 mod 11 = 6. If the result is 0, the check digit is 0; otherwise, subtracting it from 11 gives the check digit: 11 − 6 = 5. For calculations, an upper-case X in the check-digit position indicates a check digit of 10. To confirm the check digit, calculate the sum of all eight digits of the ISSN, each multiplied by its position in the number, counting from the right.
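The check-digit algorithm above can be sketched directly; the Hearing Research ISSN from the text serves as the worked example.

```python
def issn_check_digit(first7):
    """Check digit for an ISSN, given its first seven digits as a string.
    The digits are weighted 8, 7, ..., 2 from the left (position counted
    from the right); the check digit is (11 - sum mod 11) mod 11, with a
    value of 10 written as 'X'."""
    total = sum(int(d) * w for d, w in zip(first7, range(8, 1, -1)))
    r = (11 - total % 11) % 11
    return 'X' if r == 10 else str(r)

# Hearing Research: ISSN 0378-5955, check digit 5
assert issn_check_digit('0378595') == '5'
```

The `% 11` at the end folds the "if the modulus is 0, the check digit is 0" special case into one expression.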
The modulus 11 of that sum must be 0. There is an online ISSN checker. ISSN codes are assigned by a network of ISSN National Centres located at national libraries and coordinated by the ISSN International Centre, based in Paris. The International Centre is an intergovernmental organization created in 1974 through an agreement between UNESCO and the French government. The International Centre maintains a database of all ISSNs assigned worldwide, the ISDS Register, otherwise known as the ISSN Register. At the end of 2016, the ISSN Register contained records for 1,943,572 items. ISSN and ISBN codes are similar in concept; an ISBN might be assigned for particular issues of a serial, in addition to the ISSN code for the serial as a whole. An ISSN, unlike the ISBN, is an anonymous identifier associated with a serial title, containing no information as to the publisher or its location. For this reason, a new ISSN is assigned to a serial each time it undergoes a major title change. Since the ISSN applies to an entire serial, a new identifier, the Serial Item and Contribution Identifier, was built on top of it to allow references to specific volumes, articles, or other identifiable components.
Separate ISSNs are needed for serials in different media. Thus, the print and electronic versions of a serial need separate ISSNs, and a CD-ROM version and a web version of a serial require different ISSNs, since two different media are involved. However, the same ISSN can be used for different file formats of the same online serial. This "media-oriented identification" of serials made sense in the 1970s. From the 1990s onward, with personal computers, better screens, and the Web, it makes sense to consider only content, independent of media. This "content-oriented identification" of serials was a demand that went unanswered for a decade, with no ISSN update or initiative addressing it. A natural extension of the ISSN, the unique identification of the articles within serials, was the main application demanded. An alternative model for serial content arrived with the indecs Content Model and its application, the digital object identifier (DOI), an ISSN-independent initiative consolidated in the 2000s. Only in 2007 was ISSN-L defined in the
The human eye is an organ that reacts to light and pressure. As a sense organ, the mammalian eye allows vision. Human eyes help to provide a three-dimensional, moving image, normally coloured in daylight. Rod and cone cells in the retina allow conscious light perception and vision, including color differentiation and the perception of depth. The human eye can differentiate between about 10 million colors and is capable of detecting a single photon. Like the eyes of other mammals, the human eye's non-image-forming photosensitive ganglion cells in the retina receive light signals that affect the adjustment of pupil size, suppression of the hormone melatonin, and entrainment of the body clock. The eye is not shaped like a perfect sphere; rather, it is a fused two-piece unit, composed of an anterior segment and a posterior segment. The anterior segment is made up of the cornea and lens. The cornea is transparent and more curved, and is linked to the larger posterior segment, composed of the vitreous, retina, and the outer white shell called the sclera.
The cornea is about 11.5 mm in diameter and about 0.5 mm thick near its center. The anterior segment makes up roughly one-sixth of the eye; the posterior segment constitutes the remaining five-sixths. The cornea and sclera are connected by an area termed the limbus. The iris is the pigmented circular structure concentrically surrounding the center of the eye, the pupil, which appears to be black. The size of the pupil, which controls the amount of light entering the eye, is adjusted by the iris's dilator and sphincter muscles. Light energy enters the eye through the cornea, then through the pupil and the lens; the lens's shape is controlled by the ciliary muscle. Photons of light falling on the light-sensitive cells of the retina are converted into electrical signals that are transmitted to the brain by the optic nerve and interpreted as sight and vision. Dimensions differ among adults by only one or two millimetres and are remarkably consistent across different ethnicities. The vertical measure, generally less than the horizontal, is about 24 mm. The transverse size of a human adult eye is about 24.2 mm and the sagittal size about 23.7 mm, with no significant difference between sexes and age groups.
A strong correlation has been found between the size of the eyeball and the width of the orbit. The typical adult eye has an anterior-to-posterior diameter of 24 millimetres, a volume of six cubic centimetres, and a mass of 7.5 grams. The eyeball grows, increasing from about 16–17 millimetres at birth to 22.5–23 mm by three years of age. By age 12, the eye attains its full size. The eye is made up of layers, enclosing various anatomical structures. The outermost layer, known as the fibrous tunic, is composed of the cornea and sclera. The middle layer, known as the vascular tunic or uvea, consists of the choroid, ciliary body, pigmented epithelium, and iris. The innermost is the retina, which gets its oxygenation from the blood vessels of the choroid as well as the retinal vessels. The spaces of the eye are filled with the aqueous humour anteriorly, between the cornea and lens, and with the vitreous body, a jelly-like substance, behind the lens, filling the entire posterior cavity. The aqueous humour is a clear watery fluid contained in two areas: the anterior chamber, between the cornea and the iris, and the posterior chamber, between the iris and the lens.
The lens is suspended from the ciliary body by the suspensory ligament, made up of hundreds of fine transparent fibers that transmit muscular forces to change the shape of the lens for accommodation. The vitreous body is a clear substance composed of water and proteins, which give it a jelly-like, sticky composition. The approximate field of view of an individual human eye varies with facial anatomy, but is typically 30° superior, 45° nasal, 70° inferior, and 100° temporal. For both eyes combined, the visual field is about 200° horizontally, covering about 13,700 square degrees for binocular vision. When viewed at large angles from the side, the iris and pupil may still be visible to an observer, indicating that the person has peripheral vision possible at that angle. About 15° temporal and 1.5° below the horizontal is the blind spot created by the optic nerve nasally, which is about 7.5° high and 5.5° wide. The retina has a static contrast ratio of around 100:1. As soon as the eye moves to acquire a target, it re-adjusts its exposure by adjusting the iris, which changes the size of the pupil.
Initial dark adaptation takes place in approximately four seconds of profound, uninterrupted darkness. The process is nonlinear and multifaceted, so an interruption by light exposure requires the dark-adaptation process to restart. Full adaptation is also dependent on good blood flow. The human eye can detect a luminance range of 10¹⁴, or one hundred trillion, from 10⁻⁶ cd/m², or one millionth of a candela per square metre, to 10⁸ cd/m², or one hundred million candelas per square metre. This range does not include looking at the midday sun or a lightning discharge. At the low end o
Maxwell's equations are a set of partial differential equations that, together with the Lorentz force law, form the foundation of classical electromagnetism, classical optics, and electric circuits. The equations provide a mathematical model for electric and radio technologies, such as power generation, electric motors, wireless communication, and radar. Maxwell's equations describe how electric and magnetic fields are generated by charges, by currents, and by changes of the fields themselves. One important consequence of the equations is that they demonstrate how fluctuating electric and magnetic fields propagate at the speed of light. Known as electromagnetic radiation, these waves may occur at various wavelengths to produce a spectrum from radio waves to γ-rays. The equations are named after the physicist and mathematician James Clerk Maxwell, who between 1861 and 1862 published an early form of the equations that included the Lorentz force law. He first used the equations to propose that light is an electromagnetic phenomenon.
The equations have two major variants. The microscopic Maxwell equations have universal applicability but are unwieldy for common calculations; they relate the electric and magnetic fields to the total charge and total current, including the complicated charges and currents in materials at the atomic scale. The "macroscopic" Maxwell equations define two new auxiliary fields that describe the large-scale behaviour of matter without having to consider atomic-scale charges and quantum phenomena such as spins. However, their use requires experimentally determined parameters for a phenomenological description of the electromagnetic response of materials. The term "Maxwell's equations" is also used for equivalent alternative formulations. Versions of Maxwell's equations based on the electric and magnetic potentials are preferred for explicitly solving the equations as a boundary-value problem, in analytical mechanics, or for use in quantum mechanics. The spacetime formulations are used in high-energy and gravitational physics because they make the compatibility of the equations with special and general relativity manifest.
In fact, Einstein developed special and general relativity to accommodate the invariant speed of light, which drops out of the Maxwell equations, with the principle that only relative movement has physical consequences. Since the mid-20th century, it has been understood that Maxwell's equations are not exact but are a classical limit of the fundamental theory of quantum electrodynamics. Gauss's law describes the relationship between a static electric field and the electric charges that cause it: the static electric field points away from positive charges and towards negative charges, and the net outflow of the electric field through any closed surface is proportional to the charge enclosed by the surface. Picturing the electric field by its field lines, this means the field lines begin at positive electric charges and end at negative electric charges. 'Counting' the number of field lines passing through a closed surface yields the total charge enclosed by that surface, divided by the permittivity of free space.
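The proportionality in Gauss's law can be checked numerically for the simplest case, a single point charge: the outflow of the Coulomb field through a sphere centred on the charge equals q/ε₀ regardless of the sphere's radius. The charge and radii below are hypothetical.

```python
import math

# Sketch: Gauss's law for a point charge. The flux of the Coulomb field
# through a sphere of any radius r centred on charge q equals q / eps0.

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def flux_through_sphere(q, r):
    """Flux of E through a sphere of radius r centred on point charge q."""
    E = q / (4 * math.pi * EPS0 * r**2)  # Coulomb field magnitude at r
    return E * (4 * math.pi * r**2)      # field magnitude times sphere area

q = 1e-9  # 1 nC (hypothetical)
f1 = flux_through_sphere(q, 0.1)
f2 = flux_through_sphere(q, 2.0)
# f1 and f2 are equal: the flux depends only on the enclosed charge
```

The r² in the field and the r² in the sphere's area cancel exactly, which is the geometric content of the law for this symmetric case.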
Gauss's law for magnetism states that there are no "magnetic charges" analogous to electric charges. Instead, the magnetic field due to materials is generated by a configuration called a dipole, and the net outflow of the magnetic field through any closed surface is zero. Magnetic dipoles are best represented as loops of current, but they resemble positive and negative 'magnetic charges' inseparably bound together, having no net 'magnetic charge'. In terms of field lines, this equation states that magnetic field lines neither begin nor end but make loops or extend to infinity and back. In other words, any magnetic field line that enters a given volume must somewhere exit that volume. Equivalent technical statements are that the total magnetic flux through any Gaussian surface is zero, or that the magnetic field is a solenoidal vector field. The Maxwell–Faraday version of Faraday's law of induction describes how a time-varying magnetic field creates an electric field. In integral form, it states that the work per unit charge required to move a charge around a closed loop equals the rate of decrease of the magnetic flux through the enclosed surface.
The dynamically induced electric field has closed field lines similar to a magnetic field, unless superposed by a static electric field. This aspect of electromagnetic induction is the operating principle behind many electric generators: for example, a rotating bar magnet creates a changing magnetic field, which in turn generates an electric field in a nearby wire. Ampère's law with Maxwell's addition states that magnetic fields can be generated in two ways: by electric current and by changing electric fields. In integral form, the magnetic field induced around any closed loop is proportional to the electric current plus the displacement current through the enclosed surface. Maxwell's addition to Ampère's law is important: it makes the set of equations mathematically consistent for non-static fields, without changing the laws of Ampère and Gauss for static fields. As a consequence, it predicts that a changing magnetic field induces an electric field and vice versa. Therefore, these equations allow self-sustaining "electromagnetic waves" to travel through empty space.
The speed calculated for electromagnetic waves, which could be predicted from experiments on charges and currents, e