In optics, the Fresnel diffraction equation for near-field diffraction is an approximation of the Kirchhoff–Fresnel diffraction equation that can be applied to the propagation of waves in the near field. It is used to calculate the diffraction pattern created by waves passing through an aperture or around an object, when viewed from a point close to the object. In contrast, the diffraction pattern in the far-field region is given by the Fraunhofer diffraction equation; the near field can be specified by the Fresnel number F of the optical arrangement. When F ≫ 1 the diffracted wave is considered to be in the near field. However, the validity of the Fresnel diffraction integral rests on the approximations derived below; the phase terms of third order and higher must be negligible, a condition that may be written as $F\theta^2/4 \ll 1$, where $\theta$ is the maximal angle described by $\theta \approx a/L$, with $a$ and $L$ the same as in the definition of the Fresnel number. Multiple Fresnel diffraction at closely spaced periodic ridges causes specular reflection.
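As a quick numerical check of these criteria, the following sketch computes the Fresnel number $F = a^2/(L\lambda)$ and the higher-order phase condition $F\theta^2/4$ for an aperture of characteristic size a observed at distance L; the specific values of a, L and the wavelength are illustrative assumptions, not taken from the text.

```python
# Sketch: evaluating the near-field criteria for an assumed optical arrangement.

def fresnel_number(a, L, wavelength):
    """Fresnel number F = a^2 / (L * wavelength) for an aperture of
    characteristic size a observed at distance L."""
    return a**2 / (L * wavelength)

# Illustrative values (assumed): 1 mm aperture, 0.5 m distance, 500 nm light.
a, L, lam = 1e-3, 0.5, 500e-9
F = fresnel_number(a, L, lam)
theta = a / L                              # maximal diffraction angle
print(f"F = {F:.1f}  (F >> 1 -> near field)")
print(f"F*theta^2/4 = {F * theta**2 / 4:.2e}  (must be << 1 for the Fresnel integral)")
```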
Some of the earliest work on what would become known as Fresnel diffraction was carried out by Francesco Maria Grimaldi in Italy in the 17th century. In his monograph entitled Light, Richard C. MacLaurin explains Fresnel diffraction by asking what happens when light propagates, and how that process is affected when a barrier with a slit or hole in it is interposed in the beam produced by a distant source of light; he uses Huygens' principle to investigate the question in classical terms. The wave front that proceeds from the slit and on to a detection screen some distance away closely approximates a wave front originating across the area of the gap without regard to any minute interactions with the actual physical edge; the result is that if the gap is narrow, only diffraction patterns with bright centers can occur. If the gap is made progressively wider, diffraction patterns with dark centers will alternate with diffraction patterns with bright centers; as the gap becomes larger still, the differences between dark and light bands decrease until a diffraction effect can no longer be detected.
MacLaurin does not mention the possibility that the center of the series of diffraction rings produced when light is shone through a small hole may be black, but he does point to the inverse situation wherein the shadow produced by a small circular object can paradoxically have a bright center. In his Optics, Francis Weston Sears offers a mathematical approximation suggested by Fresnel that predicts the main features of diffraction patterns and uses only simple mathematics. By considering the perpendicular distance from the hole in a barrier screen to a nearby detection screen along with the wavelength of the incident light, it is possible to compute a number of regions called half-period elements or Fresnel zones; the inner zone is a circle and each succeeding zone is a concentric annular ring. If the diameter of the circular hole in the screen is sufficient to expose the first or central Fresnel zone, the amplitude of light at the center of the detection screen will be double what it would be if the light reaching the detection screen were not obstructed at all.
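For a plane wave, the outer radius of the n-th Fresnel zone follows from the half-period construction as approximately $r_n = \sqrt{n\lambda L}$, where L is the distance from the barrier to the detection screen. A minimal sketch, using assumed illustrative values:

```python
import math

def fresnel_zone_radius(n, wavelength, L):
    """Outer radius of the n-th Fresnel zone (half-period zone) for a plane
    wave observed at distance L behind the aperture: r_n = sqrt(n * lambda * L)."""
    return math.sqrt(n * wavelength * L)

# Illustrative assumption: 500 nm light, detection screen 1 m behind the aperture.
lam, L = 500e-9, 1.0
for n in range(1, 5):
    print(f"zone {n}: outer radius = {fresnel_zone_radius(n, lam, L) * 1e3:.3f} mm")
```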
If the diameter of the circular hole in the screen is sufficient to expose two Fresnel zones, the amplitude at the center is zero; that is, the diffraction pattern can have a dark center. Such patterns can be seen and measured, and they correspond well to the values calculated for them. The electric field diffraction pattern at a point $(x, y, z)$ is given by:

$$E(x,y,z) = \frac{1}{i\lambda} \iint_{-\infty}^{+\infty} E(x',y',0)\,\frac{e^{ikr}}{r}\,\cos\theta \; dx'\,dy'$$

where $E(x',y',0)$ is the electric field at the aperture, $r = \sqrt{(x-x')^2 + (y-y')^2 + z^2}$, $k = 2\pi/\lambda$ is the wavenumber, $\theta$ is the angle between the line from the aperture point to the observation point and the normal to the aperture plane, and $i$ is the imaginary unit. An analytical solution of this integral is impossible for all but the simplest diffraction geometries, so it is usually calculated numerically; the main difficulty in evaluating the integral is the expression for $r$. First, we can simplify the algebra by introducing the substitution $\rho^2 = (x-x')^2 + (y-y')^2$, so that $r = \sqrt{\rho^2 + z^2}$.
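As a concrete illustration of the numerical route, the sketch below evaluates the diffraction integral above by direct summation for a uniformly illuminated square aperture; the wavelength, aperture size, distance, and grid resolution are illustrative assumptions, and a brute-force double sum like this is only practical on small grids (practical codes use FFT-based methods instead).

```python
import numpy as np

# Assumed illustrative parameters: 500 nm light, 0.5 mm square aperture,
# observation plane 20 cm behind the aperture.
lam = 500e-9
k = 2 * np.pi / lam
z = 0.2
half = 0.25e-3

n_src, n_obs = 80, 60
xs = np.linspace(-half, half, n_src)          # aperture coordinates x', y'
dxp = xs[1] - xs[0]
xo = np.linspace(-2e-3, 2e-3, n_obs)          # observation-plane coordinates x, y

E_ap = np.ones((n_src, n_src))                # uniform field across the aperture
E_obs = np.zeros((n_obs, n_obs), dtype=complex)

for i, x in enumerate(xo):
    for j, y in enumerate(xo):
        rho2 = (x - xs[:, None])**2 + (y - xs[None, :])**2
        r = np.sqrt(rho2 + z**2)
        cos_theta = z / r                     # obliquity factor cos(theta)
        E_obs[i, j] = np.sum(E_ap * np.exp(1j * k * r) / r * cos_theta) * dxp**2 / (1j * lam)

intensity = np.abs(E_obs)**2                  # near-field diffraction pattern
print(f"peak / minimum intensity: {intensity.max():.3e} / {intensity.min():.3e}")
```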
Diffraction refers to various phenomena that occur when a wave encounters an obstacle or a slit. It is defined as the bending of waves around the corners of an obstacle or aperture into the region of geometrical shadow of the obstacle. In classical physics, the diffraction phenomenon is described as the interference of waves according to the Huygens–Fresnel principle, which treats each point on the wavefront as the source of an individual spherical wavelet; these characteristic behaviors are exhibited when a wave encounters an obstacle or a slit that is comparable in size to its wavelength. Similar effects occur when a light wave travels through a medium with a varying refractive index, or when a sound wave travels through a medium with varying acoustic impedance, and diffraction therefore influences how sound propagates within an acoustic space. Diffraction occurs with all waves, including sound waves, water waves, and electromagnetic waves such as visible light, X-rays and radio waves. Since physical objects have wave-like properties, diffraction also occurs with matter and can be studied according to the principles of quantum mechanics.
Italian scientist Francesco Maria Grimaldi coined the word "diffraction" and was the first to record accurate observations of the phenomenon in 1660. While diffraction occurs whenever propagating waves encounter such changes, its effects are most pronounced for waves whose wavelength is comparable to the dimensions of the diffracting object or slit. If the obstructing object provides multiple closely spaced openings, a complex pattern of varying intensity can result; this is due to the addition, or interference, of different parts of a wave that travel to the observer by different paths, where different path lengths result in different phases. The formalism of diffraction can also describe the way in which waves of finite extent propagate in free space. For example, the expanding profile of a laser beam, the beam shape of a radar antenna and the field of view of an ultrasonic transducer can all be analyzed using diffraction equations; the effects of diffraction are seen in everyday life. The most striking examples of diffraction are those that involve light; for example, the closely spaced tracks on a CD or DVD act as a diffraction grating to form the familiar rainbow pattern seen when looking at a disc.
This principle can be extended to engineer a grating with a structure such that it will produce any diffraction pattern desired. Diffraction in the atmosphere by small particles can cause a bright ring to be visible around a bright light source like the sun or the moon. A shadow of a solid object, using light from a compact source, shows small fringes near its edges; the speckle pattern observed when laser light falls on an optically rough surface is likewise a diffraction phenomenon. When deli meat appears to be iridescent, that is diffraction off the meat fibers. All these effects are a consequence of the fact that light propagates as a wave. Diffraction can occur with any kind of wave. Ocean waves diffract around jetties and other obstacles. Sound waves can diffract around objects, which is why one can still hear someone calling even when hiding behind a tree. Diffraction can also be a concern in some technical applications, since it sets a fundamental limit to the resolution of a camera, telescope, or microscope. The effects of diffraction of light were first observed and characterized by Francesco Maria Grimaldi, who coined the term diffraction from the Latin diffringere, 'to break into pieces', referring to light breaking up into different directions.
The results of Grimaldi's observations were published posthumously in 1665. Isaac Newton attributed them to inflexion of light rays. James Gregory observed the diffraction patterns caused by a bird feather, which was effectively the first diffraction grating to be discovered. Thomas Young performed a celebrated experiment in 1803 demonstrating interference from two closely spaced slits. Explaining his results by interference of the waves emanating from the two different slits, he deduced that light must propagate as waves. Augustin-Jean Fresnel did more definitive studies and calculations of diffraction, made public in 1815 and 1818, and thereby gave great support to the wave theory of light that had been advanced by Christiaan Huygens and reinvigorated by Young, against Newton's particle theory. In traditional classical physics, diffraction arises because of the way in which waves propagate; the propagation of a wave can be visualized by considering every particle of the transmitting medium on a wavefront as a point source for a secondary spherical wave. The wave displacement at any subsequent point is the sum of these secondary waves.
When waves are added together, their sum is determined by the relative phases as well as the amplitudes of the individual waves, so that the summed amplitude of the waves can have any value between zero and the sum of the individual amplitudes. Hence, diffraction patterns have a series of maxima and minima. In the modern quantum mechanical understanding of light propagation through a slit, every photon has what is known as a wavefunction, which describes its path from the emitter through the slit to the screen; the wavefunction is determined by the physical surroundings such as slit geometry, screen distance and initial conditions when the photon is created. In important experiments, the existence of the photon's wavefunction was demonstrated.
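To make the phase dependence concrete, the sketch below adds two waves of equal amplitude as phasors and shows that the summed amplitude ranges from twice the individual amplitude (in phase) down to zero (half a period out of phase); the numbers are purely illustrative.

```python
import numpy as np

def summed_amplitude(a1, a2, delta_phi):
    """Amplitude of the superposition of two waves with amplitudes a1 and a2
    and relative phase delta_phi, computed by phasor addition."""
    return abs(a1 + a2 * np.exp(1j * delta_phi))

for dphi in (0.0, np.pi / 2, np.pi):
    amp = summed_amplitude(1.0, 1.0, dphi)
    print(f"phase difference {dphi:.2f} rad -> summed amplitude {amp:.3f}")
# 0.00 rad -> 2.000  (constructive: sum of the individual amplitudes)
# 1.57 rad -> 1.414
# 3.14 rad -> 0.000  (destructive: complete cancellation)
```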
A camera is an optical instrument used to capture still images or to record moving images, which are stored in a physical medium such as a digital system or photographic film. A camera consists of a lens, which focuses light from the scene, and a camera body, which holds the image capture mechanism; the still image camera is the main instrument in the art of photography, and captured images may be reproduced as part of the process of photography, digital imaging, or photographic printing. The similar artistic fields in the moving image camera domain are film and cinematography; the word camera comes from camera obscura, which means "dark chamber" and is the Latin name of the original device for projecting an image of external reality onto a flat surface. The modern photographic camera evolved from the camera obscura, and the functioning of the camera is similar to the functioning of the human eye. The first permanent photograph was made in 1825 by Joseph Nicéphore Niépce. A camera may work with the light of the visible spectrum or with other portions of the electromagnetic spectrum.
A still camera is an optical device which creates a single image of an object or scene and records it on an electronic sensor or photographic film. All cameras use the same basic design: light enters an enclosed box through a converging or convex lens and an image is recorded on a light-sensitive medium. A shutter mechanism controls the length of time that light can enter the camera. Most photographic cameras have functions that allow a person to view the scene to be recorded, allow for a desired part of the scene to be in focus, and control the exposure so that it is not too bright or too dim. On most digital cameras, a display, often a liquid crystal display, permits the user to view the scene to be recorded and settings such as ISO speed and shutter speed. A movie camera or a video camera operates similarly to a still camera, except that it records a series of static images in rapid succession, commonly at a rate of 24 frames per second; when the images are combined and displayed in order, the illusion of motion is achieved. Traditional cameras capture light onto photographic film.
Video and digital cameras use an electronic image sensor, usually a charge-coupled device (CCD) or a CMOS sensor, to capture images which can be transferred or stored in a memory card or other storage inside the camera for playback or processing. Cameras that capture many images in sequence are known as movie cameras, or as ciné cameras in Europe; however, these categories overlap, as still cameras are used to capture moving images in special effects work and many modern cameras can switch between still and motion recording modes. A wide range of film and plate formats have been used by cameras. In the early history of the camera, plate sizes were often specific to the make and model of camera, although some standardisation developed for the more popular cameras; the introduction of roll film drove the standardization process still further, so that by the 1950s only a few standard roll films were in use. These included 120 film providing 8, 12 or 16 exposures, 220 film providing 16 or 24 exposures, 127 film providing 8 or 12 exposures, and 135 film providing 12, 20 or 36 exposures – or up to 72 exposures in the half-frame format or in bulk cassettes for the Leica Camera range.
For cine cameras, film 35 mm wide and perforated with sprocket holes was established as the standard format in the 1890s. It was used for nearly all film-based professional motion picture production. For amateur use, several smaller and therefore less expensive formats were introduced. 17.5 mm film, created by splitting 35 mm film, was one early amateur format, but 9.5 mm film, introduced in Europe in 1922, and 16 mm film, introduced in the US in 1923, soon became the standards for "home movies" in their respective hemispheres. In 1932, the more economical 8 mm format was created by doubling the number of perforations in 16 mm film and splitting it after exposure and processing; the Super 8 format, still 8 mm wide but with smaller perforations to make room for larger film frames, was introduced in 1965. Traditionally used to "tell the camera" the speed of the selected film on film cameras, film speed numbers are employed on modern digital cameras as an indication of the system's gain from light to numerical output and to control the automatic exposure system.
Film speed is measured via the ISO system: the higher the film speed number, the greater the film's sensitivity to light, whereas with a lower number the film is less sensitive to light. On digital cameras, white balance provides electronic compensation for the color temperature associated with a given set of lighting conditions, ensuring that white light is registered as such on the imaging chip and therefore that the colors in the frame will appear natural. On mechanical, film-based cameras, this function is served by the operator's choice of film stock or with color correction filters. In addition to using white balance to register natural coloration of the image, photographers may employ white balance to aesthetic ends, for example white balancing to a blue object in order to obtain a warm color temperature. The lens of a camera captures the light from the subject and brings it to a focus on the sensor or film. The design and manufacture of the lens is critical to the quality of the photograph being taken; the technological revolution in camera design in the 19th century revolutionized optical glass manufacture and lens design, with great benefits for modern lens manufacture in a wide range of optical instruments from reading glasses to microscopes.
Pioneers included Leitz. Camera lenses are
Sony Corporation is a Japanese multinational conglomerate corporation headquartered in Kōnan, Tokyo. Its diversified business includes consumer and professional electronics, gaming, entertainment and financial services; the company owns the largest music entertainment business in the world, the largest video game console business and one of the largest video game publishing businesses, and is one of the leading manufacturers of electronic products for the consumer and professional markets, as well as a leading player in the film and television entertainment industry. Sony was ranked 97th on the 2018 Fortune Global 500 list. Sony Corporation is the electronics business unit and the parent company of the Sony Group, which is engaged in business through its four operating components: electronics, motion pictures, music and financial services; these make Sony one of the most comprehensive entertainment companies in the world. The group consists of Sony Corporation, Sony Pictures, Sony Mobile, Sony Interactive Entertainment, Sony Music, Sony/ATV Music Publishing, Sony Financial Holdings, and others.
Sony is among the semiconductor sales leaders and, since 2015, the fifth-largest television manufacturer in the world after Samsung Electronics, LG Electronics, TCL and Hisense. The company's current slogan is Be Moved; its former slogans were The One and Only, It's a Sony, like.no.other and make.believe. Sony has a weak tie to the Sumitomo Mitsui Financial Group corporate group, the successor to the Mitsui group. Sony began in the wake of World War II. In 1946, Masaru Ibuka started an electronics shop in a department store building in Tokyo; the company started with a total of eight employees. In May 1946, Ibuka was joined by Akio Morita to establish a company called Tokyo Tsushin Kogyo; the company built Japan's first tape recorder, called the Type-G. In 1958, the company changed its name to "Sony". When Tokyo Tsushin Kogyo was looking for a romanized name to use to market themselves, they considered using their initials, TTK. The primary reason they did not is that the railway company Tokyo Kyuko was known as TTK.
The company used the acronym "Totsuko" in Japan, but during his visit to the United States, Morita discovered that Americans had trouble pronouncing that name. Another early name, tried out for a while, was "Tokyo Teletech", until Akio Morita discovered that there was an American company using Teletech as a brand name; the name "Sony" was chosen for the brand as a mix of two words: one was the Latin word "sonus", the root of sonic and sound, and the other was "sonny", a common slang term used in 1950s America to refer to a young boy. In 1950s Japan, "sonny boys" was a loan word in Japanese which connoted smart and presentable young men, which Sony founders Akio Morita and Masaru Ibuka considered themselves to be; the first Sony-branded product, the TR-55 transistor radio, appeared in 1955, but the company name did not change to Sony until January 1958. At the time of the change, it was unusual for a Japanese company to use Roman letters to spell its name instead of writing it in kanji; the move was not without opposition: TTK's principal bank at the time had strong feelings about the name.
They pushed for a name such as Sony Teletech. Akio Morita was firm, however, and both Ibuka and Mitsui Bank's chairman eventually gave their approval. According to Schiffer, Sony's TR-63 radio "cracked open the U.S. market and launched the new industry of consumer microelectronics." By the mid-1950s, American teens had begun buying portable transistor radios in huge numbers, helping to propel the fledgling industry from an estimated 100,000 units in 1955 to 5 million units by the end of 1968. Sony co-founder Akio Morita founded Sony Corporation of America in 1960. In the process, he was struck by the mobility of employees between American companies, something unheard of in Japan at that time; when he returned to Japan, he encouraged experienced, middle-aged employees of other companies to reevaluate their careers and consider joining Sony. The company filled many positions in this manner, and inspired other Japanese companies to do the same. Moreover, Sony played a major role in the development of Japan as a powerful exporter during the 1960s, 1970s and 1980s.
It helped to improve American perceptions of "made in Japan" products. Known for its production quality, Sony was able to charge above-market prices for its consumer electronics and resisted lowering prices. In 1971, Masaru Ibuka handed the position of president over to his co-founder Akio Morita. Sony also began a life insurance company, one of its many peripheral businesses. Amid a global recession in the early 1980s, electronics sales dropped and the company was forced to cut prices. Sony's profits fell sharply. "It's over for Sony," one analyst concluded. "The company's best days are behind it." Around that time, Norio Ohga took up the role of president. He encouraged the development of the Compact Disc in the 1970s and 1980s, and of the PlayStation in the early 1990s. Ohga went on to purchase CBS Records in 1988 and Columbia Pictures in 1989, expanding Sony's media presence. Ohga would succeed Morita as chief executive officer in 1989. Under the vision of co-founder Akio Morita and his successors, the company had aggressively expanded in
Fast Fourier transform
A fast Fourier transform (FFT) is an algorithm that computes the discrete Fourier transform (DFT) of a sequence, or its inverse. Fourier analysis converts a signal from its original domain to a representation in the frequency domain and vice versa; the DFT is obtained by decomposing a sequence of values into components of different frequencies. This operation is useful in many fields, but computing it directly from the definition is often too slow to be practical. An FFT computes such transformations by factorizing the DFT matrix into a product of sparse (mostly zero) factors; as a result, it manages to reduce the complexity of computing the DFT from $O(N^2)$, which arises if one simply applies the definition of the DFT, to $O(N \log N)$, where $N$ is the data size. The difference in speed can be enormous for long data sets, where N may be in the thousands or millions. In the presence of round-off error, many FFT algorithms are also much more accurate than evaluating the DFT definition directly. There are many different FFT algorithms based on a wide range of published theories, from simple complex-number arithmetic to group theory and number theory.
Fast Fourier transforms are used for applications in engineering and mathematics. The basic ideas were popularized in 1965, but some algorithms had been derived as early as 1805. In 1994, Gilbert Strang described the FFT as "the most important numerical algorithm of our lifetime", and it was included in the Top 10 Algorithms of the 20th Century by the IEEE journal Computing in Science & Engineering. The best-known FFT algorithms depend upon the factorization of N, but there are FFTs with $O(N \log N)$ complexity for all N, even for prime N. Many FFT algorithms depend only on the fact that $e^{-2\pi i/N}$ is an N-th primitive root of unity, and thus can be applied to analogous transforms over any finite field, such as number-theoretic transforms. Since the inverse DFT is the same as the DFT, but with the opposite sign in the exponent and a 1/N factor, any FFT algorithm can easily be adapted for it. The development of fast algorithms for the DFT can be traced to Gauss's unpublished work in 1805, when he needed it to interpolate the orbits of the asteroids Pallas and Juno from sample observations.
His method was similar to the one published in 1965 by Cooley and Tukey, who are generally credited with the invention of the modern generic FFT algorithm. While Gauss's work predated Fourier's results in 1822, he did not analyze the computation time and used other methods to achieve his goal. Between 1805 and 1965, some versions of the FFT were published by other authors. Frank Yates in 1932 published his version, called the interaction algorithm, which provided efficient computation of Hadamard and Walsh transforms. Yates' algorithm is still used in the field of statistical analysis of experiments. In 1942, G. C. Danielson and Cornelius Lanczos published their version to compute the DFT for x-ray crystallography, a field where calculation of Fourier transforms presented a formidable bottleneck. While many methods in the past had focused on reducing the constant factor for $O(N^2)$ computation by taking advantage of "symmetries", Danielson and Lanczos realized that one could use the "periodicity" and apply a "doubling trick" to get $O(N \log N)$ runtime.
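The "doubling trick" is the heart of the radix-2 decimation-in-time approach: a length-N DFT is split into two length-N/2 DFTs over the even- and odd-indexed samples, which are then combined with N/2 twiddle factors. A minimal recursive sketch, assuming N is a power of two:

```python
import cmath

def fft_radix2(x):
    """Minimal recursive radix-2 FFT (decimation in time).
    Assumes len(x) is a power of two; returns the DFT of x."""
    N = len(x)
    if N == 1:
        return list(x)
    even = fft_radix2(x[0::2])      # DFT of even-indexed samples
    odd = fft_radix2(x[1::2])       # DFT of odd-indexed samples
    out = [0j] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * odd[k]   # twiddle factor times odd part
        out[k] = even[k] + t
        out[k + N // 2] = even[k] - t
    return out

print(fft_radix2([1, 2, 3, 4]))     # ~[10, -2+2j, -2, -2-2j], matching the direct DFT up to rounding
```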
James Cooley and John Tukey published a more general version of the FFT in 1965 that is applicable when N is composite and not necessarily a power of 2. Tukey came up with the idea during a meeting of President Kennedy's Science Advisory Committee, where a discussion topic involved detecting nuclear tests by the Soviet Union by setting up sensors to surround the country from outside. To analyze the output of these sensors, a fast Fourier transform algorithm would be needed. In discussion with Tukey, Richard Garwin recognized the general applicability of the algorithm not just to national security problems, but also to a wide range of problems, including one of immediate interest to him, determining the periodicities of the spin orientations in a 3-D crystal of Helium-3. Garwin gave Tukey's idea to Cooley for implementation. Cooley and Tukey published the paper within a relatively short time of six months; as Tukey did not work at IBM, the patentability of the idea was doubted and the algorithm went into the public domain, which, through the computing revolution of the next decade, made the FFT one of the indispensable algorithms in digital signal processing.
Let $x_0, \ldots, x_{N-1}$ be complex numbers. The DFT is defined by the formula

$$X_k = \sum_{n=0}^{N-1} x_n e^{-i 2\pi k n / N} = \sum_{n=0}^{N-1} x_n w^{-kn}, \qquad k = 0, \ldots, N-1,$$

where $w = e^{i 2\pi / N}$ is a primitive N-th root of unity.
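A direct transcription of this definition into code is an $O(N^2)$ double loop; the sketch below evaluates it and checks the result against NumPy's FFT, purely to illustrate that both compute the same transform at very different cost.

```python
import numpy as np

def dft_direct(x):
    """Direct O(N^2) evaluation of X_k = sum_n x_n * exp(-2j*pi*k*n/N)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    k = n.reshape((N, 1))
    return np.exp(-2j * np.pi * k * n / N) @ x   # N x N DFT matrix times the input vector

x = np.random.rand(64) + 1j * np.random.rand(64)
print(np.allclose(dft_direct(x), np.fft.fft(x)))  # True: same transform as the FFT
```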
In optics, a Gaussian beam is a beam of monochromatic electromagnetic radiation whose transverse magnetic and electric field amplitude profiles are given by the Gaussian function. This fundamental transverse Gaussian mode describes the intended output of most lasers, as such a beam can be focused into the most concentrated spot; when such a beam is refocused by a lens, the transverse phase dependence is altered. The electric and magnetic field amplitude profiles along any such circular Gaussian beam are determined by a single parameter: the so-called waist w0. At any position z relative to the waist along a beam having a specified w0, the field amplitudes and phases are thereby determined as detailed below. The equations below assume a beam with a circular cross-section at all values of z. Beams with elliptical cross-sections, or with waists at different positions in z for the two transverse dimensions, can also be described as Gaussian beams, but with distinct values of w0 and of the z = 0 location for the two transverse dimensions x and y.
Arbitrary solutions of the paraxial Helmholtz equation can be expressed as combinations of Hermite–Gaussian modes or as combinations of Laguerre–Gaussian modes. At any point along the beam z these modes include the same Gaussian factor as the fundamental Gaussian mode, multiplying the additional geometrical factors for the specified mode. However, different modes propagate with a different Gouy phase, which is why the net transverse profile due to a superposition of modes evolves in z, whereas the propagation of any single Hermite–Gaussian mode retains the same form along a beam. Although there are other possible modal decompositions, these families of solutions are the most useful for problems involving compact beams, that is, where the optical power is rather well confined along an axis. When a laser is not operating in the fundamental Gaussian mode, its power will be found among the lowest-order modes using these decompositions, as the spatial extent of higher-order modes will tend to exceed the bounds of a laser's resonator.
"Gaussian beam" implies radiation confined to the fundamental Gaussian mode. The Gaussian beam is a transverse electromagnetic mode; the mathematical expression for the electric field amplitude is a solution to the paraxial Helmholtz equation. Assuming polarization in the x direction and propagation in the +z direction, the electric field in phasor notation is given by: E = E 0 x ^ w 0 w exp exp, where r is the radial distance from the center axis of the beam, z is the axial distance from the beam's focus, i is the imaginary unit, k = 2 π / λ is the wave number for a wavelength λ, E 0 = E, the electric field amplitude at the origin at time 0, w is the radius at which the field amplitudes fall to 1/e of their axial values, at the plane z along the beam, w 0 = w is the waist radius, R is the radius of curvature of the beam's wavefronts at z, ψ is the Gouy phase at z, an extra phase term beyond that attributable to the phase velocity of light. There is an understood time dependence e i ω t multiplying such phasor quantities.
Since this solution relies on the paraxial approximation, it is not accurate for strongly diverging beams. In most practical cases the above form is valid; the wave's associated magnetic field is everywhere directly proportional to the electric field and perpendicular to it. Since the electric field is
Optics is the branch of physics that studies the behaviour and properties of light, including its interactions with matter and the construction of instruments that use or detect it. Optics usually describes the behaviour of visible, ultraviolet, and infrared light; because light is an electromagnetic wave, other forms of electromagnetic radiation such as X-rays and radio waves exhibit similar properties. Most optical phenomena can be accounted for using the classical electromagnetic description of light. Complete electromagnetic descriptions of light are, however, difficult to apply in practice. Practical optics is done using simplified models; the most common of these, geometric optics, treats light as a collection of rays that travel in straight lines and bend when they pass through or reflect from surfaces. Physical optics is a more comprehensive model of light, which includes wave effects such as diffraction and interference that cannot be accounted for in geometric optics; the ray-based model of light was developed first, followed by the wave model of light.
Progress in electromagnetic theory in the 19th century led to the discovery that light waves were in fact electromagnetic radiation. Some phenomena depend on the fact that light has both wave-like and particle-like properties. Explanation of these effects requires quantum mechanics; when considering light's particle-like properties, the light is modelled as a collection of particles called "photons". Quantum optics deals with the application of quantum mechanics to optical systems. Optical science is relevant to and studied in many related disciplines including astronomy, various engineering fields and medicine. Practical applications of optics are found in a variety of technologies and everyday objects, including mirrors, telescopes, microscopes and fibre optics. Optics began with the development of lenses by the ancient Mesopotamians; the earliest known lenses, made from polished crystal quartz, date from as early as 700 BC, as with Assyrian lenses such as the Layard/Nimrud lens. The ancient Romans and Greeks filled glass spheres with water to make lenses.
These practical developments were followed by the development of theories of light and vision by ancient Greek and Indian philosophers, and the development of geometrical optics in the Greco-Roman world. The word optics comes from the ancient Greek word ὀπτική, meaning "appearance, look". Greek philosophy on optics broke down into two opposing theories on how vision worked, the "intromission theory" and the "emission theory"; the intromission approach saw vision as coming from objects casting off copies of themselves that were captured by the eye. With many propagators, including Democritus, Epicurus and their followers, this theory seems to have some contact with modern theories of what vision really is, but it remained only speculation lacking any experimental foundation. Plato first articulated the emission theory, the idea that visual perception is accomplished by rays emitted by the eyes; he also commented on the parity reversal of mirrors in Timaeus. Some hundred years later, Euclid wrote a treatise entitled Optics in which he linked vision to geometry, creating geometrical optics.
He based his work on Plato's emission theory, wherein he described the mathematical rules of perspective and described the effects of refraction qualitatively, although he questioned that a beam of light from the eye could instantaneously light up the stars every time someone blinked. Ptolemy, in his treatise Optics, held an extramission-intromission theory of vision: the rays from the eye formed a cone, the vertex being within the eye and the base defining the visual field; the rays were sensitive and conveyed information back to the observer's intellect about the distance and orientation of surfaces. He summarised much of Euclid and went on to describe a way to measure the angle of refraction, though he failed to notice the empirical relationship between it and the angle of incidence. During the Middle Ages, Greek ideas about optics were resurrected and extended by writers in the Muslim world. One of the earliest of these was Al-Kindi, who wrote on the merits of Aristotelian and Euclidean ideas of optics, favouring the emission theory since it could better quantify optical phenomena.
In 984, the Persian mathematician Ibn Sahl wrote the treatise "On burning mirrors and lenses", describing a law of refraction equivalent to Snell's law. He used this law to compute optimum shapes for curved mirrors. In the early 11th century, Alhazen wrote the Book of Optics, in which he explored reflection and refraction and proposed a new system for explaining vision and light based on observation and experiment. He rejected the "emission theory" of Ptolemaic optics with its rays being emitted by the eye, and instead put forward the idea that light reflected in all directions in straight lines from all points of the objects being viewed and then entered the eye, although he was unable to explain how the eye captured the rays. Alhazen's work was largely ignored in the Arabic world, but it was anonymously translated into Latin around 1200 AD and further summarised and expanded on by the Polish monk Witelo, making it a standard text on optics in Europe for the next 400 years. In the 13th century in medieval Europe, the English bishop Robert Grosseteste wrote on a wide range of scientific topics and discussed light from four different perspectives: an epistemology of light, a metaphysics or cosmogony of light, an etiology or physics of light, and a theology of light, basing it on the works of Aristotle and Platonism.
Grosseteste's most famous disciple, Roger Bacon, wrote w