1.
Image editing
–
Image editing encompasses the processes of altering images, whether they are digital photographs, traditional photochemical photographs, or illustrations. Traditional analog image editing is known as photo retouching, using tools such as an airbrush to modify photographs. Many image editing programs can also be used to render or create computer art from scratch. Raster images are stored in a computer as a grid of picture elements, or pixels, which contain the image's color and brightness information. Image editors can change these pixels to enhance the image in many ways; the pixels can be changed as a group, or individually, by the sophisticated algorithms within the image editors. This article mostly refers to bitmap graphics editors, which are used to alter photographs. Vector images, by contrast, contain descriptions of shapes, so they can be modified and rearranged more easily, and they are scalable, being rasterizable at any resolution; it is easier to rasterize a vector image than to vectorize a raster image. Some editing features are called automatic because they generally happen without user interaction, or are offered with one click of a button or by selecting an option from a menu; some automatic editing features combine several editing actions with little or no user interaction. Many image file formats use data compression to reduce file size and save storage space. Digital compression of images may take place in the camera, or can be done in the computer with the image editor; when images are stored in JPEG format, compression has already taken place. Both cameras and computer programs allow the user to set the level of compression. Some compression algorithms, such as the one used in the PNG file format, are lossless, which means no information is lost when the file is saved; JPEG compression is lossy, but it uses knowledge of the way the brain and eyes perceive color to make the loss of detail less noticeable.
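Lossless compression can be demonstrated with a short sketch using DEFLATE, the algorithm behind PNG's compression, via Python's standard zlib module (the sample byte values are arbitrary stand-ins for image data):

```python
import zlib

# Lossless compression round-trip: the decompressed bytes are identical
# to the original, which is what "no information is lost" means for
# formats like PNG.
data = bytes([40, 41, 42, 40] * 1000)   # highly repetitive mock "image" data
packed = zlib.compress(data)

assert zlib.decompress(packed) == data  # nothing was lost
print(len(data), "->", len(packed), "bytes")  # repetitive data compresses well
```

A lossy codec like JPEG, by contrast, would reconstruct bytes that are only approximately equal to the original.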
Listed below are some of the most used capabilities of the better graphic manipulation programs; the list is by no means all-inclusive. There are a myriad of choices associated with the application of most of these features. One of the prerequisites for many of the applications mentioned below is a method of selecting part of an image, thus applying a change selectively without affecting the entire picture. The border of a selected area in an image is often animated with the marching ants effect to help the user distinguish the selection border from the image background. Image editors can resize images in a process often called image scaling, making them larger or smaller. High-resolution cameras can produce large images which are often reduced in size for Internet use. Image editor programs use a process called resampling to calculate new pixel values whose spacing is larger or smaller than the original pixel spacing. Images for Internet use are kept small, say 640 x 480 pixels, which would equal about 0.3 megapixels. Digital editors are also used to crop images.
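Resampling can be illustrated with a minimal sketch using nearest-neighbour interpolation, the simplest scheme; real editors typically offer bilinear or bicubic filters as well. The image here is just a list of rows of pixel values, an assumption made for illustration:

```python
def scale_nearest(pixels, new_w, new_h):
    """Resample a raster image (list of rows of pixel values) to a new
    size: each output pixel takes the value of the nearest source pixel."""
    old_h, old_w = len(pixels), len(pixels[0])
    return [
        [pixels[y * old_h // new_h][x * old_w // new_w] for x in range(new_w)]
        for y in range(new_h)
    ]

# Downscaling a 4x4 image to 2x2 keeps every other pixel.
img = [[0, 1, 2, 3],
       [4, 5, 6, 7],
       [8, 9, 10, 11],
       [12, 13, 14, 15]]
print(scale_nearest(img, 2, 2))  # [[0, 2], [8, 10]]
```

Upscaling with the same function simply repeats source pixels, which is why nearest-neighbour enlargements look blocky.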
2.
Computer science
–
Computer science is the study of the theory, experimentation, and engineering that form the basis for the design and use of computers. An alternate, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems, and the field can be divided into a variety of theoretical and practical disciplines. Some fields, such as computational complexity theory, are highly abstract, while other fields focus on the challenges of implementing computation; human–computer interaction, for instance, considers the challenges in making computers and computations useful and usable. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity; further, algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623, and in 1673 Gottfried Leibniz demonstrated a digital mechanical calculator called the Stepped Reckoner. Leibniz may be considered the first computer scientist and information theorist for, among other reasons, documenting the binary number system. Charles Babbage started developing his Analytical Engine in 1834, and in less than two years he had sketched out many of the salient features of the modern computer; a crucial step was the adoption of a punched-card system derived from the Jacquard loom, making it infinitely programmable. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; when the machine was finished, some hailed it as Babbage's dream come true.
During the 1940s, as new and more powerful computing machines were developed and it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as a distinct academic discipline in the 1950s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge in 1953, and the first computer science degree program in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own right, and it is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and later the IBM 709 computers; still, working with these machines was frustrating: if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again. During the late 1950s, the computer science discipline was very much in its developmental stages. Time has since seen significant improvements in the usability and effectiveness of computing technology, and modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base.
3.
Algorithm
–
In mathematics and computer science, an algorithm is a self-contained sequence of actions to be performed. Algorithms can perform calculation, data processing, and automated reasoning tasks. An algorithm is an effective method that can be expressed within a finite amount of space and time, and in a well-defined formal language, for calculating a function. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. Giving a formal definition of algorithms, corresponding to the intuitive notion, remains a challenging problem. In English, the word was first used in about 1230 and then by Chaucer in 1391; English adopted the French term, but it wasn't until the late 19th century that "algorithm" took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus: Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris, which translates as: Algorism is the art by which at present we use those Indian figures. The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice, or Talibus Indorum, or Hindu numerals. An informal definition could be a set of rules that precisely defines a sequence of operations, which would include all computer programs, including programs that do not perform numeric calculations. Generally, a program is only an algorithm if it stops eventually. Although no one can list every member of an enumerably infinite set, humans can do something equally useful: they can give explicit instructions for determining the nth member of the set, for arbitrary finite n. An enumerably infinite set is one whose elements can be put into one-to-one correspondence with the integers. The concept of algorithm is also used to define the notion of decidability.
That notion is central for explaining how formal systems come into being, starting from a set of axioms. In logic, the time that an algorithm requires to complete cannot be measured; from such uncertainties, which characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete and abstract usage of the term. Algorithms are essential to the way computers process data; thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system. Although this may seem extreme, the arguments in its favor are hard to refute; according to Gurevich, Turing's informal argument in favor of his thesis justifies a stronger thesis, and according to Savage, an algorithm is a computational process defined by a Turing machine. Typically, when an algorithm is associated with processing information, data can be read from an input source and written to an output device. Stored data are regarded as part of the state of the entity performing the algorithm; in practice, the state is stored in one or more data structures. For such a computational process, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. That is, any conditional steps must be dealt with, case by case.
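As a concrete illustration of a self-contained, terminating sequence of operations, here is a minimal sketch of Euclid's algorithm for the greatest common divisor (a standard textbook example, not one drawn from the text above):

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace the pair (a, b) with
    (b, a mod b). The loop terminates because the remainder strictly
    decreases, and the last non-zero value is the greatest common divisor."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21
```

The termination argument in the docstring is exactly the kind of guarantee that distinguishes an algorithm from an arbitrary program.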
4.
Digital image processing
–
Digital image processing is the use of computer algorithms to perform image processing on digital images. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing: it allows a wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise. Since images are defined over two dimensions, digital image processing may be modeled in the form of multidimensional systems. With the computing equipment of the 1960s, when many of these techniques were first developed, the cost of processing was fairly high. That changed in the 1970s, when digital image processing proliferated as cheaper computers became available; images could then be processed in real time for some dedicated problems such as television standards conversion. As general-purpose computers became faster, they started to take over the role of dedicated hardware for all but the most specialized operations. Digital image processing technology for medical applications was inducted into the Space Foundation Space Technology Hall of Fame in 1994. In 2002, Raanan Fattal introduced gradient domain image processing, a new way to process images in which the differences between pixels are manipulated rather than the pixel values themselves.
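The gradient-domain idea can be sketched in one dimension: instead of editing pixel values directly, edit their differences and then reintegrate. This is a toy illustration of the principle, not Fattal's actual algorithm:

```python
def compress_gradients(row, factor):
    """Toy 1-D gradient-domain edit: attenuate the differences between
    neighbouring pixels by `factor`, then rebuild the row by cumulative
    summation starting from the original first pixel."""
    diffs = [b - a for a, b in zip(row, row[1:])]  # forward differences
    scaled = [d * factor for d in diffs]           # edit in the gradient domain
    out = [row[0]]
    for d in scaled:                               # reintegrate
        out.append(out[-1] + d)
    return out

# Halving the gradients flattens a steep ramp around its starting value.
print(compress_gradients([0, 10, 20, 30], 0.5))  # [0, 5.0, 10.0, 15.0]
```

Attenuating large gradients while preserving small ones is, in essence, how gradient-domain methods compress high dynamic range images.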
5.
Digital signal processing
–
Digital signal processing (DSP) is the use of digital processing, such as by computers, to perform a wide variety of signal processing operations. The signals processed in this manner are a sequence of numbers that represent samples of a continuous variable in a domain such as time or space. Digital signal processing and analog signal processing are subfields of signal processing. Digital signal processing can involve linear or nonlinear operations; nonlinear signal processing is closely related to system identification and can be implemented in the time or frequency domain. DSP is applicable to both streaming data and static data, and the increasing use of computers has resulted in the increased use of, and need for, digital signal processing. To digitally analyze and manipulate an analog signal, it must be digitized with an analog-to-digital converter. Sampling is usually carried out in two stages, discretization and quantization. Discretization means that the signal is divided into equal intervals of time; quantization means each amplitude measurement is approximated by a value from a finite set, of which rounding real numbers to integers is an example. The Nyquist–Shannon sampling theorem states that a signal can be exactly reconstructed from its samples if the sampling frequency is greater than twice the highest frequency component of the signal. In practice, the sampling frequency is often significantly higher than twice that required by the signal's limited bandwidth. Theoretical DSP analyses and derivations are typically performed on discrete-time signal models with no amplitude inaccuracies, while numerical methods require a quantized signal, such as those produced by an analog-to-digital converter. The processed result might be a frequency spectrum or a set of statistics, but often it is another quantized signal that is converted back to analog form by a digital-to-analog converter. In DSP, engineers usually study digital signals in one of the following domains: time domain, spatial domain, or frequency domain.
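Discretization and quantization can be shown with a short sketch: sample a sine wave at fixed intervals, then snap each sample to a small finite set of amplitude levels. The rates and level count are chosen purely for illustration:

```python
import math

def sample_and_quantize(f_signal, f_sample, duration_s, levels):
    """Discretize: take samples of a sine of frequency f_signal (Hz) at
    rate f_sample (Hz). Quantize: snap each amplitude (in [-1, 1]) to
    the nearest of `levels` evenly spaced values."""
    n = int(duration_s * f_sample)
    samples = [math.sin(2 * math.pi * f_signal * k / f_sample) for k in range(n)]
    step = 2 / (levels - 1)   # spacing between adjacent quantizer levels
    return [round(s / step) * step for s in samples]

# A 1 Hz sine sampled at 8 Hz (above the 2 Hz Nyquist rate), 5 levels:
print(sample_and_quantize(1, 8, 1, 5))
# [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
```

The rounding step is exactly the quantization error the paragraph mentions: with only 5 levels, 0.707 becomes 0.5.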
They choose the domain in which to process a signal by making an assumption as to which domain best represents the essential characteristics of the signal. The most common processing approach in the time or space domain is enhancement of the signal through a method called filtering. Digital filtering generally consists of a linear transformation of a number of surrounding samples around the current sample of the input or output signal. There are various ways to characterize filters; for example, a linear filter is a linear transformation of the input samples, a causal filter uses only current and previous samples of the input or output signals, and a non-causal filter, which also uses future samples, can usually be changed into a causal filter by adding a delay to it.
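A minimal sketch of a causal linear filter: a 3-tap moving average that uses only the current input sample and the ones before it (the tap count is an arbitrary choice for illustration):

```python
def moving_average(signal, taps=3):
    """Causal FIR filter: each output sample is the mean of the current
    input sample and the (taps - 1) samples before it, so no future
    samples are needed and the filter could run in real time."""
    out = []
    for n in range(len(signal)):
        window = signal[max(0, n - taps + 1): n + 1]
        out.append(sum(window) / len(window))
    return out

# Smoothing a step: the transition is spread over `taps` samples.
print(moving_average([0, 0, 0, 6, 6, 6]))  # [0.0, 0.0, 0.0, 2.0, 4.0, 6.0]
```

A non-causal version could center the window on the current sample; delaying its output by one sample would make it causal again, which is the delay trick mentioned above.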
6.
Jet Propulsion Laboratory
–
The Jet Propulsion Laboratory (JPL) is a federally funded research and development center and NASA field center in La Cañada Flintridge, California and Pasadena, California, United States. JPL is managed by the nearby California Institute of Technology for NASA. The laboratory's primary function is the construction and operation of planetary robotic spacecraft, though it also conducts Earth-orbit and astronomy missions. It is also responsible for operating NASA's Deep Space Network, and for managing the JPL Small-Body Database, which provides physical data and lists of publications for all known small Solar System bodies. JPL's Space Flight Operations Facility and Twenty-Five-Foot Space Simulator are designated National Historic Landmarks. JPL traces its beginnings to 1936, in the Guggenheim Aeronautical Laboratory at the California Institute of Technology (GALCIT), when the first set of rocket experiments was carried out in the Arroyo Seco. Graduate student Frank Malina's thesis advisor was engineer and aerodynamicist Theodore von Kármán, who arranged U.S. Army financial support for this GALCIT Rocket Project in 1939. In 1941, Malina, Parsons, Forman, and Martin Summerfield demonstrated the first JATO rockets; in 1943, von Kármán, Malina, Parsons, and Forman established the Aerojet Corporation to manufacture JATO motors. The project took on the name Jet Propulsion Laboratory in November 1943. During JPL's Army years, the laboratory developed two deployed weapon systems, the MGM-5 Corporal and MGM-29 Sergeant intermediate-range ballistic missiles; these were the first US ballistic missiles developed at JPL. It also developed a number of other weapons-system prototypes, such as the Loki anti-aircraft missile system and the forerunner of the Aerobee sounding rocket. At various times, it carried out testing at the White Sands Proving Ground and Edwards Air Force Base. A lunar lander was also developed in 1938-39 which influenced design of the Apollo Lunar Module in the 1960s.
The team lost that proposal to Project Vanguard, and instead embarked on a project to demonstrate ablative re-entry technology using a Jupiter-C rocket; they carried out three successful flights in 1956 and 1957. Using a spare Juno I, the two organizations then launched the United States' first satellite, Explorer 1, on February 1, 1958. JPL was transferred to NASA in December 1958, becoming the agency's primary planetary spacecraft center. JPL engineers designed and operated the Ranger and Surveyor missions to the Moon that prepared the way for Apollo, and JPL also led the way in interplanetary exploration with the Mariner missions to Venus, Mars, and Mercury. In 1998, JPL opened the Near-Earth Object Program Office for NASA; as of 2013, it had found 95% of asteroids a kilometer or more in diameter that cross Earth's orbit. JPL was early to employ women mathematicians: in the 1940s and 1950s, using mechanical calculators, women in an all-female computations group performed trajectory calculations. In 1961, JPL hired Dana Ulery as its first woman engineer, working alongside male engineers as part of the Ranger and Mariner mission tracking teams. When founded, JPL's site was a rocky flood-plain just outside the city limits of Pasadena. Almost all of the 177 acres of the U.S. federal facility lie within La Cañada Flintridge, a city incorporated in 1976, well after JPL attained international recognition with a Pasadena address.
7.
Massachusetts Institute of Technology
–
The Massachusetts Institute of Technology (MIT) is a private research university in Cambridge, Massachusetts, often cited as one of the world's most prestigious universities. Researchers worked on computers, radar, and inertial guidance during World War II, and post-war defense research contributed to the rapid expansion of the faculty and campus under James Killian. The current 168-acre campus opened in 1916 and extends over 1 mile along the bank of the Charles River basin. The Institute is traditionally known for its research and education in the sciences and engineering, and more recently in biology, economics, and linguistics. Six Fields Medalists, among many other distinguished figures, have been affiliated with MIT. The school has a strong entrepreneurial culture, and the aggregated revenues of companies founded by MIT alumni would rank as the eleventh-largest economy in the world. In 1859, a proposal was submitted to the Massachusetts General Court to use newly filled lands in Back Bay, Boston for a Conservatory of Art and Science, but the proposal failed. A charter for the incorporation of the Massachusetts Institute of Technology, proposed by William Barton Rogers, was signed in 1861. Rogers, a professor from the University of Virginia, wanted to establish an institution to address rapid scientific and technological advances. The Rogers Plan reflected the German research university model, emphasizing an independent faculty engaged in research, as well as instruction oriented around seminars. Two days after the charter was issued, the first battle of the Civil War broke out, and after a long delay through the war years, MIT's first classes were held in the Mercantile Building in Boston in 1865. In 1863, under the same act, the Commonwealth of Massachusetts founded the Massachusetts Agricultural College, which developed as the University of Massachusetts Amherst. In 1866, the proceeds from land sales went toward new buildings in the Back Bay.
MIT was informally called "Boston Tech". The institute adopted the European polytechnic university model and emphasized laboratory instruction from an early date. Despite chronic financial problems, the institute saw growth in the last two decades of the 19th century under President Francis Amasa Walker. Programs in electrical, chemical, marine, and sanitary engineering were introduced and new buildings were built, but the curriculum drifted to a vocational emphasis, with less focus on theoretical science. The fledgling school still suffered from chronic financial shortages which diverted the attention of the MIT leadership. During these "Boston Tech" years, MIT faculty and alumni rebuffed Harvard University president Charles W. Eliot's repeated attempts to merge MIT with Harvard College's Lawrence Scientific School; there would be at least six attempts to absorb MIT into Harvard. In its cramped Back Bay location, MIT could not afford to expand its overcrowded facilities, driving a desperate search for a new campus and funding. Eventually the MIT Corporation approved an agreement to merge with Harvard, over the vehement objections of MIT faculty and students. However, a 1917 decision by the Massachusetts Supreme Judicial Court effectively put an end to the merger scheme. The neoclassical "New Technology" campus was designed by William W. Bosworth and had been funded largely by anonymous donations from a mysterious "Mr. Smith", starting in 1912. In January 1920, the donor was revealed to be the industrialist George Eastman of Rochester, New York, who had invented methods of film production and processing.
8.
Bell Labs
–
Nokia Bell Labs is an American research and scientific development company, owned by the Finnish company Nokia. Its headquarters are located in Murray Hill, New Jersey, in addition to laboratories around the rest of the United States. The historic laboratory originated in the late 19th century as the Volta Laboratory, and Bell Labs was also at one time a division of the American Telephone & Telegraph Company (AT&T), half-owned through its Western Electric manufacturing subsidiary. Eight Nobel Prizes have been awarded for work completed at Bell Laboratories. In 1880, the French government awarded Alexander Graham Bell the Volta Prize of 50,000 francs, approximately US$10,000 at that time, for the invention of the telephone. Bell used the award to fund the Volta Laboratory in Washington, D.C., in collaboration with Sumner Tainter. The laboratory is also variously known as the Volta Bureau, the Bell Carriage House, the Bell Laboratory, and the Volta Laboratory. The laboratory focused on the analysis, recording, and transmission of sound. Bell used his considerable profits from the laboratory for further research and education to permit the diffusion of knowledge relating to the deaf. This resulted in the founding of the Volta Bureau c. 1887, located at Bell's father's house at 1527 35th Street in Washington, D.C., where its carriage house became their headquarters in 1889. In 1893, Bell constructed a new building close by at 1537 35th St., specifically to house the lab; the building was declared a National Historic Landmark in 1972. In 1884, the American Bell Telephone Company created the Mechanical Department from the Electrical and Patent Department. The first president of research was Frank B. Jewett, who stayed there until 1940; ownership of Bell Laboratories was evenly split between AT&T and the Western Electric Company.
Its principal work was to plan, design, and support the equipment that Western Electric built for Bell System operating companies; this included everything from telephones and telephone exchange switches to transmission equipment. Bell Labs also carried out consulting work for the Bell Telephone Company. A few workers were assigned to basic research, and this attracted much attention, especially since they produced several Nobel Prize winners. Until the 1940s, the principal locations were in and around the Bell Labs Building in New York City. Of the later suburban sites, Murray Hill and Crawford Hill remain in existence. The largest grouping of people in the company was in Illinois, at Naperville-Lisle in the Chicago area, which had the largest concentration of employees prior to 2001. Since 2001, many of the locations have been scaled down or closed. The Holmdel site, a 1.9 million square foot structure set on 473 acres, was closed in 2007; the mirrored-glass building was designed by Eero Saarinen. In August 2013, Somerset Development bought the building, intending to redevelop it into a commercial and residential project, though the prospects of success are clouded by the difficulty of readapting Saarinen's design and by the current glut of aging office space.
9.
University of Maryland, College Park
–
Founded in 1856, the University of Maryland is the flagship institution of the University System of Maryland. It is a member of the Association of American Universities and competes in athletics as a member of the Big Ten Conference. The University of Maryland's proximity to the nation's capital has resulted in research partnerships with the Federal government. The operating budget of the University of Maryland during the 2009 fiscal year was projected to be approximately $1.531 billion; for the same fiscal year, the University of Maryland received a total of $518 million in research funding, surpassing its 2008 mark by $118 million. As of December 12, 2012, the university's "Great Expectations" campaign had exceeded $1 billion in private donations. On March 6, 1856, the forerunner of today's University of Maryland was chartered as the Maryland Agricultural College. Two years later, Charles Benedict Calvert, a future U.S. Congressman, purchased the land for the campus, and Calvert founded the school later that year. On October 5, 1859, the first 34 students entered the Maryland Agricultural College, and the school became a land grant college in February 1864. During the Civil War, Confederate soldiers under Brigadier General Bradley Tyler Johnson moved past the college on July 12, 1864 as part of Jubal Early's raid on Washington, D.C. By the end of the war, financial problems forced the administrators to sell off 200 acres of land, and for the next two years the campus was used as a boys' preparatory school. Following the Civil War, in February 1866 the Maryland legislature assumed half ownership of the school; the college thus became in part a state institution. By October 1867, the school reopened with 11 students; in the next six years, enrollment grew and the school's debt was paid off. In 1873, Samuel Jones, a former Confederate Major General, became president of the college, and twenty years later the federally funded Agricultural Experiment Station was established there.
Morrill Hall was built the following year. On November 29, 1912, a fire destroyed the barracks where the students were housed, all the school's records, and most of the academic buildings, leaving only Morrill Hall untouched. There were no injuries or fatalities, and all but two students returned to the university and insisted on classes continuing. Students were housed by families in neighboring towns until housing could be rebuilt, and a large brick and concrete compass inlaid in the ground designates the former center of campus as it existed in 1912. The state took control of the school in 1916, and the institution was renamed Maryland State College; that year, the first female students enrolled at the school. On April 9, 1920, the college became part of the existing University of Maryland, replacing St. John's College. In the same year, the school on the College Park campus awarded its first PhD degrees, and in 1925 the university was accredited by the Association of American Universities. By the time the first black students enrolled at the university in 1951, enrollment had grown to nearly 10,000 students, 4,000 of whom were women.
10.
Satellite imagery
–
Satellite imagery consists of images of Earth or other planets collected by satellites. Imaging satellites are operated by governments and businesses around the world, and satellite imaging companies sell images under licence; images are licensed to governments and to businesses such as Apple Maps. The first images from space were taken on sub-orbital flights: a U.S.-launched V-2 flight on October 24, 1946 took one image every 1.5 seconds, and with an apogee of 65 miles, these photos were from five times higher than the previous record. The first satellite photographs of Earth were made on August 14, 1959 by the U.S. satellite Explorer 6, and the first satellite photographs of the Moon might have been made on October 6, 1959 by the Soviet satellite Luna 3. The Blue Marble photograph was taken from space in 1972 and has become very popular in the media. Also in 1972, the United States started the Landsat program, the largest program for acquisition of imagery of Earth from space; the Landsat Data Continuity Mission, the most recent Landsat satellite at the time, was launched on 11 February 2013. In 1977, the first real-time satellite imagery was acquired by the United States' KH-11 satellite system. All satellite images produced by NASA are published by NASA Earth Observatory and are freely available to the public. Several other countries have satellite imaging programs, and a collaborative European effort launched the ERS satellites; there are also private companies that provide commercial satellite imagery. Images can be in visible colours and in other spectra, and there are also elevation maps, usually made from radar images. Interpretation and analysis of imagery is conducted using specialized remote sensing applications. There are four types of resolution when discussing satellite imagery in remote sensing: spatial, spectral, temporal, and radiometric. Ground sample distance (GSD) is a term encompassing the optical and systemic noise sources and is useful for comparing how well one sensor can see an object on the ground within a single pixel.
For example, the GSD of Landsat is ~30 m, which means the smallest unit that maps to a single pixel within an image is ~30 m x 30 m; the latest commercial satellite has a GSD of 0.41 m. This compares to a 0.3 m resolution obtained by some early military film-based reconnaissance satellites such as Corona. The resolution of satellite images varies depending on the instrument used and the altitude of the satellite's orbit. For example, the Landsat archive offers repeated imagery at 30-meter resolution for the planet, and Landsat 7 has an average return period of 16 days. For many smaller areas, images with resolution as high as 41 cm are available. Satellite imagery is sometimes supplemented with aerial photography, which has higher resolution but is more expensive per square meter.
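The practical meaning of GSD can be checked with simple arithmetic, using the figures from the paragraph above (square pixels are assumed for simplicity):

```python
import math

def pixels_to_cover(width_m, height_m, gsd_m):
    """Number of pixels needed to image a ground region at a given GSD,
    assuming square pixels of side gsd_m (partial edge pixels rounded up)."""
    return math.ceil(width_m / gsd_m) * math.ceil(height_m / gsd_m)

# A 1 km x 1 km area at Landsat's ~30 m GSD needs a 34 x 34 pixel patch,
# while a 0.41 m commercial GSD needs roughly 2440 x 2440 pixels.
print(pixels_to_cover(1000, 1000, 30))    # 1156
print(pixels_to_cover(1000, 1000, 0.41))  # 5953600
```

The roughly 5000-fold difference in pixel count for the same ground area is why high-resolution imagery is typically sold for small areas rather than whole-planet coverage.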
11.
Wirephoto
–
Wirephoto, telephotography or radiophoto is the sending of pictures by telegraph, telephone or radio. Western Union transmitted its first halftone photograph in 1921; AT&T followed in 1924, and RCA sent a Radiophoto in 1926. The Associated Press began its Wirephoto service in 1935 and held a trademark on the term AP Wirephoto between 1963 and 2004. The first AP photo sent by wire depicted the crash of a small plane in New York's Adirondack Mountains. Édouard Belin's Belinograph of 1913 scanned images using a photocell and transmitted them over ordinary phone lines; in Europe, services similar to a wirephoto were called a Belino. The first wirephoto systems were slow and did not reproduce well. In the 1930s, wirephoto machines of any reasonable speed were very large and expensive and required a dedicated phone line, and news media firms like the Associated Press used expensive leased lines to transmit wirephotos. In the mid-1930s a technology battle began for less expensive portable wirephoto equipment that could transmit photos over standard phone lines. A prototype device in the experimental stage was available in San Francisco in 1935 when the large Navy airship Macon crashed into the Pacific off the coast of California; a photo was taken and transmitted to New York over regular phone lines. Later, a wirephoto copier and transmitter that could be carried anywhere and needed only a standard long-distance phone line was put into use by International News Photos. During the US leaflet-dropping campaign aimed at the then Empire of Japan, Honolulu would transmit radiophoto images to Saipan depicting proposed leaflet messages for the printing press on Saipan to produce.
12.
Videotelephony
–
Videotelephony comprises the technologies for the reception and transmission of audio-video signals by users at different locations, for communication between people in real time. A videophone is a telephone with a display, capable of simultaneous video and audio. Videoconferencing implies the use of this technology for a group or organizational meeting rather than for individuals, while telepresence may refer either to a high-quality videotelephony system or to meetup technology which goes beyond video into robotics. Videoconferencing has also been called visual collaboration and is a type of groupware. It is also used in commercial and corporate settings to facilitate meetings and conferences. Simple analog videophone communication could be established as early as the invention of the television; such an antecedent usually consisted of two closed-circuit television systems connected via cable or radio. An example of that was the German Reich Postzentralamt video telephone network serving Berlin. The development of the crucial video technology first started in the latter half of the 1920s in the United Kingdom and the United States, spurred notably by John Logie Baird and AT&T's Bell Labs. This occurred, in part at least with AT&T, to serve as an adjunct supplementing the use of the telephone, and a number of organizations believed that videotelephony would be superior to plain voice communications. However, video technology was to be deployed in analog television broadcasting long before it could become practical, or popular, for videophones. During the first manned space flights, NASA used two radio-frequency video links, one in each direction. TV channels routinely use this type of videotelephony when reporting from distant locations; the news media were to become regular users of mobile links to satellites using specially equipped trucks, and much later via special satellite videophones in a briefcase.
This technique was very expensive, though, and could not be used for applications such as telemedicine and distance education. Videotelephony developed in parallel with conventional voice telephone systems from the mid-to-late 20th century. Only in the late 20th century, with the advent of powerful video codecs combined with high-speed Internet broadband and ISDN service, did videotelephony become a practical technology for regular use. In the 1980s, digital telephony transmission networks became possible, such as ISDN networks, assuring a guaranteed bit rate for compressed video. During this time, there was also research into other forms of digital video communication. Many of these technologies, such as the media space, are not as widely used today as videoconferencing but were still an important area of research. The first dedicated systems started to appear on the market as ISDN networks were expanding throughout the world. One of the first commercial videoconferencing systems sold to companies came from PictureTel Corp., which had an initial public offering in November 1984; the company also secured a patent for a codec for full-motion videoconferencing. In 1992 CU-SeeMe was developed at Cornell by Tim Dorcey et al. In 1995 the first public videoconference between North America and Africa took place, linking a technofair in San Francisco with a techno-rave and cyberdeli in Cape Town
13.
Television standards conversion
–
Television standards conversion is the process of changing one type of TV system to another, most commonly from NTSC to PAL or the other way around. This is done so TV programs in one nation may be viewed in a nation with a different standard: the video is fed through a video standards converter that changes it to a different video system. Converting between different numbers of pixels and different frame rates in video pictures is a difficult technical problem, but the exchange of TV programming makes standards conversion necessary. The first known case of TV standards conversion probably occurred in Europe a few years after World War II, mainly with the RTF; the problem got worse with the introduction of PAL, SECAM, and the French 819-line service. Until the 1980s, standards conversion was so difficult that 24 frame/s 16 mm or 35 mm film was the medium of programming interchange. Perhaps the most technically challenging conversion to make is PAL to NTSC: PAL is 625 lines at 50 fields/s, while NTSC is 525 lines at 59.94 fields/s, so the two TV standards are, for all practical purposes, temporally and spatially incompatible with each other. Aside from the line counts being different, converting to a format that requires 60 fields every second from a format that has only 50 fields poses difficulty: every second, an additional 10 fields must be generated, so the converter has to create new frames in real time. One signal type that is not transferred, except on some very expensive converters, is the closed captioning signal. Teletext signals do not need to be transferred, but the data stream should be if it is technologically possible to do so. With HDTV broadcasting this is less of an issue, for the most part meaning only passing the captioning datastream on to the new source material; however, DVB and ATSC have significantly different captioning datastream types.
In chroma subsampling notation, the three terms of the ratio are the number of luma samples, followed by the number of samples of each of the two color components, U/Cb then V/Cr, for each complete sample area. The sampling principles above apply to both digital and analog television. The 3:2 pulldown conversion process for 24 frame/s film to television creates a slight error in the video signal compared to the original film frames. This is one reason why NTSC films viewed on typical home equipment may not appear as smooth as when viewed in a cinema; the phenomenon is particularly apparent during slow, steady camera movements, which appear slightly jerky when telecined. This process is referred to as telecine judder. PAL material to which 2:2:2:2:2:2:2:2:2:2:2:3 pulldown has been applied suffers from a similar lack of smoothness: in effect, every 12th film frame is displayed for the duration of 3 PAL fields, whereas the other 11 frames are all displayed for the duration of 2 PAL fields, and this causes a slight hiccup in the video about twice a second
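The field cadences described above can be made concrete with a short sketch. The function below (a simplified illustration, not broadcast-grade code; the name `pulldown_fields` is our own) expands a sequence of film frames into the field sequence produced by a repeating pulldown cadence:

```python
def pulldown_fields(num_film_frames, cadence=(3, 2)):
    """Map film frames to interlaced fields using a repeating cadence.

    With the 3:2 cadence, frames alternately span 3 and 2 fields, so
    4 film frames become 10 fields (24 frames/s -> 60 fields/s).
    """
    fields = []
    for i in range(num_film_frames):
        repeat = cadence[i % len(cadence)]
        fields.extend([i] * repeat)  # frame i is shown for `repeat` fields
    return fields

# 4 film frames -> 10 fields; the uneven 3,2,3,2 grouping is the
# source of telecine judder on slow pans.
print(pulldown_fields(4))  # [0, 0, 0, 1, 1, 2, 2, 2, 3, 3]
```

The PAL cadence mentioned above, `(2,) * 11 + (3,)`, stretches every 12th frame across three fields (12 frames become 25 fields), which produces the twice-a-second hiccup described in the text.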
14.
Statistical classification
–
An example would be assigning a given email to the spam or non-spam class, or assigning a diagnosis to a given patient as described by observed characteristics of the patient. Classification is an example of pattern recognition. In the terminology of machine learning, classification is considered an instance of supervised learning, i.e. learning where a training set of correctly identified observations is available. The corresponding unsupervised procedure is known as clustering, and involves grouping data into categories based on some measure of inherent similarity or distance. Often, the individual observations are analyzed into a set of quantifiable properties, known variously as explanatory variables or features. These properties may variously be categorical, ordinal, integer-valued or real-valued. Other classifiers work by comparing observations to previous observations by means of a similarity or distance function. An algorithm that implements classification, especially in a concrete implementation, is known as a classifier. The term classifier sometimes also refers to the mathematical function, implemented by a classification algorithm, that maps input data to a category. Terminology across fields is quite varied: in machine learning, the observations are often known as instances, the explanatory variables are termed features, and the possible categories to be predicted are classes. Classification and clustering are examples of the more general problem of pattern recognition. A common subclass of classification is probabilistic classification; algorithms of this nature use statistical inference to find the best class for a given instance. Unlike other algorithms, which simply output a best class, probabilistic algorithms output a probability of the instance being a member of each of the possible classes. The best class is then selected as the one with the highest probability.
Such probabilistic algorithms have numerous advantages over non-probabilistic classifiers: they output a confidence value associated with their choice, and, correspondingly, they can abstain when the confidence of choosing any particular output is too low. Early work on statistical classification assumed that data values within each of the two groups had a multivariate normal distribution. The extension of this context to more than two groups has also been considered, with a restriction imposed that the classification rule should be linear. Bayesian procedures tend to be computationally expensive, and in the days before Markov chain Monte Carlo computations were developed, they were often impractical. Classification can be thought of as two separate problems: binary classification and multiclass classification. In binary classification, a better understood task, only two classes are involved, whereas multiclass classification involves assigning an object to one of several classes. Since many classification methods have been developed specifically for binary classification, multiclass classification often requires the combined use of multiple binary classifiers. Most algorithms describe an individual instance whose category is to be predicted using a feature vector of individual, measurable properties of the instance.
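The probabilistic-output and abstention behavior described above can be sketched in a few lines. The code below is a toy nearest-centroid model with a softmax over negative squared distances; the function names and the abstention threshold are our own illustrative choices, not any standard library's API:

```python
import math

def train_centroids(X, y):
    """Compute the mean feature vector (centroid) of each class."""
    sums, counts = {}, {}
    for xi, yi in zip(X, y):
        counts[yi] = counts.get(yi, 0) + 1
        s = sums.setdefault(yi, [0.0] * len(xi))
        for j, v in enumerate(xi):
            s[j] += v
    return {c: [v / counts[c] for v in s] for c, s in sums.items()}

def predict_proba(centroids, x):
    """Softmax over negative squared distances: one probability per class."""
    scores = {c: -sum((a - b) ** 2 for a, b in zip(x, m))
              for c, m in centroids.items()}
    mx = max(scores.values())
    exps = {c: math.exp(s - mx) for c, s in scores.items()}
    z = sum(exps.values())
    return {c: e / z for c, e in exps.items()}

def classify(centroids, x, threshold=0.7):
    """Return the most probable class, or None (abstain) below threshold."""
    probs = predict_proba(centroids, x)
    best = max(probs, key=probs.get)
    return best if probs[best] >= threshold else None
```

An input near one class centroid is labeled confidently, while an input equidistant from both centroids gets probability 0.5 for each class and the classifier abstains.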
15.
Signal processing
–
According to Alan V. Oppenheim and Ronald W. Schafer, the principles of signal processing can be found in the classical numerical analysis techniques of the 17th century. Oppenheim and Schafer further state that the digitalization or digital refinement of these techniques can be found in the digital control systems of the 1940s and 1950s. Typical applications include feature extraction, such as image understanding and speech recognition; quality improvement, such as noise reduction and image enhancement; and compression, including audio compression, image compression, and video compression. Analog signal processing involves linear electronic circuits as well as non-linear ones. The former are, for instance, passive filters, active filters, additive mixers, and integrators; non-linear circuits include compandors, multipliers, voltage-controlled filters, voltage-controlled oscillators and phase-locked loops. Continuous-time signal processing is for signals that vary over a continuous domain; its methods include the time domain, frequency domain, and complex frequency domain. This technology was a predecessor of digital processing, and is still used in advanced processing of gigahertz signals. Digital signal processing is the processing of digitized, discrete-time sampled signals. Processing is done by general-purpose computers or by digital circuits such as ASICs, field-programmable gate arrays or specialized digital signal processors. Typical arithmetical operations include fixed-point and floating-point, real-valued and complex-valued arithmetic; other typical operations supported by the hardware are circular buffers and look-up tables. Examples of algorithms are the fast Fourier transform, the finite impulse response (FIR) filter and the infinite impulse response (IIR) filter. Nonlinear signal processing involves the analysis and processing of signals produced from nonlinear systems and can be in the time, frequency, or spatio-temporal domains. Nonlinear systems can produce complex behaviors including bifurcations, chaos and harmonics
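Of the algorithms just listed, the FIR filter is the simplest to sketch: the output is a direct convolution of the input with a finite set of coefficients. The example below (a minimal illustration, not an optimized DSP routine) applies a 4-tap moving average:

```python
def fir_filter(x, h):
    """Apply an FIR filter by direct convolution: y[n] = sum_k h[k]*x[n-k]."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, hk in enumerate(h):
            if n - k >= 0:  # samples before time 0 are taken as zero
                acc += hk * x[n - k]
        y.append(acc)
    return y

# A 4-tap moving average smooths a step input over four samples.
h = [0.25, 0.25, 0.25, 0.25]
print(fir_filter([0, 0, 4, 4, 4, 4], h))  # [0.0, 0.0, 1.0, 2.0, 3.0, 4.0]
```

Because the impulse response `h` has finite length, the output settles exactly once the step has passed through all four taps; this finite memory is what distinguishes FIR from IIR filters.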
16.
Pattern recognition
–
Pattern recognition systems are in many cases trained from labeled training data, but when no labeled data are available, other algorithms can be used to discover previously unknown patterns. The terms pattern recognition, machine learning, data mining and knowledge discovery in databases are hard to separate; in pattern recognition there may be a greater interest in formalizing, explaining and visualizing the pattern, while machine learning traditionally focuses on maximizing the recognition rates. In machine learning, pattern recognition is the assignment of a label to a given input value. In statistics, discriminant analysis was introduced for this purpose in 1936. An example of pattern recognition is classification, which attempts to assign each input value to one of a given set of classes; however, pattern recognition is a more general problem that encompasses other types of output as well. Pattern recognition algorithms generally aim to provide a reasonable answer for all possible inputs and to perform the most likely matching of the inputs. This is opposed to pattern matching algorithms, which look for exact matches in the input with pre-existing patterns. Pattern recognition is generally categorized according to the type of learning procedure used to generate the output value. Supervised learning assumes that a set of training data has been provided, consisting of a set of instances that have been properly labeled by hand with the correct output. A combination of the two that has recently been explored is semi-supervised learning, which uses a combination of labeled and unlabeled data. Note that in cases of unsupervised learning, there may be no training data at all to speak of. Sometimes different terms are used to describe the corresponding supervised and unsupervised learning procedures for the same type of output.
In some fields, the terminology is different; in community ecology, for example, classification usually refers to cluster analysis. The piece of input data for which an output value is generated is formally termed an instance. The instance is formally described by a vector of features, which together constitute a description of all known characteristics of the instance. Typically, features are either categorical, ordinal, integer-valued or real-valued; often, categorical and ordinal data are grouped together, and likewise for integer-valued and real-valued data. Furthermore, many algorithms work only in terms of categorical data. Many common pattern recognition algorithms are probabilistic in nature, in that they use statistical inference to find the best label for a given instance. Unlike other algorithms, which simply output a best label, probabilistic algorithms also output a probability of the instance being described by the given label. In addition, many probabilistic algorithms output a list of the N best labels with associated probabilities, for some value of N; when the number of possible labels is fairly small, N may be set so that the probability of all possible labels is output. Probabilistic algorithms have many advantages over non-probabilistic algorithms: they output a confidence value associated with their choice, and, correspondingly, they can abstain when the confidence of choosing any particular output is too low
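The contrast drawn above between pattern matching (exact hits only) and pattern recognition (most likely match for every input) can be sketched in a few lines. The templates and function names below are our own toy examples:

```python
def exact_match(templates, x):
    """Pattern *matching*: succeed only on an exact template hit."""
    for label, t in templates.items():
        if t == x:
            return label
    return None  # no answer for inputs that match no template

def nearest_match(templates, x):
    """Pattern *recognition*: always return the most likely (closest) label."""
    return min(templates,
               key=lambda label: sum((a - b) ** 2
                                     for a, b in zip(templates[label], x)))

templates = {'line': [1.0, 1.0, 1.0], 'step': [0.0, 0.0, 1.0]}
# A slightly noisy input defeats exact matching but not recognition.
print(exact_match(templates, [1.0, 1.0, 0.9]))    # None
print(nearest_match(templates, [1.0, 1.0, 0.9]))  # line
```

Exact matching returns nothing for the perturbed input, while the recognizer still produces the most likely label, which is precisely the "reasonable answer for all possible inputs" property described above.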
17.
Graphical projection
–
Graphical projection is a protocol, used in technical drawing, by which an image of a three-dimensional object is projected onto a planar surface without the aid of numerical calculation. The projection is achieved by the use of imaginary projectors; the projected, mental image becomes the technician's vision of the desired, finished picture. By following the protocol the technician may produce the envisioned picture on a planar surface such as drawing paper. The protocols provide a uniform imaging procedure among people trained in technical graphics. The orthographic projection is derived from the principles of descriptive geometry and is a two-dimensional representation of a three-dimensional object; it is the projection type of choice for working drawings. Within parallel projection there is a subcategory known as pictorials. Pictorials show an image of an object as viewed from a skew direction in order to reveal all three directions (axes) of space in one picture. Parallel projection pictorial instrument drawings are used to approximate graphical perspective projections, but there is attendant distortion in the approximation. Because pictorial projections inherently contain this distortion, great liberties may then be taken in the instrument drawing of pictorials for economy of effort. Parallel projection pictorials rely on the technique of axonometric projection. Axonometric projection is a type of parallel projection used to create a pictorial drawing of an object. There are three types of axonometric projection: isometric, dimetric, and trimetric projection. In isometric pictorials, the direction of viewing is such that the three axes of space appear equally foreshortened, and there is an angle of 120° between them. As the distortion caused by foreshortening is uniform, the proportionality of all sides and lengths is preserved, and this enables measurements to be read or taken directly from the drawing.
Approximations are common in dimetric drawings. In trimetric pictorials, the direction of viewing is such that all three axes of space appear unequally foreshortened. The scale along each of the three axes and the angles among them are determined separately as dictated by the angle of viewing; approximations in trimetric drawings are common. In oblique projections the parallel projection rays are not perpendicular to the viewing plane, as they are in orthographic projection. In both orthographic and oblique projection, parallel lines in space appear parallel on the projected image. Because of its simplicity, oblique projection is used exclusively for pictorial purposes rather than for formal, working drawings. In an oblique pictorial drawing, the angles among the axes as well as the foreshortening factors are arbitrary
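The isometric case described above, with the three axes drawn 120° apart, can be sketched as a pair of formulas. The code below (our own minimal illustration) projects a 3D point with the y axis drawn straight up and the x and z axes sloping down at 30° to either side:

```python
import math

COS30 = math.cos(math.radians(30))
SIN30 = math.sin(math.radians(30))

def isometric(x, y, z):
    """Project a 3D point to 2D drawing coordinates with the three axes
    drawn 120 degrees apart (y up, x down-right, z down-left).

    Note: a true isometric *projection* also foreshortens every axis by a
    uniform factor of sqrt(2/3); isometric *drawings* conventionally omit
    that uniform factor, which is what lets lengths be read directly off
    the drawing.
    """
    u = (x - z) * COS30
    v = y - (x + z) * SIN30
    return u, v

# The three unit axes land 120 degrees apart on the page:
print(isometric(1, 0, 0))  # x axis: 30 degrees below horizontal, right
print(isometric(0, 1, 0))  # y axis: straight up
print(isometric(0, 0, 1))  # z axis: 30 degrees below horizontal, left
```

Because the same formula treats all three axes symmetrically, every axis-aligned unit segment projects to the same on-page length, which is the uniform foreshortening the text describes.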
18.
Hidden Markov model
–
A hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states. An HMM can be presented as the simplest dynamic Bayesian network. The mathematics behind the HMM were developed by L. E. Baum and coworkers; it is closely related to earlier work on the optimal nonlinear filtering problem by Ruslan L. Stratonovich. In simpler Markov models, the state is directly visible to the observer. In a hidden Markov model, the state is not directly visible; each state has a probability distribution over the possible output tokens. Therefore, the sequence of tokens generated by an HMM gives some information about the sequence of states. In its discrete form, a hidden Markov process can be visualized as a generalization of the urn problem with replacement. Consider this example: in a room that is not visible to an observer there is a genie. The room contains urns X1, X2, X3, …, each of which contains a known mix of balls, each ball labeled y1, y2, y3, …. The genie chooses an urn in that room and randomly draws a ball from that urn; it then puts the ball onto a conveyor belt, where the observer can observe the sequence of the balls but not the sequence of urns from which they were drawn. The genie has some procedure for choosing urns: the choice of the urn for the n-th ball depends only upon a random number and the choice of the urn for the (n−1)-th ball. Because the choice of urn does not directly depend on the urns chosen before this single previous urn, this is called a Markov process; it can be described by the upper part of Figure 1. The Markov process itself cannot be observed, only the sequence of labeled balls; this is illustrated by the lower part of the diagram shown in Figure 1, where one can see that balls y1, y2, y3, y4 can be drawn at each state. However, the observer can work out other information, such as the likelihood that the ball came from each of the urns. The diagram below shows the general architecture of an instantiated HMM.
Each oval shape represents a random variable that can adopt any of a number of values. The random variable x(t) is the hidden state at time t; the random variable y(t) is the observation at time t. The arrows in the diagram denote conditional dependencies: the value of the hidden variable x(t) depends only on the value of the hidden variable x(t−1), which is called the Markov property. Similarly, the value of the observed variable y(t) depends only on the value of the hidden variable x(t). In the standard type of hidden Markov model considered here, the state space of the hidden variables is discrete
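The urn-and-genie process, and the observer's inference about which urn produced the balls, can be sketched directly. The transition, emission and initial distributions below are hypothetical examples of our own; the filter is a plain forward (filtering) recursion:

```python
import random

def sample_hmm(trans, emit, init, steps, seed=0):
    """Generate (hidden urn sequence, observed ball sequence) from an HMM.

    trans[i][j]: probability the genie moves from urn i to urn j.
    emit[i][k]:  probability urn i yields a ball labeled k.
    init[i]:     probability of starting at urn i.
    """
    rng = random.Random(seed)
    def draw(dist):
        return rng.choices(range(len(dist)), weights=dist)[0]
    states, obs = [], []
    s = draw(init)
    for _ in range(steps):
        states.append(s)
        obs.append(draw(emit[s]))  # observer sees only this ball label
        s = draw(trans[s])
    return states, obs

def filter_posterior(trans, emit, init, obs):
    """Forward filtering: P(current urn | balls observed so far)."""
    belief = list(init)
    for t, o in enumerate(obs):
        if t > 0:  # propagate through the transition model
            belief = [sum(belief[i] * trans[i][j] for i in range(len(trans)))
                      for j in range(len(trans))]
        belief = [belief[s] * emit[s][o] for s in range(len(belief))]
        z = sum(belief)
        belief = [b / z for b in belief]  # condition on the observed ball
    return belief
```

With two urns that favor different ball colors, a run of same-colored balls quickly concentrates the observer's belief on one urn even though the urn sequence itself is never seen.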
19.
Image restoration
–
Image restoration is the operation of taking a corrupt/noisy image and estimating the clean, original image. Corruption may come in many forms, such as motion blur and noise. Image enhancement techniques provided by imaging packages use no a priori model of the process that created the image. With image enhancement, noise can effectively be removed by sacrificing some resolution; in a fluorescence microscope, however, resolution in the z-direction is poor to begin with, so more advanced image processing techniques must be applied to recover the object. The objective of image restoration techniques is to reduce noise and recover resolution loss. Image restoration techniques are performed either in the image (spatial) domain or the frequency domain. Direct deconvolution, because of its direct inversion of the point spread function (PSF), which typically has a poor matrix condition number, amplifies noise; also, the blurring process is conventionally assumed to be shift-invariant. Hence more sophisticated techniques, such as regularized deblurring, have been developed to offer robust recovery under different types of noise
20.
Independent component analysis
–
In signal processing, independent component analysis (ICA) is a computational method for separating a multivariate signal into additive subcomponents. This is done by assuming that the subcomponents are non-Gaussian signals and statistically independent of each other. ICA is a special case of blind source separation. A common example application is the cocktail party problem of listening in on one person's speech in a noisy room. Independent component analysis attempts to decompose a multivariate signal into independent non-Gaussian signals. As an example, sound is usually a signal that is composed of the numerical addition, at each time t, of signals from several sources. The question then is whether it is possible to separate these contributing sources from the observed total signal. When the statistical independence assumption is correct, blind ICA separation of a mixed signal gives very good results; it is also used, for analysis purposes, on signals that are not supposed to be generated by mixing. A simple application of ICA is the cocktail party problem, where the underlying speech signals are separated from sample data consisting of people talking simultaneously in a room. Usually the problem is simplified by assuming no time delays or echoes. An important note to consider is that if N sources are present, at least N observations (e.g. microphones) are needed to recover the original signals; the underdetermined and overdetermined cases have also been investigated. That the ICA separation of mixed signals gives very good results is based on two assumptions and three effects of mixing source signals. Two assumptions: the source signals are independent of each other, and the values in each source signal have non-Gaussian distributions. Three effects of mixing source signals: Independence: as per assumption 1, the source signals are independent; however, their signal mixtures are not, because the signal mixtures share the same source signals. Normality: according to the central limit theorem, the distribution of a sum of independent random variables with finite variance tends towards a Gaussian distribution.
Loosely speaking, a sum of two independent random variables usually has a distribution that is closer to Gaussian than either of the two original variables; here we consider the value of each signal as a random variable. Complexity: the temporal complexity of any signal mixture is greater than that of its simplest constituent source signal. These principles contribute to the basic establishment of ICA. ICA finds the independent components by maximizing the statistical independence of the estimated components. We may choose one of many ways to define a proxy for independence, and this choice governs the form of the ICA algorithm. The non-Gaussianity family of ICA algorithms, motivated by the central limit theorem, uses kurtosis as a measure of non-Gaussianity. Whitening and dimension reduction can be achieved with principal component analysis or singular value decomposition; whitening ensures that all dimensions are treated equally a priori before the algorithm is run
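The "normality" effect above, that mixtures are closer to Gaussian than their sources, is easy to demonstrate numerically with kurtosis, the same statistic the non-Gaussianity family of ICA algorithms exploits. The sketch below (our own toy illustration) compares a uniform source with a sum of two uniform sources:

```python
import random

def excess_kurtosis(xs):
    """Sample excess kurtosis: about 0 for a Gaussian, negative for flat
    distributions such as the uniform (about -1.2)."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    return sum((x - m) ** 4 for x in xs) / (n * var ** 2) - 3.0

rng = random.Random(0)
s1 = [rng.uniform(-1, 1) for _ in range(20000)]
s2 = [rng.uniform(-1, 1) for _ in range(20000)]
mix = [a + b for a, b in zip(s1, s2)]  # a simple two-source mixture

# Each uniform source is strongly non-Gaussian (kurtosis near -1.2);
# the mixture (triangular distribution) is closer to Gaussian (near -0.6).
print(excess_kurtosis(s1), excess_kurtosis(mix))
```

ICA runs this logic in reverse: among all candidate unmixing directions, it prefers those whose outputs are *most* non-Gaussian, since those are the ones that look like sources rather than mixtures.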
21.
Linear filter
–
Linear filters process time-varying input signals to produce output signals, subject to the constraint of linearity. This behavior results from systems composed solely of components classified as having a linear response. Most filters implemented in analog electronics, in digital signal processing, or in mechanical systems are classified as causal, time-invariant, and linear signal processing filters. The general concept of linear filtering is also used in statistics and data analysis; this includes non-causal filters and filters in more than one dimension, such as those used in image processing. The frequency response, given by the filter's transfer function H, is an alternative characterization of the filter. The frequency response may be tailored to, for instance, eliminate unwanted frequency components from an input signal, or to limit an amplifier to signals within a particular band of frequencies. The impulse response h of a linear time-invariant causal filter specifies the output that the filter would produce if it were to receive an input consisting of a single impulse at time 0. An impulse in a continuous-time filter means a Dirac delta function. The impulse response completely characterizes the response of any such filter, inasmuch as any possible input signal can be expressed as a combination of weighted delta functions. The second equation is a discrete-time version used, for example, by digital filters implemented in software. The impulse response h completely characterizes any linear time-invariant filter; the input x is said to be convolved with the impulse response h having a duration of time T. The complexity of a filter may be specified according to the order of the filter. Among the time-domain filters considered here, there are two general classes of filter transfer functions that can approximate a desired frequency response. Consider a physical system that acts as a filter, such as a system of springs and masses. Such a system is said to have a natural impulse response.
If the convolution integral above is to extend over all time, T must be set to infinity. For instance, consider a damped harmonic oscillator such as a pendulum, or a resonant L-C tank circuit. If the pendulum has been at rest and we were to strike it with a hammer, setting it in motion, it would swing back and forth, say, with an amplitude of 10 cm. After 10 minutes, say, the pendulum would still be swinging but the amplitude would have decreased to 5 cm; after another 10 minutes its amplitude would be only 2.5 cm, then 1.25 cm, and so on. However, it would never come to a complete rest, and we therefore call that response to the impulse infinite in duration. The complexity of such a system is specified by its order N. A filter implemented in a computer program is a discrete-time system; a different set of mathematical concepts defines the behavior of such systems
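The pendulum's halving amplitude has a direct discrete-time analogue in a one-pole recursive (IIR) filter, sketched below (a minimal illustration of our own):

```python
def iir_first_order(x, a):
    """y[n] = a*y[n-1] + x[n]: a one-pole IIR filter whose impulse
    response a**n decays geometrically but never reaches exactly zero,
    i.e. it is infinite in duration."""
    y, prev = [], 0.0
    for xn in x:
        prev = a * prev + xn
        y.append(prev)
    return y

# Impulse response with a = 0.5: like the struck pendulum, the amplitude
# halves at each step, yet the response never becomes exactly zero.
impulse = [1.0] + [0.0] * 5
print(iir_first_order(impulse, 0.5))  # [1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125]
```

Contrast this with an FIR filter, whose impulse response is exactly zero after a finite number of samples; the feedback term `a * y[n-1]` is what makes the response infinite.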
22.
Artificial neural network
–
Each neural unit is connected with many others, and links can enhance or inhibit the activation state of adjoining neural units. Each individual neural unit computes using a summation function. There may be a threshold function or limiting function on each connection and on the unit itself, such that the signal must surpass the limit before propagating to other neurons. These systems are self-learning and trained, rather than explicitly programmed. Neural networks typically consist of multiple layers or a cube design, and the signal path traverses from the first to the last layer of neural units. Back propagation is the use of forward stimulation to reset weights on the "front" neural units. More modern networks are a bit more free-flowing in terms of stimulation and inhibition, with connections interacting in a more chaotic way. Dynamic neural networks are the most advanced, in that they can dynamically, based on rules, form new connections. The goal of the neural network is to solve problems in the same way that the human brain would, although several neural networks are far more abstract. New brain research often stimulates new patterns in neural networks; one new approach is using connections which span much further and link processing layers, rather than always being localized to adjacent neurons. Neural networks are based on real numbers, with the value of the core unit being a numerical quantity. An interesting facet of these systems is that they are unpredictable in their success with self-learning: after training, some become great problem solvers and others don't perform as well. In order to train them, several thousand cycles of interaction typically occur. Warren McCulloch and Walter Pitts created a computational model for neural networks based on mathematics and algorithms called threshold logic. This model paved the way for neural network research to split into two distinct approaches: one focused on biological processes in the brain and the other on the application of neural networks to artificial intelligence.
This work led to the paper by Kleene on nerve networks, "Representation of Events in Nerve Nets and Finite Automata," in Automata Studies, ed. C. E. Shannon, Annals of Mathematics Studies, no. 34 (Princeton, N.J.: Princeton University Press, 1956). In the late 1940s the psychologist Donald Hebb created a hypothesis of learning based on the mechanism of neural plasticity that is now known as Hebbian learning. Hebbian learning is considered to be a typical unsupervised learning rule. Researchers started applying these ideas to computational models in 1948 with Turing's B-type machines. Farley and Wesley A. Clark first used computational machines, then called "calculators," to simulate a Hebbian network; other neural network computational machines were created by Rochester, Holland, Habit, and Duda. Frank Rosenblatt created the perceptron, an algorithm for pattern recognition based on a computer learning network using simple addition and subtraction
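Rosenblatt's perceptron rule really is just addition and subtraction: on each misclassified example, the input is added to or subtracted from the weights. A minimal sketch of our own (trained here on a linearly separable AND-style dataset):

```python
def train_perceptron(data, epochs=10, lr=1.0):
    """Perceptron rule: on each mistake, add or subtract the input
    (scaled by the learning rate) to the weights and bias."""
    dim = len(data[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, target in data:  # target is +1 or -1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != target:
                w = [wi + lr * target * xi for wi, xi in zip(w, x)]
                b += lr * target
    return w, b

def predict(w, b, x):
    """Threshold the weighted sum: the unit fires (+1) or not (-1)."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# AND gate: only (1, 1) is in the positive class; linearly separable,
# so the perceptron converges.
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # [-1, -1, -1, 1]
```

The perceptron convergence theorem guarantees this loop terminates with a separating hyperplane whenever one exists; on non-separable data (XOR, famously) it cycles forever.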
23.
Pixelation
–
Such an image is said to be pixelated. Early graphical applications such as video games ran at low resolutions with a small number of colors, and the resulting sharp edges gave curved objects and diagonal lines an unnatural appearance. Higher resolutions would soon make this type of pixelation all but invisible on the screen, but pixelation is still visible if a low-resolution image is printed on paper. In the realm of real-time 3D computer graphics, pixelation can be a problem. Here, bitmaps are applied to polygons as textures. As a camera approaches a textured polygon, simplistic nearest-neighbor texture filtering would simply zoom in on the bitmap, creating drastic pixelation. The most common solution is a technique called pixel interpolation that smoothly blends, or interpolates, the color of one pixel into the color of the adjacent pixel at high levels of zoom. This creates a more organic, but also much blurrier, image. There are a number of ways of doing this; see texture filtering for details. Pixelation is a problem unique to bitmaps. Alternatives such as vector graphics or purely geometric polygon models can scale to any level of detail. Another solution sometimes used is procedural textures: textures such as fractals that can be generated on the fly at arbitrary levels of detail.
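The interpolation idea can be sketched with simple bilinear filtering on a grayscale texture (our own minimal illustration; real texture filters such as trilinear or anisotropic filtering are more elaborate):

```python
def bilinear_sample(texture, u, v):
    """Sample a grayscale texture at fractional coordinates (u, v) by
    blending the four surrounding texels. Nearest-neighbor filtering
    would instead snap to one texel, producing blocky pixelation when
    the camera zooms in."""
    h, w = len(texture), len(texture[0])
    x0, y0 = int(u), int(v)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)  # clamp at the edge
    fx, fy = u - x0, v - y0  # fractional position within the texel cell
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bot = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bot * fy

tex = [[0, 100],
       [100, 200]]
# Halfway between all four texels: the average of 0, 100, 100, 200.
print(bilinear_sample(tex, 0.5, 0.5))  # 100.0
```

Sampling between texels returns a smooth blend instead of a hard jump, which is exactly the organic-but-blurrier trade-off described above.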
24.
Principal component analysis
–
The number of principal components is less than or equal to the smaller of the number of original variables and the number of observations. The resulting vectors form an orthogonal basis set. PCA is sensitive to the relative scaling of the original variables. PCA was invented in 1901 by Karl Pearson as an analogue of the principal axis theorem in mechanics. PCA is mostly used as a tool in exploratory data analysis. PCA can be done by eigenvalue decomposition of a data covariance matrix or singular value decomposition of a data matrix, usually after mean-centering the data matrix for each attribute. The results of a PCA are usually discussed in terms of component scores, sometimes called factor scores. PCA is the simplest of the true eigenvector-based multivariate analyses. Often, its operation can be thought of as revealing the internal structure of the data in a way that best explains the variance in the data; this is done by using only the first few principal components so that the dimensionality of the data is reduced. PCA is closely related to factor analysis: factor analysis typically incorporates more domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis. PCA can be thought of as fitting an n-dimensional ellipsoid to the data, where each axis of the ellipsoid represents a principal component. To find the axes of the ellipsoid, we must first subtract the mean of each variable from the dataset to center the data around the origin. Then, we compute the covariance matrix of the data, and calculate the eigenvalues and corresponding eigenvectors of this covariance matrix. Then we must orthogonalize the set of eigenvectors and normalize each to become unit vectors. Once this is done, each of the mutually orthogonal, unit eigenvectors can be interpreted as an axis of the ellipsoid fitted to the data.
The proportion of the variance that each eigenvector represents can be calculated by dividing the eigenvalue corresponding to that eigenvector by the sum of all eigenvalues. This procedure is sensitive to the scaling of the data. A standard result for a symmetric matrix such as XTX is that the quotient's maximum possible value is the largest eigenvalue of the matrix, which occurs when w is the corresponding eigenvector. With w found, the first principal component of a data vector x can then be given as a score t1 = x ⋅ w in the transformed co-ordinates, or as the corresponding vector in the space of the original variables. The loading vectors are thus eigenvectors of XTX. Eigenvectors corresponding to distinct eigenvalues of a symmetric matrix are orthogonal, or can be orthogonalised; the product between different components is therefore zero, and there is no sample covariance between different principal components over the dataset. Another way to characterise the principal components transformation is therefore as the transformation to coordinates which diagonalise the empirical sample covariance matrix. However, not all the principal components need to be kept
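The center-then-covariance-then-eigenvector recipe above can be sketched with power iteration, which finds the leading eigenvector of the covariance matrix without a full eigendecomposition (a pure-Python toy illustration of our own; real implementations use an optimized linear algebra library):

```python
def first_principal_component(data, iterations=100):
    """Find the leading eigenvector of the covariance matrix by power
    iteration: center the data, form the covariance matrix, then
    repeatedly apply it to a vector and renormalize."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    X = [[row[j] - means[j] for j in range(d)] for row in data]  # centered
    # covariance matrix C = X^T X / n
    C = [[sum(X[i][a] * X[i][b] for i in range(n)) / n for b in range(d)]
         for a in range(d)]
    w = [1.0] * d
    for _ in range(iterations):
        w = [sum(C[a][b] * w[b] for b in range(d)) for a in range(d)]
        norm = sum(v * v for v in w) ** 0.5
        w = [v / norm for v in w]  # keep w a unit vector
    return w

# Data lying on the line y = x: the leading component is the diagonal.
w = first_principal_component([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]])
print(w)  # approximately [0.7071, 0.7071]
```

Repeated multiplication by C amplifies the component of w along the largest-eigenvalue direction faster than along any other, so the normalized iterate converges to the first principal axis, the long axis of the fitted ellipsoid.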
25.
Self-organizing map
–
A self-organizing map (SOM) is a type of artificial neural network that is trained using unsupervised learning to produce a low-dimensional, discretized representation of the input space of the training samples, called a map. This makes SOMs useful for visualizing low-dimensional views of high-dimensional data, akin to multidimensional scaling. The artificial neural network introduced by the Finnish professor Teuvo Kohonen in the 1980s is sometimes called a Kohonen map or network. The Kohonen net is a convenient abstraction building on work on biological neural models from the 1970s. Like most artificial neural networks, SOMs operate in two modes, training and mapping: training builds the map using input examples, while mapping automatically classifies a new input vector. A self-organizing map consists of components called nodes or neurons. Associated with each node are a weight vector of the same dimension as the input data vectors, and a position in the map space. The usual arrangement of nodes is a regular spacing in a hexagonal or rectangular grid. The self-organizing map describes a mapping from a higher-dimensional input space to a lower-dimensional map space; the procedure for placing a vector from data space onto the map is to find the node with the closest weight vector to the data-space vector. Useful extensions include toroidal grids, where opposite edges are connected, and the commonly used U-Matrix. The U-Matrix value of a node is the average distance between the node's weight vector and those of its closest neighbors. In a square grid, for instance, we consider the closest 4 or 8 nodes. In maps consisting of thousands of nodes, it is possible to perform cluster operations on the map itself. The goal of learning in the self-organizing map is to cause different parts of the network to respond similarly to certain input patterns. This is partly motivated by how visual, auditory, and other sensory information is handled in separate parts of the cerebral cortex in the human brain.
The weights of the neurons are initialized either to small random values or sampled evenly from the subspace spanned by the two largest principal component eigenvectors. With the latter alternative, learning is much faster because the initial weights already give a good approximation of SOM weights. The network must be fed a large number of example vectors that represent, as closely as possible, the kinds of vectors expected during mapping. The examples are usually administered several times as iterations. When a training example is fed to the network, its Euclidean distance to all weight vectors is computed. The neuron whose weight vector is most similar to the input is called the best matching unit (BMU); the weights of the BMU and of neurons close to it in the SOM lattice are adjusted towards the input vector. The magnitude of the change decreases with time and with distance from the BMU. Depending on the implementation, training can scan the training data set systematically, draw examples randomly from the data set, or use some other sampling method.
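The training loop described above can be sketched as follows. All hyperparameters (grid size, learning rate, neighborhood radius, decay schedules) are illustrative choices, not canonical values, and the random-value initialization is the simpler of the two options the text mentions.

```python
import numpy as np

def train_som(data, grid_w=8, grid_h=8, iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal SOM training loop: a sketch under assumed hyperparameters."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    # One weight vector per node, initialized to small random values.
    weights = rng.normal(scale=0.1, size=(grid_h, grid_w, dim))
    # Fixed positions of the nodes in map space (rectangular grid).
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    pos = np.stack([ys, xs], axis=-1).astype(float)
    for t in range(iters):
        x = data[rng.integers(len(data))]            # randomly drawn example
        # Best matching unit: node whose weight vector is closest to x.
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)
        # Learning rate and neighborhood radius both decay with time.
        frac = t / iters
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 0.5
        # Gaussian neighborhood around the BMU in map space.
        grid_dist2 = ((pos - pos[bmu]) ** 2).sum(axis=-1)
        h = np.exp(-grid_dist2 / (2 * sigma ** 2))
        # Adjust the BMU and nearby nodes towards the input vector.
        weights += lr * h[..., None] * (x - weights)
    return weights

data = np.random.default_rng(1).random((500, 3))
weights = train_som(data)
```

Mapping a new vector then simply means finding its best matching unit on the trained grid.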
26.
Wavelet
–
A wavelet is a wave-like oscillation with an amplitude that begins at zero, increases, and then decreases back to zero. It can typically be visualized as a brief oscillation like one recorded by a seismograph or heart monitor. Generally, wavelets are intentionally crafted to have specific properties that make them useful for signal processing. Wavelets can be combined with portions of a known signal, using a "reverse, shift, multiply and integrate" technique called convolution, to extract information from an unknown signal. For example, a wavelet could be created to have a frequency of middle C and a short duration of roughly a 32nd note. If this wavelet were to be convolved with a signal created from the recording of a song, the resulting signal would be useful for determining when the middle C note was being played. Mathematically, the wavelet will correlate with the signal if the unknown signal contains information of similar frequency. This concept of correlation is at the core of many practical applications of wavelet theory. As a mathematical tool, wavelets can be used to extract information from many different kinds of data, including, but certainly not limited to, audio signals. Sets of wavelets are generally needed to analyze data fully. A set of complementary wavelets will decompose data without gaps or overlap so that the decomposition process is mathematically reversible. Thus, sets of complementary wavelets are useful in wavelet-based compression/decompression algorithms, where it is desirable to recover the original information with minimal loss. This is accomplished through coherent states. The word wavelet has been used for decades in digital signal processing and exploration geophysics; the equivalent French word ondelette, meaning "small wave", was used by Morlet and Grossmann in the early 1980s. Wavelet theory is applicable to several subjects. All wavelet transforms may be considered forms of time-frequency representation for continuous-time signals. Almost all practically useful discrete wavelet transforms use discrete-time filterbanks; these filter banks are called the wavelet and scaling coefficients in wavelets nomenclature.
These filterbanks may contain either finite impulse response (FIR) or infinite impulse response (IIR) filters. The product of the uncertainties of time and frequency response scale has a lower bound. Thus, in the scaleogram of a wavelet transform of this signal, such an event marks an entire region in the time-scale plane. Also, discrete wavelet bases may be considered in the context of other forms of the uncertainty principle. Wavelet transforms are broadly divided into three classes: continuous, discrete, and multiresolution-based. In continuous wavelet transforms, a signal of finite energy is projected on a continuous family of frequency bands. For instance, the signal may be represented on every frequency band of the form [f, 2f] for all positive frequencies f > 0.
27.
Low-pass filter
–
A low-pass filter is a filter that passes signals with a frequency lower than a certain cutoff frequency and attenuates signals with frequencies higher than the cutoff frequency. The exact frequency response of the filter depends on the filter design. The filter is sometimes called a high-cut filter, or treble-cut filter in audio applications. A low-pass filter is the complement of a high-pass filter. Low-pass filters provide a smoother form of a signal, removing the short-term fluctuations and leaving the longer-term trend. Filter designers will often use the low-pass form as a prototype filter, that is, a filter with unity bandwidth and impedance; the desired filter is obtained from the prototype by scaling for the desired bandwidth and impedance and transforming into the desired bandform. Examples of low-pass filters occur in acoustics, optics, and electronics. A stiff physical barrier tends to reflect higher sound frequencies, and so acts as a low-pass filter for transmitting sound; when music is playing in another room, the low notes are easily heard while the high notes are attenuated. An optical filter with the same function can correctly be called a low-pass filter, but conventionally is called a longpass filter, to avoid confusion. For current signals, a similar circuit, using a resistor and capacitor in parallel, works in a similar manner. Electronic low-pass filters are used on inputs to subwoofers and other types of loudspeakers; radio transmitters use low-pass filters to block harmonic emissions that might interfere with other communications. The tone knob on many electric guitars is a low-pass filter used to reduce the amount of treble in the sound. An integrator is another time-constant low-pass filter. Telephone lines fitted with DSL splitters use low-pass and high-pass filters to separate DSL and POTS signals sharing the same pair of wires. Low-pass filters also play a significant role in the sculpting of sound created by analogue synthesizers. The transition region present in practical filters does not exist in an ideal filter.
The ideal filter would therefore need to have infinite delay, or knowledge of the future and past. It is effectively realizable for pre-recorded digital signals by assuming extensions of zero into the past and future, or more typically by making the signal repetitive; in real-time filters the necessary delay is manifested as phase shift. Greater accuracy in approximation requires a longer delay. An ideal low-pass filter results in ringing artifacts via the Gibbs phenomenon. These can be reduced or worsened by the choice of windowing function; for example, simple truncation causes severe ringing artifacts in signal reconstruction, and to reduce these artifacts one uses window functions which drop off more smoothly at the edges. The Whittaker–Shannon interpolation formula describes how to use a perfect low-pass filter to reconstruct a continuous signal from a sampled digital signal. Real digital-to-analog converters use real filter approximations. There are many different types of filter circuits, with different responses to changing frequency.
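The smoothing behavior of a first-order (single time constant) low-pass filter, such as the RC circuit mentioned above, can be sketched in discrete time. The update rule and all parameter values below are an illustrative discretization, not the only one possible.

```python
import numpy as np

def rc_lowpass(x, cutoff_hz, fs):
    """Discrete-time sketch of a first-order RC low-pass filter.

    y[i] = y[i-1] + alpha * (x[i] - y[i-1]), with alpha = dt / (RC + dt),
    RC = 1 / (2*pi*cutoff_hz), dt = 1/fs.
    """
    rc = 1.0 / (2 * np.pi * cutoff_hz)
    dt = 1.0 / fs
    alpha = dt / (rc + dt)
    y = np.empty_like(x, dtype=float)
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])
    return y

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
# A 5 Hz long-term trend plus a 200 Hz short-term fluctuation.
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)
y = rc_lowpass(x, cutoff_hz=20.0, fs=fs)
```

With a 20 Hz cutoff, the 5 Hz trend passes nearly unchanged while the 200 Hz fluctuation is strongly attenuated, which is exactly the "smoother form of a signal" described above.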
28.
Edge detection
–
Edge detection includes a variety of mathematical methods that aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. The points at which image brightness changes sharply are typically organized into a set of curved line segments termed edges. The same problem of finding discontinuities in one-dimensional signals is known as step detection, and the problem of finding signal discontinuities over time is known as change detection. Edge detection is a fundamental tool in image processing, machine vision, and computer vision, particularly in the areas of feature detection and feature extraction. The purpose of detecting sharp changes in image brightness is to capture important events. If the edge detection step is successful, the subsequent task of interpreting the information contents in the image may therefore be substantially simplified. However, it is not always possible to obtain such ideal edges from real-life images of moderate complexity. Edge detection is one of the fundamental steps in image processing, image analysis, image pattern recognition, and computer vision techniques. The edges extracted from an image of a three-dimensional scene can be classified as either viewpoint dependent or viewpoint independent. A viewpoint independent edge typically reflects inherent properties of the objects, such as surface markings. A viewpoint dependent edge may change as the viewpoint changes, and typically reflects the geometry of the scene. A typical edge might, for instance, be the border between a block of red color and a block of yellow. In contrast, a line can be a small number of pixels of a different color on an otherwise unchanging background; for a line, there may therefore usually be one edge on each side of the line. Although certain literature has considered the detection of ideal step edges, the edges obtained from natural images are usually not at all ideal step edges.
Instead they are affected by one or several of the following effects: focal blur caused by a finite depth-of-field, and penumbral blur caused by shadows created by light sources of non-zero radius. Thus, a one-dimensional image f which has exactly one edge placed at x = 0 may be modeled as f(x) = (I_r − I_l)/2 · (erf(x/(√2 σ)) + 1) + I_l. At the left side of the edge the intensity is I_l = lim_{x→−∞} f(x), and at the right side it is I_r = lim_{x→∞} f(x); the scale parameter σ is called the blur scale of the edge. Ideally this scale parameter should be adjusted based on the quality of the image to avoid destroying its true edges. To illustrate why edge detection is not a trivial task, consider the problem of detecting edges in the following one-dimensional signal. Here, we may intuitively say that there should be an edge between the 4th and 5th pixels. Moreover, one could argue that this case is one in which there are several edges. Indeed, this is one of the reasons why edge detection may be a non-trivial problem unless the objects in the scene are particularly simple. There are many methods for edge detection, but most of them can be grouped into two categories: search-based and zero-crossing based.
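A search-based detector on a one-dimensional signal can be sketched in a few lines: estimate the gradient and look for the local maximum of its magnitude. The pixel values below are hypothetical, chosen to place a sharp jump between the 4th and 5th pixels as in the discussion above.

```python
import numpy as np

# Hypothetical one-dimensional signal with an intensity jump
# between the 4th and 5th pixels (1-indexed).
signal = np.array([5, 7, 6, 4, 152, 148, 149], dtype=float)

# Search-based edge detection: estimate the derivative with central
# differences, then take the position of maximal gradient magnitude.
gradient = np.gradient(signal)
edge_index = int(np.argmax(np.abs(gradient)))   # 0-based pixel position
```

Here the strongest gradient sits at 0-based index 3, i.e., the boundary between the 4th and 5th pixels. On noisy real images the signal would first be smoothed (e.g., with a Gaussian) before differentiation, since differentiation amplifies noise.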
29.
Fast Fourier transform
–
A fast Fourier transform (FFT) algorithm computes the discrete Fourier transform (DFT) of a sequence, or its inverse. Fourier analysis converts a signal from its original domain to a representation in the frequency domain. An FFT rapidly computes such transformations by factorizing the DFT matrix into a product of sparse factors. As a result, it manages to reduce the complexity of computing the DFT from O(n²), which arises if one simply applies the definition of the DFT, to O(n log n), where n is the data size. Fast Fourier transforms are used for many applications in engineering, science, and mathematics. The basic ideas were popularized in 1965, but some algorithms had been derived as early as 1805. The DFT is obtained by decomposing a sequence of values into components of different frequencies. This operation is useful in many fields, but computing it directly from the definition is too slow to be practical. The difference in speed can be enormous, especially for data sets where N may be in the thousands or millions; in practice, the computation time can be reduced by several orders of magnitude in such cases. The best-known FFT algorithms depend upon the factorization of N, but there are FFTs with O(N log N) complexity for all N, even for prime N. Since the inverse DFT is the same as the DFT, but with the opposite sign in the exponent and a 1/N factor, any FFT algorithm can easily be adapted for it. The development of fast algorithms for the DFT can be traced to Gauss's unpublished work in 1805, when he needed it to interpolate the orbits of the asteroids Pallas and Juno from sample observations. His method was similar to the one published in 1965 by Cooley and Tukey. While Gauss's work predated even Fourier's results in 1822, he did not analyze the computation time. Between 1805 and 1965, some versions of the FFT were published by other authors. Yates in 1932 published his version, called the interaction algorithm, which provided efficient computation of Hadamard and Walsh transforms; Yates's algorithm is still used in the field of statistical design and analysis of experiments.
In 1942, Danielson and Lanczos published their version to compute the DFT for x-ray crystallography. Cooley and Tukey published a more general version of the FFT in 1965 that is applicable when N is composite and not necessarily a power of 2. Tukey's idea arose in discussions about detecting nuclear tests, which would have required analyzing the output of sensors placed around the Soviet Union; to analyze the output of these sensors, a fast Fourier transform algorithm would be needed. Garwin gave Tukey's idea to Cooley for implementation, and Cooley and Tukey published the paper within a short six months.
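The factorization idea is easiest to see in the radix-2 Cooley–Tukey case, where a length-N DFT splits into two length-N/2 DFTs over the even- and odd-indexed samples. The sketch below (a toy implementation, restricted to power-of-two lengths) compares it against the O(N²) definition of the DFT.

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT (length must be a power of two)."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])            # DFT of even-indexed samples
    odd = fft(x[1::2])             # DFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        # Combine the half-size transforms using the twiddle factor.
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle
        out[k + n // 2] = even[k] - twiddle
    return out

def dft(x):
    """Direct O(N^2) evaluation of the DFT definition, for comparison."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
```

Both routines produce the same spectrum; only the operation count differs, which is the entire point of the FFT.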
30.
Affine transformation
–
In geometry, an affine transformation, affine map, or affinity is a function between affine spaces which preserves points, straight lines, and planes. Also, sets of parallel lines remain parallel after an affine transformation. An affine transformation does not necessarily preserve angles between lines or distances between points, though it does preserve ratios of distances between points lying on a straight line. Examples of affine transformations include translation, scaling, homothety, similarity transformation, reflection, rotation, shear mapping, and compositions of them in any combination and sequence. If X and Y are affine spaces, then every affine transformation f: X → Y is of the form x ↦ Mx + b. Unlike a purely linear transformation, an affine map need not preserve the zero point in a linear space; thus, every linear transformation is affine, but not every affine transformation is linear. All Euclidean spaces are affine, but there are affine spaces that are non-Euclidean. In affine coordinates, which include Cartesian coordinates in Euclidean spaces, another way to deal with affine transformations systematically is to select a point as the origin; then, any affine transformation is equivalent to a linear transformation followed by a translation. An affine map f: A → B between two affine spaces is a map on the points that acts linearly on the vectors. In symbols, f determines a linear transformation φ such that f(P + v) = f(P) + φ(v), and we can interpret this definition in a few other ways, as follows. If an origin O ∈ A is chosen, and B denotes its image f(O) ∈ B, the conclusion is that, intuitively, f consists of a translation and a linear map. In other words, f preserves barycenters. As shown above, an affine map is the composition of two functions: a translation and a linear map.
Ordinary vector algebra uses matrix multiplication to represent linear maps. Using an augmented matrix and an augmented vector, it is possible to represent both the translation and the linear map using a single matrix multiplication. If A is a matrix, then [y; 1] = [[A, b], [0, …, 0, 1]] · [x; 1] is equivalent to y = Ax + b. The above-mentioned augmented matrix is called an affine transformation matrix, or projective transformation matrix. This representation exhibits the set of all invertible affine transformations as the semidirect product of Kⁿ and GL(n, K), which is a group under the operation of composition of functions. Ordinary matrix-vector multiplication always maps the origin to the origin, and could therefore never represent a translation, in which the origin must necessarily be mapped to some other point. By appending the additional coordinate 1 to every vector, one considers the space to be mapped as a subset of a space with an additional dimension. In that space, the original space occupies the subset in which the final coordinate is 1, so the origin of the original space can be found at (0, …, 0, 1). A translation within the original space by means of a linear transformation of the higher-dimensional space is then possible.
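The augmented-matrix trick can be checked numerically. The example below (our own choice of a 90° rotation and a translation) shows that one augmented multiplication equals the linear map followed by the translation.

```python
import numpy as np

# Affine map y = A x + b, as a single augmented-matrix product.
theta = np.pi / 2
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])     # linear part: rotation
b = np.array([3.0, 1.0])                            # translation part

# Augmented matrix [[A, b], [0, 0, 1]] acting on the augmented vector [x, 1].
M = np.eye(3)
M[:2, :2] = A
M[:2, 2] = b

x = np.array([1.0, 0.0])
y_direct = A @ x + b                       # linear map, then translation
y_augmented = (M @ np.append(x, 1.0))[:2]  # one matrix multiplication
```

Rotating (1, 0) by 90° gives (0, 1), and adding b gives (3, 2); both computations agree, and composing affine maps reduces to multiplying their augmented matrices.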
31.
Identity function
–
In mathematics, an identity function, also called an identity relation, identity map, or identity transformation, is a function that always returns the same value that was used as its argument. In equations, the function is given by f(x) = x. Formally, if M is a set, the identity function f on M is defined to be that function with domain and codomain M which satisfies f(x) = x for all elements x in M. In other words, the function value f(x) in M is always the same as the input element x of M. The identity function on M is clearly an injective function as well as a surjective function, so it is also bijective. The identity function f on M is often denoted by idM. In set theory, where a function is defined as a particular kind of binary relation, the identity function is given by the identity relation, or diagonal, of M. If f: M → N is any function, then we have f ∘ idM = f = idN ∘ f. In particular, idM is the identity element of the monoid of all functions from M to M; since the identity element of a monoid is unique, one can define the identity function on M to be this identity element. Such a definition generalizes to the concept of an identity morphism in category theory. The identity function is a linear operator when applied to vector spaces. The identity function on the positive integers is a completely multiplicative function. In an n-dimensional vector space the identity function is represented by the identity matrix In. In a metric space the identity is trivially an isometry; an object without any symmetry has as its symmetry group the trivial group containing only this isometry. In a topological space, the identity function is always continuous.
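Two of the facts above, that the identity is the neutral element of function composition and that it is represented by the identity matrix on a vector space, can be illustrated directly (the helper names are ours):

```python
import numpy as np

def identity(x):
    """The identity function: returns its argument unchanged."""
    return x

def compose(f, g):
    """Function composition (f . g)(x) = f(g(x))."""
    return lambda x: f(g(x))

def square(x):
    return x * x

# f . id = f = id . f : the identity is the neutral element of composition.
left = compose(square, identity)(7)
right = compose(identity, square)(7)

# On an n-dimensional vector space the identity map is the identity matrix.
v = np.array([1.0, -2.0, 3.0])
Iv = np.eye(3) @ v
```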
32.
Reflection (mathematics)
–
In mathematics, a reflection is a mapping from a Euclidean space to itself that is an isometry with a hyperplane as a set of fixed points; this set is called the axis or plane of reflection. The image of a figure by a reflection is its mirror image in the axis or plane of reflection. For example, the image of the small Latin letter p for a reflection with respect to a vertical axis would look like q, and its image by reflection in a horizontal axis would look like b. A reflection is an involution: when applied twice in succession, every point returns to its original location, and every geometrical object is restored to its original state. The term reflection is sometimes used for a larger class of mappings from a Euclidean space to itself. Such isometries have a set of fixed points that is an affine subspace, but one that is possibly smaller than a hyperplane. For instance, a reflection through a point is an involutive isometry with just one fixed point; the image of the letter p under it would look like a d. This operation is known as a central inversion, and exhibits Euclidean space as a symmetric space. In a Euclidean vector space, the reflection in the point situated at the origin is the same as vector negation. Other examples include reflections in a line in three-dimensional space. Typically, however, unqualified use of the term reflection means reflection in a hyperplane. A figure that does not change upon undergoing a reflection is said to have reflectional symmetry. Some mathematicians use flip as a synonym for reflection. In plane geometry, to find the reflection of a point, drop a perpendicular from the point to the line used for reflection, and extend it the same distance on the other side; to find the reflection of a figure, reflect each point in the figure. To reflect point P through the line AB: Step 1, construct a circle with center at P and some fixed radius r to create points A′ and B′ on the line AB; Step 2, construct circles centered at A′ and B′ having radius r, and let P and Q be the points of intersection of these two circles. Point Q is then the reflection of point P through line AB. The matrix for a reflection is orthogonal with determinant −1 and eigenvalues −1, 1, 1, …, 1.
The product of two such reflection matrices is a special orthogonal matrix that represents a rotation. Every rotation is the result of reflecting in an even number of reflections in hyperplanes through the origin. Thus reflections generate the orthogonal group, and this result is known as the Cartan–Dieudonné theorem. Similarly, the Euclidean group, which consists of all isometries of Euclidean space, is generated by reflections in affine hyperplanes; in general, a group generated by reflections in affine hyperplanes is known as a reflection group. The finite groups generated in this way are examples of Coxeter groups. The reflection of a vector v in the hyperplane through the origin orthogonal to a vector a is given by Ref_a(v) = v − 2 ((v·a)/(a·a)) a; note that the second term in this equation is just twice the vector projection of v onto a.
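The reflection formula and its stated properties (involution; orthogonal matrix with determinant −1) can be verified numerically. The vectors below are arbitrary illustrative choices.

```python
import numpy as np

def reflect(v, a):
    """Reflect v in the hyperplane through the origin orthogonal to a.

    Ref_a(v) = v - 2 * (v.a)/(a.a) * a; the subtracted term is twice
    the vector projection of v onto a.
    """
    return v - 2 * (np.dot(v, a) / np.dot(a, a)) * a

a = np.array([1.0, 0.0])          # normal vector: reflect in the line x = 0
v = np.array([3.0, 2.0])
r = reflect(v, a)                 # mirror image of v

# The same reflection as a matrix: H = I - 2 a a^T / (a^T a).
H = np.eye(2) - 2 * np.outer(a, a) / np.dot(a, a)
```

Reflecting (3, 2) in the vertical axis gives (−3, 2); applying the reflection twice returns the original vector, and det(H) = −1 as stated above.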
33.
Scale (ratio)
–
The scale ratio of a model represents the proportional ratio of a linear dimension of the model to the same feature of the original. Examples include a three-dimensional scale model of a building or the scale drawings of the elevations or plans of a building. In such cases the scale is dimensionless and exact throughout the model or drawing. The scale can be expressed in four ways: in words, as a ratio, as a fraction, and as a graphical scale. Thus on an architect's drawing one might read "one centimetre to one metre", or 1:100, or 1/100. In general a representation may involve more than one scale at the same time. For example, a drawing showing a new road in elevation might use different horizontal and vertical scales. An elevation of a bridge might be annotated with arrows with a length proportional to a force loading, as in 1 cm to 1000 newtons. A weather map at some scale may be annotated with arrows at a dimensional scale of 1 cm to 20 mph. A town plan may be constructed as a scale drawing. In general the scale of a map projection depends on position and direction; the variation of scale may be considerable in small-scale maps, which may cover the globe, while in large-scale maps of small areas the variation of scale may be insignificant for most purposes. The scale of a map projection must be interpreted as a nominal scale. A scale model is a representation or copy of an object that is larger or smaller than the actual size of the object being represented; very often the model is smaller than the original and used as a guide to making the object in full size. In mathematics, the idea of geometric scaling can be generalized: the scale means for 3 or more numbers to be in continued proportion.
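The arithmetic of a 1:n scale is a simple division of linear dimensions, as this small sketch shows (the helper name and example dimensions are ours):

```python
def model_size(real_size, scale_denominator):
    """Linear dimension on a model or drawing at a 1:n scale.

    scale_denominator is the n of a 1:n scale, e.g. 100 for 1:100
    ("one centimetre to one metre").
    """
    return real_size / scale_denominator

# A 25 m (2500 cm) facade on a 1:100 architect's drawing,
# working consistently in centimetres:
drawing_cm = model_size(25 * 100, 100)   # -> 25.0 cm on paper
```

Note that the ratio applies to linear dimensions only: areas scale with the square, and volumes with the cube, of the scale factor.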
34.
Rotation
–
A rotation is a circular movement of an object around a center of rotation. A three-dimensional object always rotates around an imaginary line called a rotation axis. If the axis passes through the body's center of mass, the body is said to rotate upon itself, or spin. A rotation about an external point, e.g. the Earth about the Sun, is called a revolution or orbital revolution; the axis is called a pole. Mathematically, a rotation is a rigid body movement which, unlike a translation, keeps a point fixed. This definition applies to rotations within both two and three dimensions. All rigid body movements are rotations, translations, or combinations of the two. A rotation is simply a progressive radial orientation to a common point; that common point lies within the axis of that motion, and the axis is perpendicular to the plane of the motion. If the axis of the rotation lies external to the body in question, then the body is said to orbit. There is no fundamental difference between a "rotation" and an "orbit" or "spin"; the key distinction is simply where the axis of the rotation lies, and this distinction can be demonstrated for both "rigid" and "non-rigid" bodies. If a rotation around a point or axis is followed by a second rotation around the same point/axis, the result is a third rotation. The reverse of a rotation is also a rotation; thus, the rotations around a point/axis form a group. However, a rotation around a point or axis and a rotation around a different point/axis may result in something other than a rotation. Rotations around the x, y and z axes are called principal rotations. Rotation around any axis can be performed by taking a rotation around the x axis, followed by a rotation around the y axis, followed by a rotation around the z axis; that is to say, any spatial rotation can be decomposed into a combination of principal rotations. In flight dynamics, the principal rotations are known as yaw, pitch, and roll. This terminology is also used in computer graphics. In astronomy, rotation is a commonly observed phenomenon.
Stars, planets and similar bodies all spin around on their axes. The rotation rate of planets in the solar system was first measured by tracking visual features; stellar rotation is measured through Doppler shift or by tracking active surface features. This rotation induces a centrifugal acceleration in the reference frame of the Earth which slightly counteracts the effect of gravity the closer one is to the equator.
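The decomposition into principal rotations can be sketched with the standard 3×3 rotation matrices about the x, y, and z axes; the angles below are arbitrary illustrative values.

```python
import numpy as np

def rot_x(a):
    """Principal rotation about the x axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    """Principal rotation about the y axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    """Principal rotation about the z axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Composing principal rotations yields a general spatial rotation.
R = rot_z(0.3) @ rot_y(-0.5) @ rot_x(1.1)
v = np.array([1.0, 2.0, 3.0])
rotated = R @ v
```

Any such composition is itself a rotation: R is orthogonal with determinant +1, and it preserves lengths, reflecting the group structure described above.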
35.
Image sensor
–
An image sensor or imaging sensor is a sensor that detects and conveys the information that constitutes an image. It does so by converting the variable attenuation of light waves into signals; the waves can be light or other electromagnetic radiation. As technology changes, digital imaging tends to replace analog imaging. Early analog sensors for visible light were video camera tubes; currently, the most commonly used types are semiconductor charge-coupled devices (CCDs) or active pixel sensors in complementary metal–oxide–semiconductor (CMOS) or N-type metal-oxide-semiconductor (NMOS) technologies. Analog sensors for invisible radiation tend to involve vacuum tubes of various kinds, while digital sensors include flat-panel detectors. Today, most digital cameras use a CMOS sensor, because CMOS sensors perform better than CCDs in several respects; one example is that they can be incorporated with other functions on an integrated circuit. CCDs are still in use in cheap entry-level cameras, but are weak in burst mode. Both types of sensor accomplish the same task of capturing light. Each cell of a CCD image sensor is an analog device: when light strikes the chip, it is held as a small electrical charge in each photo sensor, and the charges are read out and amplified one line at a time until all the lines of pixels have been processed. A CMOS image sensor, by contrast, has an amplifier for each pixel, compared to the few amplifiers of a CCD. Some CMOS imaging sensors also use back-side illumination to increase the number of photons that hit the photodiode. CMOS sensors can potentially be implemented with fewer components, use less power, and/or provide faster readout than CCD sensors; they are, however, also vulnerable to static electricity discharges. Another approach is to utilize the very fine dimensions available in modern CMOS technology to implement a CCD-like structure entirely in CMOS technology; this can be achieved by separating individual poly-silicon gates by a very small gap. These hybrid sensors are still in the research phase and can potentially harness the benefits of both CCD and CMOS imagers.
There are many parameters that can be used to evaluate the performance of an image sensor, including dynamic range and signal-to-noise ratio. For sensors of comparable types, the signal-to-noise ratio and dynamic range improve as the size increases. In order to avoid interpolated color information, techniques like color co-site sampling use a mechanism to shift the color sensor in pixel steps; 3CCD cameras instead use three discrete image sensors, with the color separation done by a dichroic prism. While in general digital cameras use a flat sensor, Sony prototyped a curved sensor in 2014 to reduce or eliminate the Petzval field curvature that occurs with a flat sensor.
36.
Color correction
–
Without color correction gels, a scene may have a mix of various color temperatures. Applying color correction gels in front of light sources can alter the color of the light sources to match. Mixed lighting can produce an undesirable aesthetic when displayed on a television or in a theatre; conversely, gels may also be used to make a scene appear more natural by simulating the mix of color temperatures that occur naturally. This application is useful especially where motivated lighting is the goal. Color gels may also be used to tint lights for artistic effect. The particular color of a light source can be simplified into a correlated color temperature (CCT). The higher the CCT, the bluer the light appears; sunlight at 5600 K, for example, appears much bluer than tungsten light at 3200 K. Unlike a chromaticity diagram, the Kelvin scale reduces the light source's color into one dimension; thus, light sources of the same CCT may appear green or magenta in comparison with one another. Fluorescent lights, for example, are very green in comparison with other types of lighting, although some fluorescents are designed to have a high faithfulness to an ideal light source. This second dimension, along lines of constant CCT, is sometimes measured in terms of green–magenta balance and is sometimes referred to as tint or CC. The main color correction gels are CTB (color temperature blue) and CTO (color temperature orange): a CTB gel converts tungsten light to daylight color, and a CTO gel performs the reverse. Note that different manufacturers' gels yield slightly different colors. As well, there is no single definition of the color of daylight, since it varies depending on the location. Gels that remove the green cast of fluorescent lights are called minus green; gels that add a green cast are called plus green. Fractions such as 3/4, 1/2, 1/4, and 1/8 indicate the strength of a gel; a 1/2 CTO gel is half the strength of a full CTO gel. Color filters may also be applied over a camera lens to adjust its white balance.
In video systems, white balance can instead be achieved by digital or electronic manipulation of the signal; however, some digital cinema cameras can record an image without any digital filtering applied. Using physical color correction filters to white balance can maximize the range of the captured image. Some professional cameras designed for ENG (electronic news gathering) use filter wheels containing color correction filters, and are designed to optimize performance for different color temperatures. In film cameras, no electronic or digital manipulation of white balance is possible in the original camera negative.
37.
Westworld (film)
–
Westworld is a 1973 American science-fiction Western thriller film written and directed by novelist Michael Crichton, about amusement-park androids that malfunction and begin killing visitors. It stars Yul Brynner as an android in a futuristic Western-themed amusement park. The film served as Crichton's first theatrical feature. It was also the first feature film to use digital image processing. The film was nominated for Hugo, Nebula, and Saturn awards. Westworld was succeeded by a sequel, Futureworld, and a short-lived television series, Beyond Westworld; a new television series from HBO, based on the original film, premiered in 2016. In the then-future year of 1983, a high-tech, highly realistic adult amusement park called Delos features three themed worlds: Westworld, Medieval World, and Roman World. The resort's three worlds are populated with lifelike androids that are practically indistinguishable from human beings, each programmed in character for their assigned historical environment. For $1,000 per day, guests may indulge in any adventure with the android population of the park, including sexual encounters. Delos's tagline in its advertising promises, "Boy, have we got a vacation for you." Peter Martin, a first-time Delos visitor, and his friend John Blane, who is on a repeat visit, go to Westworld. One of the attractions is the Gunslinger, a robot programmed to instigate gunfights. The firearms issued to the guests have temperature sensors that prevent them from shooting humans or anything else with a high body temperature, but the Gunslinger's programming allows guests to draw their guns and kill it. When one of the supervising computer scientists scoffs at the analogy of an infectious disease, he is told by the chief supervisor, "We aren't dealing with ordinary machines here. These are highly complicated pieces of equipment, almost as complicated as living organisms. In some cases, they've been designed by other computers."
"We don't know exactly how they work." The malfunctions become more serious when a robotic rattlesnake bites Blane in Westworld and, against its programming, an android refuses a guest's advances in Medieval World. The failures escalate until Medieval World's Black Knight robot kills a guest in a swordfight. The resort's supervisors try to regain control by shutting down power to the entire park, but the shutdown traps them in the control room when the doors automatically lock, leaving them unable to turn the power back on. Meanwhile, the robots in all three worlds run amok, operating on reserve power. Martin and Blane, recovering from a drunken bar-room brawl, wake up in Westworld's bordello, unaware of the park's massive breakdown. When the Gunslinger challenges the men to a showdown, Blane treats the confrontation as an amusement until the robot outdraws and shoots him. Martin runs for his life, and the robot implacably follows. Martin flees to the other areas of the park, but finds only dead guests and damaged robots.
38.
Computer graphics
–
Computer graphics are pictures and films created using computers. Usually, the term refers to computer-generated image data created with the help of specialized hardware and software. It is a vast and relatively recent area of computer science; the phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often abbreviated as CG, though sometimes referred to as CGI. The overall methodology depends heavily on the underlying sciences of geometry and optics. Computer graphics is responsible for displaying art and image data effectively and meaningfully to the user, and it is also used for processing image data received from the physical world. Computer graphic development has had a significant impact on many types of media and has revolutionized animation, movies, advertising, and video games. The term computer graphics has been used in a broad sense to describe almost everything on computers that is not text or sound. Such imagery is found in and on television, newspapers, and weather reports; a well-constructed graph can present complex statistics in a form that is easier to understand and interpret. In the media, such graphs are used to illustrate papers, reports, and theses, and many tools have been developed to visualize data. Computer-generated imagery can be categorized into different types: two-dimensional, three-dimensional, and animated graphics. As technology has improved, 3D computer graphics have become more common. Computer graphics has emerged as a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Screens have been able to display art since the Lumière brothers' use of mattes to create effects for the earliest films, dating from 1895. New kinds of displays were needed to process the wealth of information resulting from such projects; early projects like the Whirlwind and SAGE projects introduced the CRT as a viable display and interaction interface and introduced the light pen as an input device.
Douglas T. Ross of the Whirlwind SAGE system performed an experiment in 1954 in which a small program he wrote captured the movement of his finger. Electronics pioneer Hewlett-Packard went public in 1957, after incorporating the decade prior, and established ties with Stanford University through its founders. This began the transformation of the southern San Francisco Bay Area into the world's leading computer technology hub, now known as Silicon Valley. The field of computer graphics developed with the emergence of computer graphics hardware, and further advances in computing led to greater advancements in interactive computer graphics. In 1959, the TX-2 computer was developed at MIT's Lincoln Laboratory; the TX-2 integrated a number of new man-machine interfaces.