Computer facial animation
Computer facial animation is an area of computer graphics that encompasses methods and techniques for generating and animating images or models of a character's face. The character can be a humanoid, an animal, a fantasy creature, etc. Because of its subject and output type, it is related to many other scientific and artistic fields, from psychology to traditional animation. The importance of human faces in verbal and non-verbal communication, together with advances in computer graphics hardware and software, has generated considerable scientific and artistic interest in computer facial animation. Although the development of computer graphics methods for facial animation began in the early 1970s, major achievements in the field are more recent, dating from the late 1980s. The body of work around computer facial animation can be divided into two main areas: techniques for generating animation data, and methods for applying such data to a character. Techniques such as motion capture and keyframing belong to the first group, while morph target animation and skeletal animation belong to the second.
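As a concrete illustration of the second group, the following is a minimal sketch of morph-target (blend-shape) animation, assuming meshes stored as NumPy vertex arrays; the target names and weights are hypothetical.

import numpy as np

def blend_morph_targets(neutral, targets, weights):
    """Blend a neutral face mesh with weighted morph targets.

    neutral: (V, 3) array of rest-pose vertex positions.
    targets: dict of target name -> (V, 3) vertex positions.
    weights: dict of target name -> blend weight, typically in [0, 1].
    """
    result = neutral.copy()
    for name, target in targets.items():
        # Each target contributes its offset from the neutral pose,
        # scaled by the animation weight for the current frame.
        result += weights.get(name, 0.0) * (target - neutral)
    return result

# Toy example: a 3-vertex "mesh" and one hypothetical "smile" target.
neutral = np.zeros((3, 3))
targets = {"smile": np.full((3, 3), 0.1)}
frame = blend_morph_targets(neutral, targets, {"smile": 0.5})

In this framing, animation data from the first group (motion capture or keyframes) reduces to a stream of per-frame weight values.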
Facial animation has become well known and popular through animated feature films and computer games, but its applications cover many more areas, such as communication, scientific simulation, and agent-based systems. With recent advances in computational power in personal and mobile devices, facial animation has transitioned from appearing only in pre-rendered content to being created at runtime. Human facial expression has been the subject of scientific investigation for more than one hundred years. The study of facial movements and expressions began from a biological point of view. After some older investigations, for example by John Bulwer in the late 1640s, Charles Darwin's book The Expression of the Emotions in Man and Animals can be considered a major point of departure for modern research in behavioural biology. Computer-based facial expression modelling and animation is not a new endeavour; the earliest work with computer-based facial representation was done in the early 1970s. The first three-dimensional facial animation was created by Parke in 1972.
In 1973, Gillenson developed an interactive system to edit line-drawn facial images. In 1974, Parke developed a parameterized three-dimensional facial model. One of the most important attempts to describe facial movements was the Facial Action Coding System (FACS). Originally developed by Carl-Herman Hjortsjö in the 1960s and updated by Ekman and Friesen in 1978, FACS defines 46 basic facial Action Units (AUs). A major group of these Action Units represents primitive movements of facial muscles in actions such as raising the brows and talking; eight AUs are for rigid three-dimensional head movements. FACS has been used for describing the desired movements of synthetic faces and for tracking facial activity. The early 1980s saw the development of the first physically based, muscle-controlled face model by Platt and the development of techniques for facial caricatures by Brennan. In 1985, the animated short film Tony de Peltrie was a landmark for facial animation: it marked the first time computer facial expression and speech animation were a fundamental part of telling the story.
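Returning to FACS, the Action Unit coding described above can be illustrated with a small lookup table; the Action Unit names follow the published FACS manual, while the prototypical expression combinations below are commonly cited examples rather than an exhaustive coding.

# A few FACS Action Units (AU number -> muscle action).
ACTION_UNITS = {
    1: "inner brow raiser",
    2: "outer brow raiser",
    5: "upper lid raiser",
    6: "cheek raiser",
    12: "lip corner puller",
    26: "jaw drop",
}

# Commonly cited AU combinations for prototypical expressions.
EXPRESSIONS = {
    "happiness": [6, 12],
    "surprise": [1, 2, 5, 26],
}

def describe(expression):
    """List the muscle actions that make up a prototypical expression."""
    return [ACTION_UNITS[au] for au in EXPRESSIONS[expression]]

print(describe("happiness"))  # ['cheek raiser', 'lip corner puller']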
The late 1980s saw the development of a new muscle-based model by Waters, the development of an abstract muscle action model by Magnenat-Thalmann and colleagues, and approaches to automatic speech synchronization by Lewis and Hill. The 1990s saw increasing activity in the development of facial animation techniques and the use of computer facial animation as a key storytelling component, as illustrated in animated films such as Toy Story, Antz, and Monsters, Inc. and computer games such as The Sims. Casper, a milestone of the decade, was the first movie in which a lead actor was produced using digital facial animation. The sophistication of such films increased after 2000. In The Matrix Reloaded and The Matrix Revolutions, dense optical flow from several high-definition cameras was used to capture realistic facial movement at every point on the face. The Polar Express used a large Vicon system to capture upwards of 150 points. Although these systems are automated, a large amount of manual clean-up effort is still needed to make the data usable.
Another milestone in facial animation was reached by The Lord of the Rings, where a character-specific shape base system was developed. Mark Sagar pioneered the use of FACS in entertainment facial animation, and FACS-based systems developed by Sagar were used on Monster House, King Kong, and other films. The generation of facial animation data can be approached in different ways: 1) marker-based motion capture of points or marks on the face of a performer; 2) markerless motion capture techniques using different types of cameras; 3) audio-driven techniques; and 4) keyframe animation. Motion capture uses cameras placed around a subject; the subject is fitted either with reflectors or with sources that determine the subject's position in space. The data recorded by the cameras is digitized and converted into a three-dimensional computer model of the subject. Until recently, the size of the detectors/sources used by motion capture systems made the technology inappropriate for facial capture; however, miniaturization and other advancements have made motion capture a viable tool for computer facial animation.
Facial motion capture was used extensively in The Polar Express by Imageworks, where hundreds of motion points were captured.
In computing, an input device is a piece of computer hardware equipment used to provide data and control signals to an information processing system such as a computer or information appliance. Examples of input devices include keyboards, scanners, digital cameras, and joysticks. Audio input devices may be used for purposes including speech recognition; many companies use speech recognition to help users operate their devices. Input devices can be categorized by the modality of input, by whether the input is discrete or continuous, and by the number of degrees of freedom involved. Keyboards are a human interface device represented as a layout of buttons; each button, or key, can be used either to input a linguistic character to a computer or to call upon a particular function of the computer. They act as the main text entry interface for most users. A keyboard is a typewriter-like device composed of a matrix of switches; traditional keyboards use spring-based buttons, though newer variations employ virtual keys or projected keyboards.
There is also the musical keyboard, an input device for musical instruments that is used to produce sound. Examples of types of keyboards include the keyer, the keyboard, the lighted program function keyboard, and the thumb keyboard. Pointing devices are the most commonly used input devices today. A pointing device is any human interface device that allows a user to input spatial data to a computer. In the case of mice and touchpads, this is achieved by detecting movement across a physical surface. Analog devices, such as 3D mice, joysticks, or pointing sticks, function by reporting their angle of deflection. Movements of the pointing device are echoed on the screen by movements of the pointer, creating a simple, intuitive way to navigate a computer's graphical user interface. Pointing devices, which are input devices used to specify a position in space, can further be classified according to whether the input is direct or indirect and whether the positional information is absolute or relative. With direct input, the input space coincides with the display space, i.e. pointing is done in the space where the visual feedback or the pointer appears.
Touchscreens and light pens involve direct input; examples involving indirect input include the trackball. For pointing devices, direct input is necessarily absolute, but indirect input may be either absolute or relative. For example, digitizing graphics tablets that do not have an embedded screen involve indirect input and sense absolute positions, and they are run in an absolute input mode, but they may be set up to simulate a relative input mode like that of a touchpad, where the stylus or puck can be lifted and repositioned (a short code sketch of this distinction appears at the end of this section). Embedded LCD tablets, referred to as graphics tablet monitors, are an extension of digitizing graphics tablets; they enable users to see the stylus position on the screen in real time. Examples of pointing devices include the mouse, touchpad, pointing stick, touchscreen, and trackball. Some devices allow many continuous degrees of freedom as input; these can be used as pointing devices, but they are also used in ways that don't involve pointing to a location in space, such as controlling a camera angle in 3D applications.
These kinds of devices are used in virtual reality systems, where input that registers six degrees of freedom is required. Input devices such as buttons and joysticks can be combined on a single physical device that could be thought of as a composite device. Many gaming devices have controllers like this. Technically, mice are composite devices, as they both track movement and provide buttons for clicking, but composite devices are generally considered to have more than two different forms of input. Examples of composite devices include the joystick controller, gamepad, paddle, jog dial/shuttle, and Wii Remote. Video input devices are used to digitize images or video from the outside world into the computer; the information can be stored in a multitude of formats depending on the user's requirements. Examples of video input devices include the digital camera, digital camcorder, portable media player, webcam, Microsoft Kinect sensor, image scanner, fingerprint scanner, barcode reader, 3D scanner, laser rangefinder, and eye gaze tracker, as well as medical imaging devices such as computed tomography, magnetic resonance imaging, positron emission tomography, and medical ultrasonography. Audio input devices are used to capture sound.
In some cases, an audio output device can be used as an input device in order to capture produced sound. Audio input devices allow a user to send audio signals to a computer for processing, recording, or carrying out commands. Devices such as microphones allow users to speak to the computer in order to record a voice message or navigate software. Aside from recording, audio input devices are used with speech recognition software. Examples of audio input devices include microphones and MIDI keyboards or other digital musical instruments. Punched cards and punched tape were widely used in the 20th century; a punched hole represented a one, and there were optical readers.
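Returning to the absolute/relative distinction among pointing devices, the following is a minimal sketch of how a cursor model might treat the two input modes; the class and screen size are hypothetical.

class Pointer:
    """Toy cursor model contrasting absolute and relative input modes."""

    def __init__(self, width, height):
        self.width, self.height = width, height
        self.x, self.y = 0, 0

    def _clamp(self, x, y):
        self.x = min(max(x, 0), self.width - 1)
        self.y = min(max(y, 0), self.height - 1)

    def absolute_input(self, x, y):
        # Touchscreens and tablet monitors report positions directly:
        # the input space maps onto the display space.
        self._clamp(x, y)

    def relative_input(self, dx, dy):
        # Mice and touchpads report motion deltas; lifting and
        # repositioning the device does not move the cursor.
        self._clamp(self.x + dx, self.y + dy)

p = Pointer(1920, 1080)
p.absolute_input(960, 540)  # jump straight to the screen center
p.relative_input(-30, 10)   # nudge the cursor as a mouse would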
Active pixel sensor
An active-pixel sensor (APS) is an image sensor in which each picture element has a photodetector and an active amplifier. There are many types of integrated-circuit active pixel sensors, including the complementary metal–oxide–semiconductor (CMOS) APS used in most cell phone cameras, web cameras, most digital pocket cameras since 2010, and most digital single-lens reflex and mirrorless interchangeable-lens cameras. Such an image sensor is produced using CMOS technology and has emerged as an alternative to charge-coupled device (CCD) image sensors. The term 'active pixel sensor' is also used to refer to the individual pixel sensor itself, as opposed to the image sensor as a whole. The term was coined in 1985 by Tsutomu Nakamura, who worked on the Charge Modulation Device active pixel sensor at Olympus, and was defined more broadly by Eric Fossum in a 1993 paper. Image sensor elements with in-pixel amplifiers were described by Noble in 1968, by Chamberlain in 1969, and by Weimer et al. in 1969, at a time when passive-pixel sensors – that is, pixel sensors without their own amplifiers or active noise-cancelling circuitry – were being investigated as a solid-state alternative to vacuum-tube imaging devices.
The MOS passive-pixel sensor used just a simple switch in the pixel to read out the photodiode's integrated charge. Pixels were arrayed in a two-dimensional structure, with an access-enable wire shared by pixels in the same row and an output wire shared by pixels in the same column; at the end of each column was an amplifier. Passive-pixel sensors suffered from many limitations, such as high noise, slow readout, and lack of scalability. The addition of an amplifier to each pixel addressed these problems and resulted in the creation of the active-pixel sensor. Noble in 1968 and Chamberlain in 1969 created sensor arrays with active MOS readout amplifiers per pixel, in the modern three-transistor configuration. The CCD was invented in October 1969 at Bell Labs. Because the MOS process was so variable and MOS transistors had characteristics that changed over time, the CCD's charge-domain operation was more manufacturable, and it eclipsed MOS passive- and active-pixel sensors. A low-resolution "mostly digital" N-channel MOSFET imager with intra-pixel amplification, for an optical mouse application, was demonstrated in 1981.
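The row/column readout structure described above can be sketched in a few lines; the gain and noise figures are illustrative, chosen only to show why the noisy shared column wire motivated moving an amplifier into each pixel.

import numpy as np

def read_out_passive_array(charges, column_gain=4.0, noise_sigma=0.05):
    """Simulate row-by-row readout of a passive-pixel MOS array.

    charges: (rows, cols) array of integrated photodiode charge.
    Each row is enabled in turn via the shared access wire; every pixel
    in that row drives its shared column wire, and a single amplifier
    at the foot of each column applies gain after noise has entered.
    """
    rows, cols = charges.shape
    rng = np.random.default_rng(0)
    image = np.empty_like(charges)
    for r in range(rows):                # access-enable wire selects row r
        row_signal = charges[r, :]       # charge dumped onto column wires
        noise = rng.normal(0.0, noise_sigma, cols)
        image[r, :] = column_gain * (row_signal + noise)
    return image

frame = read_out_passive_array(np.random.default_rng(1).random((4, 4)))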
Another type of active pixel sensor is the hybrid infrared focal plane array, designed to operate at cryogenic temperatures in the infrared spectrum. These devices consist of two chips put together like a sandwich: one chip contains detector elements made in InGaAs or HgCdTe, and the other chip, made of silicon, is used to read out the photodetectors. The exact date of origin of these devices is classified, but by the mid-1980s they were in widespread use. By the late 1980s and early 1990s, the CMOS process was well established as a well-controlled, stable process and was the baseline process for all logic and microprocessors. There was a resurgence in the use of passive-pixel sensors for low-end imaging applications and of active-pixel sensors for low-resolution, high-function applications such as retina simulation and high-energy particle detectors. However, CCDs continued to have much lower temporal noise and fixed-pattern noise and were the dominant technology for consumer applications such as camcorders as well as for broadcast cameras, where they were displacing video camera tubes.
Fossum, who worked at the NASA Jet Propulsion Laboratory, and colleagues invented the image sensor that used intra-pixel charge transfer along with an in-pixel amplifier to achieve true correlated double sampling (CDS) and low-temporal-noise operation, together with on-chip circuits for fixed-pattern noise reduction, and published the first extensive article predicting the emergence of APS imagers as the commercial successor of CCDs. Between 1993 and 1995, the Jet Propulsion Laboratory developed a number of prototype devices that validated the key features of the technology. Though primitive, these devices demonstrated good image performance with high readout speed and low power consumption. In 1995, frustrated by the slow pace of the technology's adoption, Fossum and his then-wife Dr. Sabrina Kemeny co-founded Photobit Corporation to commercialize the technology. Photobit continued to develop and commercialize APS technology for a number of applications, such as web cams, high-speed and motion capture cameras, digital radiography, endoscopy cameras, DSLRs, and camera phones.
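The correlated double sampling mentioned above is easy to state in code: each pixel is read twice, once just after reset and once after charge transfer, and the two samples are subtracted. The array sizes and noise levels below are illustrative.

import numpy as np

def correlated_double_sample(reset_level, signal_level):
    """Subtract the reset sample from the signal sample per pixel.

    The reset sample carries the per-pixel offset (including kTC reset
    noise); subtracting it cancels that common term, suppressing reset
    noise and per-pixel fixed-pattern offsets.
    """
    return signal_level - reset_level

rng = np.random.default_rng(0)
offsets = rng.normal(0.0, 0.2, (4, 4))       # per-pixel offsets (FPN)
photo_signal = rng.random((4, 4))            # photo-generated signal
reset = offsets                              # sample 1: just after reset
signal = offsets + photo_signal              # sample 2: after charge transfer
recovered = correlated_double_sample(reset, signal)  # ~= photo_signal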
Many other small image sensor companies sprang to life shortly thereafter due to the accessibility of the CMOS process, and all adopted the active pixel sensor approach. Most recently, CMOS sensor technology has spread to medium-format photography, with Phase One being the first to launch a medium-format digital back with a Sony-built CMOS sensor. Fossum now performs research on the Quanta Image Sensor (QIS) technology. The QIS, invented at Dartmouth, is a revolutionary change in the way images are collected in a camera: the goal is to count every photon that strikes the image sensor, to provide a resolution of 1 billion or more specialized photoelements (jots) per sensor, and to read out jot bit planes hundreds or thousands of times per second, resulting in terabits per second of data. APS pixels solve the scalability issues of the passive-pixel sensor; they consume less power than CCDs, have less image lag, and require less specialized manufacturing facilities. Unlike CCDs, APS sensors can combine the image sensor function and image processing functions within the same integrated circuit.
APS sensors have found markets in many consumer applications
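Returning to the Quanta Image Sensor readout described above, a minimal sketch of how intensity can be recovered from binary jot bit planes follows; the Poisson photon model and frame counts are illustrative assumptions.

import numpy as np

def reconstruct_from_bit_planes(bit_planes):
    """Sum binary jot readouts into an intensity estimate.

    bit_planes: (T, H, W) array of 0/1 frames, one per field readout.
    Each jot reports only whether at least one photoelectron arrived;
    summing many bit planes over time (and, in practice, over small
    neighborhoods of jots) recovers a graded intensity image.
    """
    return bit_planes.sum(axis=0)

rng = np.random.default_rng(0)
rate = 0.3                                   # mean photon arrivals per jot per field
planes = (rng.poisson(rate, (1000, 8, 8)) > 0).astype(np.uint8)
# The mean of the summed planes approaches 1 - exp(-rate) per field.
estimate = reconstruct_from_bit_planes(planes) / 1000.0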
A nostril is one of the two channels of the nose, from the point where they bifurcate to the external opening. In birds and mammals, they contain branched bones or cartilages called turbinates, whose function is to warm air on inhalation and remove moisture on exhalation. Fish do not breathe through their noses, but they do have two small holes used for smelling, which may indeed be called nostrils. In humans, the nasal cycle is the normal ultradian cycle of each nostril's blood vessels becoming engorged in swelling, then shrinking. The nostrils are separated by the septum. The septum can sometimes be deviated, and with extreme damage to the septum and columella the two nostrils are no longer separated and form a single larger external opening. Like other tetrapods, humans have two external nostrils and two additional nostrils at the back of the nasal cavity, inside the head; these are the choanae, and each choana contains about 1,000 strands of nasal hair. They connect the nose to the throat, aiding in respiration. Though all four nostrils were on the outside of the head of our fish ancestors, the nostrils for outgoing water migrated to the inside of the mouth, as evidenced by the discovery of Kenichthys campbelli, a 395-million-year-old fossilized fish that shows this migration in progress.
Kenichthys campbelli has two nostrils between its front teeth, similar to human embryos at an early stage; if these fail to join up, the result is a cleft palate. It is possible for humans to smell different olfactory inputs in the two nostrils and experience a perceptual rivalry akin to binocular rivalry, which occurs when there are two different inputs to the two eyes. The Procellariiformes are distinguished from other birds by having tubular extensions of their nostrils.
The Polar Express (film)
The Polar Express is a 2004 American 3D computer-animated film based on the 1985 children's book of the same name by Chris Van Allsburg, who served as one of the executive producers. Directed, co-written, and co-produced by Robert Zemeckis, the film features human characters animated using live-action motion capture. The film tells the story of a young boy who, on Christmas Eve, sees a mysterious train bound for the North Pole stop outside his window and is invited aboard by its conductor. The boy joins several other children as they embark on a journey to visit Santa Claus preparing for Christmas. The film stars Daryl Sabara, Nona Gaye, Jimmy Bennett, and Eddie Deezen, with Tom Hanks in six distinct roles. It also includes a performance by Tinashe, at age 9, as the CGI model for the female protagonist. Castle Rock Entertainment produced the film in association with Shangri-La Entertainment, ImageMovers, and Golden Mean for Warner Bros. Pictures, as Castle Rock's first animated production.
The visual effects and performance capture were done at Sony Pictures Imageworks. The film was made with a budget of $165 million, a record-breaking sum for an animated feature at the time. It was released in both conventional and IMAX 3D theaters on November 10, 2004, grossed $311.3 million worldwide, and was listed in the 2006 Guinness World Records as the first all-digital capture film. The film marks Michael Jeter's last acting role before his death and was dedicated to his memory. In Grand Rapids, Michigan, on the night of Christmas Eve in the 1950s, a boy is growing bitterly skeptical of the existence of Santa Claus. As he struggles to sleep, he witnesses the arrival of a steam locomotive on the street outside his home and dons his robe to investigate, tearing the robe's pocket as he retrieves it. Outside, the train's conductor introduces the train as the Polar Express, bound for the North Pole. The boy initially declines to board, but jumps aboard the train as it pulls away. In a passenger car, he befriends a spirited and amicable girl and a condescending know-it-all boy.
The train stops to pick up an impoverished child, Billy, who at first declines to board. As Billy sits alone in the train's rear dining car, hot chocolate is served in the passenger car, and the girl saves her hot chocolate for Billy. As she and the conductor cross to the dining car, the boy notices she left her ticket behind unpunched, but he loses hold of the ticket between the cars when he attempts to return it. The ticket reenters the passenger car, but not before the conductor notices its absence and escorts the girl back to the rear car; the know-it-all claims she will be thrown off the train. On the roof of the train, the boy meets a hobo camping there, who offers him coffee and discusses the existence of Santa Claus and belief in ghosts. The hobo skis with the boy along the tops of the cars towards the train's coal tender, where the hobo disappears. In the locomotive's cab, the boy discovers that the girl has been made to supervise driving the train while engineers Steamer and Smokey replace the train's headlight. The boy applies the brakes, and the train stops when the conductor sights a herd of caribou blocking the way. The conductor pulls Smokey's beard, causing him to let out animal-like noises that clear the caribou herd.
The train continues at extreme speed. The throttle's split pin shears off, causing the train to accelerate uncontrollably down a steep grade and onto a frozen lake. Smokey uses his hairpin to repair the throttle as the train drifts across the ice and realigns with the tracks moments before the ice breaks. The boy returns the girl's ticket for the conductor to punch, and as the three return to the passenger car, the boy is accosted by an Ebenezer Scrooge marionette, in reality controlled by the hobo, that taunts him and calls him a doubter. The train arrives at the North Pole, where the conductor announces that one of the passengers will be chosen to receive the first gift of Christmas, from Santa himself. The girl discovers Billy still alone in the rear car, and she and the boy persuade him to come along. The children sneak through an elf command center and a gift-sorting office before accidentally being dumped into a giant sack full of presents, where they discover that the know-it-all boy has stowed away; the elves escort them out.
A jingle bell flies loose from the galloping reindeer's reins; the boy retrieves it and shows the bell to Santa, who selects the boy to receive the bell as the first gift of Christmas. The boy asks to keep the jingle bell, Santa agrees, and the boy places it in his robe pocket. The wayward rear car is returned to the train as the children board to return home, but the boy discovers that he has lost the bell through the hole in his robe pocket. He awakens Christmas morning to find a present containing the bell, and he and his younger sister Sarah ring it to their delight. As an adult, the boy reflects on how his friends and his sister grew deaf to the bell as their belief faded over the years; however, the bell still rings for him. The cast includes Tom Hanks as Hero Boy, Hero Boy's father, the Hobo, the Scrooge puppet, Santa Claus, and the Narrator, with Daryl Sabara as Hero Boy.
Motion capture is the process of recording the movement of objects or people. It is used in military, sports, and medical applications, and for the validation of computer vision and robotics. In filmmaking and video game development, it refers to recording the actions of human actors and using that information to animate digital character models in 2D or 3D computer animation; when it includes face and fingers or captures subtle expressions, it is often referred to as performance capture. In many fields, motion capture is sometimes called motion tracking, but in filmmaking and games, motion tracking usually refers instead to match moving. In motion capture sessions, the movements of one or more actors are sampled many times per second. Whereas early techniques used images from multiple cameras to calculate 3D positions, often the purpose of motion capture is to record only the movements of the actor, not his or her visual appearance. This animation data is mapped to a 3D model so that the model performs the same actions as the actor.
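The multi-camera 3D reconstruction mentioned above usually comes down to triangulation. Below is a minimal direct-linear-transform (DLT) sketch for one marker seen by two calibrated cameras; the toy projection matrices are assumptions for illustration.

import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one marker from two camera views.

    P1, P2: (3, 4) projection matrices of two calibrated cameras.
    uv1, uv2: (u, v) pixel coordinates of the marker in each view.
    The 3D point is the null vector (via SVD) of the stacked
    projection constraints, de-homogenized at the end.
    """
    (u1, v1), (u2, v2) = uv1, uv2
    A = np.stack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

# Two toy cameras: one at the origin, one translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 5.0, 1.0])
project = lambda P: (P @ X_true)[:2] / (P @ X_true)[2]
print(triangulate(P1, P2, project(P1), project(P2)))  # ~[0.2, 0.1, 5.0]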
Motion capture may be contrasted with the older technique of rotoscoping, as seen in Ralph Bakshi's The Lord of the Rings and American Pop. In those films, the animated character movements were achieved by tracing over a live-action actor, capturing the actor's motions and movements: an actor is filmed performing an action, and the recorded film is projected onto an animation table frame by frame. Animators trace the live-action footage onto animation cels, capturing the actor's outline and motions frame by frame, and then fill in the traced outlines with the animated character. The completed animation cels are photographed frame by frame, matching the movements and actions of the live-action footage. The end result is that the animated character replicates the live-action movements of the actor; however, this process takes a considerable amount of effort. Camera movements can also be motion captured, so that a virtual camera in the scene will pan, tilt, or dolly around the stage, driven by a camera operator while the actor is performing.
At the same time, the motion capture system can capture the camera and props as well as the actor's performance. This allows the computer-generated characters and sets to have the same perspective as the video images from the camera. A computer processes the data and displays the movements of the actor, providing the desired camera positions in terms of objects in the set. Retroactively obtaining camera movement data from the captured footage is known as match moving or camera tracking. Motion capture offers several advantages over traditional computer animation of a 3D model. Low-latency, close-to-real-time results can be obtained; in entertainment applications this can reduce the costs of keyframe-based animation. The Hand Over technique is an example of this. The amount of work does not vary with the complexity or length of the performance to the same degree as with traditional techniques; this allows many tests to be done with different styles or deliveries, giving a performance personality limited only by the talent of the actor.
Complex movement and realistic physical interactions, such as secondary motions and the exchange of forces, can be recreated in a physically accurate manner. The amount of animation data that can be produced within a given time is large compared to traditional animation techniques, which helps in meeting production deadlines. There is also potential for free software and third-party solutions to reduce costs. Motion capture has disadvantages as well. Specific hardware and special software programs are required to process the data, and the cost of the software and personnel required can be prohibitive for small productions. The capture system may have specific requirements for the space in which it operates, depending on camera field of view or magnetic distortion. When problems occur, it is often easier to reshoot the scene than to try to manipulate the data; only a few systems allow real-time viewing of the data to decide whether a take needs to be redone. The initial results are limited to what can be performed within the capture volume without extra editing of the data, and movement that does not follow the laws of physics cannot be captured.
Traditional animation techniques, such as added emphasis on anticipation and follow-through, secondary motion, or manipulating the shape of the character, as with squash-and-stretch animation, must be added later. If the computer model has different proportions from the capture subject, artifacts may occur. For example, if a cartoon character has large, oversized hands, these may intersect the character's body if the human performer is not careful with their physical motion. Video games use motion capture to animate athletes, martial artists, and other in-game characters; this has been done since the Sega Model 2 arcade game Virtua Fighter 2 in 1994. By mid-1995 the use of motion capture in video game development had become commonplace, and developer/publisher Acclaim Entertainment had gone so far as to build its own in-house motion capture studio into its headquarters. Namco's 1995 arcade game Soul Edge used passive optical system markers for motion capture. Movies use motion capture for CG effects, in some cases replacing traditional cel animation, and for computer-generated creatures such as Gollum, the Mummy, King Kong, Davy Jones from Pirates of the Caribbean, the Na'vi from Avatar, and Clu from Tron: Legacy.
The Great Goblin, the three Stone-trolls, many of the orcs and goblins in the 2012 film The Hobbit: An Unexpected Journey, and Smaug were created using motion capture. Star Wars: Episode I – The Phantom Menace was the first feature-length film to include a main character created using motion capture.
Facial recognition system
A facial recognition system is a technology capable of identifying or verifying a person from a digital image or a video frame from a video source. There are multiple methods by which facial recognition systems work, but in general they compare selected facial features from a given image with faces within a database. Facial recognition is also described as a biometric, artificial-intelligence-based application that can uniquely identify a person by analysing patterns based on the person's facial textures and shape. Although originally a form of computer application, it has seen wider use in recent times on mobile platforms and in other forms of technology, such as robotics. It is used for access control in security systems and can be compared to other biometrics such as fingerprint or iris recognition systems. Although the accuracy of facial recognition as a biometric technology is lower than that of iris recognition and fingerprint recognition, it is widely adopted due to its contactless and non-invasive process, and it has become popular as a commercial identification and marketing tool.
Other applications include advanced human-computer interaction, video surveillance, and automatic indexing of images and video databases, among others. Pioneers of automated face recognition include Woody Bledsoe, Helen Chan Wolf, and Charles Bisson. During 1964 and 1965, Bledsoe, along with Helen Chan and Charles Bisson, worked on using the computer to recognize human faces. He was proud of this work, but because the funding was provided by an unnamed intelligence agency that did not allow much publicity, little of the work was published. Based on the available references, Bledsoe's initial approach involved the manual marking of various landmarks on the face, such as the eye centers, which were then mathematically rotated by computer to compensate for pose variation. The distances between landmarks were automatically computed and compared between images to determine identity. Given a large database of images and a photograph, the problem was to select from the database a small set of records such that one of the image records matched the photograph.
The success of the method could be measured in terms of the ratio of the answer list to the number of records in the database. Bledsoe described the following difficulties: the project was labeled "man-machine" because the human extracted the coordinates of a set of features from the photographs, which were then used by the computer for recognition. Using a graphics tablet, the operator would extract the coordinates of features such as the centers of the pupils, the inside and outside corners of the eyes, the point of the widow's peak, and so on. From these coordinates, a list of 20 distances, such as the width of the mouth, the width of the eyes, and the pupil-to-pupil distance, was computed; an operator could process about 40 pictures an hour. When building the database, the name of the person in the photograph was associated with the list of computed distances and stored in the computer. In the recognition phase, the set of distances was compared with the corresponding distances for each photograph, yielding a distance between the photograph and the database record.
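A minimal sketch of this man-machine pipeline follows, assuming Euclidean distances between hand-extracted landmark coordinates; the landmark names and pairs are hypothetical stand-ins for Bledsoe's 20 measurements.

import numpy as np

def distance_vector(landmarks, pairs):
    """Compute inter-landmark distances from operator-extracted coordinates.

    landmarks: dict of landmark name -> (x, y) tablet coordinate.
    pairs: list of (name_a, name_b) landmark pairs to measure.
    """
    return np.array([
        np.hypot(landmarks[a][0] - landmarks[b][0],
                 landmarks[a][1] - landmarks[b][1])
        for a, b in pairs
    ])

def closest_record(query, database):
    """Return the stored identity whose distance vector is nearest the query."""
    return min(database, key=lambda name: np.linalg.norm(database[name] - query))

# Hypothetical landmark pairs standing in for the 20 distances.
PAIRS = [("pupil_l", "pupil_r"), ("mouth_l", "mouth_r"),
         ("eye_inner_l", "eye_inner_r")]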
The closest records are returned. Because it is unlikely that any two pictures would match in head rotation, lean, and scale, each set of distances is normalized to represent the face in a frontal orientation. To accomplish this normalization, the program first tries to determine the tilt, the lean, and the rotation; using these angles, the computer undoes the effect of these transformations on the computed distances. To compute these angles, the computer must know the three-dimensional geometry of the head. Because actual heads were unavailable, Bledsoe used a standard head derived from measurements on seven heads. After Bledsoe left PRI in 1966, this work was continued at the Stanford Research Institute by Peter Hart. In experiments performed on a database of over 2,000 photographs, the computer outperformed humans when presented with the same recognition tasks. Peter Hart enthusiastically recalled the project with the exclamation, "It worked!" By about 1997, the system developed by Christoph von der Malsburg and graduate students of the University of Bochum in Germany and the University of Southern California in the United States outperformed most systems, with those of the Massachusetts Institute of Technology and the University of Maryland rated next.
The Bochum system was developed through funding from the United States Army Research Laboratory. The software was sold as ZN-Face and used by customers such as Deutsche Bank and operators of airports and other busy locations; the software was "robust enough to make identifications from less-than-perfect face views. It can often see through such impediments to identification as mustaches, changed hairstyles and glasses—even sunglasses". In 2006, the performance of the latest face recognition algorithms was evaluated in the Face Recognition Grand Challenge. High-resolution face images, 3D face scans, and iris images were used in the tests. The results indicated that the new algorithms were 10 times more accurate than the face recognition algorithms of 2002 and 100 times more accurate than those of 1995; some of the algorithms were able to outperform human participants in recognizing faces and could uniquely identify identical twins. U.S. government-sponsored evaluations and challenge problems have helped spur more than two orders of magnitude of improvement in face-recognition system performance.
Since 1993, the error rate of automatic face-recognition systems has decreased by a factor of 272. The reduction applies