Motion capture is the process of recording the movement of objects or people. It is used in military, sports, and medical applications, and for validation of computer vision and robotics. In filmmaking and video game development, it refers to recording actions of human actors and using that information to animate digital character models in 2D or 3D computer animation; when it includes face and fingers or captures subtle expressions, it is referred to as performance capture. In many fields, motion capture is sometimes called motion tracking, but in filmmaking and games, motion tracking refers more to match moving. In motion capture sessions, movements of one or more actors are sampled many times per second; early techniques used images from multiple cameras to calculate 3D positions. Often the purpose of motion capture is to record only the movements of the actor, not his or her visual appearance; this animation data is mapped to a 3D model so that the model performs the same actions as the actor.
This process may be contrasted with the older technique of rotoscoping, as seen in Ralph Bakshi's The Lord of the Rings and American Pop. In these films, animated character movements were achieved by tracing over a live-action actor, capturing the actor's motions and movements. To explain: an actor is filmed performing an action, and the recorded film is projected onto an animation table frame by frame. Animators trace the live-action footage onto animation cels, capturing the actor's outline and motions frame by frame, then fill in the traced outlines with the animated character; the completed animation cels are photographed frame by frame, matching the movements and actions of the live-action footage. The end result is that the animated character replicates the live-action movements of the actor. However, this process takes a considerable amount of effort. Camera movements can also be motion captured, so that a virtual camera in the scene will pan, tilt or dolly around the stage driven by a camera operator while the actor is performing.
At the same time, the motion capture system can capture the camera and props as well as the actor's performance. This allows the computer-generated characters and sets to have the same perspective as the video images from the camera. A computer processes the data and displays the movements of the actor, providing the desired camera positions in terms of objects in the set. Retroactively obtaining camera movement data from the captured footage is known as match moving or camera tracking. Motion capture offers several advantages over traditional computer animation of a 3D model: low-latency, close-to-real-time results can be obtained. In entertainment applications this can reduce the costs of keyframe-based animation; the Hand Over technique is an example of this. The amount of work does not vary with the complexity or length of the performance to the same degree as with traditional techniques; this allows many tests to be done with different styles or deliveries, giving the character a different personality limited only by the talent of the actor.
Complex movement and realistic physical interactions, such as secondary motions and the exchange of forces, can be recreated in a physically accurate manner. The amount of animation data that can be produced within a given time is large compared to traditional animation techniques; this contributes to meeting production deadlines. Free software and third-party solutions can potentially reduce its costs. On the other hand, specific hardware and special software programs are required to process the data, and the cost of the software and personnel required can be prohibitive for small productions. The capture system may have specific requirements for the space it is operated in, depending on camera field of view or magnetic distortion; when problems occur, it is often easier to shoot the scene again than to try to manipulate the data. Only a few systems allow real-time viewing of the data to decide whether the take needs to be redone; the initial results are limited to what can be performed within the capture volume without extra editing of the data. Movement that does not follow the laws of physics cannot be captured.
Traditional animation techniques, such as added emphasis on anticipation and follow-through, secondary motion, or manipulating the shape of the character, as with squash and stretch, must be added later. If the computer model has different proportions from the capture subject, artifacts may occur. For example, if a cartoon character has large, oversized hands, these may intersect the character's body if the human performer is not careful with their physical motion. Video games use motion capture to animate athletes, martial artists, and other in-game characters; this has been done since the Sega Model 2 arcade game Virtua Fighter 2 in 1994. By mid-1995 the use of motion capture in video game development had become commonplace, and developer/publisher Acclaim Entertainment had gone so far as to build its own in-house motion capture studio into its headquarters. Namco's 1995 arcade game Soul Edge used passive optical system markers for motion capture. Movies use motion capture for CG effects, in some cases replacing traditional cel animation, and for computer-generated creatures such as Gollum, the Mummy, King Kong, Davy Jones from Pirates of the Caribbean, the Na'vi from Avatar, and Clu from Tron: Legacy.
The Great Goblin, the three Stone-trolls, many of the orcs and goblins in the 2012 film The Hobbit: An Unexpected Journey, and Smaug were created using motion capture. Star Wars: Episode I – The Phantom Menace was the first feature-length film to
Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state, but commonly originate from the face or hands. Current focuses in the field include emotion recognition from the face and hand gesture recognition. Users can use simple gestures to interact with devices without physically touching them. Many approaches have been made using cameras and computer vision algorithms to interpret sign language; the identification and recognition of posture, gait, and human behaviors is also the subject of gesture recognition techniques. Gesture recognition can be seen as a way for computers to begin to understand human body language, thus building a richer bridge between machines and humans than primitive text user interfaces or GUIs, which still limit the majority of input to keyboard and mouse, and to interact naturally without any mechanical devices. Using the concept of gesture recognition, it is possible to point a finger at the screen so that the cursor will move accordingly.
This could make conventional input devices such as the mouse and keyboard redundant. Gesture recognition offers several benefits: higher accuracy, high stability, and time saved when unlocking a device. The major application areas of gesture recognition currently are the automotive sector, consumer electronics, transit, gaming, unlocking smartphones, defence, home automation, and automated sign language translation. Gesture recognition technology has been considered a successful technology because it saves time when unlocking a device. Gesture recognition can be conducted with techniques from image processing; the literature includes ongoing work in the computer vision field on capturing gestures or more general human pose and movements by cameras connected to a computer. Gesture recognition is also related to pen computing: pen computing reduces the hardware impact of a system and increases the range of physical-world objects usable for control beyond traditional digital objects like keyboards and mice; such implementations could enable a new range of hardware.
This idea may lead to the creation of holographic displays. The term gesture recognition has also been used to refer more narrowly to non-text-input handwriting symbols, such as inking on a graphics tablet, multi-touch gestures, and mouse gesture recognition; this is computer interaction through the drawing of symbols with a pointing device cursor. In computer interfaces, two types of gestures are distinguished. Online gestures are direct manipulation gestures, such as scaling and rotating a tangible object, processed while the interaction is in progress. Offline gestures are processed after the user's interaction with the object is finished; an example is drawing a gesture to activate a menu. Touchless user interface is an emerging type of technology related to gesture control: the process of commanding the computer via body motion and gestures without touching a keyboard, mouse, or screen.
For example, Microsoft's Kinect is a touchless game interface. Touchless interfaces, in addition to gesture controls, are becoming popular as they provide the ability to interact with devices without physically touching them. A number of devices utilize this type of interface, such as laptops and televisions. Although touchless technology is mostly seen in gaming software, interest is now spreading to other fields, including the healthcare industry. Touchless technology and gesture control are also expected to be implemented in cars at levels beyond voice recognition; see the BMW 7 Series. A vast number of companies all over the world are producing gesture recognition technology. For example, Intel's user experience research shows how touchless multifactor authentication can help healthcare organizations mitigate security risks while improving clinician efficiency and patient care; this touchless MFA solution combines facial recognition and device recognition capabilities for two-factor user authentication.
The aim of the project is to explore the use of touchless interaction within surgical settings, allowing images to be viewed and manipulated without contact through the use of camera-based gesture recognition technology. In particular, the project seeks to understand the challenges of these environments for the design and deployment of such systems, as well as to articulate the ways in which these technologies may alter surgical practice. While the primary concerns here are with maintaining conditions of asepsis, these touchless gesture-based technologies offer other potential uses. Elliptic Labs' software suite delivers gesture and proximity functions by re-using the existing earpiece and microphone, previously used only for audio. Ultrasound signals sent through the air from speakers integrated in smartphones and tablets bounce against a hand, object, or head and are recorded by microphones integrated in these devices. In this way, Elliptic Labs' technology recognizes hand gestures and uses them to move objects on a screen, similar to the way bats use echolocation to navigate.
While these companies stand at the forefront of touchless technology today, many other companies and products are also trending and may add value
A Doppler radar is a specialized radar that uses the Doppler effect to produce velocity data about objects at a distance. It does this by bouncing a microwave signal off a desired target and analyzing how the object's motion has altered the frequency of the returned signal; this variation gives direct and accurate measurements of the radial component of a target's velocity relative to the radar. Doppler radars are used in aviation, sounding satellites, Major League Baseball's StatCast system, radar guns, healthcare, and bistatic radar. Because of its common use by television meteorologists in on-air weather reporting, the specific term "Doppler radar" has erroneously become popularly synonymous with the type of radar used in meteorology. Most modern weather radars use the pulse-Doppler technique to examine the motion of precipitation, but this is only a part of the processing of their data. So, while these radars use a specialized form of Doppler radar, the term is much broader in its meaning and its applications.
The Doppler effect, named after Austrian physicist Christian Doppler who proposed it in 1842, is the difference between the observed frequency and the emitted frequency of a wave for an observer moving relative to the source of the waves. It is heard when a vehicle sounding a siren approaches and recedes from an observer: the received frequency is higher during the approach, identical at the instant of passing by, and lower during the recession. This variation of frequency depends on the direction the wave source is moving with respect to the observer. Imagine a baseball pitcher throwing one ball every second to a catcher. Assuming the balls travel at a constant velocity and the pitcher is stationary, the catcher catches one ball every second. However, if the pitcher is jogging towards the catcher, the catcher catches balls more frequently because the balls are less spaced out; the inverse is true if the pitcher is moving away, with the catcher catching balls less frequently because of the pitcher's backward motion. If the pitcher moves at an angle, but at the same speed, the frequency variation at which the receiver catches balls is less, as the distance between the two changes more slowly.
From the point of view of the pitcher, the frequency remains constant. Since with electromagnetic radiation like microwaves frequency is inversely proportional to wavelength, the wavelength of the waves is affected. Thus, the relative difference in velocity between a source and an observer is what gives rise to the Doppler effect; the formula for radar Doppler shift is the same as that for reflection of light by a moving mirror. There is no need to invoke Einstein's theory of special relativity, because all observations are made in the same frame of reference. With c as the speed of light and v as the target velocity, the shifted frequency as a function of the original frequency is f_r = f_t (1 + v/c) / (1 − v/c), which simplifies to f_r = f_t (c + v) / (c − v). The "beat frequency" is thus f_d = f_r − f_t = 2v f_t / (c − v). Since for most practical applications of radar v ≪ c, (c − v) → c, and we can write f_d ≈ 2v f_t / c. There are four ways of producing the Doppler effect. Radars may be: coherent pulsed, pulse-Doppler, continuous wave, or frequency modulated.
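As a concrete check of the approximation above, the exact and small-velocity forms of the two-way Doppler shift can be compared numerically. The 10 GHz transmit frequency and 30 m/s target speed below are illustrative values, not taken from the text:

```python
# Doppler shift of a radar return, per the formulas above.
# Illustrative values: 10 GHz transmit frequency, target closing at 30 m/s.

C = 299_792_458.0  # speed of light, m/s

def doppler_shift_exact(f_t, v):
    """Exact two-way shift: f_d = f_r - f_t = 2*v*f_t / (c - v)."""
    return 2 * v * f_t / (C - v)

def doppler_shift_approx(f_t, v):
    """Approximation for v << c: f_d ~ 2*v*f_t / c."""
    return 2 * v * f_t / C

f_t = 10e9  # Hz
v = 30.0    # m/s toward the radar

print(doppler_shift_exact(f_t, v))   # ~2001.4 Hz
print(doppler_shift_approx(f_t, v))  # ~2001.4 Hz
```

Note that for any realistic vehicle speed the two forms agree to within a small fraction of a hertz, which is why the simplified formula is used in practice.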
Doppler allows the use of narrow-band receiver filters that reduce or eliminate signals from slow-moving and stationary objects. This eliminates false signals produced by trees, insects, birds, and other environmental influences, and makes cheap hand-held units practical. CW Doppler radar only provides a velocity output, as the received signal from the target is compared in frequency with the original signal. Early Doppler radars included CW, but these led to the development of frequency-modulated continuous wave (FMCW) radar, which sweeps the transmitter frequency to encode and determine range.
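To illustrate how sweeping the transmitter frequency encodes range, here is a sketch of the standard FMCW relation: a sawtooth sweep of bandwidth B over duration T turns the round-trip delay into a beat frequency f_b = 2RB/(cT) for a stationary target. All parameter values below are assumptions for illustration, not a description of any specific radar:

```python
# Sketch of FMCW range recovery (assumed parameters, stationary target).
# The return is a delayed copy of the sweep, so mixing it with the
# transmitted signal yields a beat frequency f_b = 2*R*B / (c*T).

C = 299_792_458.0  # speed of light, m/s

def range_from_beat(f_b, bandwidth, sweep_time):
    """Invert the beat-frequency relation: R = f_b * c * T / (2 * B)."""
    return f_b * C * sweep_time / (2 * bandwidth)

B = 150e6    # 150 MHz sweep bandwidth (assumed)
T = 1e-3     # 1 ms sweep time (assumed)
f_b = 100e3  # 100 kHz measured beat frequency (assumed)

print(range_from_beat(f_b, B, T))  # ~100 m
```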
An oxygen sensor is an electronic device that measures the proportion of oxygen in the gas or liquid being analysed. It was developed by Robert Bosch GmbH during the late 1960s under the supervision of Dr. Günter Bauman. The original sensing element is made with a thimble-shaped zirconia ceramic coated on both the exhaust and reference sides with a thin layer of platinum, and comes in both heated and unheated forms. The planar-style sensor entered the market in 1990; it reduced the mass of the ceramic sensing element and incorporated the heater within the ceramic structure, resulting in a sensor that responded faster. The most common application is to measure the exhaust-gas concentration of oxygen for internal combustion engines in automobiles and other vehicles in order to calculate and, if required, dynamically adjust the air-fuel ratio so that catalytic converters can work optimally, and to determine whether the converter is performing properly. Divers also use a similar device to measure the partial pressure of oxygen in their breathing gas.
Scientists use oxygen sensors to measure respiration or production of oxygen and use a different approach. Oxygen sensors are used in oxygen analyzers, which find extensive use in medical applications such as anesthesia monitors and oxygen concentrators. Oxygen sensors are also used in hypoxic air fire prevention systems to continuously monitor the oxygen concentration inside the protected volumes. There are many different ways of measuring oxygen; these include technologies such as zirconia, infrared, ultrasonic and, recently, laser methods. Automotive oxygen sensors, colloquially known as O2 sensors, make modern electronic fuel injection and emission control possible; they help determine, in real time, whether the air–fuel ratio of a combustion engine is rich or lean. Since oxygen sensors are located in the exhaust stream, they do not directly measure the air or the fuel entering the engine, but when information from oxygen sensors is coupled with information from other sources, it can be used to indirectly determine the air–fuel ratio.
Closed-loop feedback-controlled fuel injection varies the fuel injector output according to real-time sensor data rather than operating with a predetermined fuel map. In addition to enabling electronic fuel injection to work efficiently, this emissions control technique can reduce the amounts of both unburnt fuel and oxides of nitrogen entering the atmosphere. Unburnt fuel is pollution in the form of air-borne hydrocarbons, while oxides of nitrogen are a result of combustion chamber temperatures exceeding 1300 kelvins due to excess air in the fuel mixture, and therefore contribute to smog and acid rain. Volvo was the first automobile manufacturer to employ this technology in the late 1970s, along with the three-way catalyst used in the catalytic converter. The sensor does not measure oxygen concentration, but rather the difference between the amount of oxygen in the exhaust gas and the amount of oxygen in air. A rich mixture causes an oxygen demand; this demand causes a voltage to build up, due to transportation of oxygen ions through the sensor layer.
A lean mixture causes low voltage, since there is an oxygen excess. Modern spark-ignited combustion engines use oxygen sensors and catalytic converters in order to reduce exhaust emissions. Information on oxygen concentration is sent to the engine management computer or engine control unit (ECU), which adjusts the amount of fuel injected into the engine to compensate for excess air or excess fuel; the ECU attempts to maintain, on average, a certain air-fuel ratio by interpreting the information gained from the oxygen sensor. The primary goal is a compromise between power, fuel economy, and emissions, which in most cases is achieved by an air–fuel ratio close to stoichiometric. For spark-ignition engines, the three types of emissions modern systems are concerned with are hydrocarbons, carbon monoxide, and NOx. Failure of these sensors, whether through normal aging, the use of leaded fuels, or fuel contaminated with silicones or silicates, for example, can lead to damage of an automobile's catalytic converter and expensive repairs. Tampering with or modifying the signal that the oxygen sensor sends to the engine computer can be detrimental to emissions control and can damage the vehicle.
When the engine is under low-load conditions, it is operating in "closed-loop mode". This refers to a feedback loop between the ECU and the oxygen sensor in which the ECU adjusts the quantity of fuel and expects to see a resulting change in the response of the oxygen sensor; this loop forces the engine to alternate between running slightly lean and slightly rich on successive loops, as it attempts to maintain a stoichiometric ratio on average. If modifications cause the engine to run moderately lean, there will be a slight increase in fuel efficiency, sometimes at the expense of increased NOx emissions, much higher exhaust gas temperatures, and sometimes a slight increase in power that can turn into misfires and a drastic loss of power, as well as potential engine and catalytic-converter damage, at ultra-lean air–fuel ratios. If modifications cause the engine to run rich, there will be a slight increase in power up to a point (after which the engine starts flooding from too much fuel).
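The closed-loop oscillation described above can be sketched as a toy bang-bang controller. The voltage levels, switching threshold, and trim step below are simplified assumptions for illustration, not real ECU calibration:

```python
# Toy simulation of closed-loop lambda control (illustrative only; real
# ECU logic is far more involved). A narrow-band sensor reads high (~0.9 V)
# when rich and low (~0.1 V) when lean; the ECU nudges the mixture the
# opposite way each cycle, oscillating around stoichiometric (~14.7:1).

STOICH_AFR = 14.7  # stoichiometric air-fuel ratio for gasoline

def sensor_voltage(afr):
    """Narrow-band sensor model: high voltage when rich (AFR below stoich)."""
    return 0.9 if afr < STOICH_AFR else 0.1

def run_closed_loop(afr, cycles=50, step=0.1):
    """Bang-bang correction: lean -> add fuel, rich -> remove fuel."""
    history = []
    for _ in range(cycles):
        if sensor_voltage(afr) > 0.45:  # rich reading -> lean the mixture
            afr += step
        else:                           # lean reading -> enrich the mixture
            afr -= step
        history.append(afr)
    return history

trace = run_closed_loop(16.0)  # start from a lean mixture
# After settling, the AFR hovers within one step of stoichiometric.
print(min(trace[-10:]), max(trace[-10:]))
```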
A radar speed gun is a device used to measure the speed of moving objects. It is used in law enforcement to measure the speed of moving vehicles, and in professional spectator sports for measurements such as bowling speeds in cricket and the speed of pitched baseballs and tennis serves. A radar speed gun is a Doppler radar unit that may be vehicle-mounted or static. It measures the speed of the objects at which it is pointed by detecting a change in frequency of the returned radar signal caused by the Doppler effect, whereby the frequency of the returned signal is increased in proportion to the object's speed of approach if the object is approaching, and lowered if the object is receding. Such devices are used for speed limit enforcement, although more modern LIDAR speed gun instruments, which use pulsed laser light instead of radar, began to replace radar guns during the first decade of the twenty-first century because of limitations associated with small radar systems. The radar speed gun was invented by John L. Barker Sr. and Ben Midlock, who developed radar for the military while working for the Automatic Signal Company in Norwalk, CT during World War II.
Automatic Signal was approached by Grumman Aircraft Corporation to solve the specific problem of terrestrial landing gear damage on the now-legendary PBY Catalina amphibious aircraft. Barker and Midlock cobbled together a Doppler radar unit from coffee cans soldered shut to make microwave resonators; the unit was installed at the end of the runway and aimed directly upward to measure the sink rate of landing PBYs. After the war, Barker and Midlock tested radar on the Merritt Parkway. In 1947, the system was tested by the Connecticut State Police in Glastonbury, Connecticut for traffic surveys and issuing warnings to drivers for excessive speed. Starting in February 1949, the state police began to issue speeding tickets based on the speed recorded by the radar device. In 1948, radar was also used in Garden City, New York. Speed guns use Doppler radar to perform speed measurements. Radar speed guns, like other types of radar, consist of a radio transmitter and receiver; they send out a radio signal in a narrow beam, then receive the same signal back after it bounces off the target object.
Due to a phenomenon called the Doppler effect, if the object is moving toward or away from the gun, the frequency of the reflected radio waves when they come back is different from the transmitted waves. When the object is approaching the radar, the frequency of the return waves is higher than the transmitted waves. From that difference, the radar speed gun can calculate the speed of the object from which the waves have been bounced; this speed is given by the following equation: v = (Δf / f) × (c / 2), where c is the speed of light, f is the emitted frequency of the radio waves and Δf is the difference in frequency between the radio waves that are emitted and those received back by the gun. This equation holds only when object speeds are low compared to that of light, but in everyday situations this is the case, and the velocity of an object is directly proportional to this difference in frequency. After the returning waves are received, a signal with a frequency equal to this difference is created by mixing the received radio signal with a little of the transmitted signal.
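Plugging illustrative numbers into the equation above makes the scale concrete. The 24.125 GHz K-band emission frequency and 2.2 kHz measured shift below are assumed example values, not taken from the text:

```python
# Solving the speed-gun relation v = (Δf / f) * (c / 2) for a concrete case.
# Assumed example: a 24.125 GHz K-band gun measuring a 2.2 kHz shift.

C = 299_792_458.0  # speed of light, m/s

def target_speed(delta_f, f_emitted):
    """v = (delta_f / f) * (c / 2), valid for speeds far below c."""
    return delta_f * C / (2 * f_emitted)

v = target_speed(2200.0, 24.125e9)  # m/s
print(v, v * 3.6)                   # ~13.7 m/s, ~49 km/h
```

Note that speed is linear in Δf, so a gun that can resolve frequency to a few tens of hertz can resolve vehicle speed to well under 1 km/h.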
Just as two different musical notes played together create a beat note at the difference in frequency between them, so when these two radio signals are mixed they create a "beat" signal. An electrical circuit measures this frequency using a digital counter to count the number of cycles in a fixed time period, and displays the number on a digital display as the object's speed. Since this type of speed gun measures the difference in speed between a target and the gun itself, the gun must be stationary in order to give a correct reading. If a measurement is made from a moving car, it will give the difference in speed between the two vehicles, not the speed of the target relative to the road, so a different system has been designed to work from moving vehicles. In so-called "moving radar", the radar antenna receives reflected signals from both the target vehicle and stationary background objects such as the road surface, nearby road signs, guard rails and streetlight poles. Instead of comparing the frequency of the signal reflected from the target with the transmitted signal, it compares the target signal with this background signal.
The frequency difference between these two signals gives the true speed of the target vehicle. Modern radar speed guns operate in the X, K, Ka, and Ku bands. Radar guns that operate in the X band frequency range are becoming less common because they produce a strong and easily detectable beam; most automatic doors also utilize radio waves in the X band range and can affect the readings of police radar. As a result, the K and Ka bands are most used by police agencies. Some motorists install radar detectors which can alert them to the presence of a speed trap ahead, and the microwave signals from radar may also change the quality of reception of AM and FM radio signals when tuned to a weak station. For these reasons, hand-held radar includes an on-off trigger, and the radar is only turned on when the operator is about to make a measurement. Radar detectors are illegal in some areas. Traffic radar comes in many models. Hand-held units are b
A microphone, colloquially nicknamed mic or mike, is a transducer that converts sound into an electrical signal. Microphones are used in many applications such as telephones, hearing aids, public address systems for concert halls and public events, motion picture production and recorded audio engineering, sound recording, two-way radios, megaphones and television broadcasting; in computers for recording voice, speech recognition and VoIP; and for non-acoustic purposes such as ultrasonic sensors or knock sensors. Several different types of microphone are in use, which employ different methods to convert the air pressure variations of a sound wave to an electrical signal; the most common is the dynamic microphone. Microphones need to be connected to a preamplifier before the signal can be recorded or reproduced. In order to speak to larger groups of people, a need arose to increase the volume of the human voice; the earliest devices used to achieve this were acoustic megaphones. Some of the first examples, from fifth-century BC Greece, were theater masks with horn-shaped mouth openings that acoustically amplified the voice of actors in amphitheatres.
In 1665, the English physicist Robert Hooke was the first to experiment with a medium other than air with the invention of the "lovers' telephone", made of stretched wire with a cup attached at each end. In 1861, German inventor Johann Philipp Reis built an early sound transmitter that used a metallic strip attached to a vibrating membrane to produce intermittent current. Better results were achieved in 1876 with the "liquid transmitter" design in early telephones from Alexander Graham Bell and Elisha Gray, in which the diaphragm was attached to a conductive rod in an acid solution; these systems gave poor sound quality. The first microphone that enabled proper voice telephony was the carbon microphone, which was independently developed by David Edward Hughes in England and Emile Berliner and Thomas Edison in the US. Although Edison was awarded the first patent in mid-1877, Hughes had demonstrated his working device in front of many witnesses some years earlier, and most historians credit Hughes with its invention.
The carbon microphone is the direct prototype of today's microphones and was critical in the development of telephony and the recording industries. Thomas Edison refined the carbon microphone into his carbon-button transmitter of 1886; this microphone was employed at the first radio broadcast, a performance at the New York Metropolitan Opera House in 1910. In 1916, E. C. Wente of Western Electric developed the next breakthrough with the first condenser microphone. In 1923, the first practical moving coil microphone was built; the Marconi-Sykes magnetophone, developed by Captain H. J. Round, became the standard for BBC studios in London. It was improved in 1930 by Alan Blumlein and Herbert Holman, who released the HB1A, the best standard of the day. Also in 1923, the ribbon microphone was introduced, another electromagnetic type, believed to have been developed by Harry F. Olson, who reverse-engineered a ribbon speaker. Over the years these microphones were developed by several companies, most notably RCA, which made large advancements in pattern control to give the microphone directionality.
With television and film technology booming there was demand for high-fidelity microphones and greater directionality. Electro-Voice responded with their Academy Award-winning shotgun microphone in 1963. During the second half of the 20th century development advanced, with the Shure Brothers bringing out the SM58 and SM57; the latest research developments include the use of fibre optics and interferometers. The sensitive transducer element of a microphone is called its capsule. Sound is first converted to mechanical motion by means of a diaphragm, the motion of which is then converted to an electrical signal. A complete microphone also includes a housing, some means of bringing the signal from the element to other equipment, and an electronic circuit to adapt the output of the capsule to the equipment being driven. A wireless microphone additionally contains a radio transmitter. Microphones are categorized by their transducer principle, such as condenser or dynamic, and by their directional characteristics. Sometimes other characteristics such as diaphragm size, intended use, or orientation of the principal sound input to the principal axis of the microphone are used to describe the microphone.
The condenser microphone, invented at Western Electric in 1916 by E. C. Wente, is also called a capacitor microphone or electrostatic microphone (capacitors were historically called condensers). Here, the diaphragm acts as one plate of a capacitor, and the vibrations produce changes in the distance between the plates. There are two types, depending on the method of extracting the audio signal from the transducer: DC-biased microphones, and radio frequency or high frequency condenser microphones. With a DC-biased microphone, the plates are biased with a fixed charge. The voltage maintained across the capacitor plates changes with the vibrations in the air, according to the capacitance equation C = Q/V, where Q = charge in coulombs, C = capacitance in farads and V = potential difference in volts. The capacitance of the plates is inversely proportional to the distance between them for a parallel-plate capacitor; the assembly of fixed and movable plates is called an "element" or "capsule". A nearly constant charge is maintained on the capacitor.
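The relations in this paragraph can be sketched numerically: with a nearly constant charge Q, V = Q/C, and for an idealized parallel-plate capsule C = ε0·A/d, so the output voltage tracks the plate spacing set by the diaphragm. The plate area, gap, and 48 V bias below are illustrative assumptions, not a real microphone's dimensions:

```python
# Sketch of the DC-biased condenser capsule relations described above,
# using idealized parallel-plate values (all dimensions are assumptions).
# With charge Q held nearly constant, V = Q/C and C = eps0*A/d, so the
# output voltage varies linearly with the gap d set by the diaphragm.

EPS0 = 8.854e-12  # permittivity of free space, F/m

def capacitance(area, gap):
    """Parallel-plate capacitance C = eps0 * A / d."""
    return EPS0 * area / gap

def voltage(charge, area, gap):
    """V = Q / C, which grows linearly with the gap d."""
    return charge / capacitance(area, gap)

A = 1e-4        # 1 cm^2 plate area (assumed)
d_rest = 25e-6  # 25 micron resting gap (assumed)
Q = 48.0 * capacitance(A, d_rest)  # charge set by a 48 V polarizing bias

# A sound wave moving the diaphragm by +/-1 micron swings the output voltage
# above and below the 48 V resting value:
for d in (24e-6, 25e-6, 26e-6):
    print(voltage(Q, A, d))
```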
A digital camera or digicam is a camera that captures photographs in digital memory. Most cameras produced today are digital. While there are still dedicated digital cameras, many more cameras are now incorporated into mobile devices such as portable touchscreen computers, which can, among many other purposes, use their cameras to initiate live videotelephony and directly edit and upload imagery to others. However, high-end, high-definition dedicated cameras are still used by professionals. Digital and movie cameras share an optical system using a lens with a variable diaphragm to focus light onto an image pickup device; the diaphragm and shutter admit the correct amount of light to the imager, just as with film, but the image pickup device is electronic rather than chemical. However, unlike film cameras, digital cameras can display images on a screen immediately after being recorded, and store and delete images from memory. Many digital cameras can also record moving videos with sound; some can perform other elementary image editing.
The history of the digital camera began with Eugene F. Lally of the Jet Propulsion Laboratory, who thought about how to use a mosaic photosensor to capture digital images; his 1961 idea was to take pictures of the planets and stars while travelling through space to give information about the astronauts' position. As with Texas Instruments employee Willis Adcock's filmless camera of 1972, the technology had yet to catch up with the concept. The Cromemco Cyclops was an all-digital camera introduced as a commercial product in 1975; its design was published as a hobbyist construction project in the February 1975 issue of Popular Electronics magazine, and it used a 32×32 metal-oxide-semiconductor sensor. Steven Sasson, an engineer at Eastman Kodak, built the first self-contained electronic camera that used a charge-coupled device image sensor in 1975. Early uses were military and scientific. In 1986, the Japanese company Nikon introduced the first electronic single-lens reflex camera, the Nikon SVC. In the mid-to-late 1990s, digital cameras became common among consumers.
By the mid-2000s, digital cameras had largely replaced film cameras. In 2000, Sharp introduced the first mass-market camera phone, the J-SH04 J-Phone, in Japan. By the mid-2000s, higher-end cell phones had an integrated digital camera, and by the beginning of the 2010s all smartphones had one. The two major types of digital image sensor are CCD and CMOS. A CCD sensor has one amplifier for all the pixels, while each pixel in a CMOS active-pixel sensor has its own amplifier. Compared to CCDs, CMOS sensors use less power. Cameras with a small sensor often use a back-side-illuminated CMOS sensor. Overall final image quality depends more on the image processing capability of the camera than on the sensor type. The resolution of a digital camera is limited by the image sensor that turns light into discrete signals; the brighter the image at a given point on the sensor, the larger the value read for that pixel. Depending on the physical structure of the sensor, a color filter array may be used, which requires demosaicing to recreate a full-color image.
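The demosaicing step can be illustrated with a toy implementation. The sketch below assumes an RGGB Bayer pattern and fills in the two missing colors at each photosite by averaging the known neighbours; real camera pipelines use far more sophisticated edge-aware algorithms, and the function names here are our own.

```python
import numpy as np

def bayer_masks(h, w):
    """Boolean masks for an RGGB Bayer pattern: R at (even, even),
    G at (even, odd) and (odd, even), B at (odd, odd)."""
    y, x = np.mgrid[0:h, 0:w]
    r = (y % 2 == 0) & (x % 2 == 0)
    g = (y % 2) != (x % 2)
    b = (y % 2 == 1) & (x % 2 == 1)
    return r, g, b

def demosaic_bilinear(mosaic):
    """Naive bilinear demosaic: each output channel is the 3x3
    neighbourhood average of the samples that carry that color."""
    h, w = mosaic.shape
    out = np.zeros((h, w, 3))
    for c, mask in enumerate(bayer_masks(h, w)):
        plane = np.where(mask, mosaic, 0.0)   # keep only this color's samples
        weight = mask.astype(float)
        acc = np.zeros_like(plane)
        wacc = np.zeros_like(plane)
        p = np.pad(plane, 1)                  # zero-pad so borders work
        wp = np.pad(weight, 1)
        for dy in (0, 1, 2):                  # 3x3 box sum via shifted slices
            for dx in (0, 1, 2):
                acc += p[dy:dy+h, dx:dx+w]
                wacc += wp[dy:dy+h, dx:dx+w]
        out[..., c] = acc / np.maximum(wacc, 1e-9)
    return out
```

A uniform gray mosaic should demosaic to the same uniform gray in all three channels, which is a quick sanity check on the interpolation weights.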
The number of pixels in the sensor determines the camera's "pixel count". In a typical sensor, the pixel count is the product of the number of rows and the number of columns; for example, a 1,000 by 1,000 pixel sensor would have 1 megapixel. The final quality of an image depends on all the optical transformations in the chain of producing the image; as the optics manufacturer Carl Zeiss points out, the weakest link in an optical chain determines the final image quality. In the case of a digital camera, a simplistic way of expressing it is that the lens determines the maximum sharpness of the image while the image sensor determines the maximum resolution; the illustration on the right can be said to compare a lens with poor sharpness on a camera with high resolution to a lens with good sharpness on a camera with lower resolution. Since the first digital backs were introduced, there have been three main methods of capturing the image, each based on the hardware configuration of the sensor and color filters. Single-shot capture systems use either one sensor chip with a Bayer filter mosaic, or three separate image sensors which are exposed to the same image via a beam splitter.
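The pixel-count arithmetic is simple enough to state directly; the helper below (the function name is our own) just makes the rows-times-columns rule explicit.

```python
# Pixel count is rows x columns; "megapixels" divides by one million.
def megapixels(rows, cols):
    return rows * cols / 1_000_000

print(megapixels(1000, 1000))  # 1.0, the 1-megapixel example above
print(megapixels(4000, 6000))  # 24.0, a typical modern sensor
```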
Multi-shot systems expose the sensor to the image in a sequence of three or more openings of the lens aperture. There are several ways of applying the multi-shot technique; the most common was to use a single image sensor with three filters passed in front of the sensor in sequence to obtain the additive color information. Another multiple-shot method is called microscanning; it uses a single sensor chip with a Bayer filter and physically moves the sensor on the focus plane of the lens to construct a higher-resolution image than the native resolution of the chip. A third version combines the two methods without a Bayer filter on the chip. The third main capture method is called scanning because the sensor moves across the focal plane much like the sensor of an image scanner. The linear or tri-linear sensors in scanning cameras use only a single line of photosensors, or three lines for the three colors. Scanning may also be accomplished by rotating the whole camera; a digital rotating line camera offers images of very high total resolution.
The choice of method for a given capture is determined largely by the subject matter; it is usually inappropriate to attempt to capture a moving subject with anything but a single-shot system.