Multi-touch

In computing, multi-touch is technology that enables a surface, such as a trackpad or touchscreen, to recognize the presence of more than one point of contact at the same time. The origins of multi-touch began at CERN, MIT, the University of Toronto, Carnegie Mellon University and Bell Labs in the 1970s; multi-touch was in use as early as 1985, and Apple popularized the term "multi-touch" in 2007. Plural-point awareness may be used to implement additional functionality, such as pinch to zoom, or to activate certain subroutines attached to predefined gestures. The two different uses of the term resulted from the quick developments in this field, with many companies using the term to market older technology, called gesture-enhanced single-touch or several other terms by other companies and researchers. Several other similar or related terms attempt to distinguish whether a device can determine or only approximate the location of different points of contact, but they are used as synonyms in marketing.
The use of touchscreen technology predates both multi-touch and the personal computer. Early synthesizer and electronic instrument builders such as Hugh Le Caine and Robert Moog experimented with touch-sensitive capacitance sensors to control the sounds made by their instruments.
IBM began building the first touch screens in the late 1960s. In 1972, Control Data released the PLATO IV computer, a terminal used for educational purposes, which employed single-touch points in a 16×16 array user interface. These early touchscreens only registered one point of touch at a time, so on-screen keyboards were awkward to use: key rollover, or holding down a shift key while typing another, was not possible. An exception was a multi-touch reconfigurable touchscreen keyboard/display developed at the Massachusetts Institute of Technology in the early 1970s. In 1977, one of the early implementations of mutual-capacitance touchscreen technology was developed at CERN, based on the capacitance touch screens developed there in 1972 by Danish electronics engineer Bent Stumpe; this technology was used to develop a new type of human–machine interface for the control room of the Super Proton Synchrotron particle accelerator. In a handwritten note dated 11 March 1972, Stumpe presented his proposed solution: a capacitive touch screen with a fixed number of programmable buttons presented on a display.
The screen was to consist of a set of capacitors etched into a film of copper on a sheet of glass, each capacitor constructed so that a nearby flat conductor, such as the surface of a finger, would increase its capacitance by a significant amount. The capacitors were to consist of fine lines etched in copper on a sheet of glass, fine enough and sufficiently far apart to be invisible. In the final device, a simple lacquer coating prevented the fingers from actually touching the capacitors. In 1976, MIT described a keyboard with variable graphics capable of multi-touch detection, likely the first multi-touch screen. In the early 1980s, the University of Toronto's Input Research Group were among the earliest to explore the software side of multi-touch input systems. A 1982 system at the University of Toronto used a frosted-glass panel with a camera placed behind the glass; when one or more fingers pressed on the glass, the camera would detect the action as one or more black spots on an otherwise white background, allowing it to be registered as an input.
Since the size of a dot depended on pressure, the system was somewhat pressure-sensitive as well. Of note, this system could not display graphics. In 1983, Bell Labs at Murray Hill published a comprehensive discussion of touch-screen based interfaces, though it made no mention of multiple fingers. In the same year, Myron Krueger's video-based Video Place/Video Desk system was influential in the development of multi-touch gestures such as pinch-to-zoom, though the system itself had no touch interaction. By 1984, both Bell Labs and Carnegie Mellon University had working multi-touch-screen prototypes, handling both input and graphics, that responded interactively to multiple finger inputs; the Bell Labs system was based on capacitive coupling of fingers, whereas the CMU system was optical. In 1985, the canonical multi-touch pinch-to-zoom gesture was demonstrated, with coordinated graphics, on CMU's system. In October 1985, Steve Jobs signed a non-disclosure agreement to tour CMU's Sensor Frame multi-touch lab.
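The optical approach of the Toronto system lends itself to a compact illustration. The sketch below is a modern reconstruction in Python, not the group's original software; the threshold value, the image format and the use of blob area as a stand-in for pressure are all assumptions made for illustration.

```python
import numpy as np
from scipy import ndimage

def detect_touches(gray: np.ndarray, threshold: int = 80):
    """Find fingertip contacts in a camera frame of the back of the glass.

    gray: 2-D uint8 image; white background, fingers appear as dark spots.
    Returns one dict per detected touch, with blob area as a crude
    pressure estimate (larger contact patch = harder press).
    """
    mask = gray < threshold              # dark pixels = finger contact
    labels, count = ndimage.label(mask)  # group touching pixels into blobs
    touches = []
    for blob in range(1, count + 1):
        ys, xs = np.nonzero(labels == blob)
        touches.append({
            "x": float(xs.mean()),       # blob centroid = touch position
            "y": float(ys.mean()),
            "pressure": int(xs.size),    # blob area stands in for pressure
        })
    return touches
```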
In 1990, Sears et al. published a review of academic research on single- and multi-touch touchscreen human–computer interaction of the time, describing single-touch gestures such as rotating knobs and swiping the screen to activate a switch.
Image resolution

Image resolution is the detail an image holds. The term applies to raster digital images, film images, and other types of images. Higher resolution means more image detail. Image resolution can be measured in various ways: resolution quantifies how close lines can be to each other and still be visibly resolved. Resolution units can be tied to physical sizes (such as lines per millimeter), to the overall size of a picture, or to angular subtense. Line pairs are often used instead of lines; a line pair comprises a dark line and an adjacent light line, so a line is either a dark line or a light line. A resolution of 10 lines per millimeter means 5 dark lines alternating with 5 light lines, or 5 line pairs per millimeter. Photographic lens and film resolution are most often quoted in line pairs per millimeter; the resolution of digital cameras can be described in many different ways. The term resolution is often considered equivalent to pixel count in digital imaging, though international standards in the digital camera field specify it should instead be called "Number of Total Pixels" in relation to image sensors, and "Number of Recorded Pixels" for what is captured. Hence, CIPA DCG-001 calls for notation such as "Number of Recorded Pixels 1000 × 1500".
According to the same standards, the "Number of Effective Pixels" that an image sensor or digital camera has is the count of pixel sensors that contribute to the final image, as opposed to the number of total pixels, which includes unused or light-shielded pixels around the edges. An image of N pixels high by M pixels wide can have any resolution less than N lines per picture height, or N TV lines. When pixel counts are referred to as "resolution", the convention is to describe the pixel resolution with a pair of positive integers, where the first number is the number of pixel columns and the second is the number of pixel rows, for example 7680 × 6876. Another popular convention is to cite resolution as the total number of pixels in the image, given as a number of megapixels, which can be calculated by multiplying pixel columns by pixel rows and dividing by one million. Other conventions include describing pixels per length unit or pixels per area unit, such as pixels per inch or per square inch.
None of these pixel resolutions are true resolutions, but they are widely referred to as such. An image that is 2048 pixels wide and 1536 pixels high has a total of 2048 × 1536 = 3,145,728 pixels, or 3.1 megapixels. One could refer to it as 2048 by 1536 or as a 3.1-megapixel image. It could be considered a low-quality image if printed at about 28.5 inches wide, or a good-quality image if printed at about 7 inches wide. The count of pixels is not a real measure of the resolution of digital camera images, because color image sensors alternate color filter types over the light-sensitive individual pixel sensors. Digital images ultimately require a red, green, and blue value for each pixel to be displayed or printed, but one individual pixel sensor will only supply one of those three pieces of information; the image has to be demosaiced to produce all three colors for each output pixel.
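The arithmetic above can be sketched in a few lines of Python. The helper names and the 72/300 ppi print densities are illustrative choices, not values from any standard in the text:

```python
def megapixels(width_px: int, height_px: int) -> float:
    """Total pixel count divided by one million."""
    return width_px * height_px / 1_000_000

def print_width_inches(width_px: int, ppi: float) -> float:
    """Printed width when the image is output at `ppi` pixels per inch."""
    return width_px / ppi

w, h = 2048, 1536
print(f"{w} x {h} = {w * h:,} pixels = {megapixels(w, h):.1f} MP")
print(f"at 72 ppi:  {print_width_inches(w, 72):.1f} in wide (low quality)")
print(f"at 300 ppi: {print_width_inches(w, 300):.1f} in wide (good quality)")
```

Running this reproduces the figures quoted above: 3,145,728 pixels (3.1 MP), about 28.5 inches wide at a coarse 72 ppi, and about 7 inches wide at a print-quality 300 ppi.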
The terms blurriness and sharpness are used for digital images, but other descriptors are used to reference the hardware capturing and displaying the images. Spatial resolution in radiology refers to the ability of the imaging modality to differentiate two objects; low spatial resolution techniques will be unable to differentiate between two objects that are close together. The measure of how closely lines can be resolved in an image is called spatial resolution, and it depends on properties of the system creating the image, not just the pixel resolution in pixels per inch. For practical purposes the clarity of the image is decided by its spatial resolution, not the number of pixels in an image. In effect, spatial resolution refers to the number of independent pixel values per unit length. The spatial resolution of consumer displays ranges from 50 to 800 pixel lines per inch. With scanners, optical resolution is sometimes used to distinguish spatial resolution from the number of pixels per inch. In remote sensing, spatial resolution is limited by diffraction, as well as by aberrations, imperfect focus, and atmospheric distortion.
The ground sample distance of an image, the pixel spacing on the Earth's surface, is considerably smaller than the resolvable spot size. In astronomy, one measures spatial resolution in data points per arcsecond subtended at the point of observation, because the physical distance between objects in the image depends on their distance away, which varies with the object of interest. In electron microscopy, on the other hand, line or fringe resolution refers to the minimum separation detectable between adjacent parallel lines, whereas point resolution refers to the minimum separation between adjacent points that can be both detected and interpreted, e.g. as adjacent columns of atoms. The former helps one detect periodicity in specimens, whereas the latter is key to visualizing how individual atoms interact.
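For remote sensing, the relationship between pixel spacing and resolvable spot size can be sketched with the standard pinhole-geometry and Rayleigh-criterion formulas. The satellite parameters below are invented for illustration, not taken from the text:

```python
def ground_sample_distance(altitude_m: float, pixel_pitch_m: float,
                           focal_length_m: float) -> float:
    """Ground distance spanned by one detector pixel (nadir view)."""
    return altitude_m * pixel_pitch_m / focal_length_m

def diffraction_spot(altitude_m: float, wavelength_m: float,
                     aperture_m: float) -> float:
    """Smallest resolvable spot on the ground (Rayleigh criterion)."""
    return 1.22 * wavelength_m / aperture_m * altitude_m

alt = 700e3                                       # 700 km orbit (assumed)
gsd = ground_sample_distance(alt, 6.5e-6, 10.0)   # 6.5 um pixels, 10 m focal length
spot = diffraction_spot(alt, 550e-9, 0.5)         # green light, 0.5 m aperture
print(f"GSD {gsd:.2f} m/pixel vs. resolvable spot {spot:.2f} m")
```

With these numbers the pixel spacing (about 0.46 m) oversamples the diffraction-limited spot (about 0.94 m), matching the statement above that the ground sample distance is considerably smaller than the resolvable spot size.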
Proximity sensor

A proximity sensor is a sensor able to detect the presence of nearby objects without any physical contact. A proximity sensor often emits an electromagnetic field or a beam of electromagnetic radiation and looks for changes in the field or return signal; the object being sensed is referred to as the proximity sensor's target. Different proximity sensor targets demand different sensors; for example, a capacitive proximity sensor or photoelectric sensor might be suitable for a plastic target. Proximity sensors can have high reliability and long functional life because of the absence of mechanical parts and the lack of physical contact between the sensor and the sensed object. Proximity sensors are used in machine vibration monitoring to measure the variation in distance between a shaft and its support bearing; this is common in large steam turbines and motors that use sleeve-type bearings. International Electrotechnical Commission standard 60947-5-2 defines the technical details of proximity sensors. A proximity sensor adjusted to a short range is often used as a touch switch.
Proximity sensors are used on mobile devices. When the target is within nominal range, the device's lock-screen user interface appears, emerging from what is known as sleep mode. Once the device has awoken from sleep mode, if the proximity sensor's target remains in range for an extended period of time, the sensor will ignore it and the device will revert to sleep mode. For example, during a telephone call, proximity sensors play a role in detecting accidental touchscreen taps when mobiles are held to the ear (a sketch of this wake/sleep logic follows the lists below). Proximity sensors can also be used to recognize air hover manipulations, and an array of proximity-sensing elements can replace vision-camera or depth-camera based solutions for hand gesture detection.
Types of proximity sensor include capacitive sensors, capacitive displacement sensors, Doppler effect sensors, eddy-current sensors, inductive sensors, magnetic sensors (including magnetic proximity fuses), optical sensors (photoelectric sensors, photocells, laser rangefinders), passive sensors (such as passive thermal infrared), radar, reflection of ionizing radiation, sonar, ultrasonic sensors, fiber-optic sensors and Hall-effect sensors.
Applications include parking sensors (systems mounted on car bumpers that sense distance to nearby cars for parking), ground proximity warning systems for aviation safety, vibration measurements of rotating shafts in machinery, top-dead-centre/camshaft sensors in reciprocating engines, sheet-break sensing in paper machines, anti-aircraft warfare, roller coasters, conveyor systems, beverage and food can making lines, mobile devices (touch screens that come in close proximity to the face, and attenuating radio power in close proximity to the body in order to reduce radiation exposure), automatic faucets, motion detectors and occupancy sensors.
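The mobile-device wake/sleep behaviour described above can be modeled with a small state machine. This is a toy sketch with invented names and an assumed timeout; real handsets implement the equivalent logic in the operating system or sensor firmware:

```python
IGNORE_AFTER_S = 5.0  # assumed: how long a lingering target is tolerated

class ProximityWakeLogic:
    """Toy model of the lock-screen wake/sleep behaviour described above."""

    def __init__(self):
        self.awake = False
        self.target_since = None  # time the target first entered range

    def on_sample(self, target_in_range: bool, now: float) -> None:
        if not target_in_range:
            self.target_since = None          # target gone: nothing to track
            return
        if self.target_since is None:
            self.target_since = now
            self.awake = True                 # new target: show lock screen
        elif now - self.target_since > IGNORE_AFTER_S:
            self.awake = False                # lingering target: ignore it
                                              # and revert to sleep mode
```

Feeding the model periodic sensor samples reproduces the described behaviour: a target entering range wakes the screen, while a target that stays put is eventually ignored and the device goes back to sleep.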
Video Graphics Array
Video Graphics Array (VGA) is a graphics standard for video display controllers, first introduced with the IBM PS/2 line of computers in 1987, following CGA and EGA in earlier IBM personal computers. Through widespread adoption, the term has come to mean either an analog computer display standard, the 15-pin D-subminiature VGA connector, or the 640×480 resolution characteristic of the VGA hardware. VGA was the last IBM graphics standard to which the majority of PC clone manufacturers conformed, making it the lowest common denominator that all post-1990 PC graphics hardware can be expected to implement. It was followed by IBM's Extended Graphics Array (XGA) standard, but was superseded by numerous different extensions to VGA made by clone manufacturers, collectively known as Super VGA. Today, the VGA analog interface is used for high-definition video, including resolutions of 1080p and higher. While the transmission bandwidth of VGA is high enough to support higher-resolution playback, there can be picture quality degradation depending on cable quality and length.
How discernible this degradation is depends on the individual's eyesight and the display, though it is more noticeable when switching to and from digital inputs like HDMI or DVI. The VGA supports both all-points-addressable graphics modes and alphanumeric text modes. Standard graphics modes are 640×480 in 16 colors or monochrome; 640×350 or 640×200 in 16 colors or monochrome; 320×200 in 4 or 16 colors; and 320×200 in 256 colors. The 640×480 16-color and 320×200 256-color modes had redefinable palettes, with each entry selectable from within an 18-bit RGB table, although the high-resolution mode is most familiar from its use with a fixed palette under Microsoft Windows; the other color modes defaulted to standard EGA- or CGA-compatible palettes, but could still be redefined if desired using VGA-specific programming. Higher-resolution and other display modes are achievable with standard cards and most standard monitors. On the whole, a typical VGA system can produce displays with any combination of widths of 512 to 800 pixels in 16 colors, or 256 to 400 pixels in 256 colors, with heights of 200 lines, or 350 to 410 lines at a 70 Hz refresh rate, or 224 to 256 or 448 to 512 lines at a 60 Hz refresh rate, or 512 to 600 lines at reduced vertical refresh rates, depending on individual monitor compatibility.
For example, high-resolution modes with square pixels are available at 768×576 or 704×528 in 16 colors, or medium-low resolution at 320×240 with 256 colors. "Narrow" modes such as 256×224 tend to preserve the same pixel ratio as in e.g. the 320×240 mode unless the monitor is adjusted to stretch the image out to fill the screen, as they are derived by masking down the wider mode instead of altering pixel or line timings, but they can be useful for reducing memory requirements and pixel-addressing calculations for arcade game conversions or console emulators. Standard text modes are: 80×25 characters, rendered with a 9×16 pixel font for an effective resolution of 720×400, in either 16 colors or monochrome, the latter being compatible with legacy MDA-based applications; 40×25, using the same font grid, for an effective resolution of 360×400; and 80×43 or 80×50 in 16 colors, with an effective resolution of 640×344 or 640×400 pixels. As with the pixel-based graphics modes, additional text modes are technically possible, with an overall maximum of about 100×80 cells and an active area spanning about 88×64 cells, but these are rarely used, as it makes much more sense to just use a graphics mode, with a small proportional font, if a larger text display is required.
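The memory arithmetic behind these mode limits is straightforward. The sketch below is an illustration, not VGA documentation; the helper function is invented, while the byte counts it prints are the standard figures for these modes:

```python
import math

def mode_bytes(width: int, height: int, colors: int) -> int:
    """Bytes of video RAM needed for one frame at the given color depth."""
    bits_per_pixel = math.ceil(math.log2(colors))
    return width * height * bits_per_pixel // 8

for w, h, c in [(640, 480, 16), (640, 350, 16), (320, 200, 256), (320, 200, 4)]:
    print(f"{w}x{h} in {c:>3} colors: {mode_bytes(w, h, c):7,} bytes")

# 640x480 in 16 colors  -> 153,600 bytes (four 38,400-byte bit planes)
# 320x200 in 256 colors ->  64,000 bytes (the familiar 64 KB linear mode)
# All of the standard modes fit comfortably within the VGA's 256 KB of RAM.
```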
One variant sometimes seen is 80×30 or 80×60, using an 8×16 or 8×8 font and an effective 640×480 pixel display, which trades use of the more flickery 60 Hz mode for an additional 5 or 10 lines of text and square character blocks. VGA is referred to as an "array" instead of an "adapter" because it was implemented from the start as a single chip, an application-specific integrated circuit which replaced both the Motorola 6845 video address generator and the dozens of discrete logic chips that covered the full-length ISA boards of the MDA and CGA. More directly, it replaced many discrete logic chips on the EGA board. Its single-chip implementation allowed the VGA to be placed directly on a PC's motherboard with a minimum of difficulty, which in turn increased the reliability of the video subsystem by reducing the number of component connections, since the VGA required only video memory, timing crystals and an external RAMDAC. As a result, the first IBM PS/2 models were equipped with VGA on the motherboard, in contrast to all of the "family one" IBM PC desktop models (the PC, PC/XT, and PC AT), which required a display adapter installed in a slot in order to connect a monitor.
Volt

The volt (symbol: V) is the derived unit for electric potential, electric potential difference, and electromotive force. It is named after the Italian physicist Alessandro Volta. One volt is defined as the difference in electric potential between two points of a conducting wire when an electric current of one ampere dissipates one watt of power between those points. It is also equal to the potential difference between two parallel, infinite planes spaced 1 meter apart that create an electric field of 1 newton per coulomb, and to the potential difference between two points that will impart one joule of energy per coulomb of charge that passes through it. It can be expressed in terms of SI base units as V = potential energy / charge = J/C = kg·m²/(A·s³). It can also be expressed as amperes times ohms, watts per ampere, or joules per coulomb, equivalent to electronvolts per elementary charge: V = A·Ω = W/A = J/C = eV/e. The "conventional" volt, V90, defined in 1987 by the 18th General Conference on Weights and Measures and in use from 1990, is implemented using the Josephson effect for exact frequency-to-voltage conversion, combined with the caesium frequency standard.
For the Josephson constant, KJ = 2e/h, the "conventional" value KJ-90 = 0.4835979 GHz/μV (483,597.9 GHz/V) is used. This standard is realized using a series-connected array of several thousand or tens of thousands of junctions, excited by microwave signals between 10 and 80 GHz. Empirically, several experiments have shown that the method is independent of device design, measurement setup, etc., and no correction terms are required in a practical implementation. In the water-flow analogy, sometimes used to explain electric circuits by comparing them with water-filled pipes, voltage is likened to a difference in water pressure, while current is proportional to the amount of water flowing at that pressure. A resistor would be a reduced pipe diameter somewhere in the piping, and a capacitor/inductor could be likened to a "U"-shaped pipe where a higher water level on one side could store energy temporarily. The relationship between voltage and current is defined (in ohmic devices such as resistors) by Ohm's law. Ohm's law is analogous to the Hagen–Poiseuille equation, as both are linear models relating flux and potential in their respective systems.
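The frequency-to-voltage conversion can be sketched numerically. The constant below is the KJ-90 value quoted above; the array size and drive frequency are illustrative assumptions, not figures from the text:

```python
KJ_90 = 483_597.9e9  # Josephson constant, Hz per volt (= 0.4835979 GHz/uV)

def josephson_array_voltage(freq_hz: float, junctions: int,
                            step: int = 1) -> float:
    """Voltage across a series array with every junction on the same step.

    Each junction biased on quantum step n develops V = n * f / K_J,
    so large series arrays are needed to reach practical voltages.
    """
    return junctions * step * freq_hz / KJ_90

# e.g. ~20,000 series junctions driven at 75 GHz:
print(f"{josephson_array_voltage(75e9, 20_000):.3f} V")   # ~3.1 V
```

A single junction at these frequencies develops only on the order of 0.1 mV, which is why the standard is realized with arrays of thousands of junctions, as described above.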
The voltage produced by each electrochemical cell in a battery is determined by the chemistry of that cell (see Galvanic cell § Cell voltage). Cells can be combined in series for multiples of that voltage, or additional circuitry can be added to adjust the voltage to a different level. Mechanical generators can be constructed to any voltage in a range of feasibility. Nominal voltages of familiar sources: nerve cell resting potential, about 75 mV; single-cell rechargeable NiMH or NiCd battery, 1.2 V; single-cell non-rechargeable alkaline battery, 1.5 V; automobile battery, 12 V (some antique vehicles used 6.3 V); electric vehicle battery, 400 V when charged; household mains electricity (AC), 100 V in Japan, 120 V in North America, 230 V in Europe, Asia and Australia; rapid transit third rail, 600–750 V; high-speed train overhead power lines, 25 kV at 50 Hz (but see the list of railway electrification systems and 25 kV at 60 Hz for exceptions); high-voltage electric power transmission lines, 110 kV and up; lightning, varies, often around 100 MV.
In 1800, as the result of a professional disagreement over the galvanic response advocated by Luigi Galvani, Alessandro Volta developed the so-called voltaic pile, a forerunner of the battery, which produced a steady electric current. Volta had determined that the most effective pair of dissimilar metals to produce electricity was zinc and silver. In 1861, Latimer Clark and Sir Charles Bright coined the name "volt" for the unit of resistance. By 1873, the British Association for the Advancement of Science had defined the volt and farad. In 1881, the International Electrical Congress, now the International Electrotechnical Commission, approved the volt as the unit for electromotive force, making the volt equal to 10⁸ cgs units of voltage.
Form factor (mobile phones)
The form factor of a mobile phone is its size and style, as well as the layout and position of its major components. A bar phone takes the shape of a cuboid with rounded corners and/or edges; the name is derived from its rough resemblance to a chocolate bar in shape. This form factor is used by a variety of manufacturers, such as Nokia and Sony Ericsson. Bar-type smartphones have the screen and keypad on a single face; Sony had a well-known 'Mars Bar' phone, model CM-H333, in 1993. Bar phones with full keyboards are variants of bars: while technically the same as a regular bar phone, the keyboard and additional buttons give them a different appearance. Devices like these lost popularity afterwards; the BlackBerry line from Research In Motion was popular and influential in this category. "Brick" is a slang term usually used to refer to large, outdated rectangular phones, typically early devices with large batteries and electronics. Such early phones, such as the Motorola DynaTAC, have been displaced by newer, smaller models which offer greater portability thanks to smaller antennas and slimmer battery packs.
However, "brick" has more been applied to older phone models in general, including non-bar form factors, early touchscreen phones as well, due to their size and relative lack of functionality compared to current models on the market. The term "brick" has expanded beyond smartphones to include most non-working consumer electronics, including a game console, router, or other device, due to a serious misconfiguration, corrupted firmware, or a hardware problem, can no longer function, hence, is as technologically useful as a brick; the term derives from the vaguely cuboid shape of many electronic devices and the suggestion that the device can function only as a lifeless, square object, paperweight or doorstop. This term is used as a verb. For example, "I bricked my MP3 player when I tried to modify its firmware." It can be used as a noun, for example, "If it's corrupted and you apply using fastboot, your device is a brick." In the common usage of the term, "bricking" suggests that the damage is so serious as to have rendered the device permanently unusable.
A slate (or slab) is a smartphone form with few to no physical buttons, instead relying upon a touchscreen and an onscreen keyboard. The first commercially available touchscreen phone was a brick phone, the IBM Simon Personal Communicator, released in 1994. The iPhone, released in 2007, is largely responsible for the influence and success of this design as it is conceived today. Some unusual "slate" designs include the LG New Chocolate and the curved Samsung Galaxy Round. The phablet, or smartlet, is a subset of the slate/touchscreen form. A portmanteau of the words phone and tablet, phablets are a class of mobile device designed to combine or straddle the sizes of a slate smartphone and a tablet. Phablets have screens that measure between 5.3 and 6.7 inches, larger than most high-end slate smartphones of the time, which had to be 5.2 inches or less to be known as smartphones, though smaller than tablets. The multi-screen is of the slate form factor, but with two touchscreens; some, such as the LG V10 and LG V20, have a small separate screen above the main screen.
Other multi-screen form factors have screens on both sides of the phone. The Yotaphone and Siam 7X have normal touchscreens on the front, but on the back is an e-ink screen, which enables using the device in a fashion similar to reading a book. The front camera for taking selfies has become an essential feature of smartphones, but it makes the bezel-less screens that became the trend in the 2010s difficult to achieve. The Nubia X and Vivo NEX Dual Display solved this by combining the main camera with a smaller second rear screen, eliminating the front camera entirely. The taco form factor was popularized by the Nokia N-Gage, released in 2003. It was known as the "plastic taco" for its taco-like shape and the placement of the microphone on the side of the device, which gives the appearance of eating a taco when one talks into it. Other models include the Nokia 3300 and Nokia 5510. A smartphone in the form of a wristwatch is referred to as a watch phone. A flip or clamshell phone consists of two or more sections that are connected by hinges, allowing the phone to flip open and fold closed in order to become more compact.
When flipped open, the phone's screen and keyboard are available; when flipped shut, the phone becomes more portable than when it is opened for use. Motorola once owned a trademark for the term flip phone, but the term has become genericized and is used more often than clamshell in colloquial speech. Motorola was the manufacturer of the famed StarTAC flip phone in the 1990s, as well as the RAZR in the mid-2000s. There were also flip "down" phones, like the Motorola MicroTAC series, a style also widely used by Ericsson. In 2010, Motorola introduced a different kind of flip phone with its Backflip smartphone: when closed, one side is the screen and the other is a physical QWERTY keyboard, and the hinge is on a long edge of the phone instead of a short edge, so that when flipped out the screen sits above the keyboard. Another unusual flip form was seen on the luxury Serene, a partnership between Samsung and Bang & Olufsen. The term clamshell came to be used for this form factor as well.
Camcorder

A camcorder is an electronic device combining a video camera and a videocassette recorder. The earliest camcorders were tape-based; by 2006, digital recording had become the norm, with tape replaced by storage media such as small hard drives, mini-DVDs, internal flash memory and SD cards. More recent devices capable of recording video include camera phones and digital cameras intended primarily for still pictures. Early video cameras designed for television broadcast were large and heavy, mounted on special pedestals and wired to remote recorders in separate rooms; as technology improved, out-of-studio video recording became possible with compact video cameras and portable video recorders. Although the camera itself was compact, the need for a separate recorder made on-location shooting a two-person job. Specialized portable videocassette recorders were introduced by JVC and Sony, each releasing models for mobile work. Portable recorders meant that recorded video footage could be aired on the early-evening news, since it was no longer necessary to develop film.
In 1983, Sony released the Betacam system for professional use. A key component was a single camera-recorder unit, eliminating the cable between the camera and recorder and increasing the camera operator's freedom. The Betacam used the same cassette format as the Betamax, but with a different, incompatible recording format; it became standard equipment for broadcast news. Sony released the first consumer camcorder in 1983, the Betamovie BMC-100P. It used a Betamax cassette and rested on the operator's shoulder, due to a design not permitting a single-handed grip. That year, JVC released the first VHS-C camcorder, and Kodak announced a new 8 mm camcorder format. Sony introduced its compact 8 mm Video8 format in 1985; that year, Panasonic, RCA and Hitachi began producing camcorders using a full-size VHS cassette with a three-hour capacity. These shoulder-mount camcorders were used by videophiles, industrial videographers and college TV studios. Full-size Super-VHS camcorders were released in 1987, providing an inexpensive way to collect news segments or other videographies.
Sony upgraded Video8, releasing the Hi8 in competition with S-VHS. Digital technology emerged with the Sony D1, a device which recorded uncompressed data and required a large amount of bandwidth for its time. In 1992 Ampex introduced DCT, the first digital video format with data compression, using the discrete cosine transform algorithm present in most commercial digital video formats. In 1995 Sony, JVC, Panasonic and other video-camera manufacturers launched DV, which became a de facto standard for home video production, independent filmmaking and citizen journalism; that year, Ikegami introduced Editcam. Camcorders using DVD media were popular at the turn of the 21st century due to the convenience of being able to drop a disc into the family DVD player. Panasonic launched DVCPRO HD in 2000; the format was intended for professional camcorders and used full-size DVCPRO cassettes. In 2003 Sony, JVC, Canon and Sharp introduced HDV as the first affordable HD video format, due to its use of inexpensive MiniDV cassettes.
Sony introduced the XDCAM tapeless video format in 2003, and Panasonic followed in 2004 with its P2 solid state memory cards as a recording medium for DVCPRO HD video. In 2006 Panasonic and Sony introduced AVCHD as an inexpensive, high-definition video format; AVCHD camcorders are produced by Sony, Canon, JVC and Hitachi. About this time, some consumer-grade camcorders with hard disk and/or memory card recording used the MOD and TOD file formats, accessible by USB from a PC. In 2010, after the success of James Cameron's 2009 3D film Avatar, full 1080p HD 3D camcorders entered the market. With the proliferation of file-based digital formats, the relationship between recording media and recording format has weakened; with tapeless formats, recording media are simply storage for digital files. In 2011 Panasonic released the HDC-SDT750, a 2D camcorder which shoots in HD and is capable of shooting 3D with a conversion lens, while Sony released the HDR-TD10, a 3D camcorder whose 3D lens is built in. Panasonic has also released 2D camcorders with an optional 3D conversion lens.
The HDC-SD90, HDC-SD900, HDC-TM900 and HDC-HS900 are sold as "3D-ready": 2D camcorders with optional 3D capability that can be added at a later date. At CES 2014, Sony announced the Sony FDR-AX100, the first consumer/low-end professional camcorder with a 1-inch 20.9 MP sensor, able to shoot 4K video at 3840×2160 pixels and 30 or 24 fps in the XAVC-S format. When using the traditional AVCHD format, the camcorder supports 5.1 surround sound from its built-in microphone; this is, however, not supported in the XAVC-S format. The camera has a 3-step ND filter switch for maintaining a shallow depth of field or a softer appearance to motion. For one hour of 4K video shooting, the camera needs about 32 GB of storage to accommodate a data transfer rate of 50 Mbit/s.
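The storage requirement follows from simple bitrate arithmetic. A rough sketch, assuming a constant 50 Mbit/s stream and decimal units:

```python
bitrate_mbit_s = 50          # XAVC-S 4K video data rate quoted above
seconds = 3600               # one hour of shooting

gb = bitrate_mbit_s * seconds / 8 / 1000   # Mbit -> megabytes -> gigabytes
print(f"~{gb:.1f} GB of raw video stream per hour")
# ~22.5 GB for the video stream alone; audio, container overhead and the
# need for headroom on the card are consistent with the ~32 GB figure
# quoted above as a practical requirement.
```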