In the broadest definition, a sensor is a device, module, or subsystem whose purpose is to detect events or changes in its environment and send the information to other electronics, frequently a computer processor. A sensor is always used with other electronics. Sensors are used in everyday objects such as touch-sensitive elevator buttons and lamps which dim or brighten by touching the base, as well as in innumerable applications of which most people are never aware. With advances in micromachinery and easy-to-use microcontroller platforms, the uses of sensors have expanded beyond the traditional fields of temperature, pressure, or flow measurement, for example into MARG (magnetic, angular rate, and gravity) sensors. Moreover, analog sensors such as potentiometers and force-sensing resistors are still widely used. Applications include manufacturing and machinery, aerospace, medicine, and many other aspects of day-to-day life. A sensor's sensitivity indicates how much the sensor's output changes when the input quantity being measured changes. For instance, if the mercury in a thermometer moves 1 cm when the temperature changes by 1 °C, the sensitivity is 1 cm/°C.
Some sensors can affect what they measure; sensors are therefore designed to have as small an effect as possible on the measured quantity. Technological progress allows more and more sensors to be manufactured on a microscopic scale as microsensors using MEMS technology. In most cases, a microsensor reaches a higher speed and sensitivity compared with macroscopic approaches. A good sensor obeys the following rules: it is sensitive to the measured property; it is insensitive to any other property likely to be encountered in its application; and it does not influence the measured property. Most sensors have a linear transfer function; the sensitivity is then defined as the ratio between the output signal and the measured property. For example, if a sensor measures temperature and has a voltage output, the sensitivity is a constant with units of volts per degree; the sensitivity is the slope of the transfer function. Converting the sensor's electrical output to the measured units requires dividing the electrical output by the slope. In addition, an offset is often added or subtracted.
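The slope-and-offset conversion described above can be sketched in a few lines of Python; the sensitivity and offset values below are illustrative placeholders, not taken from any particular sensor.

```python
# Convert a linear sensor's electrical output back to measured units.
# Hypothetical sensor: output = sensitivity * temperature + offset_voltage,
# with sensitivity = 0.01 V/°C and an offset of 0.5 V (illustrative values).

SENSITIVITY_V_PER_C = 0.01   # slope of the transfer function
OFFSET_V = 0.5               # output voltage at 0 °C

def voltage_to_celsius(v_out: float) -> float:
    """Invert the linear transfer function: subtract the offset, divide by the slope."""
    return (v_out - OFFSET_V) / SENSITIVITY_V_PER_C

def celsius_to_voltage(t_c: float) -> float:
    """Forward transfer function of the hypothetical sensor."""
    return SENSITIVITY_V_PER_C * t_c + OFFSET_V

print(round(voltage_to_celsius(0.75), 6))  # ≈ 25.0 °C
```

The same two constants describe the sensor in both directions: the forward function models what the sensor outputs, and its inverse recovers the measured quantity.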
For example, −40 must be added to the output if an output of 0 corresponds to an input of −40 °C. For an analog sensor signal to be processed or used in digital equipment, it needs to be converted to a digital signal using an analog-to-digital converter. Since sensors cannot replicate an ideal transfer function, several types of deviations can occur which limit sensor accuracy. Since the range of the output signal is always limited, the output signal will reach a minimum or maximum when the measured property exceeds the limits; the full-scale range defines the maximum and minimum values of the measured property. The sensitivity may in practice differ from the value specified; this is called a sensitivity error, an error in the slope of a linear transfer function. If the output signal differs from the correct value by a constant, the sensor has an offset error, or bias; this is an error in the y-intercept of a linear transfer function. Nonlinearity is the deviation of a sensor's transfer function from a straight-line transfer function; it is defined by the amount the output differs from ideal behavior over the full range of the sensor, often noted as a percentage of the full range.
Deviation caused by rapid changes of the measured property over time is a dynamic error. This behavior is described with a Bode plot showing sensitivity error and phase shift as functions of the frequency of a periodic input signal. If the output signal changes independently of the measured property, this is defined as drift; long-term drift over months or years is caused by physical changes in the sensor. Noise is a random deviation of the signal. A hysteresis error causes the output value to vary depending on the previous input values: if a sensor's output differs depending on whether a specific input value was reached by increasing or by decreasing the input, the sensor has a hysteresis error. If the sensor has a digital output, the output is an approximation of the measured property; this error is called quantization error. If the signal is monitored digitally, the sampling frequency can cause a dynamic error, and if the input variable or added noise changes periodically at a frequency near a multiple of the sampling rate, aliasing errors may occur.
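Quantization error is easy to see in code. The sketch below models a hypothetical 3-bit ADC over a 0–8 V full-scale range (illustrative values); it also shows the saturation behavior described earlier, where inputs beyond the range clip to the minimum or maximum output code.

```python
# Quantization: a digital output can only take discrete levels, so the
# reported value approximates the true analog input. Hypothetical 3-bit ADC
# over a 0-8 V full-scale range (step size = 1 V).

FULL_SCALE_V = 8.0
BITS = 3
STEP_V = FULL_SCALE_V / (2 ** BITS)   # 1.0 V per code

def quantize(v_in: float) -> float:
    """Round the input to the nearest ADC code, then convert back to volts."""
    code = round(v_in / STEP_V)
    code = max(0, min(code, 2 ** BITS - 1))  # clip (saturate) at the range limits
    return code * STEP_V

# Within range, the quantization error is bounded by half a step:
v = 2.3
err = quantize(v) - v   # -0.3 V here, always within ±STEP_V/2 in range
```

A real ADC adds other error sources (offset, gain, nonlinearity) on top of this, but the half-step bound on in-range quantization error holds for any ideal converter.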
The sensor may to some extent be sensitive to properties other than the property being measured; for example, most sensors are influenced by the temperature of their environment. All these deviations can be classified as systematic errors or random errors. Systematic errors can sometimes be compensated for by means of some kind of calibration strategy. Noise is a random error that can be reduced by signal processing, such as filtering, at the expense of the dynamic behavior of the sensor. The resolution of a sensor is the smallest change it can detect in the quantity that it is measuring. The resolution of a sensor with a digital output is the resolution of the digital output; the resolution is related to the precision with which the measurement is made.
Microsoft Robotics Developer Studio
Microsoft Robotics Developer Studio (RDS) is a Windows-based environment for robot control and simulation. It is aimed at academic and commercial developers and handles a wide variety of robot hardware; it requires the Microsoft Windows 7 operating system. RDS is based on CCR (Concurrency and Coordination Runtime), a .NET-based library for managing asynchronous parallel tasks using message-passing, and on DSS (Decentralized Software Services), a lightweight service-oriented runtime which allows the orchestration of multiple services to achieve complex behaviors. Features include: a visual programming tool, Microsoft Visual Programming Language (VPL), for creating and debugging robot applications; web-based and Windows-based interfaces; 3D simulation; and easy access to a robot's sensors and actuators. The primary programming language is C#. Microsoft Robotics Developer Studio includes support for packages that add other services to the suite; those available include Soccer Simulation and Sumo Competition by Microsoft, a community-developed Maze Simulator (a program to create worlds with walls that can be explored by a virtual robot), and a set of services for OpenCV.
Most of the additional packages are hosted on CodePlex, and course materials are available. There are four main components in RDS: CCR, DSS, VPL, and VSE. CCR and DSS are also available separately for use in commercial applications that require a high level of concurrency and/or must be distributed across multiple nodes in a network; this package is called the DSS Toolkit. The tools for developing an MRDS application include a graphical environment, command-line tools for working with Visual Studio projects in C#, and 3D simulation tools. Visual Programming Language is a graphical development environment that uses a catalog of services and activities, which can be connected graphically: a service or an activity is represented by a block with inputs and outputs that just needs to be dragged from the catalog to the diagram. Linking can be done with the mouse; it allows you to define whether signals are simultaneous or not, and permits you to perform operations on transmitted values. VPL also allows you to generate the code of new "macro" services from diagrams created by users.
It is possible in VPL to customize services for different hardware elements. The RDS 3D simulation environment allows you to simulate the behavior of robots in a virtual world using NVIDIA PhysX technology, which includes advanced physics. There are several simulation environments in RDS, developed by SimplySim: Apartment, Factory, Modern House, Outdoor, and Urban. Many examples and tutorials are available for the different tools, which permits a fast understanding of MRDS. Several applications have been added to the suite, such as the Maze Simulator and the Soccer Simulation developed by Microsoft. The Kinect sensor can be used on a robot in the RDS environment, and RDS includes a simulated Kinect sensor; the Kinect Services for RDS are licensed for both commercial and non-commercial use. They depend on the Kinect for Windows SDK. Princeton University's DARPA Urban Grand Challenge autonomous car entry was programmed with MRDS. MySpace uses MRDS's parallel computing foundation libraries, CCR and DSS, for a non-robotic application in the back end of their site.
Indiana University uses MRDS in a non-robotic application to coordinate a high-performance computing network. In 2008, Microsoft launched a simulated robotics competition named RoboChamps using MRDS; the challenges included maze, sumo, and a Mars rover. The simulated environment and robots used by the competition were created by SimplySim, and the competition was sponsored by KIA Motors. The 2009 robotics and algorithm section of the Imagine Cup software competition used the MRDS visual simulation environment; the challenges of this competition were developed by SimplySim and are improved versions of the RoboChamps challenges. The complication and overhead required to run MRDS prompted Princeton Autonomous Vehicle Engineering to convert their Prospect 12 system from MRDS to IPC. The main RDS4 website has not been updated since June 29, 2012. Release history:
Robotics Studio 1.0 — December 18, 2006
Robotics Studio 1.5 — May 2007
Robotics Studio 1.5 "Refresh" — December 13, 2007
Robotics Developer Studio 2008 (Standard, Academic, and Express Editions) — November 18, 2008
Robotics Developer Studio 2008 R2 (Standard, Academic, and Express Editions) — June 17, 2009
Robotics Developer Studio 2008 R3 — May 20, 2010
With R3, Robotics Developer Studio 2008 became free, and the functionality of all editions and the CCR & DSS Toolkit was combined into a single free edition. R3 is no longer compatible with .NET Compact Framework development and no longer supports Windows CE. Robotics Developer Studio 4 — Release Date: March 8, 2012. This release adds full support for the Kinect sensor via the Kinect for Windows SDK V1. A Reference Platform Design is included in the documentation, with the first implementation being the Eddie robot from Parallax; it updates RDS to .NET 4.0 and XNA 4.0. ABB Group Robotics - ABB Connect for Microsoft Robotics Studio
Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current focuses in the field include emotion recognition from the face and hand gesture recognition. Users can use simple gestures to interact with devices without physically touching them. Many approaches have been made using cameras and computer vision algorithms to interpret sign language, and the identification and recognition of posture, gait, and human behaviors is also the subject of gesture recognition techniques. Gesture recognition can be seen as a way for computers to begin to understand human body language, thus building a richer bridge between machines and humans than primitive text user interfaces or GUIs, which still limit the majority of input to keyboard and mouse, and enabling interaction without any mechanical devices. Using the concept of gesture recognition, it is possible to point a finger at the computer screen so that the cursor will move accordingly.
This could make conventional input devices such as the mouse and keyboard redundant. Gesture recognition offers several advantages: it can be more accurate, highly stable, and time-saving, for example when unlocking a device. The major application areas of gesture recognition currently are: the automotive sector, consumer electronics, transit, gaming, unlocking smartphones, defence, home automation, and automated sign-language translation. Gesture recognition can be conducted with techniques from image processing; the literature includes ongoing work in the computer vision field on capturing gestures, or more general human pose and movements, with cameras connected to a computer. Gesture recognition and pen computing: pen computing reduces the hardware impact of a system and increases the range of physical-world objects usable for control beyond traditional digital objects like keyboards and mice; such implementations could enable a new range of hardware.
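As a toy illustration of the basic idea—not the method used by any product named here—the sketch below classifies a tracked sequence of 2-D pointer positions as a directional swipe from its net displacement. The distance threshold and screen coordinate convention are assumptions; real systems use computer vision and statistical models.

```python
# Toy gesture classifier: label a tracked 2-D point sequence as a swipe
# left/right/up/down from its net displacement.

def classify_swipe(points, min_dist=50.0):
    """points: list of (x, y) positions sampled over time."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    if max(abs(dx), abs(dy)) < min_dist:
        return "none"            # too little motion to count as a gesture
    if abs(dx) >= abs(dy):       # horizontal motion dominates
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"   # screen y grows downward

print(classify_swipe([(0, 0), (40, 5), (120, 10)]))  # right
```

Even this trivial classifier shows the shape of the problem: turn a stream of tracked positions into a discrete symbolic command.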
This idea may lead to the creation of holographic displays. The term gesture recognition has also been used to refer more narrowly to non-text-input handwriting symbols, such as inking on a graphics tablet, multi-touch gestures, and mouse gesture recognition; this is computer interaction through the drawing of symbols with a pointing device cursor. In computer interfaces, two types of gestures are distinguished. Offline gestures are processed after the user's interaction with the object is finished; an example is the gesture to activate a menu. Online gestures are direct-manipulation gestures, such as scaling or rotating a tangible object while the interaction is in progress. Touchless user interface is an emerging type of technology related to gesture control: the process of commanding the computer via body motion and gestures without touching a keyboard, mouse, or screen.
For example, Microsoft's Kinect is a touchless game interface. Touchless interfaces, in addition to gesture controls, are becoming popular because they provide the ability to interact with devices without physically touching them. A number of devices utilize this type of interface, such as laptops and televisions. Although touchless technology is mostly seen in gaming software, interest is now spreading to other fields, including the automotive and healthcare industries. Touchless technology and gesture control will soon be implemented in cars at levels beyond voice recognition; see the BMW 7 Series. A vast number of companies all over the world are producing gesture recognition technology. For example, Intel has published user-experience research showing how touchless multifactor authentication (MFA) can help healthcare organizations mitigate security risks while improving clinician efficiency and patient care; this touchless MFA solution combines facial recognition and device recognition capabilities for two-factor user authentication.
The aim of the project is to explore the use of touchless interaction within surgical settings, allowing images to be viewed and manipulated without contact through the use of camera-based gesture recognition technology. In particular, the project seeks to understand the challenges of these environments for the design and deployment of such systems, as well as to articulate the ways in which these technologies may alter surgical practice. While the primary concerns here are with maintaining conditions of asepsis, the use of these touchless gesture-based technologies offers other potential uses. Elliptic Labs' software suite delivers gesture and proximity functions by re-using the existing earpiece and microphone, otherwise used only for audio. Ultrasound signals sent through the air from speakers integrated in smartphones and tablets bounce against a hand, object, or head and are recorded by microphones integrated in these devices. In this way, Elliptic Labs' technology recognizes hand gestures and uses them to move objects on a screen, similar to the way bats use echolocation to navigate.
While these companies stand at the forefront of touchless technology at this time, there are many other companies and products that are trending as well and may add value.
Arduino is an open-source hardware and software company and user community that designs and manufactures single-board microcontrollers and microcontroller kits for building digital devices and interactive objects that can sense and control objects in both the physical and digital worlds. Its products are licensed under the GNU Lesser General Public License or the GNU General Public License, permitting the manufacture of Arduino boards and software distribution by anyone. Arduino boards are available commercially in preassembled form or as do-it-yourself kits. Arduino board designs use a variety of microcontrollers. The boards are equipped with sets of digital and analog input/output pins that may be interfaced to various expansion boards or breadboards and other circuits. The boards feature serial communications interfaces, including Universal Serial Bus on some models, which are also used for loading programs from personal computers. The microcontrollers are programmed using a dialect of features from the programming languages C and C++. In addition to using traditional compiler toolchains, the Arduino project provides an integrated development environment based on the Processing language project.
The Arduino project started in 2003 as a program for students at the Interaction Design Institute Ivrea (IDII) in Ivrea, Italy, aiming to provide a low-cost and easy way for novices and professionals to create devices that interact with their environment using sensors and actuators. Common examples of such devices intended for beginner hobbyists include simple robots and motion detectors. The name Arduino comes from a bar in Ivrea where some of the founders of the project used to meet; the bar was named after Arduin of Ivrea, the margrave of the March of Ivrea and King of Italy from 1002 to 1014. At that time, the students used a BASIC Stamp microcontroller at a cost of $50, a considerable expense for many students. In 2003, Hernando Barragán created the development platform Wiring as a Master's thesis project at IDII, under the supervision of Massimo Banzi and Casey Reas. Casey Reas is known for co-creating, with Ben Fry, the Processing development platform.
The project goal was to create simple, low-cost tools for creating digital projects by non-engineers. The Wiring platform consisted of a printed circuit board with an ATmega168 microcontroller, an IDE based on Processing, and library functions to program the microcontroller. In 2005, Massimo Banzi, with David Mellis, another IDII student, and David Cuartielles, added support for the cheaper ATmega8 microcontroller to Wiring, but instead of continuing the work on Wiring, they renamed the project Arduino. The initial Arduino core team consisted of Massimo Banzi, David Cuartielles, Tom Igoe, Gianluca Martino, and David Mellis, but Barragán was not invited to participate. Following the completion of the Wiring platform, lighter and less expensive versions were distributed in the open-source community. It was estimated in mid-2011 that over 300,000 official Arduinos had been commercially produced, and in 2013 that 700,000 official boards were in users' hands. In October 2016, Federico Musto, Arduino's former CEO, secured a 50% ownership of the company.
In April 2017, Wired reported that Musto had "fabricated his academic record.... On his company's website, on personal LinkedIn accounts, and on Italian business documents, Musto was until recently listed as holding a PhD from the Massachusetts Institute of Technology. In some cases, his biography also claimed an MBA from New York University." Wired reported that neither university had any record of Musto's attendance, and Musto admitted in an interview with Wired that he had never earned those degrees. Around that same time, Massimo Banzi announced that the Arduino Foundation would be "a new beginning for Arduino," but a year later the Foundation had still not been established, and the state of the project remained unclear. The controversy surrounding Musto continued when, in July 2017, he pulled many open-source licenses and code from the Arduino website, prompting scrutiny and outcry. In October 2017, Arduino announced its partnership with ARM Holdings; the announcement said, in part, "ARM recognized independence as a core value of Arduino... without any lock-in with the ARM architecture."
Arduino intends to continue to work with all technology architectures. In early 2008, the five co-founders of the Arduino project created a company, Arduino LLC, to hold the trademarks associated with Arduino. The manufacture and sale of the boards was to be done by external companies, and Arduino LLC would get a royalty from them. The founding bylaws of Arduino LLC specified that each of the five founders transfer ownership of the Arduino brand to the newly formed company. At the end of 2008, Gianluca Martino's company, Smart Projects, registered the Arduino trademark in Italy and kept this a secret from the other co-founders for about two years; this was revealed when the Arduino company tried to register the trademark in other areas of the world and discovered that it was already registered in Italy. Negotiations with Gianluca and his firm to bring the trademark under the control of the original Arduino company failed. In 2014, Smart Projects began refusing to pay royalties. They appointed a new CEO, Federico Musto, who renamed the company Arduino SRL and created the website arduino.org, copying the graphics and layout of the original arduino.cc.
This resulted in a rift in the Arduino development team. In January 2015, Arduino LLC filed a lawsuit against Arduino SRL. In May 2015, Arduino LLC created the worldwide trademark Genuino, used as its brand name outside the United States.
Rafael Lozano-Hemmer is a Mexican-Canadian electronic artist who works with ideas from architecture, technological theater, and performance. He holds a Bachelor of Science in physical chemistry from Concordia University in Montreal. Lozano-Hemmer lives and works in Montreal and Madrid. He was born in Mexico City in 1967 and emigrated to Canada in 1985 to study at the University of Victoria in British Columbia and at Concordia University in Montreal. The son of Mexico City nightclub owners, Lozano-Hemmer was drawn to science but could not resist joining in the creative activities of his friends. He worked in a molecular recognition lab in Montreal and published his research in chemistry journals. Though he did not pursue the sciences as a direct career, they have influenced his work in many ways, providing conceptual inspiration and practical approaches to creating his work. Lozano-Hemmer's work can be considered a blend of interactive art and performance art, using both large and small scales, outdoor settings, and a wide variety of audiovisual technologies.
Lozano-Hemmer is best known for creating and presenting theatrical interactive installations in public spaces across Europe and America. Using robotics, real-time computer graphics, film projections, positional sound, internet links, cell-phone interfaces, ultrasonic sensors, LED screens, and other devices, his installations seek to interrupt the homogenized urban condition by providing critical platforms for participation. Lozano-Hemmer's smaller-scaled sculptural and video installations explore themes of perception and surveillance; as an outgrowth of these various large-scale and performance-based projects, Lozano-Hemmer documents the works in photography editions that are exhibited. In 1999, he created Alzado Vectorial, in which internet participants directed searchlights over the central square in Mexico City; the work was repeated in Vitoria-Gasteiz in 2002, in Lyon in 2003, in Dublin in 2004, and in Vancouver in 2010. In 2007, he became the first artist to represent Mexico at the Venice Biennale, with a solo show at the Palazzo Soranzo Van Axel.
In 2006, his work 33 Questions Per Minute was acquired by The Museum of Modern Art in New York, and Subtitled Public is held in the Tate Collection in the United Kingdom. Several of Lozano-Hemmer's installations use words and sentences to add additional meaning; these texts elaborate upon a deeper meaning that involves a viewer's actions, changing or creating an effect upon the atmosphere and perception. Some of the text-based installations, such as Third Person and Subtitled Public, place words upon the viewer himself; because of the random nature of these texts, the viewer has no control over what they are labeled as, incurring a sense of helplessness while experiencing the pleasant and unpleasant connotations associated with the words placed upon them. Text-based installations such as 33 Questions Per Minute and There Is No Business Like No Business rely upon the willing participation of the viewer; these two forms of text installation are externally reflective, while the first two are internally reflective.
33 Questions Per Minute is an installation consisting of several screens programmed to generate possible questions and display them at a rate of 33 per minute. The computer can generate 55 billion unique questions, which would take over 3,000 years to display in full. In addition to viewing the automatically displayed questions, members of the public can submit their own questions into the system; their contributions show up on the screens and are registered by the program. Third Person is the second piece of the ShadowBox series of interactive displays with a built-in computerized tracking system; this piece shows the viewer's shadow, composed of hundreds of tiny words that are in fact all the verbs of the dictionary conjugated in the third person. The portrait of the viewer is drawn in real time by active words, which appear automatically to fill his or her silhouette. There Is No Business Like No Business is a blinking neon sign whose speed is directly proportional to the number of times the word "economy" has appeared in online news items within the past 24 hours.
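The installation's arithmetic is easy to check: at 33 questions per minute, 55 billion unique questions indeed take over 3,000 years to display.

```python
# Check the installation's arithmetic: displaying 55 billion unique
# questions at 33 questions per minute takes over 3,000 years.

questions = 55_000_000_000
per_minute = 33
minutes_per_year = 60 * 24 * 365          # ignoring leap years

years = questions / (per_minute * minutes_per_year)
print(round(years))  # roughly 3171 years
```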
Subtitled Public consists of an empty exhibition space where visitors are detected by a computerized surveillance system. When people enter the space, the system generates a subtitle for each person and projects it onto him or her; the subtitle is chosen at random from a list of all verbs conjugated in the third person. The only way of getting rid of a subtitle is to touch another person, which leads to the two subtitles being exchanged. In 1994, Lozano-Hemmer coined the term "relational architecture" for the technological actualization of buildings and the urban environment with alien memory. He aimed to transform the dominant narratives of a specific building or urban setting by superimposing audiovisual elements to affect and re-contextualize it. From 1997 to 2006, he built ten works of relational architecture, beginning with Displaced Emperors and ending with Under Scan. Lozano-Hemmer says, "I want buildings to pretend to be something other than themselves, to engage in a kind of dissimulation." Solar Equation was a large-scale public art installation consisting of a faithful simulation of the Sun, scaled 100 million times smaller than the real thing.
Commissioned by the Light in Winter Festival in Melbourne, the piece featured the world's largest spherical balloon, custom-manufactured for the project, tethered over Federation Square and animated using five projectors. The solar animation on the balloon was generated by live mathematical equations simulating the Sun's surface activity.
Dance Dance Revolution
Dance Dance Revolution (DDR), known as Dancing Stage for earlier games in Europe, Central Asia, the Middle East, South Asia, and Oceania, and for some other games in Japan, is a music video game series produced by Konami. Introduced in Japan in 1998 as part of the Bemani series, and released in North America and Europe in 1999, Dance Dance Revolution is the pioneering series of the rhythm and dance genre in video games. Players stand on a "dance platform" or stage and hit colored arrows laid out in a cross with their feet in response to musical and visual cues. Players are judged by how well they time their dance to the patterns presented to them and are allowed to choose more music to play if they receive a passing score. Dance Dance Revolution has been met with critical acclaim for its originality and stamina in the video game market. There have been dozens of arcade-based releases across several countries and hundreds of home video game console releases, promoting a music library of original songs produced by Konami's in-house artists and an eclectic set of licensed music from many different genres.
The DDR series has inspired similar games such as Pump It Up by Andamiro and In the Groove by Roxor Games. The series' current version is Dance Dance Revolution A20, released in 2019. The core gameplay involves the player stepping their feet to correspond with the arrows that appear on screen in time with the beat. During normal gameplay, arrows scroll upwards from the bottom of the screen and pass over a set of stationary arrows near the top (the "Step Zone"); when the scrolling arrows overlap the stationary ones, the player must step on the corresponding arrows on the dance platform, and the player is given a judgment for the accuracy of every step. Additional arrow types have been added in later mixes. Freeze Arrows, introduced in DDRMAX, are long green arrows that must be held down until they travel through the Step Zone; each of these arrows awards an "O.K.!" if held or an "N.G." if the arrow is released too early. An "N.G." decreases the life bar and, starting with DDR X, breaks any existing combo. DDR X introduced Shock Arrows, walls of arrows with lightning effects which must be avoided, awarding an "O.K.!" if avoided or an "N.G." if any of the dancer's panels are stepped on. An "N.G." for Shock Arrows has the same consequences as with Freeze Arrows, but hitting a Shock Arrow additionally hides future steps for a short period of time. Hitting the arrows in time with the music fills the "Dance Gauge", or life bar, while failure to do so drains it. If the Dance Gauge is exhausted during gameplay, the player fails the song and the game is over. Otherwise, the player is taken to the Results Screen, which rates the player's performance with a letter grade and a numerical score, among other statistics; the player may be given a chance to play again, depending on the settings of the particular machine. The default limit is three songs, though operators can set the limit between one and five. Aside from the Single play style, Dance Dance Revolution provides two other play styles: Versus, where two players can play Single simultaneously, and Double, where one player uses all eight panels. Prior to the 2013 release of Dance Dance Revolution, some games offered additional modes, such as Course mode and Battle mode.
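The timing-based judgment described above can be sketched as a set of nested timing windows around each arrow's target beat time. The window names are common in rhythm games, but the sizes below are illustrative placeholders, not the actual values used by any DDR release.

```python
# Toy step-judgment: grade a step by how far (in milliseconds) it lands
# from the arrow's target beat time. Window sizes are illustrative only.

JUDGMENT_WINDOWS_MS = [        # (max |offset|, judgment), tightest first
    (16.7, "Marvelous"),
    (33.3, "Perfect"),
    (91.7, "Great"),
    (133.3, "Good"),
]

def judge_step(step_time_ms: float, target_time_ms: float) -> str:
    """Return the judgment for a step at the given offset from the target."""
    offset = abs(step_time_ms - target_time_ms)
    for window, name in JUDGMENT_WINDOWS_MS:
        if offset <= window:
            return name
    return "Miss"              # outside every window: drains the life bar

print(judge_step(1025.0, 1000.0))  # Perfect (25 ms off the beat)
```

Grading each arrow this way, then summing the results into a score and a life-bar delta, is the essence of the Results Screen statistics.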
Earlier versions have a Couple/Unison Mode, where two players must cooperate to play the song; this mode became the basis for "TAG Play" in newer games. Depending on the edition of the game, dance steps are broken into various levels of difficulty, often by colour. Difficulty is loosely separated into 3–5 categories depending on the timeline: DDR 1st Mix established the three main difficulties and began using the foot rating with a scale of 1 to 8; in addition, each difficulty rating was labeled with a title. DDR 2nd Mix Club Version 2 increased the scale to 9, which was implemented in the main series beginning in DDR 3rd Mix. DDR 3rd Mix renamed the Maniac difficulty to "SSR" and made it playable through a special mode, which could only be accessed via an input code and was played on Flat by default; the SSR mode was eliminated in 3rd Mix Plus, and the Maniac routines were folded back into the regular game. In addition to the standard three difficulties, the first three titles of the series and their derivations featured an "Easy" mode, which provided simplified step charts for songs.
In this mode, one cannot access other difficulties, akin to the aforementioned SSR mode. While this mode was never featured again, it became the basis for the accessible Beginner difficulty implemented in newer games. DDR 4th Mix removed the difficulty names, instead organizing the difficulties by order. DDR 4th Mix Plus renamed several songs' Maniac charts as Maniac-S and Maniac-D while adding newer, harder stepcharts for the old ones as a "second" Maniac; these new charts were used as the default Maniac stepcharts in DDR 5th Mix while the older ones were removed. Beginning in DDRMAX, a "Groove Radar" was introduced, showing how difficult a particular sequence is in various categories, such as the maximum density of steps; the foot-rating difficulty was removed in favor of the Groove Radar. DDRMAX2 re-added the foot ratings and restored the difficulty names.
Scott Snibbe is an interactive media artist and entrepreneur. He is one of the first artists to work with projector-based interactivity, where a computer-controlled projection onto a wall or floor changes in response to people moving across its surface; his well-known full-body interactive work Boundary Functions premiered at Ars Electronica in 1998. In this floor-projected interactive artwork, people walk across a four-meter by four-meter floor; as they move, Boundary Functions uses a camera and projector to draw lines between all of the people on the floor, forming a Voronoi diagram. This diagram has strong significance when drawn around people's bodies, surrounding each person with lines that outline his or her personal space: the space closer to that person than to anyone else. Snibbe states that this work "shows that personal space, though we call it our own, is only defined by others and changes without our control". Snibbe has become more broadly known for creating some of the first interactive art apps for iOS devices.
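The Voronoi partition underlying Boundary Functions can be sketched directly: assign every point on the floor to the nearest person, and each person's cell is exactly the "personal space" described above. This is an illustrative brute-force sketch, not Snibbe's actual implementation (which used camera tracking and real-time rendering).

```python
# Illustrative sketch (not Snibbe's code): label each cell of a floor
# grid with the index of the nearest person. The set of cells sharing
# one label is that person's Voronoi region, i.e. their personal space.

def voronoi_cells(people, width, height):
    """people: list of (x, y) positions; returns a height x width grid
    where grid[y][x] is the index of the person nearest to (x, y)."""
    grid = []
    for y in range(height):
        row = []
        for x in range(width):
            # nearest person by squared Euclidean distance
            nearest = min(
                range(len(people)),
                key=lambda i: (people[i][0] - x) ** 2
                            + (people[i][1] - y) ** 2,
            )
            row.append(nearest)
        grid.append(row)
    return grid

# Two people at opposite ends of a small floor: the left columns
# belong to person 0, the right columns to person 1.
cells = voronoi_cells([(0, 0), (3, 0)], width=4, height=4)
```

The boundary lines the installation projects are the loci where two of these squared distances are equal; in practice one would compute them with a geometric Voronoi algorithm (e.g. Fortune's sweep) rather than a per-pixel scan.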
His first three apps (Gravilux, Bubble Harp, and Antograph), released in May 2010 as ports of screen-based artworks from his 1990s Dynamic Systems Series, all rose into the top ten of the iTunes Store's Entertainment section and have been downloaded over 400,000 times. Snibbe collaborated with Björk to produce Biophilia, the first full-length app album, released for iPad and iPhone in 2011. Snibbe received undergraduate and master's degrees in computer science and fine art from Brown University, where he studied with Dr. Andries van Dam and Dr. John Hughes. Snibbe studied animation at the Rhode Island School of Design with Amy Kravitz. After making several hand-drawn animated shorts, he turned to interactive art as his primary artistic medium; his first public interactive work, Motion Phone, won an award from Prix Ars Electronica in 1996 and established him as a contributor to the field. Snibbe's work has been shown at the Whitney Museum of American Art, the San Francisco Museum of Modern Art, The Kitchen, the NTT InterCommunication Center and the Institute of Contemporary Arts.
His work is shown and collected by science museums, including the Exploratorium, the New York Hall of Science, the Museum of Science and Industry, the Cité des Sciences et de l'Industrie, the London Science Museum and the Phaeno Science Center. He was featured on a December 18, 2011 episode of CNN's The Next List. He has received grants from the Rockefeller Foundation, the National Endowment for the Arts and National Video Resources, and awards from the Prix Ars Electronica Festival, the Trickfilmfestival Stuttgart, the Black Maria Film Festival and the Student Academy Awards. Snibbe has taught media art and computer science at UC Berkeley, the California Institute of the Arts and the San Francisco Art Institute. He worked as a computer scientist at Adobe Systems from 1994 to 1996 on the special-effects and animation software Adobe After Effects, and is named on six patents for work in animation and motion tracking. He was an employee at Interval Research from 1996 to 2000, where he worked on computer vision, computer graphics and haptics research projects, receiving several patents in those fields.
Snibbe is the founder of Snibbe Interactive, which develops and distributes immersive interactive experiences for use in museums and branding. In 2009, Snibbe presented Sona Research's first research paper, "Social Immersive Media", at the CHI 2009 conference, coining the term to describe interface techniques for creating effective immersive interactive experiences focused on social interaction; the paper won the conference's best paper award. In November 2013, Snibbe and Jaz Banga debated Laura Sydell and Christopher M. Kelty in an Oxford-style debate entitled "Patent Pending: Does the U.S. Patent System Stifle Innovation?"
Interactive Art for the Screen: Motion Sketch, 1989; Motion Phone, 1994; Bubble Harp, 1997; Gravilux, 1997; Myrmegraph, 1998; Emptiness is Form, 2000
iPhone and iPad Apps: Gravilux, 2010; Bubble Harp, 2010; Antograph, 2010; Tripolar, 2011; OscilloScoop, 2011
Interactive Projections: Boundary Functions, 1998; Shadow, 2002; Deep Walls, 2002; Shy, 2003; Impression, 2003; Depletion, 2003; Compliant, 2003; Concentration, 2003; Cause and Effect, 2004; Visceral Cinema: Chien, 2005; Shadow Bag, 2005; Central Mosaic, 2005; Outward Mosaic, 2006; Make Like a Tree, 2006; Falling Girl, 2008
Electromechanical Sculpture: Mirror, 2001; Circular Breathing, 2002; Blow Up, 2005
Internet Art: It's Out, 2001; Tripolar, 2002; Fuel, 2002; Cabspotting, 2005
Public Art Installations: You Are Here, New York Hall of Science, 2004; Women Hold up Half the Sky, Mills College, 2007; Transit, Los Angeles International Airport, 2009
Performance: In the Grace of the World, Saint Luke's Orchestra, 2008
Film: Lost Momentum, 1995; Brothers, 1990
See also: Interactive art; Electronic art; Computer art; Software art; Abstract film