Barnes & Noble Nook
The Barnes & Noble Nook is a brand of e-readers developed by American book retailer Barnes & Noble, based on the Android platform. The original device was announced in the United States in October 2009 and released the following month. It had a six-inch E-paper display and a separate, smaller color touchscreen that served as the primary input device, and was capable of Wi-Fi and AT&T 3G wireless connectivity. The original Nook was followed in November 2010 by a color LCD device called the Nook Color, in June 2011 by the Nook Simple Touch, and in November 2011 and February 2012 by the Nook Tablet. On April 30, 2012, Barnes & Noble entered into a partnership with Microsoft that spun off the Nook and college businesses into a subsidiary. On August 28, 2012, Barnes & Noble announced partnerships with retailers in the UK, which began offering Nook digital products in October 2012. In December 2014, B&N purchased Microsoft's Nook shares. Nook users may read nearly any Nook Store e-book, digital magazine, or newspaper for one hour once per day while connected to the store's Wi-Fi.
The Nook name and identity were devised and created by the Brand Development Group at R/GA. Barnes & Noble initially rejected the name, but the association of a nook with a familiar place to read was compelling enough to change the minds of the company's executives; this decision pivoted on an NPR article suggesting that women tend to read more than men. The name is claimed by Rex Wilder from his time consulting for Ammunition Design Group; it was among over 400 names he created, although that naming project ended with no name being chosen. In November 2017, B&N announced the third generation of the GlowLight e-reader; the device returned to a design more reminiscent of the original Simple Touch and dropped the IP67 certification. The GlowLight 3 has an enhanced lighting system that provides a cool white light during the day or in brightly lit rooms, but can manually or automatically switch to a night mode with an orange tone, emitting less blue light for reading in dark spaces.
In February 2014, B&N announced that a new Nook color tablet would be released that year. In June 2014, Barnes & Noble announced it would team up with Samsung to develop co-branded tablets titled the Samsung Galaxy Tab 4 Nook; the devices would pair Samsung's hardware, with a 7-inch display, with customized Nook software from Barnes & Noble. In September 2015, B&N released the Samsung Galaxy Tab S2 Nook, a Nook-branded Samsung Galaxy Tab S2 8" LCD tablet that includes some Samsung and B&N software. It runs Android 5.0.2 and features an 8-core CPU with 3 GB of RAM, 32 GB of internal storage, a microSD card slot, two cameras, and a 4,000 mAh battery, which B&N says will last for up to 14 hours of video use; it launched at a US$399.99 retail price. In October 2015, B&N released the Samsung Galaxy Tab E Nook, a Nook-branded Samsung Galaxy Tab E 9.6" LCD tablet that includes some Samsung, B&N, and Microsoft software. This tablet runs Android 5.1.1 on a 1.2 GHz Qualcomm Snapdragon 410 CPU, has 16 GB of storage and microSD card support, weighs 547 grams, and carries two cameras.
In November 2016, B&N released the Nook Tablet 7, a Nook-branded tablet with a 7" LCD screen at a resolution of 600 × 1024, retailing at $50. It runs Android 6.0 Marshmallow with Nook apps included, on a 1.3 GHz MediaTek CPU, and has a microSD card slot, Wi-Fi and Bluetooth, and a battery rated for up to 7 hours.
The original Nook came in two versions: one with both Wi-Fi and AT&T 3G wireless connectivity, and one with Wi-Fi only. The Wi-Fi + 3G version made its debut on November 22, 2009, at a retail price of US$259, with the built-in connectivity offering free access to the Barnes & Noble online store; it has a six-inch E Ink display and a separate, smaller color touchscreen that serves as the primary input device. The price was dropped to US$199 in June 2010, upon the release of the new Nook Wi-Fi, and to US$169 on May 25, 2011, with the announcement of the newer Nook Simple Touch Reader. The Nook Wi-Fi + 3G was discontinued in early 2011. The Wi-Fi-only version of the Nook 1st Edition is distinguishable by its white back panel.
The Nook Wi-Fi made its debut on June 21, 2010, at a retail price of US$149. With the announcement of the newer Nook Simple Touch Reader on May 25, 2011, the price was dropped to US$119, and in September 2011 it was dropped again, to US$89. The Nook Wi-Fi was discontinued in late 2011. Announced on May 25, 2011, the Simple Touch Reader was released on June 10, 2011 at a retail price of US$139. The Simple Touch is a Wi-Fi-only Nook with an infrared touchscreen, an E Ink display, and a battery life of up to two months; it weighs 212 grams, with dimensions of 6.5" × 5" × 0.47". Its price dropped to US$99 on November 7, 2011, to US$79 on December 9, 2012, and to US$59 on December 4, 2013. In February 2014, the Simple Touch Reader was discontinued, phased out in favor of the GlowLight. On April 12, 2012, a Nook Simple Touch Reader with built-in LED lighting, called the "GlowLight", was released at a retail price of US$139.
This model is distinguishable from the non-GlowLight model by a gray bezel on the outer edge. On September 3
Produced by Boston-based IDG World Expo, Macworld/iWorld was a trade show with conference tracks dedicated to the Apple Macintosh platform, held annually in the United States during January. Known earlier as Macworld Expo and then Macworld Conference & Exposition, the gathering dates back to 1985. Macworld is the most-read Macintosh magazine in North America and a trademark of Mac Publishing, a wholly owned subsidiary of International Data Group; IDG World Expo is also a subsidiary of IDG. The conference tracks require large admission fees and last a few days longer than the Expo, which runs three or four days. Attendees can visit the exhibits set up by hardware manufacturers and software publishers that support the Macintosh platform. On December 18, 2008, Apple announced that the 2009 Macworld Conference & Expo would be the last in which the company would participate. On October 14, 2014, IDG suspended Macworld/iWorld indefinitely. The first Macworld Expo occurred in 1985 in San Francisco; the conference itself was created by Peggy Kilburn, who helped to increase the size and profit of the event during her tenure.
Among the speakers recruited by Kilburn were David Pogue, Steve Case, and Bob LeVitus, as well as representatives from BMUG, LaserBoard, and other major user groups. The San Francisco Expo was held in Brooks Hall near the San Francisco Civic Center from 1985 until 1993, when the expansion of the Moscone Center allowed the show to be consolidated in one location, where it has been held ever since. Until 2005, the U.S. shows were held semiannually, with a January show in San Francisco and an additional summer show held in the eastern US. The summer event was held in Boston at the Bayside Expo & Executive Conference Center, expanding to a dual presence at the World Trade Center Boston, and from 1998 to 2003 it took place in New York City's Jacob K. Javits Convention Center. The 2004 and 2005 summer shows, retitled Macworld Conference & Expo, took place in Boston, although without Apple's participation. Other companies followed Apple's lead, canceling or reducing the size of their own exhibits, which resulted in reduced attendance compared with previous Macworld conferences.
On September 16, 2005, IDG announced that no further summertime shows would be held in New York City or Boston. The show has also taken place in other cities: a Tokyo show, produced by IDG World Expo Japan, was held at Makuhari Messe and moved to Tokyo Big Sight in 2002, and Macworld Expo Summit, a version of the show targeted at U.S. government customers, was held at the Washington Convention Center in Washington, D.C. as late as 1994. In 2004, Macworld UK, part of the IDG UK division of IDG, created two Macworld Conference events on its own: one standalone conference, and one conference adjoining the MacExpo trade show in London. Since 1997, the show had been known for its keynote presentations by Apple CEO Steve Jobs. The 1987 Boston MacWorld Expo was held on August 11–13. The most significant product introduction at the show was Bill Atkinson's HyperCard; more than 3,000 copies of the software were handed out. MultiFinder, Apple File Exchange, the ImageWriter LQ, EtherTalk, AppleShare PC, and the AppleFax Modem were among Apple's product announcements.
MacUser's review of the show concluded positively, saying that it was "revealing and disappointing. While the Mac is becoming the business machine of choice through much of corporate America, the show didn't have the sterile atmosphere that pure business trade shows have. Most of the time it was plain outright exciting, and the promise of the future, always in the air was wholly positive." The San Francisco MacWorld had 400 exhibits. Outbound Computers demonstrated the first Macintosh-compatible portable computers at the Boston show, preceding Apple's own introduction of the PowerBook by a couple of months. MacWorld Expo took place in three locations: San Francisco, Washington DC, and Boston. Apple introduced the "Power Surge" line of Power Macintosh computers at the Boston show, consisting of the Power Macintosh 8500, 7500, and 7200. In Boston, Steve Jobs, who was the CEO of Pixar at the time, gave a status report on Apple and addressed some of the comments being made about the company: "Apple has become irrelevant", "Apple can't execute anything", and "Apple's culture is anarchy."
Apple's sales were $11.1 billion in 1995, $9.5 billion in 1996, and about $7 billion in 1997. Steve stated that the first steps Apple was going to take were: a new Board of Directors, Focus on Relevance, Invest in Core Assets, Meaningful Partnerships, and a New Product Paradigm. He announced the new Board of Directors, including Ed Woolard, former CEO of DuPont, and then addressed Apple's market focus. Apple was the dominant market leader for creative professionals: 80% of all computers used in advertising, graphic design, and printing were Apple computers, and 64% of internet websites were created on a Macintosh. Apple was also the largest education company in the world, selling 60% of all computers in education, with over $2 billion in annual revenue. Steve said Apple's core assets were the Apple brand and Mac OS, which had yet to be fully exploited; he said Mac OS was still the best thing in the world
An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage, and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is executed directly by the hardware and makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer, from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows with a market share of around 82.74%; macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, Google's Android reached up to 70% of use in 2017; according to third-quarter 2016 data, Android on smartphones is dominant with 87.5 percent and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a year-on-year decrease in market share of 5.2 percent, while other operating systems amount to just 0.3 percent.
Linux distributions are dominant in the supercomputing sector. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can run only one program at a time, while a multi-tasking operating system allows more than one program to run concurrently; this is achieved by time-sharing, where the available processor time is divided between multiple processes. These processes are each interrupted in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized as preemptive or co-operative. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux—as well as non-Unix-like ones, such as AmigaOS—support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to yield time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking.
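The time-slicing idea behind preemptive multitasking can be illustrated with a short sketch (a toy round-robin model for illustration only, not any particular operating system's scheduler; the job names and quantum are invented):

```python
from collections import deque

def round_robin(jobs, quantum):
    """Simulate preemptive time slicing: each job runs for at most
    `quantum` units, then is preempted and moved to the back of the
    queue until its remaining time is exhausted.
    `jobs` is a list of (name, total_time) pairs."""
    queue = deque(jobs)
    timeline = []                        # (name, units_run) per slice
    while queue:
        name, remaining = queue.popleft()
        slice_ = min(quantum, remaining)
        timeline.append((name, slice_))
        if remaining > slice_:           # job not finished: requeue it
            queue.append((name, remaining - slice_))
    return timeline

# Two jobs sharing the CPU in slices of 2 units
schedule = round_robin([("A", 3), ("B", 2)], quantum=2)
```

With these inputs the timeline alternates between A and B until both complete, which is the interleaving a preemptive scheduler enforces regardless of whether the jobs cooperate.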
32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify the processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with it at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage, and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer; the development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
In an OS, distributed, and cloud computing context, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and in cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems; they are designed to operate on small machines, such as PDAs, with less autonomy. They are able to operate with a limited number of resources and are compact and efficient by design. Windows CE and Minix 3 are examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single-address-space machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled the use of runtime libraries and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them that were similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plugboards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the pri
Debugging is the process of finding and resolving defects or problems within a computer program that prevent correct operation of computer software or a system. Debugging tactics can involve interactive debugging, control flow analysis, unit testing, integration testing, log file analysis, monitoring at the application or system level, memory dumps, and profiling. The terms "bug" and "debugging" are popularly attributed to Admiral Grace Hopper in the 1940s. While she was working on a Mark II computer at Harvard University, her associates discovered a moth stuck in a relay, thereby impeding operation, whereupon she remarked that they were "debugging" the system. However, the term "bug", in the sense of "technical error", dates back at least to 1878 and Thomas Edison, and the term "debugging" seems to have been used in aeronautics before entering the world of computers. Indeed, in an interview Grace Hopper remarked that she was not coining the term; the moth fit the existing terminology, so it was saved. J. Robert Oppenheimer used the term in a letter to Dr. Ernest Lawrence at UC Berkeley, dated October 27, 1944, regarding the recruitment of additional technical staff.
The Oxford English Dictionary entry for "debug" quotes the term "debugging" used in reference to airplane engine testing in a 1945 article in the Journal of the Royal Aeronautical Society; an article in "Airforce" also refers to debugging, this time of aircraft cameras. Hopper's bug was found on September 9, 1947, but the term was not adopted by computer programmers until the early 1950s. The seminal article by Gill in 1951 is the earliest in-depth discussion of programming errors, but it does not use the term "bug" or "debugging". In the ACM's digital library, the term "debugging" is first used in three papers from the 1952 ACM National Meetings, two of which use the term in quotation marks. By 1963 "debugging" was a common enough term to be mentioned in passing without explanation on page 1 of the CTSS manual. Kidwell's article Stalking the Elusive Computer Bug discusses the etymology of "bug" and "debug" in greater detail. As software and electronic systems have become more complex, the various common debugging techniques have expanded with more methods to detect anomalies, assess impact, and schedule software patches or full updates to a system.
The words "anomaly" and "discrepancy" can be used as more neutral terms, to avoid the words "error", "defect", and "bug" where there might be an implication that all so-called errors, defects, or bugs must be fixed. Instead, an impact assessment can be made to determine whether changes to remove an anomaly would be cost-effective for the system, or whether a scheduled new release might render the change unnecessary. Not all issues are mission-critical in a system, and it is important to avoid the situation where a change might be more upsetting to users, long-term, than living with the known problem. Basing decisions on the acceptability of some anomalies can avoid a culture of a "zero-defects" mandate, where people might be tempted to deny the existence of problems so that the result would appear as zero defects. Considering collateral issues, such as the cost-versus-benefit impact assessment, broader debugging techniques expand to determine the frequency of anomalies to help assess their impact on the overall system.
Debugging ranges in complexity from fixing simple errors to performing lengthy and tiresome tasks of data collection and scheduling updates. The debugging skill of the programmer can be a major factor in the ability to debug a problem, but the difficulty of software debugging varies with the complexity of the system and depends, to some extent, on the programming language used and the available tools, such as debuggers. Debuggers are software tools which enable the programmer to monitor the execution of a program, stop it, restart it, set breakpoints, and change values in memory; the term debugger can also refer to the person doing the debugging. High-level programming languages, such as Java, make debugging easier, because they have features such as exception handling and type checking that make real sources of erratic behaviour easier to spot. In programming languages such as C or assembly, bugs may cause silent problems such as memory corruption, and it is difficult to see where the initial problem happened.
In those cases, memory debugger tools may be needed. In certain situations, general-purpose software tools that are language-specific in nature can be useful; these take the form of static code analysis tools. These tools look for a specific set of known problems, some common and some rare, within the source code, concentrating more on the semantics than on the syntax that compilers and interpreters check. Some tools claim to be able to detect over 300 different problems. Both commercial and free tools exist for various languages; these tools can be useful when checking large source trees, where it is impractical to do code walkthroughs. A typical example of a problem detected would be a variable dereference that occurs before the variable is assigned a value; as another example, some such tools perform strong type checking when the language does not require it. Thus, they are better at locating errors in code that is syntactically correct, though these tools have a reputation for false positives. The old Unix lint program is an early example.
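A toy version of such a static check—flagging a variable that is read before any assignment in straight-line, module-level code—can be sketched with Python's own ast module (a deliberate simplification: real analyzers also handle scopes, branches, and data flow):

```python
import ast
import builtins

def use_before_assign(source):
    """Report (line, name) pairs where a name is loaded before any
    assignment to it, ignoring Python builtins. Handles only
    straight-line module-level code, unlike a real analyzer."""
    assigned, problems = set(), []
    for stmt in ast.parse(source).body:
        # First flag reads of names that have not been assigned yet...
        for node in ast.walk(stmt):
            if (isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load)
                    and node.id not in assigned
                    and not hasattr(builtins, node.id)):
                problems.append((node.lineno, node.id))
        # ...then record the names this statement assigns.
        for node in ast.walk(stmt):
            if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
    return problems
```

Running it on `"y = x + 1\nx = 2\n"` flags `x` on line 1, the same class of defect lint-style tools report.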
For debugging electronic hardware (e.g
In information systems, a tag is a keyword or term assigned to a piece of information. This kind of metadata helps describe an item and allows it to be found again by browsing or searching. Tags are chosen informally by the item's creator or by its viewer, depending on the system, although they may also be chosen from a controlled vocabulary. Tagging was popularized by websites associated with Web 2.0 and is an important feature of many Web 2.0 services; it is now part of other database systems, desktop applications, and operating systems. People use tags to aid classification, mark ownership, note boundaries, and indicate online identity. Tags may take the form of words, images, or other identifying marks. An analogous example of tags in the physical world is museum object tagging. People were using textual keywords to classify information and objects long before computers. Computer-based tagging gained popularity due to the growth of social bookmarking, image sharing, and social networking websites; these sites allow users to manage labels that categorize content using simple keywords.
Websites that include tags often display collections of them as tag clouds, as do some desktop applications. On websites that aggregate the tags of all users, an individual user's tags can be useful both to them and to the larger community of the website's users. Tagging systems have sometimes been classified into two kinds: top-down and bottom-up. Top-down taxonomies are created by an authorized group of designers, whereas bottom-up taxonomies are created by all users. This definition of "top-down" and "bottom-up" should not be confused with the distinction between a single hierarchical tree structure and multiple non-hierarchical sets. Some researchers and applications have experimented with combining hierarchical and non-hierarchical tagging to aid in information retrieval; others are combining top-down and bottom-up tagging, including in some large library catalogs such as WorldCat. When tags or other taxonomies have further properties such as relationships and attributes, they constitute an ontology. Metadata tags as described in this article should not be confused with the use of the word "tag" in some software to refer to an automatically generated cross-reference.
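A tag cloud of the kind mentioned above is typically rendered by mapping each tag's frequency to a font size. A minimal linear-scaling sketch (the pixel range and sample tags are illustrative, not any particular site's algorithm):

```python
from collections import Counter

def tag_cloud(tagged_items, min_px=10, max_px=32):
    """Map each tag to a font size (in pixels) that scales linearly
    with how many items carry that tag. `tagged_items` is a list of
    tag lists, one per item."""
    counts = Counter(tag for tags in tagged_items for tag in tags)
    lo, hi = min(counts.values()), max(counts.values())
    span = (hi - lo) or 1          # avoid division by zero when all equal
    return {tag: min_px + (c - lo) * (max_px - min_px) // span
            for tag, c in counts.items()}

# Two items: the first tagged "python" and "web", the second "python"
sizes = tag_cloud([["python", "web"], ["python"]])
```

The most frequent tag is rendered at the maximum size and the least frequent at the minimum, which is the visual weighting a tag cloud conveys.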
The use of keywords as part of an identification and classification system long predates computers. Paper data storage devices, notably edge-notched cards, that permitted classification and sorting by multiple criteria were in use prior to the twentieth century, and faceted classification has been used by libraries since the 1930s. In the late 1970s and early 1980s, the Unix text editor Emacs offered a companion software program called Tags that could automatically build a table of cross-references, called a tags table, that Emacs could use to jump between a function call and that function's definition. This use of the word "tag" did not refer to metadata tags, but was an early use of the word "tag" in software to refer to a word index. Online databases and early websites deployed keyword tags as a way for publishers to help users find content. In the early days of the World Wide Web, the keywords meta element was used by web designers to tell web search engines what the web page was about, but these keywords were only visible in a web page's source code and were not modifiable by users.
In 1997, the collaborative portal "A Description of the Equator and Some ØtherLands", produced by documenta X, used the folksonomic term Tag for its co-authors and guest authors on its Upload page. In "The Equator", the term Tag for user input was described as an abstract literal or keyword to aid the user. However, users defined singular Tags and did not share them at that point. In 2003, the social bookmarking website Delicious provided a way for its users to add "tags" to their bookmarks. Within a couple of years, the photo sharing website Flickr allowed its users to add their own text tags to each of their pictures, constructing flexible and easy metadata that made the pictures searchable. The success of Flickr and the influence of Delicious popularized the concept, and other social software websites—such as YouTube and Last.fm—also implemented tagging. In 2005, the Atom web syndication standard provided a "category" element for inserting subject categories into web feeds, and in 2007 Tim Bray proposed a "tag" URN.
Many blog systems allow authors to add free-form tags to a post, along with placing the post into a predetermined category. For example, a post may display the tags it has been given; each of those tags is a web link leading to an index page listing all of the posts associated with that tag. The blog may have a sidebar listing all the tags in use on that blog, with each tag leading to an index page. To reclassify a post, an author edits its list of tags, and all connections between posts are automatically updated by the blog software; some desktop applications an
In physics, a rigid body is a solid body in which deformation is zero or so small that it can be neglected. The distance between any two given points on a rigid body remains constant in time regardless of external forces exerted on it. A rigid body is considered as a continuous distribution of mass. In the study of special relativity, a perfectly rigid body does not exist, while in quantum mechanics a rigid body is thought of as a collection of point masses: for instance, molecules are often seen as rigid bodies. The position of a rigid body is the position of all the particles of which it is composed. To simplify the description of this position, we exploit the property that the body is rigid, namely that all its particles maintain the same distance relative to each other. If the body is rigid, it is sufficient to describe the position of at least three non-collinear particles; this makes it possible to reconstruct the position of all the other particles, provided that their time-invariant position relative to the three selected particles is known.
In practice, however, a different, mathematically more convenient but equivalent approach is used. The position of the whole body is represented by the linear position of the body, namely the position of one of its particles chosen as a reference point, together with the angular position of the body. Thus, the position of a rigid body has two components: linear and angular, respectively. The same is true for other kinematic and kinetic quantities describing the motion of a rigid body, such as linear and angular velocity, momentum, and kinetic energy. The linear position can be represented by a vector with its tail at an arbitrary reference point in space and its tip at an arbitrary point of interest on the rigid body, often coinciding with its center of mass or centroid; this reference point may define the origin of a coordinate system fixed to the body. There are several ways to numerically describe the orientation of a rigid body, including a set of three Euler angles, a quaternion, or a direction cosine matrix.
All these methods define the orientation of a basis set which has a fixed orientation relative to the body, relative to another basis set from which the motion of the rigid body is observed. For instance, a basis set with fixed orientation relative to an airplane can be defined as a set of three orthogonal unit vectors b1, b2, b3, such that b1 is parallel to the chord line of the wing and directed forward, b2 is normal to the plane of symmetry and directed rightward, and b3 is given by the cross product b3 = b1 × b2. In general, when a rigid body moves, both its position and orientation vary with time. In the kinematic sense, these changes are referred to as translation and rotation, respectively. Indeed, the position of a rigid body can be viewed as a hypothetical translation and rotation of the body starting from a hypothetical reference position. Velocity and angular velocity are measured with respect to a frame of reference. The linear velocity of a rigid body is a vector quantity, equal to the time rate of change of its linear position.
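The airplane example can be checked numerically; the sketch below computes b3 = b1 × b2 with the standard right-handed component formula, using placeholder unit vectors for b1 and b2 (the actual body-fixed directions depend on the aircraft):

```python
def cross(a, b):
    """Right-handed cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Illustrative body-fixed basis: b1 forward along the wing chord line,
# b2 rightward, normal to the plane of symmetry
b1 = (1.0, 0.0, 0.0)
b2 = (0.0, 1.0, 0.0)
b3 = cross(b1, b2)   # completes the orthogonal, right-handed triad
```

Because b1 and b2 are orthogonal unit vectors, b3 is automatically a unit vector perpendicular to both, so the three together form a right-handed orthonormal basis.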
Thus, it is the velocity of a reference point fixed to the body. During purely translational motion, all points on a rigid body move with the same velocity. However, when motion involves rotation, the instantaneous velocity of any two points on the body will not be the same. Two points of a rotating body will have the same instantaneous velocity only if they happen to lie on an axis parallel to the instantaneous axis of rotation. Angular velocity is a vector quantity that describes the angular speed at which the orientation of the rigid body is changing and the instantaneous axis about which it is rotating. All points on a rigid body experience the same angular velocity at all times. During purely rotational motion, all points on the body change position except for those lying on the instantaneous axis of rotation; the relationship between orientation and angular velocity is not directly analogous to the relationship between position and velocity. Angular velocity is not the time rate of change of orientation, because there is no such concept as an orientation vector that can be differentiated to obtain the angular velocity.
The angular velocity of a rigid body B in a reference frame N is equal to the sum of the angular velocity of a rigid body D in N and the angular velocity of B with respect to D: ᴺωᴮ = ᴺωᴰ + ᴰωᴮ. In this case, rigid bodies and reference frames are indistinguishable and interchangeable. For any set of three points P, Q, R, the position ve
Parallax scrolling is a technique in computer graphics where background images move past the camera more slowly than foreground images, creating an illusion of depth in a 2D scene and adding to the sense of immersion in the virtual experience. The technique grew out of the multiplane camera technique used in traditional animation since the 1930s. Parallax scrolling was popularized in 2D computer graphics and video games by the arcade games Moon Patrol and Jungle Hunt, both released in 1982, though some parallax scrolling had earlier been used by the 1981 arcade game Jump Bug. There are four main methods of parallax scrolling used in titles for arcade system boards, video game consoles, and personal computer systems. Some display systems support multiple background layers that can be scrolled independently in horizontal and vertical directions and composited on one another, simulating a multiplane camera. On such a display system, a game can produce parallax by changing each layer's position by a different amount in the same direction.
Layers that move more are perceived to be closer to the virtual camera. Layers can be placed in front of the playfield—the layer containing the objects with which the player interacts—for various reasons, such as to provide increased dimension, obscure some of the action of the game, or distract the player. Programmers may make pseudo-layers out of sprites—individually controllable moving objects drawn by hardware on top of or behind the layers—if they are available on the display system. For instance, Star Force, an overhead-view vertically scrolling shooter for the NES, used this for its starfield, and Final Fight for the Super NES used the technique for the layer in front of the main playfield. The Amiga has sprites which can have any height and can be repositioned horizontally with the Copper co-processor, which makes them ideal for this purpose. Risky Woods on the Amiga uses sprites multiplexed with the Copper to create an entire fullscreen parallax background layer as an alternative to the system's dual-playfield mode.
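A sprite pseudo-layer of the starfield kind can be sketched as below. The constants and spacing are made up for illustration; a real implementation would write these positions into the hardware's sprite registers each frame.

```python
# Sketch of a sprite pseudo-layer: a handful of hardware sprites are reused
# across the screen to fake an extra scrolling star layer behind the playfield.
NUM_SPRITES = 8
SCREEN_W = 256  # horizontal resolution in pixels (illustrative)

def star_positions(frame, speed):
    """x position of each star sprite on the current frame, wrapping at the edge."""
    return [(i * 37 - frame * speed) % SCREEN_W for i in range(NUM_SPRITES)]
```

Because the stars scroll at a `speed` independent of the background layer's scroll rate, they read as a separate, more distant plane even on hardware with a single true layer.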
Scrolling displays built up of individual tiles can be made to 'float' over a repeating background layer by animating the individual tiles' bitmaps in order to portray the parallax effect. Color cycling can be used to animate tiles across the whole screen; this software effect gives the illusion of another layer. Many games used this technique for a scrolling star-field, but sometimes a more intricate or multi-directional effect is achieved, such as in the game Parallax by Sensible Software. In raster graphics, the lines of pixels in an image are composited and refreshed in top-to-bottom order with a slight delay between drawing one line and drawing the next. Games designed for older graphical chipsets—such as those of the third and fourth generations of video game consoles, those of dedicated TV games, or those of similar handheld systems—take advantage of these raster characteristics to create the illusion of more layers. Some display systems have only one layer; these include most of the classic 8-bit systems.
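Color cycling can be sketched as a palette rotation: instead of redrawing any pixels, a range of palette entries is rotated each frame, so every pixel indexing those entries appears to move. The palette contents below are placeholders.

```python
# Sketch of color cycling: rotate a segment of the palette by one entry
# per frame; pixels that index the cycled entries appear animated.
palette = ["black", "white", "grey", "dark-grey"]  # entries 0..3

def cycle(palette, start, end):
    """Return a palette with entries in [start, end) rotated by one position."""
    seg = palette[start:end]
    return palette[:start] + [seg[-1]] + seg[:-1] + palette[end:]

frame1 = cycle(palette, 1, 4)
# frame1 == ["black", "dark-grey", "white", "grey"]
```

Cycling the three-entry segment three times restores the original palette, which is what makes the animation loop seamlessly.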
The more sophisticated games on such systems divide the layer into horizontal strips, each with a different position and rate of scrolling. Strips higher on the screen typically represent things farther from the virtual camera, or one strip may be held stationary to display status information. The program waits for the horizontal blank and changes the layer's scroll position just before the display system begins to draw each scanline. This is called a "raster effect" and is also useful for changing the system palette to provide a gradient background. Some platforms provide a horizontal blank interrupt for automatically setting the registers independently of the rest of the program. Others, such as the NES, require the use of cycle-timed code, specially written to take exactly as long to execute as the video chip takes to draw one scanline, or of timers inside game cartridges that generate interrupts after a given number of scanlines have been drawn. Many NES games use this technique to draw their status bars, and Teenage Mutant Ninja Turtles II: The Arcade Game and Vice: Project Doom for the NES use it to scroll background layers at different rates.
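The strip technique can be sketched in software as a lookup from scanline to scroll offset: a new offset takes effect at the scanline where each strip begins, emulating the horizontal-blank rewrite described above. The strip boundaries and speeds here are arbitrary examples.

```python
# Sketch of raster-split parallax: the single layer is divided into
# horizontal strips, each scrolling at its own rate. The offset is
# "rewritten" at the first scanline of each strip.
strips = [(0, 0.0), (48, 0.25), (96, 0.5), (144, 1.0)]  # (first scanline, speed)

def scroll_for_scanline(scanline, camera_x, strips):
    """Scroll offset in effect when the given scanline is drawn."""
    speed = 0.0
    for start, s in strips:
        if scanline >= start:
            speed = s
    return int(camera_x * speed)

# The stationary top strip (speed 0.0) could hold status information,
# while the bottom strip scrolls at the full camera speed.
```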
More advanced raster techniques can produce interesting effects. A system can achieve an effective depth of field if layers with rasters are combined. If each scanline has its own layer, the Pole Position effect is produced, which creates a pseudo-3D road on a 2D system. If the display system supports rotation and scaling in addition to scrolling—an effect popularly known as Mode 7—changing the rotation and scaling factors can draw a projection of a plane or can warp the playfield to create an extra challenge factor. Another advanced technique is row/column scrolling, where rows or columns of tiles on the screen can be scrolled individually. This technique has been implemented in the graphics chips of various Sega arcade system boards since the Sega Space Harrier and System 16, in the Sega Mega Drive/Genesis console, and in the Capcom CP System, Irem M-92 and Taito F3 System arcade game boards. In the following animation, three layers are moving leftward at different speeds; their speeds decrease from front to back, corresponding to increases in relative distance from the viewer.
The ground layer is moving 8 times as fast as the