Graphical user interface
The graphical user interface (GUI) is a form of user interface that allows users to interact with electronic devices through graphical icons and visual indicators such as secondary notation, instead of text-based user interfaces, typed command labels or text navigation. GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces, which require commands to be typed on a computer keyboard; the actions in a GUI are instead performed through direct manipulation of the graphical elements. Beyond computers, GUIs are used in many handheld mobile devices such as MP3 players, portable media players, gaming devices, and smaller household and industrial controls. The term GUI tends not to be applied to lower-display-resolution types of interfaces, such as video games, or to interfaces that do not use flat screens, such as volumetric displays, because the term is restricted to the scope of two-dimensional display screens able to describe generic information, in the tradition of the computer science research at the Xerox Palo Alto Research Center.
Designing the visual composition and temporal behavior of a GUI is an important part of software application programming in the area of human–computer interaction. Its goal is to enhance the efficiency and ease of use for the underlying logical design of a stored program, a design discipline named usability. Methods of user-centered design are used to ensure that the visual language introduced in the design is well tailored to the tasks; the visible graphical interface features of an application are sometimes referred to as chrome or GUI. Users interact with information by manipulating visual widgets that allow for interactions appropriate to the kind of data they hold; the widgets of a well-designed interface are selected to support the actions necessary to achieve the goals of users. A model–view–controller architecture allows flexible structures in which the interface is independent of, and indirectly linked to, application functions, so the GUI can be customized easily. This allows users to select or design a different skin at will, and eases the designers' work of changing the interface as user needs evolve.
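To make the separation concrete, here is a minimal, illustrative Java sketch of the model–view–controller split described above; the names (CounterModel, CounterController) are invented for the example and not taken from any particular toolkit. The views only observe the model and the controller only mutates it, so a "skin" can be swapped without touching application logic.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

class CounterModel {
    private int value;
    private final List<Consumer<Integer>> listeners = new ArrayList<>();

    void addListener(Consumer<Integer> l) { listeners.add(l); }
    int getValue() { return value; }
    void setValue(int v) {
        value = v;
        listeners.forEach(l -> l.accept(v)); // notify every attached view
    }
}

class CounterController {
    private final CounterModel model;
    CounterController(CounterModel model) { this.model = model; }
    void increment() { model.setValue(model.getValue() + 1); } // user action -> model change
}

public class MvcSketch {
    public static void main(String[] args) {
        CounterModel model = new CounterModel();
        // Two interchangeable "views": the model does not care which is attached.
        model.addListener(v -> System.out.println("plain view: " + v));
        model.addListener(v -> System.out.println("fancy view: [" + v + "]"));
        new CounterController(model).increment(); // both views update
    }
}
```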
Good user interface design relates more to the user and less to the system architecture. Large widgets, such as windows, provide a frame or container for the main presentation content, such as a web page, email message or drawing; smaller ones act as user-input tools. A GUI may be designed for the requirements of a vertical market as an application-specific graphical user interface. Examples include automated teller machines, point-of-sale touchscreens at restaurants, self-service checkouts used in retail stores, airline self-ticketing and check-in, information kiosks in public spaces such as train stations or museums, and monitors or control screens in embedded industrial applications that employ a real-time operating system. Cell phones and handheld game systems have also employed application-specific touchscreen GUIs. Newer automobiles use GUIs in their navigation systems and multimedia centers, or navigation–multimedia center combinations. A GUI uses a combination of technologies and devices to provide a platform that users can interact with, for the tasks of gathering and producing information.
A series of elements conforming to a visual language have evolved to represent information stored in computers. This makes it easier for people with few computer skills to work with computer software. The most common combination of such elements in GUIs is the windows, icons, menus, pointer (WIMP) paradigm of personal computers. The WIMP style of interaction uses a virtual input device to represent the position of a pointing device, most commonly a mouse, and presents information organized in windows and represented with icons. Available commands are compiled together in menus, and actions are performed by making gestures with the pointing device. A window manager facilitates the interactions between windows and the windowing system; the windowing system handles hardware devices such as pointing devices and graphics hardware, as well as the positioning of the pointer. In personal computers, all these elements are modeled through a desktop metaphor to produce a simulation called a desktop environment, in which the display represents a desktop on which documents and folders of documents can be placed.
Window managers and other software combine to simulate the desktop environment with varying degrees of realism. Smaller mobile devices such as personal digital assistants (PDAs) and smartphones use the WIMP elements with different unifying metaphors, due to constraints in space and available input devices. Applications for which WIMP is not well suited may use newer interaction techniques, collectively termed post-WIMP user interfaces. As of 2011, some touchscreen-based operating systems such as Apple's iOS and Android use the class of GUIs named post-WIMP; these support styles of interaction using more than one finger in contact with a display, which allows actions such as pinching and rotating, which cannot be expressed with a single pointer and mouse. Human interface devices for efficient interaction with a GUI include a computer keyboard, used together with keyboard shortcuts, and pointing devices for cursor control: a mouse, pointing stick or trackball; virtual keyboards and head-up displays are also used. There are also actions performed by programs that affect the GUI.
For example, there are components like inotify or D-Bus to facilitate communication between computer programs. Ivan Sutherland developed Sketchpad in 1963, widely held as the first graphical computer-aided design program.
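As an illustration of a program reacting to the kind of file-system events inotify reports, the following Java sketch uses java.nio.file.WatchService, which the JDK typically implements on Linux on top of inotify; the watched directory /tmp is arbitrary, and a real GUI program would refresh its file list instead of printing.

```java
import java.nio.file.*;

public class WatchSketch {
    public static void main(String[] args) throws Exception {
        Path dir = Paths.get("/tmp"); // illustrative path
        try (WatchService watcher = FileSystems.getDefault().newWatchService()) {
            dir.register(watcher,
                    StandardWatchEventKinds.ENTRY_CREATE,
                    StandardWatchEventKinds.ENTRY_MODIFY,
                    StandardWatchEventKinds.ENTRY_DELETE);
            WatchKey key = watcher.take(); // blocks until an event arrives
            for (WatchEvent<?> event : key.pollEvents()) {
                // A GUI program would update its on-screen file view here.
                System.out.println(event.kind() + ": " + event.context());
            }
            key.reset(); // re-arm the key so further events are delivered
        }
    }
}
```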
ARM, previously Advanced RISC Machine and originally Acorn RISC Machine, is a family of reduced instruction set computing (RISC) architectures for computer processors, configured for various environments. Arm Holdings develops the architecture and licenses it to other companies, who design their own products that implement one of those architectures, including systems-on-chips and systems-on-modules that incorporate memory, radios, and so on. It also designs cores that implement this instruction set and licenses these designs to a number of companies that incorporate those core designs into their own products. Processors that have a RISC architecture typically require fewer transistors than those with a complex instruction set computing (CISC) architecture, which improves cost, power consumption and heat dissipation. These characteristics are desirable for light, battery-powered devices, including smartphones, tablet computers and other embedded systems. Even for supercomputers, which consume large amounts of electricity, ARM could be a power-efficient solution.
ARM Holdings periodically releases updates to the architecture. Architecture versions ARMv3 to ARMv7 support 32-bit address space and 32-bit arithmetic; the Thumb version supports a variable-length instruction set that provides both 32- and 16-bit instructions for improved code density. Some older cores can also provide hardware execution of Java bytecodes. Released in 2011, the ARMv8-A architecture added support for a 64-bit address space and 64-bit arithmetic with its new 32-bit fixed-length instruction set. With over 100 billion ARM processors produced as of 2017, ARM is the most widely used instruction set architecture and the instruction set architecture produced in the largest quantity. The widely used Cortex cores, older "classic" cores, and specialized SecurCore core variants are available for each of these to include or exclude optional capabilities. The British computer manufacturer Acorn Computers first developed the Acorn RISC Machine architecture in the 1980s to use in its personal computers; its first ARM-based products were coprocessor modules for the BBC Micro series of computers.
After the successful BBC Micro computer, Acorn Computers considered how to move on from the relatively simple MOS Technology 6502 processor to address business markets like the one soon dominated by the IBM PC, launched in 1981. The Acorn Business Computer plan required that a number of second processors be made to work with the BBC Micro platform, but processors such as the Motorola 68000 and National Semiconductor 32016 were considered unsuitable, and the 6502 was not powerful enough for a graphics-based user interface. According to Sophie Wilson, all the processors tested at that time performed about the same, with about 4 Mbit/second bandwidth. After testing all available processors and finding them lacking, Acorn decided it needed a new architecture. Inspired by papers from the Berkeley RISC project, Acorn considered designing its own processor. A visit to the Western Design Center in Phoenix, where the 6502 was being updated by what was effectively a single-person company, showed Acorn engineers Steve Furber and Sophie Wilson that they did not need massive resources and state-of-the-art research and development facilities.
Wilson developed the instruction set, writing a simulation of the processor in BBC BASIC that ran on a BBC Micro with a 6502 second processor. This convinced Acorn engineers they were on the right track. Wilson approached Acorn's CEO, Hermann Hauser, and requested more resources. Hauser assembled a small team to implement Wilson's model in hardware; the official Acorn RISC Machine project started in October 1983. Acorn chose VLSI Technology as the silicon partner, as it was already a source of ROMs and custom chips for Acorn. Wilson and Furber led the design, implementing it with an efficiency ethos similar to that of the 6502. A key design goal was achieving low-latency input/output handling like the 6502; the 6502's memory access architecture had let developers produce fast machines without costly direct memory access hardware. The first samples of ARM silicon worked properly when first received and tested on 26 April 1985. The first ARM application was as a second processor for the BBC Micro, where it helped in developing simulation software to finish development of the support chips and sped up the CAD software used in ARM2 development.
Wilson subsequently rewrote BBC BASIC in ARM assembly language. The in-depth knowledge gained from designing the instruction set enabled the code to be very dense, making ARM BBC BASIC a good test for any ARM emulator. The original aim of a principally ARM-based computer was achieved in 1987 with the release of the Acorn Archimedes. In 1992, Acorn once more won the Queen's Award for Technology for the ARM. The ARM2 featured a 26-bit address space and 27 32-bit registers; eight bits from the program counter register were available for other purposes. The address bus was extended to 32 bits in the ARM6, but program code still had to lie within the first 64 MB of memory in 26-bit compatibility mode, due to the reserved bits for the status flags. The ARM2 had a transistor count of just 30,000, compared to Motorola's six-year-older 68000 model with around 40,000. Much of this simplicity came from the lack of microcode (which represents about one-quarter to one-third of the 68000's transistors) and from, like most CPUs of the day, not including any cache.
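As a worked illustration of the combined program counter and status register described above, the following hedged Java sketch decodes the 26-bit ARM R15 layout (N, Z, C, V flags in bits 31 to 28, interrupt-disable bits in 27 and 26, the word-aligned program counter in bits 25 to 2, and the processor mode in bits 1 and 0); the class and method names are invented for the example.

```java
public final class Arm26R15 {
    /** Extracts the word-aligned 26-bit program counter from R15. */
    public static int programCounter(int r15) {
        return r15 & 0x03FF_FFFC; // bits 25..2, already a byte address
    }

    /** Processor mode in bits 1..0: 00=user, 01=FIQ, 10=IRQ, 11=supervisor. */
    public static int mode(int r15) {
        return r15 & 0x3;
    }

    public static boolean negative(int r15) { return ((r15 >>> 31) & 1) != 0; } // N flag
    public static boolean zero(int r15)     { return ((r15 >>> 30) & 1) != 0; } // Z flag

    public static void main(String[] args) {
        int r15 = 0xC000_0010 | 0x3; // N and Z set, PC = 0x10, supervisor mode
        System.out.printf("PC=%#x mode=%d N=%b Z=%b%n",
                programCounter(r15), mode(r15), negative(r15), zero(r15));
    }
}
```

This also shows why code had to stay within the first 64 MB in 26-bit compatibility mode: the address field simply has no room for higher bits.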
Multitouch screens make it possible to create virtual chorded keyboards for tablet computers, touchscreens and wired gloves. Virtual keyboards are used as an on-screen input method in devices with no physical keyboard, where there is no room for one, such as a pocket computer, personal digital assistant (PDA), tablet computer or touchscreen-equipped mobile phone. Text is entered either by tapping a virtual keyboard or by finger-tracing. Virtual keyboards are also used as features of emulation software for systems that have fewer buttons than a computer keyboard would have. The four main approaches for entering text into a PDA were virtual keyboards operated by a stylus, external USB keyboards, handwriting recognition and stroke recognition. Many early PDAs did not focus on virtual keyboards. Microsoft's mobile operating system approach was to simulate a complete functional keyboard, which resulted in an overloaded keyboard layout. The main problem was that those old PDAs did not support multi-touch technology, which as a result caused usability problems for the user.
When Apple presented the first iPhone in 2007, the decision not to include a physical keyboard was seen as a detriment to the device. But Apple brought multi-touch technology to its new device, which enabled it to overcome the usability problems of PDAs. Apple's virtual keyboard design pattern has since become a standard on mobile devices. The two most common mobile operating systems, Android and iOS, give the developer community the possibility to develop custom virtual keyboards. The Android SDK provides a so-called InputMethodService; this service provides a standard implementation of an input method, which final implementations can derive from and customize, enabling the Android development community to implement their own keyboard layouts. The InputMethodService ships with a KeyboardView. While the InputMethodService can be used to customize key and gesture inputs, the Keyboard class loads an XML description of a keyboard and stores the attributes of the keys. As a result, it is possible to install different keyboards on an Android device, because a keyboard is just another application.
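A minimal sketch of such a derived keyboard, using the android.inputmethodservice classes named above (this API has since been deprecated in newer Android releases); the layout resource R.layout.keyboard_view and the key description R.xml.qwerty are hypothetical placeholders, not part of the platform:

```java
import android.inputmethodservice.InputMethodService;
import android.inputmethodservice.Keyboard;
import android.inputmethodservice.KeyboardView;
import android.view.View;

public class SketchKeyboard extends InputMethodService
        implements KeyboardView.OnKeyboardActionListener {

    @Override
    public View onCreateInputView() {
        // Inflate the KeyboardView and attach the XML-defined key layout.
        KeyboardView view = (KeyboardView) getLayoutInflater()
                .inflate(R.layout.keyboard_view, null); // hypothetical layout resource
        view.setKeyboard(new Keyboard(this, R.xml.qwerty)); // hypothetical key layout
        view.setOnKeyboardActionListener(this);
        return view;
    }

    @Override
    public void onKey(int primaryCode, int[] keyCodes) {
        // Commit the tapped character into the currently focused text field.
        getCurrentInputConnection().commitText(String.valueOf((char) primaryCode), 1);
    }

    // Remaining listener callbacks are unused in this sketch.
    @Override public void onPress(int primaryCode) {}
    @Override public void onRelease(int primaryCode) {}
    @Override public void onText(CharSequence text) {}
    @Override public void swipeLeft() {}
    @Override public void swipeRight() {}
    @Override public void swipeDown() {}
    @Override public void swipeUp() {}
}
```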
Apple provides the possibility for the community to develop custom keyboards, but does not give any access to the dictionary or general keyboard settings. Further, iOS automatically switches between system and custom keyboards when the user enters text into certain text input fields. The UIInputViewController is the primary view controller for a custom keyboard app extension; this controller provides different methods for the implementation of a custom keyboard, such as presenting a user interface for a custom keyboard, obtaining a supplementary lexicon, or changing the primary language of a custom keyboard. Beyond the classic virtual keyboard implementations of Android and iOS, custom keyboards such as SwiftKey provide different features to improve the usability and the efficiency of their keyboards. The Android platform offers a spelling checker framework that makes it possible to implement and access spell checking within an application; the framework is one of the Text Service APIs offered by the Android platform.
Based on the provided text, the session object returns spelling suggestions generated by the spelling checker. iOS uses the class UITextChecker, an object used to check a string for misspelled words, as known from Apple's autocorrection. UITextChecker spell-checks using a lexicon for a given language; it can be told to ignore specific words when spell-checking a particular document, and it can learn new words, adding them to the lexicon. Several scientific papers from the early 2000s, before the advent of smartphones, showed that predicting words based on what the user is typing helps increase typing speed. At the beginning of the development of this keyboard feature, prediction was based on static dictionaries. Google implemented the prediction method in Android 4.4 in 2013; this development was driven by third-party keyboard providers such as SwiftKey and Swype, both of which provide powerful word-prediction engines with corresponding databases. In 2014, Apple presented iOS 8, which includes a new predictive typing feature called QuickType, which displays word suggestions above the keyboard as the user types.
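For the Android spelling checker framework mentioned above, a hedged sketch of opening a session and requesting sentence suggestions might look like this; the sample string and the listener wiring are illustrative, not a complete application.

```java
import android.content.Context;
import android.view.textservice.SentenceSuggestionsInfo;
import android.view.textservice.SpellCheckerSession;
import android.view.textservice.SuggestionsInfo;
import android.view.textservice.TextInfo;
import android.view.textservice.TextServicesManager;
import java.util.Locale;

public class SpellCheckSketch implements SpellCheckerSession.SpellCheckerSessionListener {

    void checkSpelling(Context context) {
        TextServicesManager tsm = (TextServicesManager)
                context.getSystemService(Context.TEXT_SERVICES_MANAGER_SERVICE);
        // A null locale falls back to the spell checker's own language settings.
        SpellCheckerSession session =
                tsm.newSpellCheckerSession(null, Locale.ENGLISH, this, true);
        if (session != null) {
            // Ask for up to 3 suggestions per misspelled word in the sentence.
            session.getSentenceSuggestions(new TextInfo[]{ new TextInfo("Helo wrld") }, 3);
        }
    }

    @Override
    public void onGetSentenceSuggestions(SentenceSuggestionsInfo[] results) {
        // Each result carries per-word SuggestionsInfo with candidate corrections.
    }

    @Override
    public void onGetSuggestions(SuggestionsInfo[] results) {
        // Legacy word-level callback; unused in this sketch.
    }
}
```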
GNU General Public License
The GNU General Public License (GPL) is a widely used free software license, which guarantees end users the freedom to run, study and modify the software. The license was written by Richard Stallman of the Free Software Foundation (FSF) for the GNU Project, and grants the recipients of a computer program the rights of the Free Software Definition. The GPL is a copyleft license, which means that derivative work can only be distributed under the same license terms. This is in distinction to permissive free software licenses, of which the BSD licenses and the MIT License are widely used examples. GPL was the first copyleft license for general use. The GPL license family has been one of the most popular software licenses in the free and open-source software domain. Prominent free software programs licensed under the GPL include the Linux kernel and the GNU Compiler Collection (GCC). David A. Wheeler argues that the copyleft provided by the GPL was crucial to the success of Linux-based systems, giving the programmers who contributed to the kernel the assurance that their work would benefit the whole world and remain free, rather than being exploited by software companies that would not have to give anything back to the community.
In 2007, the third version of the license (GPLv3) was released to address some perceived problems with the second version (GPLv2) that were discovered during the latter's long period of use. To keep the license up to date, it includes an optional "any later version" clause, allowing users to choose between the original terms or the terms in new versions as updated by the FSF; developers can omit the clause when licensing their software. The GPL was written by Richard Stallman in 1989, for use with programs released as part of the GNU Project. The original GPL was based on a unification of similar licenses used for early versions of GNU Emacs, the GNU Debugger and the GNU C Compiler; these licenses contained provisions similar to the modern GPL, but were specific to each program, rendering them incompatible despite being otherwise similar licenses. Stallman's goal was to produce one license that could be used for any project, thus making it possible for many projects to share code. The second version of the license, version 2, was released in 1991. Over the following 15 years, members of the free software community became concerned over problems in the GPLv2 license that could let someone exploit GPL-licensed software in ways contrary to the license's intent.
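In practice, the clause appears in the per-file license notice that the FSF recommends placing at the top of each source file; the "(at your option) any later version" wording is what carries the upgrade permission. Shown here as a Java comment block for illustration:

```java
/*
 * Standard FSF-recommended notice applying GPLv2 with the
 * "any later version" clause:
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 */
```

Omitting the "or (at your option) any later version" sentence pins the program to exactly one version of the license, as the Linux kernel does with GPLv2.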
These problems included tivoization, compatibility issues similar to those of the Affero General Public License, and patent deals between Microsoft and distributors of free and open-source software, which some viewed as an attempt to use patents as a weapon against the free software community. Version 3 was developed to attempt to address these concerns and was released on 29 June 2007. Version 1 of the GNU GPL, released on 25 February 1989, prevented what were then the two main ways that software distributors restricted the freedoms that define free software. The first problem was that distributors might publish binary files only: executable, but not readable or modifiable by humans. To prevent this, GPLv1 stated that anyone copying and distributing copies, or any portion, of the program must also make the human-readable source code available under the same licensing terms. The second problem was that distributors might add restrictions, either to the license or by combining the software with other software that had other restrictions on distribution.
The union of two sets of restrictions would apply to the combined work, thus adding unacceptable restrictions. To prevent this, GPLv1 stated that modified versions, as a whole, had to be distributed under the terms in GPLv1. Therefore, software distributed under the terms of GPLv1 could be combined with software under more permissive terms, as this would not change the terms under which the whole could be distributed. However, software distributed under GPLv1 could not be combined with software distributed under a more restrictive license, as this would conflict with the requirement that the whole be distributable under the terms of GPLv1. According to Richard Stallman, the major change in GPLv2 was the "Liberty or Death" clause, as he calls it – Section 7; the section says that licensees may distribute a GPL-covered work only if they can satisfy all of the license's obligations, despite any other legal obligations they might have. In other words, the obligations of the license may not be severed due to conflicting obligations.
This provision is intended to discourage any party from using a patent infringement claim or other litigation to impair users' freedom under the license. By 1990, it was becoming apparent that a less restrictive license would be strategically useful for the C library and for software libraries that did the job of existing proprietary ones; when version 2 of the GPL was released in June 1991, a second license, the GNU Library General Public License, was introduced at the same time and numbered with version 2 to show that the two were complementary. The version numbers diverged in 1999 when version 2.1 of the LGPL was released, which renamed it the GNU Lesser General Public License to reflect its place in the GNU philosophy. Most users of the license state "GPLv2 or any later version", allowing upgrading to GPLv3. In late 2005, the Free Software Foundation announced work on version 3 of the GPL. On 16 January 2006, the first "discussion draft" of GPLv3 was published, and the public consultation began; the public consultation was originally planned for nine to fifteen months.
The user interface (UI), in the industrial design field of human–computer interaction, is the space where interactions between humans and machines occur. The goal of this interaction is to allow effective operation and control of the machine from the human end, whilst the machine simultaneously feeds back information that aids the operators' decision-making process. Examples of this broad concept of user interfaces include the interactive aspects of computer operating systems, hand tools, heavy machinery operator controls, and process controls. The design considerations applicable when creating user interfaces are related to, or involve, such disciplines as ergonomics and psychology. The goal of user interface design is to produce a user interface which makes it easy and enjoyable to operate a machine in the way which produces the desired result: the operator needs to provide minimal input to achieve the desired output, and the machine minimizes undesired outputs to the human. User interfaces are composed of one or more layers, including a human–machine interface (HMI) that interfaces machines with physical input hardware such as keyboards and game pads, and output hardware such as computer monitors and printers.
A device that implements an HMI is called a human interface device (HID). Other terms for human–machine interfaces are man–machine interface (MMI) and, when the machine in question is a computer, human–computer interface. Additional UI layers may interact with one or more human senses, including: tactile UI (touch), visual UI (sight), auditory UI (sound), olfactory UI (smell), equilibrial UI (balance), and gustatory UI (taste). Composite user interfaces (CUIs) are UIs that interact with two or more senses. The most common CUI is a graphical user interface (GUI), composed of a tactile UI and a visual UI capable of displaying graphics; when sound is added to a GUI, it becomes a multimedia user interface (MUI). There are three broad categories of CUI: standard, virtual and augmented. Standard composite user interfaces use standard human interface devices like keyboards and computer monitors. When the CUI blocks out the real world to create a virtual reality, the CUI is virtual and uses a virtual reality interface. When the CUI does not block out the real world and creates augmented reality, the CUI is augmented and uses an augmented reality interface.
When a UI interacts with all human senses, it is called a qualia interface, named after the theory of qualia. CUIs may be classified by how many senses they interact with as either an X-sense virtual reality interface or an X-sense augmented reality interface, where X is the number of senses interfaced with. For example, a Smell-O-Vision is a 3-sense standard CUI with visual display, sound and smells. The user interface or human–machine interface is the part of the machine that handles the human–machine interaction. Membrane switches, rubber keypads and touchscreens are examples of the physical part of the human–machine interface which we can see and touch. In complex systems, the human–machine interface is typically computerized; the term human–computer interface refers to this kind of system. In the context of computing, the term extends as well to the software dedicated to controlling the physical elements used for human–computer interaction. The engineering of human–machine interfaces is enhanced by considering ergonomics.
The corresponding disciplines are human factors engineering and usability engineering, which are part of systems engineering. Tools used for incorporating human factors in interface design are developed based on knowledge of computer science, such as computer graphics, operating systems, and programming languages. Nowadays, we use the expression graphical user interface for the human–machine interface on computers, as nearly all of them now use graphics. There is a difference between a user interface and an operator interface or a human–machine interface. The term "user interface" is often used in the context of computer systems and electronic devices where a network of equipment or computers is interlinked through an MES (manufacturing execution system) or host to display information. A human–machine interface is typically local to one machine or piece of equipment, and is the interface method between the human and the equipment or machine. An operator interface is the interface method by which multiple pieces of equipment that are linked by a host control system are accessed or controlled.
The system may expose several user interfaces to serve different kinds of users. For example, a computerized library database might provide two user interfaces, one for library patrons and the other for library personnel. The user interface of a mechanical system, a vehicle or an industrial installation is sometimes referred to as the human–machine interface (HMI). HMI is a modification of the original term MMI (man–machine interface). In practice, the abbreviation MMI is still used, although some may claim that MMI now stands for something different. Another abbreviation is HCI, but it is more commonly used for human–computer interaction. Another term used is operator interface terminal. However it is abbreviated, the terms refer to the 'layer' that separates a human operating a machine from the machine itself. Without a clean and usable interface, humans would not be able to interact with information systems.
A touchscreen, or touch screen, is an input device normally layered on the top of an electronic visual display of an information processing system. A user can give input or control the information processing system through simple or multi-touch gestures by touching the screen with a special stylus or one or more fingers. Some touchscreens use ordinary or specially coated gloves to work, while others may only work using a special stylus or pen. The user can use the touchscreen to react to what is displayed and, if the software allows, to control how it is displayed; the touchscreen enables the user to interact directly with what is displayed, rather than using a mouse, touchpad, or other such devices. Touchscreens are common in devices such as Nintendo game consoles, personal computers, electronic voting machines, and point-of-sale (POS) systems. They can also be attached to computers or, as terminals, to networks. They play a prominent role in the design of digital appliances such as personal digital assistants (PDAs) and some e-readers.
The popularity of smartphones and many types of information appliances is driving the demand for, and acceptance of, common touchscreens for portable and functional electronics. Touchscreens are found in the medical field, heavy industry, automated teller machines, and kiosks such as museum displays or room automation, where keyboard and mouse systems do not allow a suitably intuitive, rapid, or accurate interaction by the user with the display's content. Historically, the touchscreen sensor and its accompanying controller-based firmware have been made available by a wide array of after-market system integrators, and not by display, chip, or motherboard manufacturers. Display manufacturers and chip manufacturers have since acknowledged the trend toward acceptance of touchscreens as a user interface component and have begun to integrate touchscreens into the fundamental design of their products. Eric Johnson, of the Royal Radar Establishment, located in Malvern, described his work on capacitive touchscreens in a short article published in 1965 and then more fully, with photographs and diagrams, in an article published in 1967.
The application of touch technology for air traffic control was described in an article published in 1968. Frank Beck and Bent Stumpe, engineers from CERN, developed a transparent touchscreen in the early 1970s, based on Stumpe's work at a television factory in the early 1960s. Manufactured by CERN, it was put to use in 1973. A resistive touchscreen was developed by American inventor George Samuel Hurst, who received US patent No. 3,911,215 on October 7, 1975; the first version was produced in 1982. In 1972, a group at the University of Illinois filed for a patent on an optical touchscreen that became a standard part of the Magnavox Plato IV Student Terminal, and thousands were built for this purpose. These touchscreens had a crossed array of 16×16 infrared position sensors, each composed of an LED on one edge of the screen and a matched phototransistor on the other edge, all mounted in front of a monochrome plasma display panel. This arrangement could sense any fingertip-sized opaque object in close proximity to the screen.
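To make the beam-grid principle concrete, here is an illustrative Java sketch, not the original terminal firmware: a touch is resolved where a blocked vertical beam crosses a blocked horizontal beam, and averaging the blocked indices handles a fingertip that interrupts two adjacent beams. The array names and centroid rule are assumptions for the example.

```java
public final class IrTouchGrid {
    /** Returns {column, row} of the touch, or null if no beam is broken. */
    public static int[] locateTouch(boolean[] columnBlocked, boolean[] rowBlocked) {
        int colSum = 0, colCount = 0, rowSum = 0, rowCount = 0;
        for (int i = 0; i < columnBlocked.length; i++) {
            if (columnBlocked[i]) { colSum += i; colCount++; }
        }
        for (int i = 0; i < rowBlocked.length; i++) {
            if (rowBlocked[i]) { rowSum += i; rowCount++; }
        }
        if (colCount == 0 || rowCount == 0) return null; // nothing touching the screen
        // A fingertip usually breaks one or two adjacent beams per axis;
        // averaging the blocked indices yields the touch cell.
        return new int[]{ colSum / colCount, rowSum / rowCount };
    }

    public static void main(String[] args) {
        boolean[] cols = new boolean[16], rows = new boolean[16];
        cols[7] = cols[8] = true; // finger interrupts two adjacent vertical beams
        rows[3] = true;
        int[] touch = locateTouch(cols, rows);
        System.out.println("touch at column " + touch[0] + ", row " + touch[1]);
    }
}
```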
A similar touchscreen was used on the HP-150, starting in 1983; the HP-150 was one of the world's earliest commercial touchscreen computers. HP mounted infrared transmitters and receivers around the bezel of a 9-inch Sony cathode ray tube. In 1984, Fujitsu released a touch pad for the Micro 16 to accommodate the complexity of kanji characters, which were stored as tiled graphics. In 1985, Sega released the Terebi Oekaki, also known as the Sega Graphic Board, for the SG-1000 video game console and SC-3000 home computer; it consisted of a plastic pen and a plastic board with a transparent window where pen presses are detected, and it was used with a drawing software application. A graphic touch tablet was released for the Sega AI computer in 1986. Touch-sensitive control-display units were evaluated for commercial aircraft flight decks in the early 1980s. Initial research showed that a touch interface would reduce pilot workload, as the crew could select waypoints and actions directly, rather than be "head down" typing latitudes and waypoint codes on a keyboard.
An effective integration of this technology was aimed at helping flight crews maintain a high level of situational awareness of all major aspects of the vehicle operations, including the flight path, the functioning of various aircraft systems, and moment-to-moment human interactions. In the early 1980s, General Motors tasked its Delco Electronics division with a project aimed at replacing an automobile's non-essential functions from mechanical or electro-mechanical systems with solid-state alternatives wherever possible. The finished device was dubbed the ECC, for "Electronic Control Center": a digital computer and software control system hardwired to various peripheral sensors, solenoids and antennas, with a monochrome CRT touchscreen that functioned both as display and as the sole method of input. The ECC replaced the traditional mechanical stereo, fan and air conditioner controls and displays, and was capable of providing detailed and specific information about the vehicle's cumulative and current operating status in real time.
The ECC was standard equipment on the 1985–1989 Buick Riviera and the 1988–1989 Buick Reatta, but was unpopular with consumers, partly due to the technophobia of some traditional Buick customers, but also because of costly technical problems suffered by the ECC's touchscreen, which would render climate control or stereo operation impossible.
Bada is a discontinued operating system for mobile devices such as smartphones and tablet computers. It was developed by Samsung Electronics; its name is derived from "바다" (bada), the Korean word for "sea" or "ocean". It targeted mid- to high-end smartphones. To foster the adoption of Bada OS, since 2011 Samsung had considered releasing its source code under an open-source license and expanding device support to include smart TVs. Samsung announced in June 2012 its intention to merge Bada into the Tizen project, but would meanwhile use its own Bada operating system, in parallel with Google's Android OS and Microsoft's Windows Phone, for its smartphones. All devices running Bada were branded under the Wave name, unlike Samsung's Galaxy-branded devices, which run Android, although the Galaxy name does not encompass the whole range of Samsung devices running Android. On 25 February 2013, Samsung announced that it would stop developing Bada, moving development to Tizen instead. Bug reporting was terminated in April 2014. After the announcement of Bada, the Wave S8500, which would turn out to be the first Bada-based phone, was first shown to the public at the Mobile World Congress 2010 in Barcelona in February 2010.
Alongside Bada itself, some applications running on Bada were exhibited, including mobile video games such as Gameloft's Asphalt 5. The Samsung Wave S8500, released in April that year, sold one million handsets during its first four weeks on the market. According to Samsung, companies such as Twitter, EA, Capcom and Blockbuster revealed their support for the Bada platform, having arranged development partnerships with Samsung since before the launch, and shared a few insights about their vision for the future of mobile apps and how Bada would play a role in it. These partnerships were showcased in a series of events held across the world during 2010, called Developer Days. In addition, an upcoming Bada Developer Challenge with a total prize of $2,700,000 was announced during the launch event. In May 2010, Samsung released a beta of its Bada software development kit (SDK), making it available to the general public as it had done with partners the previous December, to entice potential developers of applications for the platform.
In August 2010, Samsung released version 1.0 of the Bada SDK. A year later, in August 2011, version 2.0 of the Bada SDK was released. The Samsung S8500 Wave was launched with version 1.0 of the Bada operating system; Samsung soon released version 1.0.2. The later version 1.2 was released with the Samsung S8530 Wave II phone. The alpha version of Bada 2.0 was introduced on 15 February 2011, alongside the Samsung S8530 Wave II handset. The final flagship Bada handset was the Samsung Wave 3 S8600, running Bada 2.0. With the release of the Samsung Wave, Samsung opened an international application store, Samsung Apps, for the Bada platform. Samsung Apps has over 2,400 applications; the store is also available for Android and Samsung feature phones. Samsung planned to remove the Bada brand and market the new OS, with its own apps and store; the new store has around 1,000 applications for Tizen. Bada, as Samsung defines it, is not an operating system itself, but a platform with a kernel-configurable architecture, which allows using either a proprietary real-time operating system hybrid kernel or the Linux kernel.
According to the copyright notices displayed by the Samsung Wave S8500, Bada uses code from FreeBSD, NetBSD and OpenBSD. Despite numerous suggestions, there is no known Bada device to date running the Linux kernel, and there is no evidence that Bada uses the same or a similar graphics stack as the Tizen OS, in particular EFL. The device layer provides core functions such as graphics, protocols and security. The service layer provides more service-centric features such as mapping and in-app purchasing; to provide such features, there is a so-called Bada Server. The top layer, the framework layer, provides an application programming interface (API) in C++ for application developers to use. Bada provides various UI controls to developers: it offers assorted basic UI controls such as Listbox, Color Picker and Tab, has a web browser control based on the open-source WebKit, and features Adobe Flash, supporting Flash 9, 10, or 11 in Bada 2.0. Both WebKit and Flash can be embedded inside native Bada applications. Bada supports the OpenGL ES 2.0 3D graphics API and offers interactive mapping with point-of-interest features, which can also be embedded inside native applications.
It supports pinch-to-zoom, tabbed browsing and cut-and-paste features. Bada supports many mechanisms to enhance interaction, including various sensors such as motion sensing, vibration control, face detection, magnetometer, Global Positioning System (GPS) and multi-touch. Native applications are developed in C++ with the Bada SDK and the Eclipse-based integrated development environment (IDE). GNU-based tool chains are used for debugging applications. The IDE contains UI Builder, with which developers can design the interface of their applications by dragging and dropping UI controls into forms. For testing and debugging, the IDE contains an emulator. Some publications have criticized Bada 1.x over the following issues: in the beginning, all VoIP-over-Wi-Fi applications were banned, which meant that popular applications such as Skype could not be used (in March 2011 this restriction was removed); and the external sensor API is not open-ended, preventing new types of sensors or unexpected technology developments from being supported.