Clojure is a modern, functional dialect of the Lisp programming language on the Java platform. Like other Lisps, Clojure has a Lisp macro system. The current development process is community-driven, overseen by Rich Hickey as its benevolent dictator for life. Clojure advocates immutability and immutable data structures and encourages programmers to be explicit about managing identity and its states. This focus on programming with immutable values and explicit progression-of-time constructs is intended to facilitate developing more robust programs, especially multithreaded ones. While its type system is dynamic, recent efforts have sought to implement gradual typing. Commercial support for Clojure is provided by Cognitect. Annual Clojure conferences are organised across the globe, the best known being Clojure/conj, Clojure/West, and EuroClojure. Rich Hickey is the creator of the Clojure language. Before Clojure, he developed a similar project based on the .NET platform, as well as three earlier attempts to provide interoperability between Lisp and Java: a Java foreign language interface for Common Lisp, a Foreign Object Interface for Lisp, and a Lisp-friendly interface to Java Servlets.
Hickey spent about 2½ years working on Clojure before releasing it publicly, much of that time with no outside funding. At the end of this period, he sent an email announcing the language to some friends in the Common Lisp community. The development process is managed at the Clojure Community website, which contains an issue tracker where bugs may be filed. General development discussion occurs at the Clojure Dev Google Group. Anyone can submit bug reports and ideas, but to contribute patches, one must sign the Clojure Contributor Agreement. JIRA tickets are processed by a team of screeners, and Rich Hickey approves the changes. Hickey developed Clojure because he wanted a modern Lisp for functional programming, symbiotic with the established Java platform, and designed for concurrency. Clojure's approach to state is characterized by the concept of identities, which are represented as a series of immutable states over time. Since states are immutable values, any number of workers can operate on them in parallel, and concurrency becomes a question of managing changes from one state to another.
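The identity-and-state model described above can be sketched with one of Clojure's reference types, the atom (a minimal, hypothetical example; the `account` name is invented for illustration):

```clojure
;; An identity (the atom) holding a series of immutable states (maps).
(def account (atom {:balance 0}))

;; swap! applies a pure function to the current state, producing a new
;; immutable value; no map is ever mutated in place.
(swap! account update :balance + 100)
(swap! account update :balance - 30)

;; Dereferencing the identity yields its current state.
@account ; => {:balance 70}
```

Each call to swap! moves the identity from one immutable value to the next; readers holding the old map are unaffected.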
For this purpose, Clojure provides several mutable reference types, each having well-defined semantics for the transition between states. Clojure runs on the Java platform and, as a result, integrates with Java and supports calling Java code from Clojure; Clojure code can also be called from Java. The community uses Leiningen for project automation; it handles project package management and dependencies and is configured using Clojure syntax. Like most other Lisps, Clojure's syntax is built on S-expressions that are first parsed into data structures by a reader before being compiled. Clojure's reader supports literal syntax for maps and vectors in addition to lists; these are compiled to the corresponding structures directly. Clojure is a Lisp-1 and is not intended to be code-compatible with other dialects of Lisp, since it uses its own set of data structures incompatible with other Lisps. As a Lisp dialect, Clojure supports functions as first-class objects, a read–eval–print loop, and a macro system.
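A brief sketch of the literal syntax and Java interop described above (the forms shown are standard Clojure; the data values are illustrative):

```clojure
;; The reader turns literal syntax directly into data structures:
'(1 2 3)                      ; a list
["Clojure" "Scheme"]          ; a vector
{:name "Clojure" :year 2007}  ; a map

;; Calling Java from Clojure: instance methods use the .method form,
;; static members use ClassName/member.
(.toUpperCase "clojure")  ; => "CLOJURE"
(Math/abs -42)            ; => 42
```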
Clojure's Lisp macro system is similar to that of Common Lisp, except that Clojure's version of the backquote (syntax quote) qualifies symbols with their namespace. This helps prevent unintended name capture, though it is possible to force a capturing macro expansion deliberately. Clojure does not allow user-defined reader macros, but the reader supports a more constrained form of syntactic extension. Clojure supports multimethods, and for interface-like abstractions it offers protocol-based polymorphism and a data type system using records, providing high-performance, dynamic polymorphism designed to avoid the expression problem. Clojure has support for lazy sequences and encourages the principle of immutability and persistent data structures. As a functional language, emphasis is placed on recursion and higher-order functions instead of side-effect-based looping; automatic tail call optimization is not supported. For parallel and concurrent programming Clojure provides software transactional memory, a reactive agent system, and channel-based concurrent programming.
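The namespace-qualifying backquote can be seen directly at the REPL; a minimal macro sketch follows (the `twice` macro is invented for illustration):

```clojure
;; Syntax-quote qualifies symbols with their namespace, which helps
;; prevent unintended name capture. At the REPL in the user namespace:
`(inc x) ; => (clojure.core/inc user/x)

;; Auto-gensym (v#) creates a fresh local that cannot capture or be
;; captured by symbols in the caller's code:
(defmacro twice [expr]
  `(let [v# ~expr] [v# v#]))

(twice (+ 1 2)) ; => [3 3]
```

Because `v#` expands to a unique generated symbol, passing an expression that itself mentions a variable named `v` cannot collide with the macro's internal binding.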
Clojure 1.7 introduced reader conditionals, allowing the embedding of Clojure and ClojureScript code in the same namespace. Transducers were added as a method for composing transformations. Transducers enable higher-order functions such as map and fold to generalize over any source of input data; while traditionally these functions operate on sequences, transducers allow them to work on channels and let the user define their own models for transduction. The primary platform of Clojure is Java, but other target implementations exist. The most notable of these are ClojureScript, which compiles to ECMAScript 3, and ClojureCLR, a full port to the .NET platform, interoperable with its ecosystem. A survey of the Clojure community with 1,060 respondents conducted in 2013 found that 47% of respondents used both Clojure and ClojureScript when working with Clojure. In 2014 this number had increased to 55%, and in 2015, based on 2,445 respondents, to 66%. Popular ClojureScript projects include implementations of the React library such as Om.
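The transducer behaviour described above can be sketched as follows (a minimal example; note that reader conditionals additionally require a .cljc file, so one is shown only in a comment):

```clojure
;; A transducer is a transformation decoupled from its input source.
(def xf (comp (map inc) (filter even?)))

;; The same transducer applied to build a vector...
(into [] xf [1 2 3 4 5])       ; => [2 4 6]

;; ...or to fold the transformed elements into a sum:
(transduce xf + 0 [1 2 3 4 5]) ; => 12

;; A reader conditional (Clojure 1.7+, in a .cljc file) embeds
;; platform-specific code in one namespace:
;; #?(:clj  (defn now [] (System/currentTimeMillis))
;;    :cljs (defn now [] (.getTime (js/Date.))))
```

The same `xf` could equally be attached to a core.async channel, which is what lets these functions generalize over input sources.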
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage, and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is executed directly by the hardware and makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer, from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows with a market share of around 82.74%; macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, Google's Android accounted for up to 70% of use in 2017; according to third quarter 2016 data, Android on smartphones is dominant with 87.5 percent and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a yearly decrease in market share of 5.2 percent, while other operating systems amount to just 0.3 percent.
Linux distributions are dominant in the supercomputing sector. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to be running concurrently. This is achieved by time-sharing, where the available processor time is divided between multiple processes, each of which is interrupted in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized as preemptive or co-operative. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux, as well as non-Unix-like ones, such as AmigaOS, support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to yield time to the other processes in a defined manner; 16-bit versions of Microsoft Windows used cooperative multi-tasking.
32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify the processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage, and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked and communicate with each other gave rise to distributed computing: computations carried out on more than one machine. When computers in a group work in cooperation, they form a distributed system.
In an OS, distributed and cloud computing context, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and in cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines, such as PDAs, with limited autonomy; they are able to operate with a limited number of resources and are compact and efficient by design. Windows CE and MINIX 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.
A library operating system is one in which the services that a typical operating system provides, such as networking, are supplied in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single-address-space machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s, when hardware features were added that enabled the use of runtime libraries and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them that were similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the pri
Unix is a family of multitasking, multiuser computer operating systems that derive from the original AT&T Unix, whose development started in the 1970s at the Bell Labs research center by Ken Thompson, Dennis Ritchie, and others. Initially intended for use inside the Bell System, Unix was licensed by AT&T to outside parties in the late 1970s, leading to a variety of both academic and commercial Unix variants from vendors including the University of California, Microsoft, IBM, and Sun Microsystems. In the early 1990s, AT&T sold its rights in Unix to Novell, which sold its Unix business to the Santa Cruz Operation in 1995. The UNIX trademark passed to The Open Group, a neutral industry consortium, which allows the use of the mark for certified operating systems that comply with the Single UNIX Specification. As of 2014, the Unix version with the largest installed base is Apple's macOS. Unix systems are characterized by a modular design that is sometimes called the "Unix philosophy": the operating system provides a set of simple tools, each of which performs a limited, well-defined function, with a unified filesystem as the main means of communication and a shell scripting and command language to combine the tools to perform complex workflows.
Unix distinguishes itself from its predecessors as the first portable operating system: the entire operating system is written in the C programming language, allowing Unix to reach numerous platforms. Unix was meant to be a convenient platform for programmers developing software to be run on it and on other systems, rather than for non-programmers. The system grew larger as it started spreading in academic circles and as users added their own tools to the system and shared them with colleagues. At first, Unix was not designed to be multi-tasking; it gained portability, multi-tasking, and multi-user capabilities in a time-sharing configuration. Unix systems are characterized by various concepts, such as the use of plain text for storing data; these concepts are collectively known as the "Unix philosophy". Brian Kernighan and Rob Pike summarize this in The Unix Programming Environment as "the idea that the power of a system comes more from the relationships among programs than from the programs themselves".
In an era when a standard computer consisted of a hard disk for storage and a data terminal for input and output, the Unix file model worked quite well, as I/O was linear. In the 1980s, non-blocking I/O and the set of inter-process communication mechanisms were augmented with Unix domain sockets, shared memory, message queues, and semaphores, and network sockets were added to support communication with other hosts. As graphical user interfaces developed, the file model proved inadequate to the task of handling asynchronous events such as those generated by a mouse. By the early 1980s, users began seeing Unix as a potential universal operating system, suitable for computers of all sizes. The Unix environment and the client–server program model were essential elements in the development of the Internet and the reshaping of computing as centered in networks rather than in individual computers. Both Unix and the C programming language were developed by AT&T and distributed to government and academic institutions, which led to both being ported to a wider variety of machine families than any other operating system.
Under Unix, the operating system consists of many libraries and utilities along with the master control program, the kernel. The kernel provides services to start and stop programs, handles the file system and other common "low-level" tasks that most programs share, and schedules access to avoid conflicts when programs try to access the same resource or device simultaneously. To mediate such access, the kernel has special rights, reflected in the division between user space and kernel space, although in microkernel implementations such as MINIX or Redox, functions such as network protocols may run in user space. The origins of Unix date back to the mid-1960s, when the Massachusetts Institute of Technology, Bell Labs, and General Electric were developing Multics, a time-sharing operating system for the GE-645 mainframe computer. Multics featured several innovations but presented severe problems. Frustrated by the size and complexity of Multics, though not by its goals, individual researchers at Bell Labs started withdrawing from the project.
The last to leave were Ken Thompson, Dennis Ritchie, Douglas McIlroy, and Joe Ossanna, who decided to reimplement their experiences in a new project of smaller scale. This new operating system had no organizational backing and, at first, no name; it was a single-tasking system. In 1970, the group coined the name Unics, for Uniplexed Information and Computing Service, as a pun on Multics, which stood for Multiplexed Information and Computer Services. Brian Kernighan takes credit for the idea, but adds that "no one can remember" the origin of the final spelling Unix; Dennis Ritchie, Doug McIlroy, and Peter G. Neumann credit Kernighan. The operating system was originally written in assembly language, but in 1973 Version 4 Unix was rewritten in C. Version 4 Unix, however, still had much PDP-11-dependent code and was not suitable for porting; the first port to another platform was made five years f
OpenLisp is a programming language in the Lisp family developed by Christian Jullien. It conforms to the international standard for ISLISP published jointly by the International Organization for Standardization and the International Electrotechnical Commission, ISO/IEC 13816:1997, revised as ISO/IEC 13816:2007. Written in the programming languages C and Lisp, it runs on most common operating systems. OpenLisp is designated an ISLISP implementation, but contains many Common Lisp-compatible extensions and other libraries. OpenLisp includes an interpreter associated with a read–eval–print loop, a Lisp Assembly Program (LAP), and a backend compiler targeting the C language. The main goal of this Lisp version is to implement a compliant ISLISP system; a secondary goal is to provide a complete embeddable Lisp system linkable to Java, with a callback mechanism used to communicate with the external program. Other goals are to be usable as a scripting or glue language and to produce standalone program executables.
Despite its name, OpenLisp is proprietary software; its interpreter is available free of charge for any noncommercial use. OpenLisp runs in console mode: cmd.exe on Microsoft Windows, a terminal emulator on Unix-based systems. Alternate solutions include running OpenLisp from Emacs by setting up Emacs's inferior-lisp-mode, or using an integrated development environment which supports OpenLisp syntax; LispIDE by DaanSystems does so natively. Internally, OpenLisp uses virtual memory to extend objects automatically. Small objects of the same type are allocated using a Bibop memory organization, while large objects use a proxy. The conservative garbage collector performs a sweep with heap coalescing. OpenLisp uses a tagged architecture for fast type checking; small integers are unboxed, large integers are boxed. As required by ISLISP, arbitrary-precision arithmetic is implemented. Characters are 16- or 32-bit, depending on whether Unicode support is enabled. The Lisp kernel, native interpreter, and basic libraries are hand coded in the C language; the LAP intermediate language produced by the compiler is translated to C by the C backend code generator.
In 1988, the first motive behind OpenLisp was to implement a Lisp subset to extend EmACT, an Emacs clone; ISLISP quickly became an obvious choice, and further development ensued. OpenLisp claims to be portable: it runs on many operating systems including Windows, most Unix and POSIX based systems, DOS, OS/2, Pocket PC, OpenVMS, and z/OS. The official website download section contains over 50 different versions. OpenLisp can interact with modules written in C using a foreign function interface; ISLISP streams are extended to support network sockets; and a simplified Extensible Markup Language (XML) reader can convert XML to Lisp. A basic SQL module can be used with MySQL, ODBC, SQLite, and PostgreSQL, and a comma-separated values module can write CSV files. Developer tools include data logging, a pretty-printer, design by contract programming, and unit tests. Some well known algorithms are available in the ./contrib directory. Modules are shipped under BSD licenses. The prefix Open refers to open systems, not to the open-source model; the name was chosen in 1993 to replace the MLisp internal code name, which was used by Gosling Emacs.
The OpenLisp programming language is distinct from OpenLISP, a project begun in 1997 to implement the Locator/Identifier Separation Protocol. The compiler transforms Lisp code to C in stages: taking the Fibonacci number function as an example, the Lisp compiler first translates the Lisp source code to intermediate LAP code; a peephole optimization pass then uses this intermediate format to analyze and optimize the instructions; finally, the C code generator uses the optimized LAP code to emit C. OpenLisp accepts lines of unlimited length, though a recommended style exists. OpenLisp has been chosen by the SDF Public Access Unix System, a nonprofit public access Unix system on the Internet, as one of its programming languages available online. Bricsys uses OpenLisp to implement AutoLISP in its Bricscad computer-aided design system. MEVA is written with OpenLisp. Università degli Studi di Palermo uses OpenLisp to teach Lisp.
Emacs or EMACS is a family of text editors that are characterized by their extensibility. The manual for the most widely used variant, GNU Emacs, describes it as "the extensible, self-documenting, real-time display editor". Development of the first Emacs began in the mid-1970s, and work on its direct descendant, GNU Emacs, continues as of 2019. Emacs has over 10,000 built-in commands, and its user interface allows the user to combine these commands into macros to automate work. Implementations of Emacs feature a dialect of the Lisp programming language that provides a deep extension capability, allowing users and developers to write new commands and applications for the editor. Extensions have been written to manage email, outlines, and RSS feeds, as well as clones of ELIZA, Conway's Game of Life, and Tetris. The original EMACS was written in 1976 by Carl Mikkelsen, David A. Moon, and Guy L. Steele Jr. as a set of Editor MACroS for the TECO editor. It was inspired by the ideas of the TECO-macro editors TECMAC and TMACS.
The most popular, and most ported, version of Emacs is GNU Emacs, created by Richard Stallman for the GNU Project. XEmacs is a variant that branched from GNU Emacs in 1991; GNU Emacs and XEmacs are for the most part compatible with each other. Emacs is, along with vi, one of the two main contenders in the traditional editor wars of Unix culture, and it is among the oldest open source projects still under development. Emacs development began during the 1970s at the MIT AI Lab, whose PDP-6 and PDP-10 computers used the Incompatible Timesharing System (ITS) operating system, which featured a default line editor known as Tape Editor and Corrector (TECO). Unlike most modern text editors, TECO used separate modes in which the user would either add text, edit existing text, or display the document. One could not place characters directly into a document by typing them into TECO; instead, one would enter a character in the TECO command language telling it to switch to input mode, enter the required characters (during which time the edited text was not displayed on the screen), and then enter a character to switch the editor back to command mode.
This behavior is similar to that of the program ed. Richard Stallman visited the Stanford AI Lab in 1972 or 1974 and saw the lab's E editor, written by Fred Wright. He was impressed by the editor's intuitive WYSIWYG behavior, which has since become the default behavior of most modern text editors. He returned to MIT, where Carl Mikkelsen, a hacker at the AI Lab, had added to TECO a combined display/editing mode called Control-R that allowed the screen display to be updated each time the user entered a keystroke. Stallman reimplemented this mode to run efficiently and added a macro feature to the TECO display-editing mode that allowed the user to redefine any keystroke to run a TECO program. E had another feature: random-access editing. TECO was a page-sequential editor, designed for editing paper tape on the PDP-1, that allowed editing on only one page at a time, in the order of the pages in the file. Instead of adopting E's approach of structuring the file for page-random access on disk, Stallman modified TECO to handle large buffers more efficiently and changed its file-management method to read and write the entire file as a single buffer.
All modern editors use this approach. The new version of TECO became popular at the AI Lab and soon accumulated a large collection of custom macros whose names ended in MAC or MACS, which stood for macro. Two years later, Guy Steele took on the project of unifying the diverse macros into a single set. Steele and Stallman's finished implementation included facilities for extending and documenting the new macro set. The resulting system was called EMACS, which stood for Editing MACroS or, alternatively, E with MACroS. Stallman picked the name Emacs "because <E> was not in use as an abbreviation on ITS at the time." An apocryphal hacker koan alleges that the program was named after Emack & Bolio's, a popular Cambridge ice cream store. The first operational EMACS system existed in late 1976. Stallman saw a problem in too much customization and de facto forking and set certain conditions for usage. He wrote: "EMACS was distributed on a basis of communal sharing, which means all improvements must be given back to me to be incorporated and distributed." The original Emacs, like TECO, ran only on the PDP-10 running ITS.
Its behavior was sufficiently different from that of TECO that it could be considered a text editor in its own right, and it became the standard editing program on ITS. Mike McMahon ported Emacs from ITS to the TOPS-20 operating system. Other contributors to early versions of Emacs include Kent Pitman, Earl Killian, and Eugene Ciccarelli. By 1979, Emacs was the main editor used in MIT's Laboratory for Computer Science. In the following years, programmers wrote a variety of Emacs-like editors for other computer systems; these included EINE and ZWEI, which were written for the Lisp machine by Mike McMahon and Daniel Weinreb, and Sine, written by Owen Theodore Anderson. Weinreb's EINE was the first Emacs written in Lisp. In 1978, Bernard Greenberg wrote Multics Emacs entirely in Multics Lisp at Honeywell's Cambridge Information Systems Lab. Multics Emacs was later maintained by Richard Soley, who went on to develop the NILE Emacs-like editor for the NIL Project, and by Barry Margolin. Many versions of Emacs, including GNU Emacs, would adopt Lisp as an extension language.
James Gosling, who would invent Ne
Guy L. Steele Jr.
Guy Lewis Steele Jr. is an American computer scientist who has played an important role in designing and documenting several computer programming languages. Steele was born in Missouri and graduated from the Boston Latin School in 1972. He received a BA in applied mathematics from Harvard and an MS and PhD in computer science from MIT. He worked as an assistant professor of computer science at Carnegie Mellon University and as a compiler implementer at Tartan Laboratories, then joined the supercomputer company Thinking Machines, where he helped define and promote a parallel version of Lisp called *Lisp and a parallel version of C called C*. In 1994, Steele joined Sun Microsystems and was invited by Bill Joy to become a member of the Java team after the language had been designed, since he had a track record of writing good specifications for existing languages; he was named a Sun Fellow in 2003. Steele joined Oracle in 2010. While at MIT, Steele published more than two dozen papers with Gerald Jay Sussman on the subject of the Lisp language and its implementation.
One of their most notable contributions was the design of the programming language Scheme. Steele also designed the original command set of Emacs and was the first to port TeX. He has published papers on other subjects, including compilers, parallel processing, and constraint languages, and one song he composed has been published in Communications of the ACM. Steele has served on the accredited standards committees ECMA TC39, X3J11, and X3J3, and is chairman of X3J13. He was a member of the IEEE working group that produced the IEEE Standard for the Scheme Programming Language, IEEE Std 1178-1990. He represented Sun Microsystems in the High Performance Fortran Forum, which produced the High Performance Fortran specification in May 1993. In addition to specifications of the Java programming language, Steele's work at Sun Microsystems included research in parallel algorithms, implementation strategies, and architectural and software support. In 2005, Steele began leading a team of researchers at Sun developing a new programming language named Fortress, a high-performance language designed to supersede Fortran.
In 1982, Steele edited The Hacker's Dictionary, a print version of the Jargon File. Steele and Samuel P. Harbison wrote C: A Reference Manual to provide a precise description of the C programming language, which Tartan Laboratories was trying to implement on a wide range of systems; both authors participated in the ANSI C standardization process. On 16 March 1984, Steele published Common Lisp the Language. This first edition was the original specification of Common Lisp and served as the basis for the ANSI standard. Steele released an expanded second edition in 1990, which documented a near-final version of the ANSI standard. Steele, along with Charles H. Koelbel, David B. Loveman, Robert S. Schreiber, and Mary E. Zosel, wrote The High Performance Fortran Handbook, and he coauthored all three editions of The Java Language Specification with James Gosling, Bill Joy, and Gilad Bracha. Steele received the ACM Grace Murray Hopper Award in 1988. He was named an ACM Fellow in 1994, a member of the National Academy of Engineering of the United States of America in 2001, and a Fellow of the American Academy of Arts and Sciences in 2002.
He received the Dr. Dobb's Excellence in Programming Award in 2005. Steele is a Modern Western square dancer and caller, from Mainstream up through C3A, a member of Tech Squares, and a member of Callerlab. Under the pseudonym "Great Quux", an old student nickname from the Boston Latin School and MIT, he has published light verse and "Crunchly" cartoons; he has also used the initialism GLS. In 1998, Steele solved the game Teeko by computer, showing what must occur if both players play wisely. Steele showed that the Advanced Teeko variant is a win for Black, as is one other variant, but the other fourteen variants are draws.
The user interface, in the industrial design field of human–computer interaction, is the space where interactions between humans and machines occur. The goal of this interaction is to allow effective operation and control of the machine from the human end, while the machine feeds back information that aids the operators' decision-making process. Examples of this broad concept of user interfaces include the interactive aspects of computer operating systems, hand tools, heavy machinery operator controls, and process controls. The design considerations applicable when creating user interfaces are related to, or involve, such disciplines as ergonomics and psychology. The goal of user interface design is to produce a user interface which makes it easy and enjoyable to operate a machine in the way which produces the desired result; this means that the operator needs to provide minimal input to achieve the desired output, and that the machine minimizes undesired outputs to the human. User interfaces are composed of one or more layers, including a human–machine interface (HMI) that interfaces machines with physical input hardware such as keyboards and game pads, and output hardware such as computer monitors and printers.
A device that implements an HMI is called a human interface device. Other terms for human–machine interfaces are man–machine interface (MMI) and, when the machine in question is a computer, human–computer interface. Additional UI layers may interact with one or more human senses, including: tactile UI, visual UI, auditory UI, olfactory UI, equilibrial UI, and gustatory UI. Composite user interfaces (CUIs) are UIs that interact with two or more senses. The most common CUI is a graphical user interface (GUI), composed of a tactile UI and a visual UI capable of displaying graphics; when sound is added to a GUI, it becomes a multimedia user interface. There are three broad categories of CUI: standard, virtual, and augmented. Standard composite user interfaces use standard human interface devices like keyboards and computer monitors. When the CUI blocks out the real world to create a virtual reality, the CUI is virtual and uses a virtual reality interface. When the CUI does not block out the real world and creates augmented reality, the CUI is augmented and uses an augmented reality interface.
When a UI interacts with all human senses, it is called a qualia interface, named after the theory of qualia. CUIs may also be classified by how many senses they interact with, as either an X-sense virtual reality interface or an X-sense augmented reality interface, where X is the number of senses interfaced with. For example, a Smell-O-Vision is a 3-sense standard CUI with visual display, sound, and smells. The user interface or human–machine interface is the part of the machine that handles the human–machine interaction. Membrane switches, rubber keypads, and touchscreens are examples of the physical part of the human–machine interface which we can see and touch. In complex systems, the human–machine interface is computerized; the term human–computer interface refers to this kind of system. In the context of computing, the term extends as well to the software dedicated to controlling the physical elements used for human–computer interaction. The engineering of human–machine interfaces is enhanced by considering ergonomics.
The corresponding disciplines are human factors engineering and usability engineering, which are part of systems engineering. Tools used for incorporating human factors in interface design are developed based on knowledge of computer science, such as computer graphics, operating systems, and programming languages. Nowadays, the expression graphical user interface is used for the human–machine interface on computers, as nearly all of them now use graphics. There is a difference between a user interface and an operator interface or a human–machine interface. The term "user interface" is often used in the context of computer systems and electronic devices where a network of equipment or computers is interlinked through an MES or host to display information. A human–machine interface is local to one machine or piece of equipment and is the interface method between the human and the equipment or machine. An operator interface is the interface method by which multiple pieces of equipment, linked by a host control system, are accessed or controlled.
The system may expose several user interfaces to serve different kinds of users. For example, a computerized library database might provide two user interfaces, one for library patrons and the other for library personnel. The user interface of a mechanical system, a vehicle, or an industrial installation is sometimes referred to as the human–machine interface. HMI is a modification of the original term MMI. In practice, the abbreviation MMI is still used, although some may claim that MMI now stands for something different. Another abbreviation is HCI, but it is more commonly used for human–computer interaction. Other terms used include operator interface terminal. However it is abbreviated, the terms refer to the 'layer' that separates a human operating a machine from the machine itself. Without a clean and usable interface, humans would not be able to