Internet Relay Chat
Internet Relay Chat (IRC) is an application layer protocol that facilitates communication in the form of text. The chat process works on a client/server networking model. IRC clients are computer programs that users can install on their system, or web-based applications running either locally in the browser or on a third-party server; these clients communicate with chat servers to transfer messages to other clients. IRC is designed for group communication in discussion forums, called channels, but also allows one-on-one communication via private messages, as well as chat and data transfer, including file sharing. Client software is available for every major operating system. As of April 2011, the top 100 IRC networks served more than half a million users at a time, with hundreds of thousands of channels operating on a total of 1,500 of the roughly 3,200 servers worldwide. IRC usage has been declining since 2003, losing 60% of its users and half of its channels. IRC was created by Jarkko Oikarinen in August 1988 to replace a program called MUT on a BBS called OuluBox at the University of Oulu in Finland, where he was working at the Department of Information Processing Science.
Jarkko intended to extend the BBS software he administered to allow news in the Usenet style, real-time discussions, and similar BBS features. The first part he implemented was the chat part, which he built with borrowed parts written by his friends Jyrki Kuoppala and Jukka Pihl; the first IRC network ran on a single server named tolsun.oulu.fi. Oikarinen found inspiration in a chat system known as Bitnet Relay, which operated on BITNET. Jyrki Kuoppala pushed Jarkko to ask Oulu University to free the IRC code so that it could be run outside of Oulu; after it was released, Kuoppala installed another server, and this became the first "IRC network". Jarkko got some friends at Helsinki University and Tampere University to start running IRC servers as his number of users increased, and other universities soon followed. At this time Jarkko realized that the rest of the BBS features wouldn't fit in his program. Jarkko got in touch with people at Oregon State University.
They wanted to connect to the Finnish network. They had obtained the program from one of Jarkko's friends, Vijay Subramaniam, the first non-Finnish person to use IRC. IRC grew larger and got used on the entire Finnish national network, Funet, and then connected to Nordunet, the Scandinavian branch of the Internet. By November 1988, IRC had spread across the Internet, and in the middle of 1989 there were some 40 servers worldwide. In August 1990, the first major disagreement took place in the IRC world: the "A-net" included a server named eris.berkeley.edu. It required no passwords and had no limit on the number of connects. As Greg "wumpus" Lindahl explains: "it had a wildcard server line, so people were hooking up servers and nick-colliding everyone". The "Eris Free Network", EFnet, made the eris machine the first to be Q-lined from IRC. In wumpus' words again: "Eris refused to remove that line, it wasn't much of a fight. A-net was formed with the eris servers, EFnet was formed with the non-eris servers."
History showed most users went with EFnet. Once A-net disbanded, the name EFnet became meaningless, and it was once again the one and only IRC network. It was around that time that IRC was used to report on the 1991 Soviet coup d'état attempt throughout a media blackout; it was used in a similar fashion during the Gulf War. Chat logs of these and other events are kept in the ibiblio archive. Another fork effort, the first that made a big and lasting difference, was initiated by 'Wildthang' in the U.S. in October 1992. It was meant to be just a test network to develop bots on, but it grew into a network "for friends and their friends". In Europe and Canada a separate new network was being worked on, and in December the French servers connected to the Canadian ones; by the end of the month, the French and Canadian network was connected to the US one, forming the network that came to be called "The Undernet". The "undernetters" wanted to take ircd further in an attempt to make it less bandwidth-consumptive and to try to sort out the channel chaos that EFnet had started to suffer from.
For the latter purpose, the Undernet implemented timestamps and new routing, and offered the CService, a program that allowed users to register channels and attempted to protect them from troublemakers. The first server list presented, from February 15, 1993, includes servers from the USA, France, and Japan. On August 15, the new user count record was set at 57 users. In May 1993, RFC 1459 was published, detailing a simple protocol for client/server operation and for one-to-one and one-to-many conversations. Notably, a significant number of extensions like CTCP, colors, and formatting are not included in the protocol specification, nor is character encoding, which led various server and client implementations to diverge. In practice, software implementation varied from one network to the other, each network implementing its own policies and standards in its own code base. During the summer of 1994, the Undernet was itself forked; the new network was called DALnet, formed for better user service and more user and channel protections.
One of the more significant changes in DALnet was use of lo
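The message format standardized by RFC 1459 is simple line-oriented text: an optional prefix, a command, and parameters, with the last parameter allowed to contain spaces when introduced by a colon. A minimal parser for that format might look like the following sketch (the function name is our own, not part of any standard library):

```python
def parse_irc_message(line):
    """Parse a raw RFC 1459 message into (prefix, command, params)."""
    prefix = None
    line = line.rstrip("\r\n")
    if line.startswith(":"):
        # An optional prefix names the message origin (server or nick!user@host).
        prefix, _, line = line[1:].partition(" ")
    trailing = None
    if " :" in line:
        # The trailing parameter, after " :", may contain spaces.
        line, _, trailing = line.partition(" :")
    parts = line.split()
    command, params = parts[0], parts[1:]
    if trailing is not None:
        params.append(trailing)
    return prefix, command, params

# e.g. a PRIVMSG as a client might receive it:
msg = ":alice!u@host PRIVMSG #channel :hello there"
print(parse_irc_message(msg))
# → ('alice!u@host', 'PRIVMSG', ['#channel', 'hello there'])
```

A real client would apply the same parse to every line read from the server socket; this sketch ignores IRCv3 message tags and other later extensions.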
The user interface (UI), in the industrial design field of human–computer interaction, is the space where interactions between humans and machines occur. The goal of this interaction is to allow effective operation and control of the machine from the human end, while the machine simultaneously feeds back information that aids the operators' decision-making process. Examples of this broad concept of user interfaces include the interactive aspects of computer operating systems, hand tools, heavy machinery operator controls, and process controls. The design considerations applicable when creating user interfaces are related to, or involve, such disciplines as ergonomics and psychology. The goal of user interface design is to produce a user interface which makes it easy and enjoyable to operate a machine in the way which produces the desired result; this means that the operator needs to provide minimal input to achieve the desired output, and that the machine minimizes undesired outputs to the human. User interfaces are composed of one or more layers, including a human–machine interface (HMI) that interfaces machines with physical input hardware such as keyboards and game pads, and output hardware such as computer monitors and printers.
A device that implements an HMI is called a human interface device (HID). Other terms for human–machine interfaces are man–machine interface (MMI) and, when the machine in question is a computer, human–computer interface. Additional UI layers may interact with one or more human senses, including: tactile UI, visual UI, auditory UI, olfactory UI, equilibrial UI, and gustatory UI. Composite user interfaces (CUIs) are UIs that interact with two or more senses; the most common CUI is a graphical user interface (GUI), composed of a tactile UI and a visual UI capable of displaying graphics. When sound is added to a GUI, it becomes a multimedia user interface. There are three broad categories of CUI: standard, virtual, and augmented. Standard composite user interfaces use standard human interface devices like keyboards and computer monitors. When the CUI blocks out the real world to create a virtual reality, the CUI is virtual and uses a virtual reality interface. When the CUI does not block out the real world and creates augmented reality, the CUI is augmented and uses an augmented reality interface.
When a UI interacts with all human senses, it is called a qualia interface, named after the theory of qualia. CUIs may also be classified by how many senses they interact with, as either an X-sense virtual reality interface or an X-sense augmented reality interface, where X is the number of senses interfaced with. For example, a Smell-O-Vision is a 3-sense standard CUI with visual display, sound, and smells. The user interface or human–machine interface is the part of the machine that handles the human–machine interaction. Membrane switches, rubber keypads, and touchscreens are examples of the physical part of the human–machine interface which we can see and touch. In complex systems, the human–machine interface is typically computerized; the term human–computer interface refers to this kind of system. In the context of computing, the term extends as well to the software dedicated to controlling the physical elements used for human–computer interaction. The engineering of human–machine interfaces is enhanced by considering ergonomics.
The corresponding disciplines are human factors engineering and usability engineering, which are part of systems engineering. Tools used for incorporating human factors in interface design are developed based on knowledge of computer science, such as computer graphics, operating systems, and programming languages. Nowadays, we use the expression graphical user interface for the human–machine interface on computers, as nearly all of them now use graphics. There is a difference between a user interface and an operator interface or a human–machine interface. The term "user interface" is used in the context of computer systems and electronic devices, where a network of equipment or computers is interlinked through an MES or host to display information. A human–machine interface is typically local to one machine or piece of equipment, and is the interface method between the human and the equipment/machine. An operator interface is the interface method by which multiple pieces of equipment that are linked by a host control system are accessed or controlled.
The system may expose several user interfaces to serve different kinds of users. For example, a computerized library database might provide two user interfaces, one for library patrons and the other for library personnel. The user interface of a mechanical system, a vehicle, or an industrial installation is sometimes referred to as the human–machine interface. HMI is a modification of the original term MMI. In practice, the abbreviation MMI is still used, although some may claim that MMI now stands for something different. Another abbreviation is HCI, but this is more commonly used for human–computer interaction. Another term used is operator interface terminal. However it is abbreviated, these terms refer to the 'layer' that separates a human, operating a machine, from the machine itself. Without a clean and usable interface, humans would not be able to interact with information systems.
macOS is a series of graphical operating systems developed and marketed by Apple Inc. since 2001. It is the primary operating system for Apple's Mac family of computers. Within the market of desktop and home computers, by web usage, it is the second most used desktop OS, after Microsoft Windows. macOS is the second major series of Macintosh operating systems. The first is colloquially called the "classic" Mac OS, introduced in 1984; its final release, Mac OS 9, arrived in 1999. The first desktop version, Mac OS X 10.0, was released in March 2001, with its first update, 10.1, arriving later that year. After this, Apple began naming its releases after big cats, which lasted until OS X 10.8 Mountain Lion. Since OS X 10.9 Mavericks, releases have been named after locations in California. Apple shortened the name to "OS X" in 2012 and changed it to "macOS" in 2016, adopting the nomenclature it was using for its other operating systems, iOS, watchOS, and tvOS. The latest version is macOS Mojave, publicly released in September 2018.
Between 1999 and 2009, Apple sold a separate series of operating systems called Mac OS X Server. The initial version, Mac OS X Server 1.0, was released in 1999 with a user interface similar to Mac OS 8.5. After this, new versions were introduced concurrently with the desktop version of Mac OS X. Beginning with Mac OS X 10.7 Lion, the server functions were made available as a separate package on the Mac App Store. macOS is based on technologies developed between 1985 and 1997 at NeXT, a company that Apple co-founder Steve Jobs created after leaving the company. The "X" in Mac OS X and OS X is the Roman numeral for ten and is officially pronounced as such. The X was a prominent part of the operating system's brand identity and marketing in its early years, but it has receded in prominence since the release of Snow Leopard in 2009. UNIX 03 certification was achieved for the Intel version of Mac OS X 10.5 Leopard, and all releases from Mac OS X 10.6 Snow Leopard up to the current version also have UNIX 03 certification. macOS shares its Unix-based core, named Darwin, and many of its frameworks with iOS, tvOS, and watchOS.
A modified version of Mac OS X 10.4 Tiger was used for the first-generation Apple TV. Releases of Mac OS X from 1999 to 2005 ran on the PowerPC-based Macs of that period. After Apple announced that it was switching to Intel CPUs from 2006 onwards, versions were released for 32-bit and 64-bit Intel-based Macs. Versions from Mac OS X 10.7 Lion onward run only on 64-bit Intel CPUs (in contrast to the ARM architecture used on iOS and watchOS devices) and do not support PowerPC applications. The heritage of what would become macOS originated at NeXT, a company founded by Steve Jobs following his departure from Apple in 1985. There, the Unix-like NeXTSTEP operating system was developed and launched in 1989. The kernel of NeXTSTEP is based upon the Mach kernel, developed at Carnegie Mellon University, with additional kernel layers and low-level user space code derived from parts of BSD. Its graphical user interface was built on top of an object-oriented GUI toolkit using the Objective-C programming language. Throughout the early 1990s, Apple had tried to create a "next-generation" OS to succeed its classic Mac OS through the Taligent and Gershwin projects, but all of them were eventually abandoned.
This led Apple to purchase NeXT in 1996, allowing NeXTSTEP, by then called OPENSTEP, to serve as the basis for Apple's next-generation operating system. The purchase also led to Steve Jobs returning to Apple, first as interim and then as permanent CEO, shepherding the transformation of the programmer-friendly OPENSTEP into a system that would be adopted by Apple's primary market of home users and creative professionals. The project was first code-named "Rhapsody" and later officially named Mac OS X. Mac OS X was presented as the tenth major version of Apple's operating system for Macintosh computers. Previous Macintosh operating systems were named using Arabic numerals, as with Mac OS 8 and Mac OS 9; the letter "X" in Mac OS X's name refers to a Roman numeral, and it is therefore pronounced "ten" in this context. However, it is also commonly pronounced like the letter "X". The first version of Mac OS X, Mac OS X Server 1.0, was a transitional product featuring an interface resembling the classic Mac OS, though it was not compatible with software designed for the older system.
Consumer releases of Mac OS X included more backward compatibility: Mac OS applications could be rewritten to run natively via the Carbon API. The consumer version of Mac OS X was launched in 2001 with Mac OS X 10.0. Reviews were variable, with extensive praise for its sophisticated, glossy Aqua interface but criticism of its sluggish performance. With Apple's popularity at a low, the makers of several classic Mac applications such as FrameMaker and PageMaker declined to develop new versions of their software for Mac OS X. Ars Technica columnist John Siracusa, who reviewed every major OS X release up to 10.10, described the early releases in retrospect as "dog-slow, feature poor" and Aqua as "unbearably slow and a huge resource hog". Apple went on to develop several new releases of Mac OS X. Siracusa's review of version 10.3 noted: "It's strange to have gone from years of uncertainty and vaporware to a steady annual supply of major new operating system releases." Version 10.4, Tiger, shocked executives at Microsoft by offering a number of features, such as fast file searching.
Online chat may refer to any kind of communication over the Internet that offers a real-time transmission of text messages from sender to receiver. Chat messages are generally short in order to enable other participants to respond quickly. Thereby, a feeling similar to a spoken conversation is created, which distinguishes chatting from other text-based online communication forms such as Internet forums and email. Online chat may address point-to-point communications as well as multicast communications from one sender to many receivers, and voice and video chat, or may be a feature of a web conferencing service. Online chat, in a less stringent definition, may be any direct text-based or video-based, one-on-one chat or one-to-many group chat, using tools such as instant messengers, Internet Relay Chat, talkers, and MUDs. The expression online chat comes from the word chat, which means "informal conversation". Online chat includes web-based applications that allow communication, often directly addressed but anonymous, between users in a multi-user environment.
Web conferencing is a more specific online service, sold as a service and hosted on a web server controlled by the vendor. The first online chat system was called Talkomatic, created by Doug Brown and David R. Woolley in 1973 on the PLATO System at the University of Illinois. It offered several channels, each of which could accommodate up to five people, with messages appearing on all users' screens character-by-character as they were typed. Talkomatic was popular among PLATO users into the mid-1980s. In 2014, Brown and Woolley released a web-based version of Talkomatic. The first online system to use the actual command "chat" was created for The Source in 1979 by Tom Walker and Fritz Thane of Dialcom, Inc. The first transatlantic Internet chat took place between Oulu and Corvallis, Oregon in February 1989. The first dedicated online chat service widely available to the public was the CompuServe CB Simulator in 1980, created by CompuServe executive Alexander "Sandy" Trevor in Columbus, Ohio. Ancestors include network chat software such as UNIX "talk", used in the 1970s.
The term chatiquette describes basic rules of online communication. These conventions or guidelines have been created to avoid misunderstandings and to simplify the communication between users. Chatiquette varies from community to community and describes basic courtesy; as an example, it is considered rude to write only in upper case, because it appears as if the user is shouting. The word "chatiquette" has been used in connection with various chat systems since 1995. Chatrooms can produce a strong sense of online identity, leading to the impression of a subculture. Chats are valuable sources of various types of information, the automatic processing of which is the object of chat/text mining technologies. Criticisms of online chatting and text messaging include concern that they replace proper English with shorthand or with a completely new hybrid language. Writing is changing as it takes on some of the features of speech. Internet chat rooms and rapid real-time teleconferencing allow users to interact with whoever happens to coexist in cyberspace.
These virtual interactions involve us in 'talking' more, and more often, than before. With chatrooms replacing many face-to-face conversations, it is necessary to be able to have quick conversations as if the person were present, so many people learn to type as quickly as they would speak. Some critics are wary that this casual form of speech is being used so much that it will take over common grammar. With the increasing population of online chatrooms, there has been a massive growth of newly created words and slang words, many of them documented on the website Urban Dictionary. Sven Birkerts wrote: "as new electronic modes of communication provoke similar anxieties amongst critics who express concern that young people are at risk, endangered by a rising tide of information over which the traditional controls of print media and the guardians of knowledge have no control". In his journal article Teenagers in Cyberspace: An Investigation of Language Use and Language Change in Internet Chatrooms, Guy Merchant argues that this new literacy develops skills that may well be important to the labor market but are viewed with suspicion in the media and by educationalists.
Merchant says, "Younger people tend to be more adaptable than other sectors of society and, in general, quicker to adapt to new technology. To some extent they are the innovators, the forces of change in the new communication landscape." In this article he is saying that young people are simply adapting to what they were given. See also: Chat room; Collaborative software; Instant messaging; Internet forum; List of virtual communities with more than 100 million active users; Online dating service; Real-time text; Videotelephony; Voice chat.
Real-time text is text transmitted as it is typed or created. Recipients can read the message while it is being written, without waiting. Real-time text is used for conversational text, in collaboration, and in live captioning. Technologies include TDD/TTY devices for the deaf, live captioning for TV, Text over IP, some types of instant messaging, captioning for telephony/video teleconferencing, telecommunications relay services including ip-relay, transcription services including Remote CART and TypeWell, collaborative text editing, streaming text applications, and next-generation 9-1-1/1-1-2 emergency service. Obsolete TDD/TTY devices are being replaced by more modern real-time text technologies, including Text over IP, ip-relay, and instant messaging. During 2012, the Real-Time Text Taskforce designed a standard international symbol to represent real-time text, as well as the alternate name "Fast Text", to improve public awareness of the technology. While standard instant messaging is not real-time text, a real-time text option is found in some instant messaging software, including AOL Instant Messenger's "Real-Time IM" feature.
Real-time text is possible over any XMPP-compatible chat network, including those used by Apple iChat, Cisco WebEx, and Google Talk, by using appropriate software that has a real-time text feature. When present in IM programs, the real-time text feature can be turned on and off, just like other chat features such as audio. Real-time text programs date at least to the 1970s, with the talk program on the DEC PDP-11, which remains in use on Unix systems. Certain real-time text applications have a feature that allows the real-time text to be "turned off" temporarily; this allows the sender to pre-compose the message as a standard IM or text message before transmitting. Real-time text is used by the deaf, including via IP-Relay services, TDD/TTY devices, and Text over IP. Real-time text allows the other person to read without waiting for the sender to finish composing his or her sentence or message; this allows conversational use of text, much like a hearing person can listen to someone speaking in real time.
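The difference between real-time text and ordinary instant messaging can be modeled as a stream of edit operations applied to a shared buffer, so that the recipient sees every keystroke, including corrections, as it happens. The sketch below is a toy model of that idea; the class and method names are hypothetical, not the XEP-0301 or T.140 wire format:

```python
class RealTimeText:
    """Toy model of real-time text: every edit is delivered immediately,
    so the recipient watches the message being composed."""

    def __init__(self):
        self.buffer = ""
        self.views = []  # snapshots of what the recipient has seen so far

    def insert(self, pos, text):
        self.buffer = self.buffer[:pos] + text + self.buffer[pos:]
        self.views.append(self.buffer)  # recipient sees this state instantly

    def erase(self, pos, n=1):
        # Remove n characters ending at pos (a backspace-style correction).
        self.buffer = self.buffer[:pos - n] + self.buffer[pos:]
        self.views.append(self.buffer)

rtt = RealTimeText()
for ch in "helko":                    # typist makes a typo...
    rtt.insert(len(rtt.buffer), ch)
rtt.erase(len(rtt.buffer), 2)         # ...backspaces over it...
for ch in "lo":                       # ...and finishes the word
    rtt.insert(len(rtt.buffer), ch)
print(rtt.buffer)
# → hello
```

In a plain IM system only the final "hello" would ever be transmitted; here `rtt.views` also contains the intermediate states such as "helko", which is exactly what makes the medium conversational.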
Captioned telephony is the streaming of real-time text captions in parallel with speech on a phone call. This is used by people who are hard of hearing, allowing them to have the full benefit of listening as best they can, hearing all the intonation in speech, while having captions for those words they cannot hear well enough. In the United States, captioned telephony is one of the free relay services available to anyone who is hard of hearing. Developed for use on analog phone systems, it is now available over IP using standard devices. Collaborative real-time editing is the utilization of real-time text for shared editing rather than for conversation. Split-screen chat, where conversational text appears continuously, is also considered real-time text; some examples that provide this as a service are Apache Wave and its fork SwellRT, the editor Gobby, and most notably Google Docs. Real-time text is also used in closed captioning when captions are being streamed live continuously during live events. Transcription services including Communication Access Real-Time Translation (CART) and TypeWell use real-time text, where text is streamed live to a remote display.
This is used in court reporting and by deaf attendees at conferences. Real-time text also provides an enhancement to text messaging on mobile phones, via real-time texting apps. Real-time text protocols include Text over IP designed around ITU-T T.140, IETF RFC 4103, RFC 5194, and the XMPP extension protocol XEP-0301. According to ITU-T Multimedia Recommendation F.703, total conversation is defined as the simultaneous use of video, audio, and real-time text; an instant messaging program that can enable all three features would be compliant, and real-time text is an important part of it. Real-time text is historically found in the old UNIX talk, BBS software such as Celerity BBS, and older versions of the ICQ messaging software. See also: Collaborative real-time editor; Text over IP. External links: realtimetext.org (Real Time Text Taskforce); realjabber.org (animation of what real-time text looks like; information on XEP-0301); Total Conversation by IVèS; Total Conversation by Omnitor; Total Conversation in the cloud.
Berkeley Software Distribution
The Berkeley Software Distribution (BSD) was an operating system based on Research Unix, developed and distributed by the Computer Systems Research Group (CSRG) at the University of California, Berkeley. Today, "BSD" often refers to its descendants, such as FreeBSD, OpenBSD, NetBSD, or DragonFly BSD. BSD was also called Berkeley Unix because it was based on the source code of the original Unix developed at Bell Labs. In the 1980s, BSD was adopted by workstation vendors in the form of proprietary Unix variants such as DEC Ultrix and Sun Microsystems SunOS, due to its permissive licensing and familiarity to many technology company founders and engineers. Although these proprietary BSD derivatives were largely superseded in the 1990s by UNIX SVR4 and OSF/1, later releases provided the basis for several open-source operating systems including FreeBSD, OpenBSD, NetBSD, DragonFly BSD, and TrueOS. These, in turn, have been used by proprietary operating systems, including Apple's macOS and iOS, which derived from them, and Microsoft Windows, which used part of BSD's TCP/IP code.
The earliest distributions of Unix from Bell Labs in the 1970s included the source code to the operating system, allowing researchers at universities to modify and extend Unix. The operating system arrived at Berkeley in 1974, at the request of computer science professor Bob Fabry, who had been on the program committee for the Symposium on Operating Systems Principles where Unix was first presented. A PDP-11/45 was bought to run the system, but for budgetary reasons this machine was shared with the mathematics and statistics groups at Berkeley, who used RSTS, so that Unix only ran on the machine eight hours per day. A larger PDP-11/70 was installed at Berkeley the following year, using money from the Ingres database project. In 1975, Ken Thompson came to Berkeley as a visiting professor, where he started working on a Pascal implementation for the system. Graduate students Chuck Haley and Bill Joy improved Thompson's Pascal and implemented an improved text editor, ex. Other universities became interested in the software at Berkeley, so in 1977 Joy started compiling the first Berkeley Software Distribution (1BSD), which was released on March 9, 1978.
1BSD was an add-on to Version 6 Unix rather than a complete operating system in its own right. Some thirty copies were sent out. The second Berkeley Software Distribution (2BSD), released in May 1979, included updated versions of the 1BSD software as well as two new programs by Joy that persist on Unix systems to this day: the vi text editor and the C shell. Some 75 copies of 2BSD were sent out by Bill Joy. A VAX computer was installed at Berkeley in 1978, but the port of Unix to the VAX architecture, UNIX/32V, did not take advantage of the VAX's virtual memory capabilities. The kernel of 32V was rewritten by Berkeley students to include a virtual memory implementation, and a complete operating system, including the new kernel, ports of the 2BSD utilities to the VAX, and the utilities from 32V, was released as 3BSD at the end of 1979. 3BSD was alternatively called Virtual VAX/UNIX or VMUNIX, and BSD kernel images were called /vmunix until 4.4BSD. After 4.3BSD was released in June 1986, it was determined that BSD would move away from the aging VAX platform.
The Power 6/32 platform developed by Computer Consoles Inc. seemed promising at the time, but was abandoned by its developers shortly thereafter. Nonetheless, the 4.3BSD-Tahoe port to that platform proved valuable, as it led to a separation of machine-dependent and machine-independent code in BSD, which would improve the system's future portability. In addition to portability, the CSRG worked on an implementation of the OSI network protocol stack, improvements to the kernel virtual memory system, and new TCP/IP algorithms to accommodate the growth of the Internet. Until then, all versions of BSD used proprietary AT&T Unix code and were therefore subject to an AT&T software license. Source code licenses had become expensive, and several outside parties had expressed interest in a separate release of the networking code, which had been developed outside AT&T and would not be subject to the licensing requirement. This led to Networking Release 1 (Net/1), made available to non-licensees of AT&T code and redistributable under the terms of the BSD license.
It was released in June 1989. After Net/1, BSD developer Keith Bostic proposed that more non-AT&T sections of the BSD system be released under the same license as Net/1. To this end, he started a project to reimplement most of the standard Unix utilities without using the AT&T code. Within eighteen months, all of the AT&T utilities had been replaced, and it was determined that only a few AT&T files remained in the kernel. These files were removed, and the result was the June 1991 release of Networking Release 2 (Net/2), a nearly complete operating system that was freely distributable. Net/2 was the basis for two separate ports of BSD to the Intel 80386 architecture: the free 386BSD by William Jolitz and the proprietary BSD/386 by Berkeley Software Design (BSDi). 386BSD itself was short-lived, but became the initial code base of the NetBSD and FreeBSD projects that were started shortly thereafter. BSDi soon found itself in legal trouble with AT&T's Unix System Laboratories (USL) subsidiary, then the owners of the System V copyright and the Unix trademark.
The USL v. BSDi lawsuit was filed in 1992 and led to an injunction on the distribution of Net/2 until the validity of USL's copyright claims on the source could be determined. The lawsuit slowed development of the free-software descendants of BSD while their legal status was in question.
Unix is a family of multitasking, multiuser computer operating systems that derive from the original AT&T Unix, whose development started in the 1970s at the Bell Labs research center by Ken Thompson, Dennis Ritchie, and others. Initially intended for use inside the Bell System, Unix was licensed by AT&T to outside parties in the late 1970s, leading to a variety of both academic and commercial Unix variants from vendors including the University of California, Microsoft, IBM, and Sun Microsystems. In the early 1990s, AT&T sold its rights in Unix to Novell, which then sold its Unix business to the Santa Cruz Operation in 1995. The UNIX trademark passed to The Open Group, a neutral industry consortium, which allows the use of the mark for certified operating systems that comply with the Single UNIX Specification. As of 2014, the Unix version with the largest installed base is Apple's macOS. Unix systems are characterized by a modular design that is sometimes called the "Unix philosophy": the operating system provides a set of simple tools, each of which performs a limited, well-defined function, with a unified filesystem as the main means of communication and a shell scripting and command language to combine the tools to perform complex workflows.
Unix distinguishes itself from its predecessors as the first portable operating system: the entire operating system is written in the C programming language, allowing Unix to reach numerous platforms. Unix was originally meant to be a convenient platform for programmers developing software to be run on it and on other systems, rather than for non-programmers. The system grew larger as the operating system started spreading in academic circles, and as users added their own tools to the system and shared them with colleagues. At first, Unix was not designed to be multi-tasking; it later gained portability, multi-tasking, and multi-user capabilities in a time-sharing configuration. Unix systems are characterized by various concepts, such as the use of plain text for storing data and the combination of small single-purpose programs through pipes; these concepts are collectively known as the "Unix philosophy". Brian Kernighan and Rob Pike summarize this in The Unix Programming Environment as "the idea that the power of a system comes more from the relationships among programs than from the programs themselves".
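The philosophy Kernighan and Pike describe is visible in everyday shell pipelines, where small single-purpose tools are combined through pipes. The sketch below drives one such pipeline from Python; it assumes a POSIX shell with the standard printf, sort, and uniq utilities:

```python
import subprocess

# Each stage does one small job: emit lines, sort them, count duplicates,
# then order by count. No single tool knows about the whole workflow;
# the pipe operator supplies the "relationships among programs".
pipeline = "printf 'b\\na\\nb\\n' | sort | uniq -c | sort -rn"
result = subprocess.run(["sh", "-c", pipeline], capture_output=True, text=True)
print(result.stdout)
```

The same computation could be written as one monolithic program, but composing existing tools is the idiomatic Unix approach: here the duplicate line "b" ends up counted as 2 and sorted first, without any of the four programs being aware of the others.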
In an era when a standard computer consisted of a hard disk for storage and a data terminal for input and output, the Unix file model worked quite well, as I/O was generally linear. In the 1980s, non-blocking I/O and the set of inter-process communication mechanisms were augmented with Unix domain sockets, shared memory, message queues, and semaphores, and network sockets were added to support communication with other hosts. As graphical user interfaces developed, the file model proved inadequate to the task of handling asynchronous events such as those generated by a mouse. By the early 1980s, users began seeing Unix as a potential universal operating system, suitable for computers of all sizes; the Unix environment and the client–server program model were essential elements in the development of the Internet and the reshaping of computing as centered in networks rather than in individual computers. Both Unix and the C programming language were developed by AT&T and distributed to government and academic institutions, which led to both being ported to a wider variety of machine families than any other operating system.
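Two of the mechanisms named above can be shown together in a short sketch: a socketpair is a pre-connected pair of Unix domain sockets, and `select()` is the kind of multiplexing facility that asynchronous event sources pushed Unix to grow. A minimal example, assuming a POSIX system:

```python
import select
import socket

# socketpair() returns two connected Unix domain sockets -- one of the
# IPC mechanisms added in the 1980s alongside shared memory and queues.
parent_end, child_end = socket.socketpair()  # AF_UNIX on POSIX systems
child_end.send(b"ping")

# select() waits on several descriptors at once, the model that lets a
# single process watch a mouse, a pipe, and a network peer together.
readable, _, _ = select.select([parent_end], [], [], 1.0)
msg = parent_end.recv(4) if readable else None
parent_end.close()
child_end.close()
```

Because both ends are ordinary file descriptors, they fit the Unix file model even though the data flowing through them is no longer linear disk I/O.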
Under Unix, the operating system consists of many libraries and utilities along with the master control program, the kernel. The kernel provides services to start and stop programs, handles the file system and other common "low-level" tasks that most programs share, and schedules access to avoid conflicts when programs try to use the same resource or device simultaneously. To mediate such access, the kernel has special rights, reflected in the division between user space and kernel space, although in microkernel implementations such as MINIX or Redox, functions such as network protocols may run in user space.

The origins of Unix date back to the mid-1960s, when the Massachusetts Institute of Technology, Bell Labs, and General Electric were developing Multics, a time-sharing operating system for the GE-645 mainframe computer. Multics featured several innovations but presented severe problems. Frustrated by the size and complexity of Multics, though not by its goals, individual researchers at Bell Labs started withdrawing from the project.
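The user-space/kernel-space division described above is visible whenever a program asks the kernel for a service through a system call. A minimal sketch, using Python's thin wrappers over the classic Unix calls (open, write, read, close):

```python
import os
import tempfile

# Each os.* call below crosses from user space into the kernel; the file
# descriptor the kernel returns is the handle user space keeps across
# that boundary.
fd, path = tempfile.mkstemp()     # kernel creates the file, hands back fd
os.write(fd, b"hello kernel")     # write(2): kernel moves bytes to the file
os.close(fd)

fd = os.open(path, os.O_RDONLY)   # open(2): ask the kernel for read access
data = os.read(fd, 100)           # read(2): kernel copies bytes back to us
os.close(fd)
os.unlink(path)                   # unlink(2): kernel removes the name
```

The program never touches the disk or the filesystem structures directly; it only names what it wants, and the kernel, with its special rights, performs the privileged work.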
The last to leave were Ken Thompson, Dennis Ritchie, Douglas McIlroy, and Joe Ossanna, who decided to apply their experiences in a new project of smaller scale. This new operating system was initially without organizational backing and without a name, and it was a single-tasking system. In 1970, the group coined the name Unics, for Uniplexed Information and Computing Service, as a pun on Multics, which stood for Multiplexed Information and Computer Services. Brian Kernighan takes credit for the idea, but adds that "no one can remember" the origin of the final spelling Unix; Dennis Ritchie, Doug McIlroy, and Peter G. Neumann also credit Kernighan. The operating system was originally written in assembly language, but in 1973, Version 4 Unix was rewritten in C. Version 4 Unix, however, still contained much PDP-11-dependent code and was not suitable for porting; the first port to another platform was made five years later.