In computing, a computer keyboard is a typewriter-style device which uses an arrangement of buttons or keys to act as mechanical levers or electronic switches. Following the decline of punch cards and paper tape, interaction via teleprinter-style keyboards became the main input method for computers. Keyboard keys have characters engraved or printed on them, and each press of a key typically corresponds to a single written symbol. However, producing some symbols may require pressing and holding several keys simultaneously or in sequence. While most keyboard keys produce letters, numbers or signs, other keys or simultaneous key presses can produce actions or execute computer commands. In normal usage, the keyboard is used as a text entry interface for typing text and numbers into a word processor, text editor or any other program. In a modern computer, the interpretation of key presses is left to the software: a computer keyboard distinguishes each physical key from every other key and reports all key presses to the controlling software.
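Because interpretation is left to software, the mapping from a reported key press (plus any held modifier keys) to a written symbol can be pictured as a simple lookup. The key names and the tiny layout fragment below are invented for illustration and do not correspond to any real scancode set:

```python
# Toy illustration: the keyboard reports raw key identifiers; the
# software decides what symbol (if any) a given combination produces.
UNSHIFTED = {"KEY_A": "a", "KEY_1": "1", "KEY_SEMI": ";"}
SHIFTED   = {"KEY_A": "A", "KEY_1": "!", "KEY_SEMI": ":"}

def interpret(key, modifiers=frozenset()):
    """Map a reported key press plus held modifiers to a symbol."""
    table = SHIFTED if "SHIFT" in modifiers else UNSHIFTED
    return table.get(key)  # None for keys with no printable symbol

print(interpret("KEY_A"))             # a
print(interpret("KEY_1", {"SHIFT"}))  # !
```

The same physical key yields different symbols depending on the modifier state, which is why a keyboard reports keys, not characters.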
Keyboards are also used for computer gaming, either regular keyboards or keyboards with special gaming features, which can expedite frequently used keystroke combinations. A keyboard is also used to give commands to the operating system of a computer, such as Windows' Control-Alt-Delete combination. Although on pre-Windows 95 Microsoft operating systems this combination forced a reboot, it now brings up a system security options screen. A command-line interface is a type of user interface navigated using a keyboard, or some other similar device that does the job of one. While typewriters are the definitive ancestor of all key-based text entry devices, the computer keyboard as a device for electromechanical data entry and communication derives largely from the utility of two devices: teleprinters and keypunches; it was through such devices that modern computer keyboards inherited their layouts. As early as the 1870s, teleprinter-like devices were used to type and transmit stock market text data from the keyboard across telegraph lines to stock ticker machines, to be copied and displayed on ticker tape.
The teleprinter, in its more contemporary form, was developed from 1907 to 1910 by American mechanical engineer Charles Krum and his son Howard, with early contributions by electrical engineer Frank Pearne; earlier models had been developed separately by individuals such as Royal Earl House and Frederick G. Creed. Earlier still, Herman Hollerith had developed the first keypunch devices, which by the 1930s had evolved to include keys for text and number entry akin to normal typewriters. The keyboard on the teleprinter played a strong role in point-to-point and point-to-multipoint communication for most of the 20th century, while the keyboard on the keypunch device played a strong role in data entry and storage for just as long. The earliest computers incorporated electric typewriter keyboards: the ENIAC used a keypunch device as both its input and paper-based output device, while the BINAC made use of an electromechanically controlled typewriter for both data entry onto magnetic tape and data output.
The keyboard remained the primary, most integrated computer peripheral well into the era of personal computing until the introduction of the mouse as a consumer device in 1984. By this time, text-only user interfaces with sparse graphics had given way to comparatively graphics-rich icons on screen. However, keyboards remain central to human-computer interaction to the present, even as mobile personal computing devices such as smartphones and tablets adapt the keyboard as an optional virtual, touchscreen-based means of data entry. One factor determining the size of a keyboard is the presence of duplicate keys, such as a separate numeric keypad or two each of the Shift, Alt and Ctrl keys for convenience. Keyboard size also depends on the extent to which a system uses combinations of subsequent or simultaneous keystrokes, or multiple presses of a single key, to produce a single action. A keyboard with few keys is called a keypad. Another factor determining the size of a keyboard is the spacing of the keys.
Reduction is limited by the practical consideration that the keys must be large enough to be pressed by fingers; alternatively, a tool is used for pressing small keys. Standard alphanumeric keyboards have keys on three-quarter-inch centers with a key travel of at least 0.150 inches. Desktop computer keyboards, such as the 101-key US traditional keyboards or the 104-key Windows keyboards, include alphabetic characters, punctuation symbols, numbers and a variety of function keys. The internationally common 102/105-key keyboards have a smaller left Shift key and an additional key with some more symbols between that and the letter to its right, and the Enter key is shaped differently. Computer keyboards are similar to electric-typewriter keyboards but contain additional keys, such as the Command or Windows keys. There is no standard computer keyboard, although three PC keyboard designs became common: the original PC keyboard with 83 keys, the AT keyboard with 84 keys and the enhanced keyboard with 101 keys.
The three differ somewhat in the placement of the function keys, the control keys, the Return key and the Shift keys. Keyboards on laptops and notebook computers have a shorter travel distance for the keystroke, a shorter over-travel distance and a reduced set of keys; they may not have a numeric keypad, and the function keys may be placed in locations that differ from their placement on a standard, full-sized keyboard.
Microsoft Corporation is an American multinational technology company with headquarters in Redmond, Washington. It develops, licenses and sells computer software, consumer electronics, personal computers and related services. Its best-known software products are the Microsoft Windows line of operating systems, the Microsoft Office suite, and the Internet Explorer and Edge web browsers. Its flagship hardware products are the Xbox video game consoles and the Microsoft Surface lineup of touchscreen personal computers. As of 2016, it was the world's largest software maker by revenue and one of the world's most valuable companies. The word "Microsoft" is a portmanteau of "microcomputer" and "software". Microsoft is ranked No. 30 in the 2018 Fortune 500 rankings of the largest United States corporations by total revenue. Microsoft was founded by Bill Gates and Paul Allen on April 4, 1975, to develop and sell BASIC interpreters for the Altair 8800; it rose to dominate the personal computer operating system market with MS-DOS in the mid-1980s, followed by Microsoft Windows.
The company's 1986 initial public offering and subsequent rise in its share price created three billionaires and an estimated 12,000 millionaires among Microsoft employees. Since the 1990s, it has diversified from the operating system market and has made a number of corporate acquisitions, its largest being the acquisition of LinkedIn for $26.2 billion in December 2016, followed by its acquisition of Skype Technologies for $8.5 billion in May 2011. As of 2015, Microsoft is market-dominant in the IBM PC-compatible operating system market and the office software suite market, although it has lost the majority of the overall operating system market to Android. The company produces a wide range of other consumer and enterprise software for desktops and servers, including Internet search, the digital services market, mixed reality, cloud computing and software development. Steve Ballmer replaced Gates as CEO in 2000 and envisioned a "devices and services" strategy; this began with the acquisition of Danger Inc. in 2008 and continued with the company entering the personal computer production market for the first time in June 2012 with the launch of the Microsoft Surface line of tablet computers.
Since Satya Nadella took over as CEO in 2014, the company has scaled back on hardware and has instead focused on cloud computing, a move that helped the company's shares reach their highest value since December 1999. In 2018, Microsoft surpassed Apple as the most valuable publicly traded company in the world, having been dethroned by the tech giant in 2010. Childhood friends Bill Gates and Paul Allen sought to make a business using their shared skills in computer programming. In 1972 they founded their first company, named Traf-O-Data, which sold a rudimentary computer to track and analyze automobile traffic data. While Gates enrolled at Harvard, Allen pursued a degree in computer science at Washington State University, though he later dropped out to work at Honeywell. The January 1975 issue of Popular Electronics featured Micro Instrumentation and Telemetry Systems's Altair 8800 microcomputer, which inspired Allen to suggest that they could program a BASIC interpreter for the device. After a call from Gates claiming to have a working interpreter, MITS requested a demonstration.
Since they didn't yet have one, Allen worked on a simulator for the Altair while Gates developed the interpreter. Although they developed the interpreter on a simulator and not the actual device, it worked flawlessly when they demonstrated it to MITS in Albuquerque, New Mexico. MITS agreed to distribute it, marketing it as Altair BASIC. Gates and Allen established Microsoft on April 4, 1975, with Gates as CEO; the original name of "Micro-Soft" was suggested by Allen. In August 1977 the company formed an agreement with ASCII Magazine in Japan, resulting in its first international office, "ASCII Microsoft". Microsoft moved to a new home in Bellevue, Washington in January 1979. Microsoft entered the operating system business in 1980 with its own version of Unix, called Xenix, but it was MS-DOS that solidified the company's dominance. After negotiations with Digital Research failed, IBM awarded a contract to Microsoft in November 1980 to provide a version of the CP/M OS to be used in the upcoming IBM Personal Computer.
For this deal, Microsoft purchased a CP/M clone called 86-DOS from Seattle Computer Products, which it branded as MS-DOS, though IBM rebranded it PC DOS. Following the release of the IBM PC in August 1981, Microsoft retained ownership of MS-DOS. Since IBM had copyrighted the IBM PC BIOS, other companies had to reverse engineer it in order for non-IBM hardware to run as IBM PC compatibles, but no such restriction applied to the operating system. Due to various factors, such as MS-DOS's available software selection, Microsoft eventually became the leading PC operating system vendor. The company expanded into new markets with the release of the Microsoft Mouse in 1983, as well as with a publishing division named Microsoft Press. Paul Allen resigned from Microsoft in 1983 after developing Hodgkin's disease. Allen later claimed that Gates had wanted to dilute his share in the company when he was diagnosed, believing that Allen was not working hard enough. After leaving Microsoft, Allen lost billions of dollars on ill-conceived or mistimed technology investments.
He also invested in low-tech sectors, sports teams and commercial real estate.
A computer file is a computer resource for recording data discretely in a computer storage device. Just as words can be written to paper, so can information be written to a computer file, and files can be transferred through the internet. There are different types of computer files, designed for different purposes. A file may be designed to store a picture, a written message, a video, a computer program, or a wide variety of other kinds of data; some types of files can store several types of information at once. By using computer programs, a person can open, change and close a computer file, and computer files may be reopened and copied an arbitrary number of times. Files are organised in a file system, which keeps track of where the files are located on disk and enables user access. The word "file" derives from the Latin filum ("a thread"). "File" was used in the context of computer storage as early as January 1940: in Punched Card Methods in Scientific Computation, W. J. Eckert stated, "The first extensive use of the early Hollerith Tabulator in astronomy was made by Comrie.
He used it for building a table from successive differences, and for adding large numbers of harmonic terms". "Tables of functions are constructed from their differences with great efficiency, either as printed tables or as a file of punched cards." In February 1950, a Radio Corporation of America advertisement in Popular Science Magazine, describing a new "memory" vacuum tube it had developed, stated: "the results of countless computations can be kept 'on file' and taken out again. Such a 'file' now exists in a 'memory' tube developed at RCA Laboratories. Electronically it retains figures fed into calculating machines, holds them in storage while it memorizes new ones - speeds intelligent solutions through mazes of mathematics." In 1952, "file" denoted information stored on punched cards. In early use, the underlying hardware, rather than the contents stored on it, was denominated a "file". For example, the IBM 350 disk drives were denominated "disk files". The introduction, circa 1961, by the Burroughs MCP and the MIT Compatible Time-Sharing System of the concept of a "file system" that managed several virtual "files" on one storage device is the origin of the contemporary denotation of the word.
Although the contemporary "register file" demonstrates the early concept of files, its use has decreased. On most modern operating systems, files are organized into one-dimensional arrays of bytes. The format of a file is defined by its content, since a file is a container for data, although on some platforms the format is indicated by its filename extension, which specifies the rules for how the bytes must be organized and interpreted meaningfully. For example, the bytes of a plain text file are associated with either ASCII or UTF-8 characters, while the bytes of image and audio files are interpreted otherwise. Most file types also allocate a few bytes for metadata, which allows a file to carry some basic information about itself. Some file systems can store arbitrary file-specific data outside of the file format, but linked to the file, for example extended attributes or forks; on other file systems this can be done via software-specific databases. All those methods, however, are more susceptible to loss of metadata than are container and archive file formats.
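The point that a file's format is ultimately defined by its content, with the extension serving only as a hint, can be illustrated by checking a file's leading "magic" bytes. The two signatures below (PNG and PDF) are the real ones, but the sniffer itself is only a minimal sketch:

```python
# Minimal format sniffer: identify data by its leading bytes rather
# than by its filename extension.
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"%PDF": "PDF document",
}

def sniff(data: bytes) -> str:
    """Return a best-guess format name for the given leading bytes."""
    for signature, kind in MAGIC.items():
        if data.startswith(signature):
            return kind
    return "unknown"  # a real tool might fall back to the extension here

print(sniff(b"%PDF-1.7\n..."))   # PDF document
print(sniff(b"hello, world\n"))  # unknown
```

A file named report.pdf that does not begin with %PDF would still sniff as "unknown", which is exactly the content-over-extension distinction described above.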
At any instant in time, a file might have a size, expressed as a number of bytes, that indicates how much storage is associated with the file. In most modern operating systems the size can be any non-negative whole number of bytes up to a system limit. Many older operating systems kept track only of the number of blocks or tracks occupied by a file on a physical storage device; in such systems, software employed other methods to track the exact byte count. The general definition of a file does not require that its size have any real meaning, however, unless the data within the file happens to correspond to data within a pool of persistent storage. A special case is a zero-byte file. For example, the file to which the link /bin/ls points in a typical Unix-like system has a defined size that seldom changes. Compare this with /dev/null, which is also a file, but whose size may be obscure. Information in a computer file can consist of smaller packets of information that are individually different but share some common traits. For example, a payroll file might contain information concerning all the employees in a company and their payroll details.
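The size bookkeeping described above can be observed directly through the operating system's file metadata calls; a small Python sketch using the standard library:

```python
import os
import tempfile

# Write a 15-byte file and ask the operating system how much storage
# it reports for it.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"payroll record\n")
    path = f.name

size = os.stat(path).st_size
print(size)  # 15
os.remove(path)

# An empty file is the zero-byte special case mentioned above.
with tempfile.NamedTemporaryFile(delete=False) as f:
    empty_path = f.name

empty_size = os.stat(empty_path).st_size
print(empty_size)  # 0
os.remove(empty_path)
```

On a Unix-like system the same call on /dev/null also reports 0 bytes, even though the device discards everything written to it.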
A text file may contain lines of text, corresponding to printed lines on a piece of paper. Alternatively, a file may contain an arbitrary binary image or it may contain an executable. The way information is grouped into a file is entirely up to how it is designed; this has led to a plethora of more or less standardized file structures for all imaginable purposes, from the simplest to the most complex. Most computer files are used by computer programs which create, modify or delete the files for their own use on an as-needed basis. The programmers who create the programs decide what files are needed, how they are to be used, and their names.
A computer mouse is a hand-held pointing device that detects two-dimensional motion relative to a surface. This motion is translated into the motion of a pointer on a display, which allows smooth control of the graphical user interface. The first public demonstration of a mouse controlling a computer system was in 1968. Originally wired to a computer, many modern mice are cordless, relying on short-range radio communication with the connected system. Mice originally used a ball rolling on a surface to detect motion, but modern mice often have optical sensors that have no moving parts. In addition to moving a cursor, computer mice have one or more buttons to allow operations such as the selection of a menu item on a display. Mice often also feature other elements, such as touch surfaces and "wheels", which enable additional control and dimensional input. The earliest known publication of the term mouse as referring to a computer pointing device is in Bill English's July 1965 publication, "Computer-Aided Display Control", the name probably originating from its resemblance to the shape and size of a mouse, a rodent, with the cord resembling its tail.
The plural for the small rodent is always "mice" in modern usage. The plural of a computer mouse is either "mouses" or "mice" according to most dictionaries, with "mice" being more common; the first recorded plural usage is "mice". Although the plural of mouse is mice, the two words have undergone a differentiation through usage. The trackball, a related pointing device, was invented in 1946 by Ralph Benjamin as part of a post-World War II-era fire-control radar plotting system called the Comprehensive Display System, while Benjamin was working for the British Royal Navy Scientific Service. Benjamin's project used analog computers to calculate the future position of target aircraft based on several initial input points provided by a user with a joystick. Benjamin felt that a more elegant input device was needed and invented what he called a "roller ball" for this purpose. The device was patented in 1947, but only a prototype using a metal ball rolling on two rubber-coated wheels was ever built, and the device was kept as a military secret.
Another early trackball was built by British electrical engineer Kenyon Taylor in collaboration with Tom Cranston and Fred Longstaff. Taylor was part of the original Ferranti Canada, working on the Royal Canadian Navy's DATAR system in 1952. DATAR was similar in concept to Benjamin's display. Its trackball used four disks to pick up motion, two each for the X and Y directions, while several rollers provided mechanical support. When the ball was rolled, the pickup discs spun and contacts on their outer rims made periodic contact with wires, producing pulses of output with each movement of the ball. By counting the pulses, the physical movement of the ball could be determined. A digital computer calculated the tracks and sent the resulting data to other ships in a task force using pulse-code modulation radio signals. This trackball used a standard Canadian five-pin bowling ball. It was not patented. Douglas Engelbart of the Stanford Research Institute has been credited in published books by Thierry Bardini, Paul Ceruzzi, Howard Rheingold and several others as the inventor of the computer mouse.
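The pulse-counting idea behind DATAR's trackball can be sketched in a few lines. The resolution figure and the signed pulse counts below are invented for the example (the real hardware derived direction separately, and its actual resolution is not given here):

```python
# Toy model of pulse counting: each pulse from a pickup disc represents
# a fixed increment of ball travel along one axis.
MM_PER_PULSE = 0.5  # hypothetical resolution, not the real DATAR figure

def displacement(x_pulses: int, y_pulses: int) -> tuple:
    """Convert signed pulse counts into a displacement in millimetres."""
    return (x_pulses * MM_PER_PULSE, y_pulses * MM_PER_PULSE)

print(displacement(40, -16))  # (20.0, -8.0)
```

Accumulating such counts over time is all that is needed to recover the ball's total movement, which is why the scheme worked with 1950s electronics.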
Engelbart was also recognized as such in various obituary titles after his death in July 2013. By 1963, Engelbart had established a research lab at SRI, the Augmentation Research Center, to pursue his objective of developing both hardware and software computer technology to "augment" human intelligence. That November, while attending a conference on computer graphics in Reno, Engelbart began to ponder how to adapt the underlying principles of the planimeter to X-Y coordinate input. On November 14, 1963, he first recorded his thoughts in his personal notebook about something he called a "bug", which in a "3-point" form could have a "drop point and 2 orthogonal wheels". He wrote that the "bug" would be "easier" and "more natural" to use than a stylus: it would stay still when let go, which meant it would be "much better for coordination with the keyboard". In 1964, Bill English joined ARC, where he helped Engelbart build the first mouse prototype. They christened the device the mouse, as early models had a cord attached to the rear part of the device which looked like a tail and in turn resembled the common mouse.
As noted above, this "mouse" was first mentioned in print in a July 1965 report, on which English was the lead author. On 9 December 1968, Engelbart publicly demonstrated the mouse at what would come to be known as The Mother of All Demos. Engelbart never received any royalties for it, as his employer SRI held the patent, which expired before the mouse became widely used in personal computers. In any event, the invention of the mouse was just a small part of Engelbart's much larger project of augmenting human intellect. Several other experimental pointing devices developed for Engelbart's oN-Line System exploited different body movements – for example, head-mounted devices attached to the chin or nose – but the mouse won out because of its speed and convenience. The first mouse, a bulky device, used two potentiometers perpendicular to each other and connected to wheels: the rotation of each wheel translated into motion along one axis. At the time of the "Mother of All Demos", Engelbart's group had been using their second-generation, 3-button mouse for about a year.
On October 2, 1968, a mouse device named Rollkugel (German for "rolling ball") was released by the German company Telefunken.
MIDI (Musical Instrument Digital Interface) is a technical standard that describes a communications protocol, a digital interface and electrical connectors that connect a wide variety of electronic musical instruments and related audio devices for playing and recording music. A single MIDI link through a MIDI cable can carry up to sixteen channels of information, each of which can be routed to a separate device or instrument; this could be sixteen different digital instruments, for example. MIDI carries event messages: data that specify the instructions for music, including a note's notation, velocity, panning to the right or left of stereo, and clock signals. When a musician plays a MIDI instrument, all of the key presses, button presses, knob turns and slider changes are converted into MIDI data. One common MIDI application is to play a MIDI keyboard or other controller and use it to trigger a digital sound module to generate sounds, which the audience hears produced through a keyboard amplifier. MIDI data can be recorded to a sequencer to be edited or played back.
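The event messages mentioned above are very compact: a Note On, for example, is three bytes on the wire, a status byte carrying the message type and channel, followed by the pitch and the velocity. A minimal sketch of building one:

```python
def note_on(channel: int, pitch: int, velocity: int) -> bytes:
    """Build a 3-byte MIDI Note On message.

    channel is 0-15 on the wire (presented to users as channels 1-16);
    pitch and velocity are 7-bit values (0-127). 0x90 is the Note On
    status nibble, with the channel packed into the low four bits.
    """
    if not (0 <= channel < 16 and 0 <= pitch < 128 and 0 <= velocity < 128):
        raise ValueError("value out of range for MIDI")
    return bytes([0x90 | channel, pitch, velocity])

msg = note_on(0, 60, 100)  # middle C on channel 1, moderate velocity
print(msg.hex())           # 903c64
```

This compactness is why a full performance fits in a tiny MIDI file compared to an audio recording: each musical event costs only a few bytes.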
A file format that stores and exchanges this data is also defined. Advantages of MIDI include small file size, ease of modification and manipulation, and a wide choice of electronic instruments and synthesizer or digitally sampled sounds. A MIDI recording of a performance on a keyboard could be played back to sound like a piano or any other keyboard instrument; a MIDI recording is not an audio signal, as with a sound recording made with a microphone. Prior to the development of MIDI, electronic musical instruments from different manufacturers could not communicate with each other; this meant that a musician could not, for example, plug a Roland keyboard into a Yamaha synthesizer module. With MIDI, any MIDI-compatible keyboard can be connected to any other MIDI-compatible sequencer, sound module, drum machine, synthesizer, or computer, even if they are made by different manufacturers. MIDI technology was standardized in 1983 by a panel of music industry representatives and is maintained by the MIDI Manufacturers Association. All official MIDI standards are jointly developed and published by the MMA in Los Angeles and the MIDI Committee of the Association of Musical Electronics Industry in Tokyo.
In 2016, the MMA established the MIDI Association to support a global community of people who work, play, or create with MIDI. In the early 1980s, there was no standardized means of synchronizing electronic musical instruments manufactured by different companies. Manufacturers had their own proprietary standards to synchronize instruments, such as CV/gate and Digital Control Bus. Roland founder Ikutaro Kakehashi felt the lack of standardization was limiting the growth of the electronic music industry. In June 1981, he proposed developing a standard to Oberheim Electronics founder Tom Oberheim, who had developed his own proprietary interface, the Oberheim System. Kakehashi felt the system was too cumbersome and spoke to Sequential Circuits president Dave Smith about creating a simpler, cheaper alternative. While Smith discussed the concept with American companies, Kakehashi discussed it with Japanese companies Yamaha and Kawai. Representatives from all the companies met to discuss the idea in October.
Using Roland's DCB as a basis, Smith and Sequential Circuits engineer Chet Wood devised a universal synthesizer interface to allow communication between equipment from different manufacturers. Smith proposed this standard at the Audio Engineering Society show in November 1981, and the standard was discussed and modified by representatives of Roland, Korg and Sequential Circuits. Kakehashi favored the name Universal Musical Interface, pronounced "you-me", but Smith felt this was "a little corny". However, he liked the use of "instrument" instead of "synthesizer" and proposed the name Musical Instrument Digital Interface. Moog Music founder Robert Moog announced MIDI in the October 1982 issue of Keyboard. At the 1983 Winter NAMM Show, Smith demonstrated a MIDI connection between Prophet 600 and Roland JP-6 synthesizers, and the MIDI specification was published in August 1983. The MIDI standard was unveiled by Kakehashi and Smith, who received Technical Grammy Awards in 2013 for their work. The first MIDI synthesizers were the Roland Jupiter-6 and the Prophet 600, both released in 1982.
1983 saw the release of the first MIDI drum machine, the Roland TR-909, and the first MIDI sequencer, the Roland MSQ-700. The first computers to support MIDI were the NEC PC-88 and PC-98 in 1982, and the MSX, released in 1983. MIDI's appeal was originally limited to professional musicians and record producers who wanted to use electronic instruments in the production of popular music. The standard allowed different instruments to communicate with each other and with computers, and this spurred a rapid expansion of the sales and production of electronic instruments and music software. This interoperability allowed one device to be controlled from another, which reduced the amount of hardware musicians needed. MIDI's introduction coincided with the dawn of the personal computer era and the introduction of samplers and digital synthesizers; the creative possibilities brought about by MIDI technology are credited with helping to revive the music industry in the 1980s.
Multi-monitor, also called multi-display and multi-head, is the use of multiple physical display devices, such as monitors and projectors, in order to increase the area available for computer programs running on a single computer system. Research studies show that, depending on the type of work, multi-head may increase productivity by 50–70%. Multiple computers can also be connected to provide a single display, e.g. over Gigabit Ethernet/Ethernet to drive a large video wall. One way to extend the number of displays on one computer is to add displays via USB; starting in 2006, DisplayLink released several chips for USB support on VGA/DVI/LVDS and other interfaces. In many professions, including graphic design, communications, accounting and video editing, the idea of two or more monitors being driven from one machine is not a new one. While in the past it meant multiple graphics adapters and specialized software, it was common for engineers to have at least two, if not more, displays to enhance productivity.
Multi-monitor gaming/simulation is becoming more common. The rising popularity of using multiple monitors to game has led to the introduction of websites that allow for smooth and easy reconfiguration of games from the original one-screen option given by developers to a new multiple-screen option. Early versions of Doom permitted a three-monitor display mode, using three networked machines to show the left, center and right views. More games have since used multiple monitors to show a more absorbing interface to the player or to display game information. Various flight simulators can use these monitor setups to create an artificial cockpit with more realistic interfaces. Others, such as Supreme Commander and World in Conflict, can use an additional monitor for a large-scale map of the battlefield. A large number of older games support multi-monitor set-ups by treating the total screen space as a single monitor, a technique known as spanning. Many games without inherent multi-monitor support, such as Guild Wars and World of Warcraft, can be made to run in multi-monitor set-ups with this technique or in conjunction with the addition of third-party software. A larger list of games that support dual/multi-screen modes is available at WSGF.
The concept of "multi-monitor" games is not limited to games played on personal computers. As arcade technology entered the 1990s, larger cabinets were built which housed larger monitors, such as the three-28" screen version of Namco's Ridge Racer from 1993. Although large-screen technology such as CRT rear projection was beginning to be used more, multi-monitor games were still released, such as Sega's F355 Challenge from 1999, which again used three 28" monitors for the sit-down cockpit version. The most recent use of a multi-monitor setup in arcades occurred with Taito's Dariusburst: Another Chronicle game, released in Japan in December 2010 and worldwide the following year; it uses two 32" LCD screens and an angled mirror to create a seamless widescreen display. Nintendo demonstrated the feasibility of playing multi-monitor games on handheld game consoles in designing the Nintendo DS and its successor, the Nintendo 3DS, which both became successful consoles in their own right. Games on these systems take advantage of the two screens available by displaying gameplay on the upper screen while showing useful information on the bottom screen.
There are a number of games for the Nintendo DS whose gameplay spans both screens, combining them into one tall screen for a larger and more distinctive view of the action. Ordinary software does not need special support for multiple screens if it uses the graphics accelerator: at the usual application level, multi-head is presented simply as a single larger monitor spanning all screens. However, some special approaches may increase multi-head performance. With multiple monitors present, each screen will have its own graphics buffer. One possible approach is to present to OpenGL or DirectX a continuous, virtual frame buffer from which the OS or graphics driver writes out to each individual buffer. With some graphics cards, it is possible to enable a mode called "horizontal span" which accomplishes this: the OpenGL/DirectX programmer renders to one large frame buffer for output. In practice, with recent cards, this mode is being phased out because it does not make good use of GPU parallelism and does not support arbitrary arrangements of monitors.
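The "horizontal span" idea of presenting several screens as one large frame buffer amounts, at its simplest, to computing the bounding box of the individual monitor rectangles. A sketch with made-up monitor geometry:

```python
# Hypothetical monitor rectangles as (x, y, width, height) tuples in
# desktop coordinates; a spanned frame buffer covers their bounding box.
monitors = [(0, 0, 1920, 1080), (1920, 0, 1920, 1080)]

def spanned_size(rects):
    """Return the (width, height) of a frame buffer spanning all rects."""
    left = min(x for x, y, w, h in rects)
    top = min(y for x, y, w, h in rects)
    right = max(x + w for x, y, w, h in rects)
    bottom = max(y + h for x, y, w, h in rects)
    return (right - left, bottom - top)

print(spanned_size(monitors))  # (3840, 1080)
```

Note that for an L-shaped arrangement the bounding box includes area no monitor actually covers, one reason the single-buffer approach handles arbitrary monitor layouts poorly.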
A more recent technique uses the wglShareLists feature of OpenGL to share data across multiple GPUs and render to each individual monitor's frame buffer. Android supports an additional monitor as of version 4.2, but additional software is needed to use both at once.
An Internet forum, or message board, is an online discussion site where people can hold conversations in the form of posted messages. Forums differ from chat rooms in that messages are often longer than one line of text and are at least temporarily archived. Depending on the access level of a user or the forum's set-up, a posted message might need to be approved by a moderator before it becomes publicly visible. Forums have a specific set of jargon associated with them. A discussion forum is hierarchical or tree-like in structure: a forum can contain a number of subforums, each of which may have several topics. Within a forum's topic, each new discussion started is called a thread and can be replied to by as many people as wish to. Depending on the forum's settings, users can be anonymous or may have to register with the forum and subsequently log in to post messages. On most forums, users do not have to log in to read existing messages. The modern forum originated from bulletin boards and so-called computer conferencing systems, and is a technological evolution of the dial-up bulletin board system.
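The tree-like structure described above can be made concrete with a small sketch. The class and field names here are hypothetical, not taken from any real forum package; the point is simply that subforums, threads and replies are all nodes in one hierarchy.

```python
# Illustrative sketch of the forum hierarchy: a forum contains subforums,
# subforums contain threads, and threads hold replies. Names are hypothetical.

class Node:
    def __init__(self, title):
        self.title = title
        self.children = []          # subforums, threads or replies

    def add(self, child):
        self.children.append(child)
        return child

forum = Node("Forum")
games = forum.add(Node("Games subforum"))
thread = games.add(Node("Thread: favourite DS games?"))
thread.add(Node("Reply: Mario Kart DS"))
thread.add(Node("Reply: Advance Wars"))

def count_posts(node):
    """Recursively count this node and everything beneath it."""
    return 1 + sum(count_posts(c) for c in node.children)

print(count_posts(forum))  # 5 nodes: forum, subforum, thread, two replies
```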
From a technological standpoint, forums or boards are web applications managing user-generated content. Early Internet forums could be described as a web version of an electronic mailing list or newsgroup; later developments emulated the different newsgroups or individual lists, providing more than one forum dedicated to a particular topic. Internet forums are prevalent in several developed countries: Japan posts the most, with over two million posts per day on 2channel, and China has many millions of posts on forums such as Tianya Club. Some of the first forum systems were the Planet-Forum system, developed at the beginning of the 1970s; the EIES system, first operational in 1976; and the KOM system, first operational in 1977. One of the first forum sites is Delphi Forums, once called Delphi; the service, with four million members, dates to 1983. Forums perform a function similar to that of the dial-up bulletin board systems and Usenet networks first created starting in the late 1970s. Early web-based forums date back as far as 1994, with the WIT project from the W3 Consortium, and from that time many alternatives were created.
A sense of virtual community develops around forums that have regular users. Technology, video games, music, fashion and politics are popular areas for forum themes, but there are forums for a huge number of topics. Internet slang and image macros popular across the Internet are abundant and widely used in Internet forums. Forum software packages are available on the Internet and are written in a variety of programming languages, such as PHP, Java and ASP; the configuration and records of posts can be stored in a database. Each package offers different features, from the most basic, providing text-only postings, to more advanced packages offering multimedia support and formatting code. Many packages can be integrated into an existing website to allow visitors to post comments on articles. Several other web applications, such as blog software, incorporate forum features. WordPress comments at the bottom of a blog post allow for a single-threaded discussion of any given blog post. Slashcode, on the other hand, is far more complicated, allowing threaded discussions and incorporating a robust moderation and meta-moderation system as well as many of the profile features available to forum users.
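A minimal sketch of the database storage mentioned above might look like the following. The table and column names are illustrative, not the schema of any real forum package; a `parent_id` column is assumed as one common way to record which post a reply belongs to.

```python
# Illustrative sketch of storing forum posts in a database, using SQLite.
# Real packages (phpBB and others) use far richer schemas; this only shows
# the parent/reply relationship that makes threaded display possible.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE posts (
    id INTEGER PRIMARY KEY,
    parent_id INTEGER,              -- NULL for a thread-starting post
    author TEXT,
    body TEXT)""")
db.execute("INSERT INTO posts VALUES (1, NULL, 'alice', 'First post')")
db.execute("INSERT INTO posts VALUES (2, 1, 'bob', 'A reply')")
db.execute("INSERT INTO posts VALUES (3, 1, 'carol', 'Another reply')")

# Fetch a thread: the direct replies to the thread-starting post.
replies = db.execute(
    "SELECT author, body FROM posts WHERE parent_id = ?", (1,)).fetchall()
print(replies)  # [('bob', 'A reply'), ('carol', 'Another reply')]
```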
Some stand-alone threads on forums have reached fame and notability, such as the "I am lonely will anyone speak to me" thread on MovieCodec.com's forums, described as the "web's top hangout for lonely folk" by Wired magazine. A forum consists of a tree-like directory structure; at the top end are categories. A forum can be divided into categories for the relevant discussions. Under the categories are sub-forums, and these sub-forums can further have more sub-forums. The topics come under the lowest level of sub-forums, and these are the places where members can start their discussions or posts. Logically, forums are organized into a finite set of generic topics, driven and updated by a group known as members and governed by a group known as moderators; a forum can also have a graph structure. All message boards use one of three possible display formats: non-threaded, semi-threaded or fully threaded, each with its own advantages and disadvantages. If messages are not related to one another at all, a non-threaded format is best.
If a user has a message topic and multiple replies to that topic, a semi-threaded format is best. If a user has a message topic, replies to that topic and responses to those replies, a fully threaded format is best. Internally, Western-style forums organize logged-in members into user groups, and privileges and rights are given based on these groups. A user of the forum can automatically be promoted to a more privileged user group based on criteria set by the administrator. A member viewing a closed thread will see a box saying that they do not have the right to submit messages there, while a moderator will see the same box granting access to more than just posting messages. An unregistered user of the site is known as a guest or visitor. Guests are granted access to all functions that do not require database alterations or breach privacy. A guest can view the contents of the forum or use such features as read marking, but an administrator will disallow visi