Free and open-source software
Free and open-source software (FOSS) is software that can be classified as both free software and open-source software. That is, anyone is licensed to use, copy and change the software in any way, and the source code is shared so that people are encouraged to voluntarily improve the design of the software. This is in contrast to proprietary software, where the software is under restrictive copyright licensing and the source code is hidden from the users. FOSS maintains the software user's civil liberties. Other benefits of using FOSS can include decreased software costs, increased security and stability, protection of privacy, and more user control over their own hardware. Free and open-source operating systems such as Linux and descendants of BSD are widely used today, powering millions of servers, desktops and other devices. Free-software licenses and open-source licenses are used by many software packages; the free-software movement and the open-source-software movement are online social movements behind the widespread production and adoption of FOSS.
"Free and open-source software" is an umbrella term for software, considered both Free software and open-source software. FOSS allows the user to inspect the source code and provides a high level of control of the software's functions compared to proprietary software; the term "free software" does not refer to the monetary cost of the software at all, but rather whether the license maintains the software user's civil liberties. There are a number of related terms and abbreviations for free and open-source software, or free/libre and open-source software. Although there is a complete overlap between free-software licenses and open-source-software licenses, there is a strong philosophical disagreement between the advocates of these two positions; the terminology of FOSS or "Free and Open-source software" was created to be a neutral on these philosophical disagreements between the FSF and OSI and have a single unified term that could refer to both concepts. As the Free Software Foundation explains the philosophical difference between free software and open-source software: "The two terms describe the same category of software, but they stand for views based on fundamentally different values.
Open source is a development methodology. For the free-software movement, free software is an ethical imperative, essential respect for the users' freedom. By contrast, the philosophy of open source considers issues in terms of how to make software “better”—in a practical sense only." In parallel to this, the Open Source Initiative considers many free-software licenses to be open-source. These include the latest versions of the FSF's three main licenses: the GPL, the GNU Lesser General Public License (LGPL), and the GNU Affero General Public License (AGPL). Richard Stallman's Free Software Definition, adopted by the Free Software Foundation, defines free software as a matter of liberty, not price, and upholds the Four Essential Freedoms. The earliest-known publication of the definition of his free-software idea was in the February 1986 edition of the FSF's now-discontinued GNU's Bulletin publication. The canonical source for the document is in the philosophy section of the GNU Project website; as of August 2017, it is published there in 40 languages.
To meet the definition of "free software", the FSF requires the software's licensing to respect the civil liberties/human rights that the FSF calls the software user's "Four Essential Freedoms": the freedom to run the program as you wish, for any purpose; the freedom to study how the program works and change it so it does your computing as you wish, for which access to the source code is a precondition; the freedom to redistribute copies; and the freedom to distribute copies of your modified versions to others, giving the whole community a chance to benefit from your changes, for which access to the source code is again a precondition. The open-source-software definition is used by the Open Source Initiative to determine whether a software license qualifies for the organization's insignia for open-source software. The definition was based on the Debian Free Software Guidelines and was adapted by Bruce Perens. Perens did not base his writing on the Four Essential Freedoms of free software from the Free Software Foundation, which were only later available on the web.
Perens subsequently stated that he felt Eric Raymond's promotion of open source unfairly overshadowed the Free Software Foundation's efforts, and reaffirmed his support for free software. In the 2000s, he spoke about open source again. In the 1950s through the 1980s, it was common for computer users to have the source code for all programs they used, along with the permission and ability to modify it for their own use. Software, including source code, was commonly shared by individuals who used computers, often as public-domain software. Most companies had a business model based on hardware sales, and provided or bundled software with hardware free of charge. By the late 1960s, the prevailing business model around software was changing. A growing and evolving software industry was competing with the hardware manufacturers' bundled software products. Leased machines required software support while providing no revenue for software.
x86
x86 is a family of instruction set architectures based on the Intel 8086 microprocessor and its 8088 variant. The 8086 was introduced in 1978 as a 16-bit extension of Intel's 8-bit 8080 microprocessor, with memory segmentation as a solution for addressing more memory than can be covered by a plain 16-bit address. The term "x86" came into being because the names of several successors to Intel's 8086 processor end in "86", including the 80186, 80286, 80386 and 80486 processors. Many additions and extensions have been added to the x86 instruction set over the years, consistently with full backward compatibility. The architecture has been implemented in processors from Intel, Cyrix, AMD, VIA and many other companies; of those, only Intel, AMD and VIA hold x86 architectural licenses and produce modern 64-bit designs. The term is not synonymous with IBM PC compatibility, as this also implies a multitude of other computer hardware. As of 2018, the majority of personal computers and laptops sold are based on the x86 architecture, while other categories—especially high-volume mobile categories such as smartphones or tablets—are dominated by ARM.
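To make the segmented addressing mentioned above concrete: a 16-bit segment value is shifted left by four bits and added to a 16-bit offset, producing a 20-bit physical address. A minimal sketch in Python follows; the example values are arbitrary and not taken from the text.

    def real_mode_address(segment: int, offset: int) -> int:
        """Physical address formed by the 8086's segment:offset scheme."""
        return ((segment & 0xFFFF) << 4) + (offset & 0xFFFF)

    # 0x1234:0x0005 -> 0x12345; the 20-bit result can reach well beyond a
    # plain 16-bit address space (64 KiB), up to roughly 1 MiB.
    assert real_mode_address(0x1234, 0x0005) == 0x12345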
In the 1980s and early 1990s, when the 8088 and 80286 were still in common use, the term x86 represented any 8086-compatible CPU. Today, however, x86 implies binary compatibility with the 32-bit instruction set of the 80386; this is because that instruction set has become something of a lowest common denominator for many modern operating systems, and also because the term became common after the introduction of the 80386 in 1985. A few years after the introduction of the 8086 and 8088, Intel added some complexity to its naming scheme and terminology, as the "iAPX" prefix of the ambitious but ill-fated Intel iAPX 432 processor was tried on the more successful 8086 family of chips, applied as a kind of system-level prefix. An 8086 system, including coprocessors such as the 8087 and 8089, as well as simpler Intel-specific system chips, was thereby described as an iAPX 86 system. There were also the terms iRMX, iSBC and iSBX, all together under the heading Microsystem 80. However, this naming scheme was quite temporary.
Although the 8086 was developed for embedded systems and small multi-user or single-user computers as a response to the successful 8080-compatible Zilog Z80, the x86 line soon grew in features and processing power. Today, x86 is ubiquitous in both stationary and portable personal computers, and is used in midrange computers, workstations and most new supercomputer clusters of the TOP500 list. A large amount of software, including a long list of x86 operating systems, runs on x86-based hardware. Modern x86 is relatively uncommon in embedded systems, however, and small low-power applications as well as low-cost microprocessor markets, such as home appliances and toys, lack any significant x86 presence. Simple 8-bit and 16-bit architectures are common there, although the x86-compatible VIA C7, VIA Nano, AMD's Geode, Athlon Neo and Intel Atom are examples of 32- and 64-bit designs used in some low-power and low-cost segments. There have been several attempts, including by Intel itself, to end the market dominance of the "inelegant" x86 architecture, designed directly from the first simple 8-bit microprocessors.
Examples of this are the iAPX 432, the Intel i960, the Intel i860 and the Intel/Hewlett-Packard Itanium architecture. However, the continuous refinement of x86 microarchitectures and semiconductor manufacturing has made it hard to replace x86 in many segments. AMD's 64-bit extension of x86 and the scalability of x86 chips such as the eight-core Intel Xeon and 12-core AMD Opteron underline x86 as an example of how continuous refinement of established industry standards can resist competition from new architectures. Numerous processor models and model series have implemented variations of the x86 instruction set over the years, each characterized by improved or commercially successful processor microarchitecture designs. At various times, companies such as IBM, NEC, AMD, TI, STM, Fujitsu, OKI, Cyrix, Intersil, C&T, NexGen, UMC and DM&P started to design or manufacture x86 processors intended for personal computers as well as embedded systems; such x86 implementations are seldom simple copies, often employing different internal microarchitectures as well as different solutions at the electronic and physical levels.
Early compatible microprocessors were 16-bit, while 32-bit designs were developed much later. For the personal computer market, real quantities started to appear around 1990 with i386- and i486-compatible processors, often named similarly to Intel's original chips. Other companies that designed or manufactured x86 or x87 processors include ITT Corporation, National Semiconductor, ULSI System Technology and Weitek. Following the pipelined i486, Intel introduced the Pentium brand name for its new set of superscalar x86 designs.
Computer cluster
A computer cluster is a set of loosely or tightly connected computers that work together so that, in many respects, they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software. The components of a cluster are connected to each other through fast local area networks, with each node running its own instance of an operating system. In most circumstances, all of the nodes use the same hardware and the same operating system, although in some setups different operating systems can be used on each computer, or different hardware. Clusters are deployed to improve performance and availability over that of a single computer, while being much more cost-effective than single computers of comparable speed or availability. Computer clusters emerged as a result of the convergence of a number of computing trends, including the availability of low-cost microprocessors, high-speed networks, and software for high-performance distributed computing.
They have a wide range of applicability and deployment, ranging from small business clusters with a handful of nodes to some of the fastest supercomputers in the world, such as IBM's Sequoia. Prior to the advent of clusters, single-unit fault-tolerant mainframes with modular redundancy were employed. In contrast to high-reliability mainframes, clusters are cheaper to scale out, but have increased complexity in error handling, as in clusters error modes are not opaque to running programs. The desire to get more computing power and better reliability by orchestrating a number of low-cost commercial off-the-shelf computers has given rise to a variety of architectures and configurations. The computer clustering approach connects a number of available computing nodes via a fast local area network. The activities of the computing nodes are orchestrated by "clustering middleware", a software layer that sits atop the nodes and allows the users to treat the cluster as by and large one cohesive computing unit, e.g. via a single system image concept.
Computer clustering relies on a centralized management approach which makes the nodes available as orchestrated shared servers. It is distinct from other approaches such as peer to peer or grid computing which use many nodes, but with a far more distributed nature. A computer cluster may be a simple two-node system which just connects two personal computers, or may be a fast supercomputer. A basic approach to building a cluster is that of a Beowulf cluster which may be built with a few personal computers to produce a cost-effective alternative to traditional high performance computing. An early project that showed the viability of the concept was the 133-node Stone Soupercomputer; the developers used Linux, the Parallel Virtual Machine toolkit and the Message Passing Interface library to achieve high performance at a low cost. Although a cluster may consist of just a few personal computers connected by a simple network, the cluster architecture may be used to achieve high levels of performance.
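To illustrate the message-passing style used on Beowulf-class clusters, here is a minimal sketch using the mpi4py Python bindings to MPI; the bindings, the launch command and the million-element sum are illustrative assumptions, not details from the text. Each process works on its own slice of the problem and the root rank combines the partial results.

    # Requires an MPI implementation (e.g. MPICH or Open MPI) plus mpi4py;
    # launch with something like: mpiexec -n 4 python partial_sum.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()          # this process's id within the cluster job
    size = comm.Get_size()          # total number of processes

    # Each node sums a strided slice of the problem...
    local = sum(range(rank, 1_000_000, size))

    # ...and the results are combined on rank 0 via message passing.
    total = comm.reduce(local, op=MPI.SUM, root=0)
    if rank == 0:
        print("total =", total)

Launched this way, the same script runs unchanged on a single workstation or across many cluster nodes; only the process count changes.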
The TOP500 organization's semiannual list of the 500 fastest supercomputers includes many clusters; e.g. the world's fastest machine in 2011 was the K computer, which has a distributed-memory, cluster architecture. Greg Pfister has stated that clusters were not invented by any specific vendor but by customers who could not fit all their work on one computer, or needed a backup. Pfister estimates the date as some time in the 1960s. The formal engineering basis of cluster computing as a means of doing parallel work of any sort was arguably invented by Gene Amdahl of IBM, who in 1967 published what has come to be regarded as the seminal paper on parallel processing: Amdahl's Law. The history of early computer clusters is more or less directly tied to the history of early networks, as one of the primary motivations for the development of a network was to link computing resources, creating a de facto computer cluster. The first production system designed as a cluster was the Burroughs B5700 in the mid-1960s.
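Amdahl's Law, mentioned above, also bounds what a cluster can deliver: if a fraction p of a program can be parallelized across n nodes, the overall speedup is at most 1 / ((1 - p) + p / n). A small worked sketch with illustrative numbers follows.

    def amdahl_speedup(p: float, n: int) -> float:
        """Upper bound on speedup when a fraction p of the work runs on n nodes."""
        return 1.0 / ((1.0 - p) + p / n)

    # Even with 95% of the work parallelized, 128 nodes yield only about 17x,
    # because the 5% serial portion dominates the runtime.
    print(round(amdahl_speedup(0.95, 128), 1))   # ~17.4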
The B5700 allowed up to four computers, each with either one or two processors, to be coupled to a common disk storage subsystem in order to distribute the workload. Unlike standard multiprocessor systems, each computer could be restarted without disrupting overall operation. The first commercial loosely coupled clustering product was Datapoint Corporation's "Attached Resource Computer" (ARC) system, developed in 1977 and using ARCnet as the cluster interface. Clustering per se did not take off until Digital Equipment Corporation released their VAXcluster product in 1984 for the VAX/VMS operating system. The ARC and VAXcluster products not only supported parallel computing, but also shared file systems and peripheral devices. The idea was to provide the advantages of parallel processing, while maintaining data reliability and uniqueness. Two other noteworthy early commercial clusters were the Tandem Himalayan and the IBM S/390 Parallel Sysplex. Within the same time frame, while computer clusters used parallelism outside the computer on a commodity network, supercomputers began to use it within the same computer.
Following the success of the CDC 6600 in 1964, the Cray 1 was delivered in 1976 and introduced internal parallelism via vector processing. While early supercomputers excluded clusters and relied on shared memory, in time some of the fastest supercomputers (e.g. the K computer) relied on cluster architectures.
Linux
Linux is a family of free and open-source software operating systems based on the Linux kernel, an operating system kernel first released on September 17, 1991 by Linus Torvalds. Linux is typically packaged in a Linux distribution. Distributions include the Linux kernel and supporting system software and libraries, many of which are provided by the GNU Project. Many Linux distributions use the word "Linux" in their name, but the Free Software Foundation uses the name GNU/Linux to emphasize the importance of GNU software, causing some controversy. Popular Linux distributions include Debian and Ubuntu; commercial distributions include SUSE Linux Enterprise Server. Desktop Linux distributions include a windowing system such as X11 or Wayland and a desktop environment such as GNOME or KDE Plasma. Distributions intended for servers may omit graphics altogether or include a solution stack such as LAMP. Because Linux is freely redistributable, anyone may create a distribution for any purpose. Linux was originally developed for personal computers based on the Intel x86 architecture, but has since been ported to more platforms than any other operating system.
Linux is the leading operating system on servers and other big-iron systems such as mainframe computers, and the only OS used on TOP500 supercomputers. It is used by around 2.3 percent of desktop computers. The Chromebook, which runs the Linux kernel-based Chrome OS, dominates the US K–12 education market and represents nearly 20 percent of sub-$300 notebook sales in the US. Linux also runs on embedded systems, i.e. devices whose operating system is built into the firmware and is tailored to the system; this includes routers, automation controls, digital video recorders, video game consoles and smartwatches. Many smartphones and tablet computers run Android and other Linux derivatives; because of the dominance of Android on smartphones, Linux has the largest installed base of all general-purpose operating systems. Linux is one of the most prominent examples of open-source software collaboration; the source code may be used and distributed—commercially or non-commercially—by anyone under the terms of its respective licenses, such as the GNU General Public License.
The Unix operating system was conceived and implemented in 1969, at AT&T's Bell Laboratories in the United States by Ken Thompson, Dennis Ritchie, Douglas McIlroy, Joe Ossanna. First released in 1971, Unix was written in assembly language, as was common practice at the time. In a key pioneering approach in 1973, it was rewritten in the C programming language by Dennis Ritchie; the availability of a high-level language implementation of Unix made its porting to different computer platforms easier. Due to an earlier antitrust case forbidding it from entering the computer business, AT&T was required to license the operating system's source code to anyone who asked; as a result, Unix grew and became adopted by academic institutions and businesses. In 1984, AT&T divested itself of Bell Labs; the GNU Project, started in 1983 by Richard Stallman, had the goal of creating a "complete Unix-compatible software system" composed of free software. Work began in 1984. In 1985, Stallman started the Free Software Foundation and wrote the GNU General Public License in 1989.
By the early 1990s, many of the programs required in an operating system were completed, although low-level elements such as device drivers and the kernel, called GNU Hurd, were stalled and incomplete. Linus Torvalds has stated that if the GNU kernel had been available at the time, he would not have decided to write his own. Although not released until 1992, due to legal complications, development of 386BSD, from which NetBSD, OpenBSD and FreeBSD descended, predated that of Linux. Torvalds has also stated that if 386BSD had been available at the time, he would not have created Linux. MINIX was created by Andrew S. Tanenbaum, a computer science professor, and released in 1987 as a minimal Unix-like operating system targeted at students and others who wanted to learn operating system principles. Although the complete source code of MINIX was available, its licensing terms prevented it from being free software until the licensing changed in April 2000. In 1991, while attending the University of Helsinki, Torvalds became curious about operating systems.
Frustrated by the licensing of MINIX, which at the time limited it to educational use only, he began to work on his own operating system kernel, which became the Linux kernel. Torvalds began the development of the Linux kernel on MINIX, and applications written for MINIX were also used on Linux. Later, Linux matured and further Linux kernel development took place on Linux systems. GNU applications also replaced all MINIX components, because it was advantageous to use the available code from the GNU Project with the fledgling operating system. Torvalds initiated a switch from his original license, which had prohibited commercial redistribution, to the GNU GPL. Developers worked to integrate GNU components with the Linux kernel, making a functional and free operating system. Linus Torvalds had wanted to call his invention "Freax", a portmanteau of "free", "freak", and "x" (as an allusion to Unix).
Email client
An email client, email reader or, more formally, mail user agent (MUA) is a computer program used to access and manage a user's email. A web application which provides message management and reception functions may act as an email client, and the term "email client" may also refer to a piece of computer hardware or software whose primary or most visible role is to work as an email client. Like most client programs, an email client is only active when a user runs it. The most common arrangement is for an email user to make an arrangement with a remote Mail Transfer Agent (MTA) server for the receipt and storage of the client's emails. The MTA, using a suitable mail delivery agent, adds email messages to a client's storage as they arrive; the remote mail storage is referred to as the user's mailbox. The default setting on many Unix systems is for the mail server to store formatted messages in mbox, within the user's HOME directory. Of course, users of the system can log in and run a mail client on the same computer that hosts their mailboxes. Emails are stored in the user's mailbox on the remote server until the user's email client requests them to be downloaded to the user's computer, or can otherwise access the user's mailbox on the remote server.
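As a sketch of the mbox storage just described, Python's standard-library mailbox module can read such a file directly; the path below assumes a home-directory mbox file, which is only one of several conventions and varies with the MTA configuration.

    import mailbox
    import os

    # Either an mbox file in $HOME (as described above) or the system spool
    # such as /var/mail/<user>, depending on how mail delivery is configured.
    path = os.path.expanduser("~/mbox")

    for message in mailbox.mbox(path):
        print(message["From"], "-", message["Subject"])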
The email client can be set up to connect to multiple mailboxes at the same time and to request the download of emails either automatically, such as at pre-set intervals, or on manual request by the user. A user's mailbox can be accessed in two dedicated ways. The Post Office Protocol (POP) allows the user to download messages one at a time and only deletes them from the server after they have been saved on local storage. It is possible to leave messages on the server to permit another client to access them; however, there is no provision for flagging a specific message as seen, answered or forwarded, thus POP is not convenient for users who access the same mail from different machines. Alternatively, the Internet Message Access Protocol (IMAP) allows users to keep messages on the server, flagging them as appropriate. IMAP provides folders and sub-folders, which can be shared among different users with different access rights; the Sent and Trash folders are created by default. IMAP features an IDLE extension for real-time updates, providing faster notification than polling, where long-lasting connections are feasible.
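The contrast between the two access protocols can be sketched with the standard-library poplib and imaplib modules; the host names, credentials and folder below are placeholders, not details from the text.

    import poplib
    import imaplib

    # POP3: download messages; the protocol keeps no per-message "seen" state.
    pop = poplib.POP3_SSL("pop.example.org")
    pop.user("alice")
    pop.pass_("secret")
    count, _ = pop.stat()                 # number of messages waiting on the server
    for i in range(1, count + 1):
        _, lines, _ = pop.retr(i)         # fetch message i as a list of raw lines
    pop.quit()

    # IMAP: messages stay on the server and carry flags shared by all clients.
    imap = imaplib.IMAP4_SSL("imap.example.org")
    imap.login("alice", "secret")
    imap.select("INBOX")
    _, data = imap.search(None, "UNSEEN")
    for num in data[0].split():
        imap.store(num, "+FLAGS", "\\Seen")   # mark as read for every client
    imap.logout()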
In addition, the mailbox storage can be accessed directly by programs running on the server or via shared disks. Direct access is less portable, as it depends on the mailbox format. Email clients contain user interfaces to display and edit text; some applications also permit the use of a program-external editor. The email client performs formatting according to RFC 5322 for headers and body, and MIME for non-textual content and attachments. Headers include the destination fields (To, Cc and Bcc) and the originator fields: From, which identifies the message's author; Sender, in case there are several authors; and Reply-To, in case responses should be addressed to a different mailbox. To better assist the user with destination fields, many clients maintain one or more address books and/or are able to connect to an LDAP directory server. For originator fields, clients may support different identities. Client settings require the user's real name and email address for each identity, and possibly a list of LDAP servers.
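A short sketch of those header and MIME conventions, using the standard-library email package, follows; the addresses and the attachment are invented placeholders.

    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "Alice Example <alice@example.org>"   # originator (author)
    msg["To"] = "bob@example.org"                       # destination fields
    msg["Cc"] = "carol@example.org"
    msg["Reply-To"] = "alice-lists@example.org"         # responses go to a different mailbox
    msg["Subject"] = "Meeting notes"

    msg.set_content("Plain-text body, formatted per RFC 5322.")
    msg.add_attachment(b"placeholder bytes", maintype="application",
                       subtype="octet-stream", filename="notes.bin")  # MIME part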
When a user wishes to create and send an email, the email client handles the task. The email client is usually set up automatically to connect to the user's mail server, which is typically either an MSA or an MTA, two variations of the SMTP protocol. The email client uses the SMTP authentication extension, which the mail server uses to authenticate the sender; this method eases nomadic computing. The older method was for the mail server to recognize the client's IP address, e.g. because the client is on the same machine and uses the internal address 127.0.0.1, or because the client's IP address is controlled by the same Internet service provider that provides both Internet access and mail services. Client settings require the name or IP address of the preferred outgoing mail server, the port number, and the user name and password for the authentication, if any. There is a non-standard port, 465, for SSL-encrypted SMTP sessions, which many clients and servers support for backward compatibility. With no encryption, much like for postcards, email activity is plainly visible to any occasional eavesdropper.
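Submission with authentication can be sketched with the standard-library smtplib module; the server name, credentials and port choice are placeholders, and msg is assumed to be a message object like the one built in the earlier sketch.

    import smtplib

    # Implicit-TLS submission on the port 465 discussed above; port 587 with
    # STARTTLS is the other common configuration.
    with smtplib.SMTP_SSL("mail.example.org", 465) as server:
        server.login("alice", "secret")   # SMTP authentication extension (AUTH)
        server.send_message(msg)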
Email encryption enables privacy to be safeguarded by encrypting the mail sessions, the body of the message, or both. Without it, anyone with network access and the right tools can monitor email and obtain login passwords. Examples of concern include government censorship and surveillance, and fellow wireless-network users such as at an Internet cafe. All relevant email protocols have an option to encrypt the whole session, to prevent a user's name and password from being sniffed; such options are suggested for nomadic users and whenever the Internet access provider is not trusted. When sending mail, users can only control encryption at the first hop from a client to its configured outgoing mail server. At any further hop, messages may be transmitted with or without encryption, depending on the general configuration of the transmitting server and the capabilities of the receiving one. Encrypted mail sessions deliver messages in their original format, i.e. plain text or encrypted body.
Parallel ATA
Parallel ATA (PATA), originally AT Attachment, is an interface standard for the connection of storage devices such as hard disk drives, floppy disk drives and optical disc drives in computers. The standard is maintained by the X3/INCITS committee; it uses the underlying AT Attachment and AT Attachment Packet Interface (ATA/ATAPI) standards. The Parallel ATA standard is the result of a long history of incremental technical development, which began with the original AT Attachment interface, developed for use in early PC AT equipment. The ATA interface itself evolved in several stages from Western Digital's original Integrated Drive Electronics (IDE) interface. As a result, many near-synonyms for ATA/ATAPI and its previous incarnations are still in common informal use, in particular Extended IDE (EIDE) and Ultra ATA. After the introduction of Serial ATA in 2003, the original ATA was renamed to Parallel ATA, or PATA for short. Parallel ATA cables have a maximum allowable length of 18 inches; because of this limit, the technology normally appears as an internal computer storage interface.
For many years, ATA provided the least expensive interface for this application. It has been replaced by SATA in newer systems. The standard was conceived as the "AT Bus Attachment", called "AT Attachment" and abbreviated "ATA" because its primary feature was a direct connection to the 16-bit ISA bus introduced with the IBM PC/AT. The original ATA specifications published by the standards committees use the name "AT Attachment". The "AT" in the IBM PC/AT referred to "Advanced Technology", so ATA has also been referred to as "Advanced Technology Attachment". The first version of what is now called the ATA/ATAPI interface was developed by Western Digital under the name Integrated Drive Electronics. Together with Control Data Corporation and Compaq Computer, they developed the connector, the signaling protocols and so on, with the goal of remaining software-compatible with the existing ST-506 hard drive interface.
The first such drives appeared in Compaq PCs in 1986. The term Integrated Drive Electronics refers not just to the connector and interface definition, but to the fact that the drive controller is integrated into the drive, as opposed to a separate controller on or connected to the motherboard. The interface cards used to connect a parallel ATA drive to, for example, a PCI slot are not drive controllers: they are bridges between the host bus and the ATA interface. Since the original ATA interface is essentially just a 16-bit ISA bus in disguise, the bridge was especially simple in the case of an ATA connector located on an ISA interface card. The integrated controller presented the drive to the host computer as an array of 512-byte blocks with a simple command interface. This relieved the mainboard and interface cards in the host computer of the chores of stepping the disk head arm, moving the head arm in and out, and so on, as had to be done with earlier ST-506 and ESDI hard drives. All of these low-level details of the mechanical operation of the drive were now handled by the controller on the drive itself.
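From the host's side, the block abstraction described above can be sketched as follows: software asks for logical block N and receives 512 bytes, with no knowledge of heads or cylinders. The device path below is a typical Linux name, not something from the text, and reading it normally requires administrator privileges.

    BLOCK_SIZE = 512   # bytes per logical block presented by the drive

    def read_block(device: str, lba: int) -> bytes:
        """Return the 512-byte block at the given logical block address."""
        with open(device, "rb") as disk:
            disk.seek(lba * BLOCK_SIZE)   # no head-stepping or seek arithmetic here
            return disk.read(BLOCK_SIZE)

    # Example (requires privileges): first_block = read_block("/dev/sda", 0)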
This integration of the controller into the drive eliminated the need to design a single controller that could handle many different types of drives, since the controller could be unique for the drive. The host need only ask for a particular sector, or block, to be read or written, and either accept the data from the drive or send the data to it. The interface used by these drives was standardized in 1994 as ANSI standard X3.221-1994, AT Attachment Interface for Disk Drives. After later versions of the standard were developed, this became known as "ATA-1". A short-lived, seldom-used implementation of ATA was created for the IBM XT and similar machines that used the 8-bit version of the ISA bus; it has been referred to as "XT-IDE", "XTA" or "XT Attachment". When PC motherboard makers started to include onboard ATA interfaces in place of the earlier ISA plug-in cards, there was only one ATA connector on the board, which could support up to two hard drives. At the time, in combination with the floppy drive, this was sufficient for most users; when the CD-ROM was developed, many computers would have been unable to accept these drives if they had been ATA devices, due to already having two hard drives installed.
Adding the CD-ROM drive would have required removal of one of the existing drives. SCSI was available as a CD-ROM expansion option at the time, but devices with SCSI were more expensive than ATA devices due to the need for a smart interface capable of bus arbitration. SCSI typically added US$100–300 to the cost of a storage device, in addition to the cost of a SCSI host adapter. The less expensive solution was the addition of a dedicated CD-ROM interface, included as an expansion option on a sound card. At the time, PC motherboards did not come with support for more than simple beeps from internal speakers, and sound cards included a game (joystick/gamepad) port along with interfaces to control a CD-ROM drive and transmit CD audio to the system. The second drive interface was not well defined at first: it was introduced with interfaces specific to certain CD-ROM drives such as Mitsumi, Sony or Panasonic drives, and it was common to find early sound cards with two or three separate connectors, each designed to match a certain brand of CD-ROM drive.
This evolved into the standard ATA interface for ease of cross-compatibility, though the sound card ATA interface still usually supported only a single CD-ROM drive.
GNU General Public License
The GNU General Public License (GPL) is a widely used free software license, which guarantees end users the freedom to run, study, share and modify the software. The license was written by Richard Stallman of the Free Software Foundation for the GNU Project, and grants the recipients of a computer program the rights of the Free Software Definition. The GPL is a copyleft license, which means that derivative work can only be distributed under the same license terms. This is in distinction to permissive free software licenses, of which the BSD licenses and the MIT License are widely used examples. The GPL was the first copyleft license for general use. The GPL license family has been one of the most popular software licenses in the free and open-source software domain. Prominent free-software programs licensed under the GPL include the Linux kernel and the GNU Compiler Collection. David A. Wheeler argues that the copyleft provided by the GPL was crucial to the success of Linux-based systems, giving the programmers who contributed to the kernel the assurance that their work would benefit the whole world and remain free, rather than being exploited by software companies that would not have to give anything back to the community.
In 2007, the third version of the license (GPLv3) was released to address some perceived problems with the second version that were discovered during its long period of use. To keep the license up to date, the GPL includes an optional "any later version" clause, allowing users to choose between the original terms or the terms in new versions as updated by the FSF; developers can omit it when licensing their software. The GPL was written by Richard Stallman in 1989, for use with programs released as part of the GNU Project. The original GPL was based on a unification of similar licenses used for early versions of GNU Emacs, the GNU Debugger and the GNU C Compiler; these licenses contained provisions similar to the modern GPL, but were specific to each program, rendering them incompatible despite being essentially the same license. Stallman's goal was to produce one license that could be used for any project, thus making it possible for many projects to share code. The second version of the license, version 2, was released in 1991. Over the following 15 years, members of the free software community became concerned about problems in the GPLv2 license that could let someone exploit GPL-licensed software in ways contrary to the license's intent.
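In practice the clause is exercised through the per-file notice that developers attach to their source files; a typical form, paraphrased here as a comment block rather than quoted verbatim, reads:

    # This program is free software: you can redistribute it and/or modify it
    # under the terms of the GNU General Public License as published by the
    # Free Software Foundation, either version 2 of the License, or (at your
    # option) any later version.

Omitting the "or (at your option) any later version" wording locks the work to the named version only.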
The problems identified included tivoization, compatibility issues similar to those of the Affero General Public License, and patent deals between Microsoft and distributors of free and open-source software, which some viewed as an attempt to use patents as a weapon against the free software community. Version 3 was developed to attempt to address these concerns and was released on 29 June 2007. Version 1 of the GNU GPL, released on 25 February 1989, prevented what were then the two main ways that software distributors restricted the freedoms that define free software. The first problem was that distributors might publish binary files only: executable, but not readable or modifiable by humans. To prevent this, GPLv1 stated that anyone copying or distributing copies or any portion of the program must also make the human-readable source code available under the same licensing terms. The second problem was that distributors might add restrictions, either to the license, or by combining the software with other software that had other restrictions on distribution.
The union of two sets of restrictions would apply to the combined work, thus adding unacceptable restrictions. To prevent this, GPLv1 stated that modified versions, as a whole, had to be distributed under the terms in GPLv1. Therefore, software distributed under the terms of GPLv1 could be combined with software under more permissive terms, as this would not change the terms under which the whole could be distributed. However, software distributed under GPLv1 could not be combined with software distributed under a more restrictive license, as this would conflict with the requirement that the whole be distributable under the terms of GPLv1. According to Richard Stallman, the major change in GPLv2 was the "Liberty or Death" clause, as he calls it – Section 7; the section says that licensees may distribute a GPL-covered work only if they can satisfy all of the license's obligations, despite any other legal obligations they might have. In other words, the obligations of the license may not be severed due to conflicting obligations.
This provision is intended to discourage any party from using a patent infringement claim or other litigation to impair users' freedom under the license. By 1990, it was becoming apparent that a less restrictive license would be strategically useful for the C library and for software libraries that did the job of existing proprietary ones; when version 2 of the GPL was released in 1991, a second license, the GNU Library General Public License, was introduced alongside it and numbered version 2 to show that the two were complementary. The version numbers diverged in 1999 when version 2.1 of the LGPL was released, which renamed it the GNU Lesser General Public License to reflect its place in the GNU philosophy. Users of the license commonly state "GPLv2 or any later version" to allow upgrading to GPLv3. In late 2005, the Free Software Foundation announced work on version 3 of the GPL. On 16 January 2006, the first "discussion draft" of GPLv3 was published, and the public consultation began; the public consultation was originally planned for nine to fifteen months.