Theo de Raadt
Theo de Raadt is a software engineer who lives in Calgary, Canada. He is the founder and leader of the OpenBSD and OpenSSH projects and was a founding member of NetBSD. In 2004, De Raadt won the Free Software Award for his work on OpenBSD and OpenSSH. De Raadt is the eldest of four children born to a Dutch father and a South African mother; he has two sisters and a brother. Concern over the mandatory two-year armed forces conscription in South Africa led the family to emigrate to Calgary, Canada in November 1977. In 1983, the largest recession in Canada since the Great Depression sent the family to the Yukon. Prior to the move, De Raadt got his first computer, a Commodore VIC-20, soon followed by an Amiga; it was with these computers that he first began writing software. In 1992, he obtained a BSc in Computer Science from the University of Calgary. In 1993, he founded NetBSD together with Chris Demetriou, Adam Glass and Charles Hannum; the group was frustrated by the poor quality of 386BSD and believed that an open development model would serve the system better.
386BSD was derived from the University of California, Berkeley's original 4.3BSD release, while the new NetBSD project would merge relevant code from the Networking/2 and 386BSD releases. The new project focused on clean, correct code, with the goal of producing a unified, multi-platform, production-quality BSD operating system. Because of the importance of networks such as the Internet in the distributed, collaborative nature of its development, De Raadt suggested the name "NetBSD", which the three other founders agreed upon. The first NetBSD source code repository was established on March 21, 1993, and the initial release, NetBSD 0.8, was made in April 1993. It was derived from 386BSD 0.1 plus the version 0.2.2 unofficial patchkit, with several programs from the Net/2 release that were missing from 386BSD re-integrated and various other improvements made. In August 1993, NetBSD 0.9 was released. This was still a PC-platform-only release, although by this time work was underway to add support for other architectures.
NetBSD 1.0 was released in October 1994. This was the first multi-platform release, supporting the IBM PC compatible, HP 9000 Series 300, Amiga, 68k Macintosh, Sun-4c series and PC532. In this release, the encumbered Net/2-derived source code was replaced with equivalent code from 4.4BSD-Lite, in accordance with the USL v. BSDi lawsuit settlement. De Raadt played a vital role in the creation of the SPARC port, implementing much of the initial code together with Chuck Cranor. In December 1994, De Raadt was forced to resign from the NetBSD core team, and his access to the source repository was revoked. In his book Free for All, Peter Wayner claims that De Raadt "began to rub some people the wrong way" before the split from NetBSD, while Linus Torvalds has described him as "difficult". Others feel differently: one interviewer describes De Raadt's "transformation" on founding OpenBSD and his "desire to take care of his team," and some find his straightforwardness refreshing. De Raadt remains widely respected as a hacker and security expert.
In October 1995, De Raadt founded OpenBSD, a new project forked from NetBSD 1.0. The initial release, OpenBSD 1.2, was made in July 1996, followed in October of the same year by OpenBSD 2.0. Since then, the project has followed a schedule of one release every six months, each maintained and supported for one year. De Raadt has been a vocal advocate of free software since the inception of OpenBSD; he is also a strong proponent of free speech, having on occasion had rather public disputes with various groups, from Linux advocates to governments. This outspoken attitude, while sometimes a cause of conflict, has also brought him acclaim, including invitations to speak at events such as the AUUG Conference in Melbourne, Australia, and FISL in Porto Alegre, Brazil. After De Raadt stated his disapproval of the U.S.-led invasion of Iraq in an April 2003 interview with Toronto's Globe and Mail, a multi-million-dollar US Department of Defense grant to the University of Pennsylvania's POSSE project was cancelled, ending the project. Funding from the grant had been used in the development of OpenSSH and OpenBSD, as well as many other projects, and was to be used to pay for a hackathon planned for May 8, 2003.
Despite money from the grant having already been used to secure accommodations for sixty developers for a week, the money was reclaimed by the government at a loss, and the hotel was told not to allow the developers to pay the reclaimed money themselves to resecure the rooms. This drew criticism from some quarters, though the grant's termination was not as severe a blow as some portrayed it. The project's supporters rallied to help and the hackathon went on as planned. The funding was cut mere months before the end of the grant, further fueling speculation about the circumstances surrounding its termination. De Raadt is well known for his advocacy of free software drivers; he has long been critical of developers of Linux and other free platforms for their tolerance of non-free drivers and acceptance of non-disclosure agreements. In particular, De Raadt has worked to convince wireless hardware vendors to allow the firmware images of their products to be redistributed. These efforts have been especially successful in negotiations with Taiwanese companies, leading to many new wireless drivers.
De Raadt has commented that "most
O'Reilly Media is an American media company established by Tim O'Reilly that publishes books and Web sites and produces conferences on computer technology topics. Its distinctive brand features a woodcut of an animal on many of its book covers. The company began in 1978 as a private consulting firm doing technical writing, based in the Cambridge, Massachusetts area. In 1984, it began to retain publishing rights on manuals created for Unix vendors. A few 70-page "Nutshell Handbooks" were well received, but the focus remained on the consulting business until 1988. After a conference displaying O'Reilly's preliminary Xlib manuals attracted significant attention, the company began increasing production of manuals and books. The original cover art consisted of animal designs developed by Edie Freedman because she thought that Unix program names sounded like "weird animals". In 1993, O'Reilly created one of the first web portals when it launched Global Network Navigator (GNN).
GNN was sold to AOL in one of the first large transactions of the dot-com bubble; it was the first site on the World Wide Web to feature paid advertising. Although O'Reilly Media got its start in publishing, two decades after its genesis the company expanded into event production. In 1997, O'Reilly launched The Perl Conference to cross-promote its books on the Perl programming language. Many of the company's other software bestsellers were on topics that were off the radar of the commercial software industry. In 1998, O'Reilly invited many of the leaders of free software projects to a meeting; initially called the Freeware Summit, the meeting became known as the Open Source Summit. The O'Reilly Open Source Convention (OSCON) is now one of O'Reilly's flagship events. Other key events include the Strata Conference on big data, the Velocity Conference on Web Performance and Operations, and FOO Camp. Past events of note include the Web 2.0 Summit. Overall, O'Reilly describes its business not as publishing or conferences, but as "changing the world by spreading the knowledge of innovators." Today, the company offers over one dozen conferences, among them Strata + Hadoop World, OSCON, Fluent, Velocity, the Next:Economy Summit, the Next:Money Summit, the Solid Conference, the O'Reilly Software Architecture Conference, the O'Reilly Design Conference, the O'Reilly Emerging Technology Conference, the Tools of Change Conference, the Web 2.0 Summit, the Web 2.0 Expo, the MySQL Conference and Expo, RailsConf, Where 2.0, Money:Tech, and the Gov 2.0 Expo and Gov 2.0 Summit. The O'Reilly School of Technology was discontinued as of January 6, 2016, and new enrollments are no longer accepted.
In the late 1990s, O'Reilly founded the O'Reilly Network, which grew to include sites such as LinuxDevCenter.com, MacDevCenter.com, WindowsDevCenter.com, ONLamp.com and O'Reilly Radar. In 2008, the company revised its online model and stopped publishing on several of its sites. The company also produced dev2dev in association with BEA, and java.net in association with Sun Microsystems and CollabNet. In 2001, O'Reilly launched Safari Books Online, a subscription-based service providing access to ebooks, as a joint venture with the Pearson Technology Group. Safari Books Online includes books and video from Adobe Press, Alpha Books, Cisco Press, FT Press, Microsoft Press, New Riders Publishing, O'Reilly, Peachpit Press, Prentice Hall, Prentice Hall PTR, Que and Sams Publishing. In 2014, O'Reilly Media acquired Pearson's stake, making Safari Books Online a wholly owned subsidiary of O'Reilly Media. O'Reilly redesigned the site and had some success in its attempt to expand beyond Safari's core B2C market into the B2B enterprise market.
In 2017, O'Reilly Media announced it would no longer sell books, including ebooks, directly; instead, customers were encouraged to subscribe to Safari. In 2003, after the dot-com bust, O'Reilly's corporate goal was to reignite enthusiasm in the computer industry. To do this, Dale Dougherty and Tim O'Reilly decided to use the term "Web 2.0", coined in January 1999 by Darcy DiNucci. The term was used for the Web 2.0 Summit, run by O'Reilly and CMP's TechWeb. CMP registered Web 2.0 as a service mark "for arranging and conducting live events, namely trade shows, business conferences and educational conferences in various fields of computers and information technology." Web 2.0 framed what distinguished the companies that survived the dot-com bust from those that died, and identified key drivers of future success, including what is now called "cloud computing," big data, and new approaches to iterative, data-driven software development. In May 2006, CMP Media learned of an impending event called the "Web 2.0 Half day conference."
Concerned over its obligation to take reasonable steps to enforce its trade and service marks, CMP sent a cease-and-desist letter to the non-profit Irish organizers of the event. This attempt to restrict use of the term through legal mechanisms was criticized by some. The legal issue was resolved by O'Reilly apologizing for the early and aggressive involvement of attorneys, rather than simply calling the organizers, and allowing them to use the service mark for that single event. In January 2005 the compan
Usenet is a worldwide distributed discussion system available on computers. It was developed from the general-purpose Unix-to-Unix Copy (UUCP) dial-up network architecture. Tom Truscott and Jim Ellis conceived the idea in 1979, and it was established in 1980. Users post messages to one or more categories, known as newsgroups. Usenet resembles a bulletin board system (BBS) in many respects and is the precursor to the Internet forums in use today. Discussions are threaded, as with web forums and BBSs, though posts are stored on the server sequentially. The name comes from the term "users network". A major difference between a BBS or web forum and Usenet is the absence of a central server and dedicated administrator: Usenet is distributed among a large, changing conglomeration of servers that store and forward messages to one another in so-called news feeds. Individual users may read messages from and post messages to a local server operated by a commercial Usenet provider, their Internet service provider, their employer, or their own server.
Usenet is culturally significant in the networked world, having given rise to, or popularized, many recognized concepts and terms such as "FAQ", "flame" and "spam". Usenet was conceived in 1979 and publicly established in 1980, at the University of North Carolina at Chapel Hill and Duke University, over a decade before the World Wide Web went online and the general public received access to the Internet, making it one of the oldest computer network communications systems still in widespread use. It was built on the "poor man's ARPANET", employing UUCP as its transport protocol to offer mail and file transfers, as well as announcements through the newly developed news software such as A News. The name Usenet emphasized its creators' hope that the USENIX organization would take an active role in its operation. The articles that users post to Usenet are organized into topical categories known as newsgroups, which are themselves logically organized into hierarchies of subjects. For instance, sci.math and sci.physics are within the sci.* hierarchy, for science.
Likewise, talk.origins and talk.atheism are in the talk.* hierarchy. When a user subscribes to a newsgroup, the news client software keeps track of which articles that user has read. In most newsgroups, the majority of the articles are responses to some other article; the set of articles that can be traced back to one single non-reply article is called a thread. Most modern newsreaders display the articles arranged into threads and subthreads. When a user posts an article, it is initially only available on that user's news server. Each news server talks to one or more other servers and exchanges articles with them. In this fashion, the article is copied from server to server and should eventually reach every server in the network. Peer-to-peer networks operate on a similar principle, but for Usenet it is the sender, rather than the receiver, who initiates transfers. Usenet was designed under conditions when networks were not always available. Many sites on the original Usenet network would connect only once or twice a day to batch-transfer messages in and out.
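The thread structure described above can be recovered mechanically from article headers. A minimal sketch, assuming simplified articles in which the real References header has already been parsed into a list of ancestor message IDs, oldest first (the dict layout here is illustrative, not an actual wire format):

```python
def build_threads(articles):
    """Group articles into threads keyed by their root (non-reply) article.

    Each article is a dict with a 'Message-ID' string and, for replies,
    a 'References' list of ancestor IDs ordered oldest-first, so the
    first reference is the article that started the thread.
    """
    threads = {}
    for article in articles:
        refs = article.get('References', [])
        # A non-reply article is its own thread root.
        root = refs[0] if refs else article['Message-ID']
        threads.setdefault(root, []).append(article['Message-ID'])
    return threads

posts = [
    {'Message-ID': '<1@a>'},                                   # starts a thread
    {'Message-ID': '<2@b>', 'References': ['<1@a>']},          # reply
    {'Message-ID': '<3@c>', 'References': ['<1@a>', '<2@b>']}, # reply to the reply
    {'Message-ID': '<4@d>'},                                   # a second thread
]
print(build_threads(posts))
```

Real newsreaders go further, building a tree of subthreads from the full reference chain, but the basic grouping step is essentially this.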
This was because the POTS network was typically used for transfers, and phone charges were lower at night. The format and transmission of Usenet articles is similar to that of Internet e-mail messages; the difference between the two is that Usenet articles can be read by any user whose news server carries the group to which the message was posted, as opposed to email messages, which have one or more specific recipients. Today, Usenet has diminished in importance with respect to Internet forums, mailing lists and social media. Usenet differs from such media in several ways: for instance, Usenet requires no personal registration with the group concerned, and the groups in the alt.binaries hierarchy are still widely used for data transfer. Many Internet service providers, and many other Internet sites, operate news servers for their users to access. ISPs that do not operate their own servers directly will often offer their users an account from another provider that operates newsfeeds. In early news implementations, the server and newsreader were a single program suite, running on the same system.
Today, one uses separate newsreader client software: a program that resembles an email client but accesses Usenet servers instead. Some clients, such as Mozilla Thunderbird and Outlook Express, provide both abilities. Not all ISPs run news servers. A news server is one of the most difficult Internet services to administer because of the large amount of data involved, the small customer base, and the disproportionately high volume of customer support incidents. Some ISPs outsource news operation to specialist sites, which will usually appear to a user as though the ISP ran the server itself. Many sites carry a restricted newsfeed, with a limited number of newsgroups. Commonly omitted from such a newsfeed are foreign-language newsgroups and the alt.binaries hierarchy, which carries software, music and images and accounts for over 99 percent of article data. There are also Usenet providers that specialize in offering service to users whose ISPs do not carry news, or that carry a restricted feed. See news server operation for an overview of how news systems are implemented.
Newsgroups are accessed with newsreaders: applications that allow users to read and reply to postings in newsgro
Electronic engineering is an electrical engineering discipline which utilizes nonlinear and active electrical components to design electronic circuits, devices, VLSI devices and their systems. The discipline also designs passive electrical components based on printed circuit boards. Electronics is a subfield within the wider electrical engineering academic subject but denotes a broad engineering field that covers subfields such as analog electronics, digital electronics, consumer electronics, embedded systems and power electronics. Electronics engineering deals with implementation of applications and algorithms developed within many related fields, for example solid-state physics, radio engineering, telecommunications, control systems, signal processing, systems engineering, computer engineering, instrumentation engineering, electric power control and many others; the Institute of Electrical and Electronics Engineers is one of the most important and influential organizations for electronics engineers.
Electronics is a subfield within the wider electrical engineering academic subject. An academic degree with a major in electronics engineering can be acquired from some universities, while other universities use electrical engineering as the subject. The term electrical engineer is still used in the academic world to include electronic engineers. However, some people consider that the term 'electrical engineer' should be reserved for those having specialized in power and heavy current or high voltage engineering, while others consider that power is just one subset of electrical engineering, as is 'electrical distribution engineering'. The term 'power engineering' is used as a descriptor in that industry. Again, in recent years there has been a growth of new separate-entry degree courses such as 'systems engineering' and 'communication systems engineering', followed by academic departments of similar name, which are considered subfields not of electronics engineering but of electrical engineering.
Electronic engineering as a profession sprang from technological improvements in the telegraph industry in the late 19th century and the radio and telephone industries in the early 20th century. People were attracted to radio by the technical fascination it inspired, first in receiving and then in transmitting. Many who went into broadcasting in the 1920s had been only 'amateurs' in the period before World War I. To a large extent, the modern discipline of electronic engineering was born out of telephone and television equipment development and the large amount of electronic systems development during World War II of radar, communication systems, advanced munitions and weapon systems. In the interwar years, the subject was known as radio engineering, and it was only in the late 1950s that the term electronic engineering started to emerge. In the field of electronic engineering, engineers design and test circuits that use the electromagnetic properties of electrical components such as resistors, inductors and transistors to achieve a particular functionality.
The tuner circuit, which allows the user of a radio to filter out all but a single station, is just one example of such a circuit. In designing an integrated circuit, electronics engineers first construct circuit schematics that specify the electrical components and describe the interconnections between them; when completed, VLSI engineers convert the schematics into actual layouts, which map the layers of various conductor and semiconductor materials needed to construct the circuit. The conversion from schematics to layouts can be done by software but often requires human fine-tuning to decrease space and power consumption. Once the layout is complete, it can be sent to a fabrication plant for manufacturing. For systems of intermediate complexity, engineers may use VHDL modeling for programmable logic devices and FPGAs. Integrated circuits, FPGAs and other electrical components can be assembled on printed circuit boards to form more complicated circuits. Today, printed circuit boards are found in most electronic devices including televisions and audio players.
Electronic engineering has many subfields. This section describes some of the most popular. Signal processing deals with the analysis and manipulation of signals. Signals can be either analog, in which case the signal varies continuously according to the information, or digital, in which case the signal varies according to a series of discrete values representing the information. For analog signals, signal processing may involve the amplification and filtering of audio signals for audio equipment or the modulation and demodulation of signals for telecommunications. For digital signals, signal processing may involve the compression, error detection and error correction of digital signals. Telecommunications engineering deals with the transmission of information across a channel such as a coaxial cable, optical fiber or free space. Transmission across free space requires information to be encoded in a carrier wave in order to shift the information to a carrier frequency suitable for transmission; this is known as modulation.
Popular analog modulation techniques include amplitude modulation and frequency modulation. The choice of modulation affects the cost and performance of a system, and these two factors must be balanced by the engineer. Once the transmission characteristics of a system are determined, telecommunica
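As a concrete illustration of the simplest of these techniques, amplitude modulation scales a carrier wave's amplitude by the message signal. A minimal sketch (the function name and parameters are illustrative, not taken from any particular library):

```python
import math

def am_modulate(message, carrier_freq, sample_rate, mod_index=0.5):
    """Amplitude modulation: s[n] = (1 + m * x[n]) * cos(2*pi*fc*n/fs).

    `message` is a list of samples normalized to [-1, 1]; `mod_index` (m)
    controls how strongly the message sways the carrier's envelope.
    """
    return [
        (1 + mod_index * x) * math.cos(2 * math.pi * carrier_freq * n / sample_rate)
        for n, x in enumerate(message)
    ]

# A 100 Hz sinusoidal message on a 10 kHz carrier, sampled at 100 kHz.
fs = 100_000
msg = [math.sin(2 * math.pi * 100 * n / fs) for n in range(1000)]
signal = am_modulate(msg, carrier_freq=10_000, sample_rate=fs)
```

With a modulation index of 0.5, the envelope of the output swings between 0.5 and 1.5 times the carrier amplitude, which is the trade-off the engineer balances: a larger index carries the message more strongly but risks overmodulation and distortion.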
Computer science is the study of processes that interact with data and that can be represented as data in the form of programs. It enables the use of algorithms to manipulate and communicate digital information. A computer scientist studies the theory of computation and the practice of designing software systems. Its fields can be divided into theoretical and practical disciplines: computational complexity theory is highly abstract, while computer graphics emphasizes real-world applications. Programming language theory considers approaches to the description of computational processes, while computer programming itself involves the use of programming languages and complex systems. Human–computer interaction considers the challenges in making computers useful and accessible. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity, aiding in computations such as multiplication and division.
Algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner; he may be considered the first computer scientist and information theorist for, among other reasons, documenting the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry when he released his simplified arithmometer, the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his Difference Engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine. He started developing this machine in 1834, and "in less than two years, he had sketched out many of the salient features of the modern computer".
"A crucial step was the adoption of a punched card system derived from the Jacquard loom" making it infinitely programmable. In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, considered to be the first computer program. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, making all kinds of punched card equipment and was in the calculator business to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit; when the machine was finished, some hailed it as "Babbage's dream come true". During the 1940s, as new and more powerful computing machines were developed, the term computer came to refer to the machines rather than their human predecessors.
As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. In 1945, IBM founded the Watson Scientific Computing Laboratory at Columbia University in New York City; the renovated fraternity house on Manhattan's West Side was IBM's first laboratory devoted to pure science. The lab is the forerunner of IBM's Research Division, which today operates research facilities around the world; the close relationship between IBM and the university was instrumental in the emergence of a new scientific discipline, with Columbia offering one of the first academic-credit courses in computer science in 1946. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s; the world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science degree program in the United States was formed at Purdue University in 1962.
Since practical computers became available, many applications of computing have become distinct areas of study in their own right. Although many initially believed it was impossible that computers themselves could constitute a scientific field of study, in the late fifties this view gradually became accepted among the greater academic population. It was the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and the IBM 709 computers, which were widely used during the exploration period of such devices. "Still, working with the IBM was frustrating: if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again". During the late 1950s, the computer science discipline was still in its developmental stages, and such issues were commonplace. Time has since seen significant improvements in the effectiveness of computing technology. Modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base.
Computers were quite costly, and some degree of human assistance was needed for efficient use, in part from professional computer operators. As computer adoption became more widespread and affordable, less human assistance was needed for common usage. Despite its short history as a formal academic discipline, computer science has made a number of fundamental contributions to science and society; in fact, along with electronics, it is
The Linux kernel is a free and open-source, Unix-like operating system kernel. The Linux family of operating systems is based on this kernel and deployed both on traditional computer systems, such as personal computers and servers, in the form of Linux distributions, and on various embedded devices such as routers, wireless access points, PBXes, set-top boxes, FTA receivers, smart TVs, PVRs and NAS appliances. While the adoption of the Linux kernel in desktop computer operating systems is low, Linux-based operating systems dominate nearly every other segment of computing, from mobile devices to mainframes; as of November 2017, all of the world's 500 most powerful supercomputers run Linux. The Android operating system for smartphones, tablet computers and smartwatches also uses the Linux kernel. The Linux kernel was conceived and created in 1991 by Linus Torvalds for his personal computer, with no cross-platform intentions, but it has since expanded to support a huge array of computer architectures, many more than other operating systems or kernels.
Linux attracted developers and users who adopted it as the kernel for other free software projects, notably the GNU operating system, which had been created as a free, non-proprietary operating system based on UNIX, as a by-product of the fallout of the Unix wars. The Linux kernel API, the application programming interface through which user programs interact with the kernel, is meant to be stable and to not break userspace programs. As part of the kernel's functionality, device drivers control the hardware; however, the interface between the kernel and loadable kernel modules, unlike in many other kernels and operating systems, is deliberately not meant to be stable. The Linux kernel, developed by contributors worldwide, is a prominent example of free and open source software, and day-to-day development discussions take place on the Linux kernel mailing list. The Linux kernel is released under the GNU General Public License version 2, with some firmware images released under various non-free licenses. In April 1991, Linus Torvalds, at the time a 21-year-old computer science student at the University of Helsinki, started working on some simple ideas for an operating system.
He started with a task switcher and a terminal driver. On 25 August 1991, Torvalds posted the following to comp.os.minix, a newsgroup on Usenet: I'm doing a (free) operating system for 386 AT clones. This has been brewing since April, and is starting to get ready. I'd like any feedback on things people like/dislike in minix. I've ported bash and gcc, and things seem to work. This implies that I'll get something practical within a few months. Yes - it's free of any minix code, and it has a multi-threaded fs. It is NOT portable, and it never will support anything other than AT-harddisks, as that's all I have :-(. It's in C, but most people wouldn't call what I write C. It uses every conceivable feature of the 386 I could find, as it was also a project to teach me about the 386. As mentioned, it uses a MMU, for both paging and segmentation. It's the segmentation that makes it REALLY 386 dependent. Some of my "C"-files are as much assembler as C. Unlike minix, I happen to LIKE interrupts, so interrupts are handled without trying to hide the reason behind them. After that post, many people contributed code to the project.
Early on, the MINIX community contributed code and ideas to the Linux kernel. At the time, the GNU Project had created many of the components required for a free operating system, but its own kernel, GNU Hurd, was incomplete and unavailable, and the Berkeley Software Distribution had not yet freed itself from legal encumbrances. Despite the limited functionality of the early versions, Linux gained developers and users. In September 1991, Torvalds released version 0.01 of the Linux kernel on the FTP server of the Finnish University and Research Network. It had 10,239 lines of code. On 5 October 1991, version 0.02 of the Linux kernel was released. Torvalds assigned version 0 to the kernel to indicate that it was for testing and not intended for production use. In December 1991, Linux kernel 0.11 was released. This version was the first to be self-hosting: Linux kernel 0.11 could be compiled on a computer running the same kernel version. When Torvalds released version 0.12 in February 1992, he adopted the GNU General Public License version 2 in place of his previous self-drafted license, which had not permitted commercial redistribution.
On 19 January 1992, the first post to the new newsgroup alt.os.linux was submitted, and on 31 March 1992 the newsgroup was renamed comp.os.linux. The fact that Linux is a monolithic kernel rather than a microkernel was the topic of a debate between Andrew S. Tanenbaum, the creator of MINIX, and Torvalds. This discussion, known as the Tanenbaum–Torvalds debate, started in 1992 on the Usenet discussion group comp.os.minix as a general debate about Linux and kernel architecture. Tanenbaum argued that microkernels were superior to monolithic kernels and that therefore Linux was obsolete. Unlike traditional monolithic kernels, device drivers in Linux are configured as loadable kernel modules and are loaded or unloaded while
Berkeley Software Distribution
The Berkeley Software Distribution was an operating system based on Research Unix and distributed by the Computer Systems Research Group (CSRG) at the University of California, Berkeley. Today, "BSD" refers to its descendants, such as FreeBSD, OpenBSD, NetBSD, and DragonFly BSD. BSD was initially called Berkeley Unix because it was based on the source code of the original Unix developed at Bell Labs. In the 1980s, BSD was adopted by workstation vendors in the form of proprietary Unix variants such as DEC Ultrix and Sun Microsystems SunOS, due to its permissive licensing and its familiarity to many technology company founders and engineers. Although these proprietary BSD derivatives were largely superseded in the 1990s by UNIX SVR4 and OSF/1, later BSD releases provided the basis for several open-source operating systems, including FreeBSD, OpenBSD, NetBSD, DragonFly BSD, and TrueOS. These, in turn, have been used by proprietary operating systems, including Apple's macOS and iOS, which derive from them, and Microsoft Windows, which used BSD code for part of its TCP/IP implementation.
The earliest distributions of Unix from Bell Labs in the 1970s included the source code to the operating system, allowing researchers at universities to modify and extend Unix. The operating system arrived at Berkeley in 1974, at the request of computer science professor Bob Fabry, who had been on the program committee for the Symposium on Operating Systems Principles where Unix was first presented. A PDP-11/45 was bought to run the system, but for budgetary reasons, this machine was shared with the mathematics and statistics groups at Berkeley, who used RSTS, so Unix only ran on the machine eight hours per day. A larger PDP-11/70 was installed at Berkeley the following year, using money from the Ingres database project. In 1975, Ken Thompson came to Berkeley as a visiting professor, where he started working on a Pascal implementation for the system. Graduate students Chuck Haley and Bill Joy improved Thompson's Pascal and implemented an improved text editor, ex. Other universities became interested in the software at Berkeley, so in 1977 Joy started compiling the first Berkeley Software Distribution, released on March 9, 1978.
1BSD was an add-on to Version 6 Unix rather than a complete operating system in its own right. Some thirty copies were sent out. The second Berkeley Software Distribution, released in May 1979, included updated versions of the 1BSD software as well as two new programs by Joy that persist on Unix systems to this day: the vi text editor and the C shell. Some 75 copies of 2BSD were sent out by Bill Joy. A VAX computer was installed at Berkeley in 1978, but the port of Unix to the VAX architecture, UNIX/32V, did not take advantage of the VAX's virtual memory capabilities. The kernel of 32V was rewritten by Berkeley students to include a virtual memory implementation, and a complete operating system, including the new kernel, ports of the 2BSD utilities to the VAX, and the utilities from 32V, was released as 3BSD at the end of 1979. 3BSD was alternatively called Virtual VAX/UNIX or VMUNIX, and BSD kernel images were named /vmunix until 4.4BSD. After 4.3BSD was released in June 1986, it was determined that BSD would move away from the aging VAX platform.
The Power 6/32 platform developed by Computer Consoles Inc. seemed promising at the time, but was abandoned by its developers shortly thereafter. Nonetheless, the 4.3BSD-Tahoe port proved valuable, as it led to a separation of machine-dependent and machine-independent code in BSD, which would improve the system's future portability. In addition to portability, the CSRG worked on an implementation of the OSI network protocol stack, improvements to the kernel virtual memory system, and new TCP/IP algorithms to accommodate the growth of the Internet. Until then, all versions of BSD used proprietary AT&T Unix code and were therefore subject to an AT&T software license. Source code licenses had become expensive, and several outside parties had expressed interest in a separate release of the networking code, which had been developed outside AT&T and would not be subject to the licensing requirement. This led to Networking Release 1 (Net/1), which was made available to non-licensees of AT&T code and was freely redistributable under the terms of the BSD license.
It was released in June 1989. After Net/1, BSD developer Keith Bostic proposed that more non-AT&T sections of the BSD system be released under the same license as Net/1. To this end, he started a project to reimplement most of the standard Unix utilities without using the AT&T code. Within eighteen months, all of the AT&T utilities had been replaced, and it was determined that only a few AT&T files remained in the kernel. These files were removed, and the result was the June 1991 release of Networking Release 2 (Net/2), a nearly complete operating system that was freely distributable. Net/2 was the basis for two separate ports of BSD to the Intel 80386 architecture: the free 386BSD by William Jolitz and the proprietary BSD/386 by Berkeley Software Design (BSDi). 386BSD itself was short-lived, but became the initial code base of the NetBSD and FreeBSD projects that were started shortly thereafter. BSDi soon found itself in legal trouble with AT&T's Unix System Laboratories (USL) subsidiary, then the owner of the System V copyright and the Unix trademark.
The USL v. BSDi lawsuit was filed in 1992 and led to an injunction on the distribution of Net/2 until the validity of USL's copyright claims on the source could be determined; the lawsuit slowed development of the free-software descendants of BSD for nearly two years while their legal status was in question.