The National Aeronautics and Space Administration is an independent agency of the United States Federal Government responsible for the civilian space program, as well as aeronautics and aerospace research. NASA was established in 1958; the new agency was to have a distinctly civilian orientation, encouraging peaceful applications in space science. Since its establishment, most US space exploration efforts have been led by NASA, including the Apollo Moon landing missions, the Skylab space station, and the Space Shuttle. NASA is supporting the International Space Station and is overseeing the development of the Orion Multi-Purpose Crew Vehicle, the Space Launch System and Commercial Crew vehicles; the agency is also responsible for the Launch Services Program, which provides oversight of launch operations and countdown management for unmanned NASA launches. NASA science is focused on better understanding Earth through the Earth Observing System. From 1946, the National Advisory Committee for Aeronautics (NACA) had been experimenting with rocket planes such as the supersonic Bell X-1.
In the early 1950s, there was a challenge to launch an artificial satellite for the International Geophysical Year. An effort toward this was the American Project Vanguard. After the Soviet launch of the world's first artificial satellite, Sputnik 1, on October 4, 1957, the attention of the United States turned toward its own fledgling space efforts; the US Congress, alarmed by the perceived threat to national security and technological leadership, urged immediate and swift action. On January 12, 1958, NACA organized a "Special Committee on Space Technology", headed by Guyford Stever. On January 14, 1958, NACA Director Hugh Dryden published "A National Research Program for Space Technology", stating: "It is of great urgency and importance to our country both from consideration of our prestige as a nation as well as military necessity that this challenge be met by an energetic program of research and development for the conquest of space... It is accordingly proposed that the scientific research be the responsibility of a national civilian agency...
NACA is capable, by rapid extension and expansion of its effort, of providing leadership in space technology." While this new federal agency would conduct all non-military space activity, the Advanced Research Projects Agency (ARPA) was created in February 1958 to develop space technology for military application. On July 29, 1958, Eisenhower signed the National Aeronautics and Space Act, establishing NASA; when it began operations on October 1, 1958, NASA absorbed the 43-year-old NACA intact. A NASA seal was approved by President Eisenhower in 1959. Elements of the Army Ballistic Missile Agency and the United States Naval Research Laboratory were incorporated into NASA. A significant contributor to NASA's entry into the Space Race with the Soviet Union was the technology from the German rocket program led by Wernher von Braun, then working for the Army Ballistic Missile Agency, which in turn incorporated the technology of American scientist Robert Goddard's earlier works. Earlier research efforts within the US Air Force and many of ARPA's early space programs were also transferred to NASA.
In December 1958, NASA gained control of the Jet Propulsion Laboratory, a contractor facility operated by the California Institute of Technology. The agency's leader, NASA's administrator, is nominated by the President of the United States subject to the approval of the US Senate, reports to him or her, and serves as senior space science advisor. Though space exploration is ostensibly non-partisan, the appointee is associated with the President's political party, and a new administrator is chosen when the Presidency changes parties; the only exceptions to this have been: Democrat Thomas O. Paine, acting administrator under Democrat Lyndon B. Johnson, stayed on while Republican Richard Nixon tried but failed to get one of his own choices to accept the job. Paine was confirmed by the Senate in March 1969 and served through September 1970. Republican James C. Fletcher, appointed by Nixon and confirmed in April 1971, stayed through May 1977 into the term of Democrat Jimmy Carter. Daniel Goldin was appointed by Republican George H. W. Bush and stayed through the entire administration of Democrat Bill Clinton.
Robert M. Lightfoot Jr., associate administrator under Democrat Barack Obama, was kept on as acting administrator by Republican Donald Trump until Trump's own choice, Jim Bridenstine, was confirmed in April 2018. Though the agency is independent, the survival or discontinuation of projects can depend directly on the will of the President. The first administrator was Dr. T. Keith Glennan, appointed by Republican President Dwight D. Eisenhower; during his term he brought together the disparate projects in American space development research. The second administrator, James E. Webb, appointed by President John F. Kennedy, was a Democrat who had first publicly served under President Harry S. Truman. In order to implement the Apollo program and achieve Kennedy's Moon landing goal, Webb oversaw a major expansion of the agency.
Quantian OS was a remastering of Knoppix/Debian for the computational sciences. The environment was a self-configuring, directly bootable CD/DVD that turned any PC or laptop into a Linux workstation. Quantian incorporated clusterKnoppix and added support for openMosix, including remote booting of light clients in an openMosix terminal-server context, permitting rapid setup of an SMP cluster computer. Quantian shipped with numerous general-purpose and scientific software packages; after installation, the total package volume was about 2.7 GB. The packages for "home users" included KDE, the default desktop environment, and its components; the XMMS and xine media players; Internet access software, including the KPPP dialer, ISDN utilities and WLAN tools; the Mozilla, Mozilla Firefox and Konqueror web browsers; K3b, for CD management; the GIMP, an image-manipulation program; tools for data rescue and system repair; network analysis and administration tools; OpenOffice.org; and the Kile and LyX editors. The scientific applications in Quantian included R, statistical computing software; Octave, a Matlab clone; Scilab, another Matlab clone; GSL, the GNU Scientific Library; the Maxima computer algebra system; the Python programming language with SciPy; the Fityk curve fitter; Ghemical, for computational chemistry; TeXmacs, for WYSIWYG scientific editing; the GRASS geographic information system; the OpenDX and MayaVi data visualisation systems; Gnuplot, a command-line driven interactive data and function plotting utility; and LabPlot, an application for plotting data sets and functions.
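Part of the appeal of such a distribution was that these tools worked together out of the box. As a purely illustrative sketch (not taken from Quantian's documentation; the function and parameter names are hypothetical), the following Python snippet shows the kind of task the bundled Python/SciPy stack was aimed at: fitting a Gaussian peak to noisy data, much as the included Fityk curve fitter would.

# Illustrative only: curve fitting with the kind of Python/SciPy stack Quantian bundled.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, width):
    # Simple Gaussian peak model.
    return amplitude * np.exp(-((x - center) ** 2) / (2 * width ** 2))

# Synthetic noisy measurements standing in for real experimental data.
x = np.linspace(-5, 5, 200)
y = gaussian(x, 3.0, 0.5, 1.2) + np.random.normal(scale=0.1, size=x.size)

# Fit the model; p0 is the initial guess for (amplitude, center, width).
params, covariance = curve_fit(gaussian, x, y, p0=[1.0, 0.0, 1.0])
print("amplitude=%.2f center=%.2f width=%.2f" % tuple(params))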
Old English, or Anglo-Saxon, is the earliest historical form of the English language, spoken in England and southern and eastern Scotland in the early Middle Ages. It was brought to Great Britain by Anglo-Saxon settlers in the mid-5th century, and the first Old English literary works date from the mid-7th century. After the Norman conquest of 1066, English was replaced, for a time, as the language of the upper classes by Anglo-Norman, a relative of French; this is regarded as marking the end of the Old English era, as during this period the English language was influenced by Anglo-Norman, developing into a phase known now as Middle English. Old English developed from a set of Anglo-Frisian or Ingvaeonic dialects spoken by Germanic tribes traditionally known as the Angles, Saxons and Jutes; as the Anglo-Saxons became dominant in England, their language replaced the languages of Roman Britain: Common Brittonic, a Celtic language, and Latin, brought to Britain by the Roman invasion. Old English had four main dialects, associated with particular Anglo-Saxon kingdoms: Kentish, Mercian, Northumbrian and West Saxon.
It was West Saxon that formed the basis for the literary standard of the Old English period, although the dominant forms of Middle and Modern English would develop from Mercian. The speech of eastern and northern parts of England was subject to strong Old Norse influence due to Scandinavian rule and settlement beginning in the 9th century. Old English is one of the West Germanic languages; its closest relatives are Old Frisian and Old Saxon. Like other old Germanic languages, it is very different from Modern English and difficult for Modern English speakers to understand without study. Old English grammar is similar to that of modern German: nouns, adjectives and verbs have many inflectional endings and forms, and word order is much freer. The oldest Old English inscriptions were written using a runic system, but from about the 9th century this was replaced by a version of the Latin alphabet. Englisc, from which the term English is derived, means 'pertaining to the Angles'. In Old English, this word was derived from the name of the Angles.
During the 9th century, all invading Germanic tribes were referred to as Englisc. It has been hypothesised that the Angles acquired their name because their land on the coast of Jutland resembled a fishhook. Proto-Germanic *anguz had the meaning of 'narrow', referring to the shallow waters near the coast; that word goes back to Proto-Indo-European *h₂enǵʰ-, meaning 'narrow'. Another theory is that the derivation from 'narrow' is the more likely connection to angling (as in fishing), which itself stems from a Proto-Indo-European root meaning bend, angle; the semantic link is the fishing hook, curved or bent at an angle. In any case, the Angles may have been called such because they were a fishing people or were descended from such, and therefore England would mean 'land of the fishermen', and English would be 'the fishermen's language'. Old English was not static; its usage covered a period of 700 years, from the Anglo-Saxon settlement of Britain in the 5th century to the late 11th century, some time after the Norman invasion. While indicating that the establishment of dates is an arbitrary process, Albert Baugh dates Old English from 450 to 1150, a period of full inflections during which English was a synthetic language.
Around 85 per cent of Old English words are no longer in use, but those that survived are basic elements of Modern English vocabulary. Old English is a West Germanic language; it came to be spoken over most of the territory of the Anglo-Saxon kingdoms which became the Kingdom of England. This included most of present-day England, as well as part of what is now southeastern Scotland, which for several centuries belonged to the Anglo-Saxon kingdom of Northumbria. Other parts of the island – Wales and most of Scotland – continued to use Celtic languages, except in the areas of Scandinavian settlement where Old Norse was spoken. Celtic speech remained established in certain parts of England: Medieval Cornish was spoken all over Cornwall and in adjacent parts of Devon, Cumbric survived to the 12th century in parts of Cumbria, and Welsh may have been spoken on the English side of the Anglo-Welsh border. Norse was also widely spoken in the parts of England which fell under Danish law (the Danelaw). Anglo-Saxon literacy developed after Christianisation in the late 7th century.
The oldest surviving text of Old English literature is Cædmon's Hymn, composed between 658 and 680. There is a limited corpus of runic inscriptions from the 5th to 7th centuries, but the oldest coherent runic texts date to the 8th century; the Old English Latin alphabet was introduced around the 9th century. With the unification of the Anglo-Saxon kingdoms by Alfred the Great in the 9th century, the language of government and literature became standardised around the West Saxon dialect. Alfred advocated education in English alongside Latin, and had many works translated into the English language. In Old English, typical of the development of literature, poetry arose before prose, but King Alfred the Great chiefly inspired the growth of prose. A later literary standard, dating from the 10th century, arose under the influence of Bishop Æthelwold of Winchester, and was followed by such writers as the prolific Ælfric of Eynsham. Th
A computer cluster is a set of loosely or tightly connected computers that work together so that, in many respects, they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software. The components of a cluster are usually connected to each other through fast local area networks, with each node running its own instance of an operating system. In most circumstances, all of the nodes use the same hardware and the same operating system, although in some setups different operating systems or different hardware can be used on each computer. Clusters are usually deployed to improve performance and availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability. Computer clusters emerged as a result of the convergence of a number of computing trends, including the availability of low-cost microprocessors, high-speed networks, and software for high-performance distributed computing.
They have a wide range of applicability and deployment, ranging from small business clusters with a handful of nodes to some of the fastest supercomputers in the world, such as IBM's Sequoia. Prior to the advent of clusters, single-unit fault-tolerant mainframes with modular redundancy were employed. In contrast to high-reliability mainframes, clusters are cheaper to scale out, but have increased complexity in error handling, as in clusters error modes are not opaque to running programs. The desire to get more computing power and better reliability by orchestrating a number of low-cost commercial off-the-shelf computers has given rise to a variety of architectures and configurations. The computer clustering approach usually connects a number of readily available computing nodes via a fast local area network. The activities of the computing nodes are orchestrated by "clustering middleware", a software layer that sits atop the nodes and allows the users to treat the cluster as by and large one cohesive computing unit, e.g. via a single system image concept.
Computer clustering relies on a centralized management approach which makes the nodes available as orchestrated shared servers. It is distinct from other approaches such as peer-to-peer or grid computing, which also use many nodes but with a far more distributed nature. A computer cluster may be a simple two-node system which just connects two personal computers, or may be a very fast supercomputer. A basic approach to building a cluster is that of a Beowulf cluster, which may be built with a few personal computers to produce a cost-effective alternative to traditional high-performance computing. An early project that showed the viability of the concept was the 133-node Stone Soupercomputer; the developers used Linux, the Parallel Virtual Machine toolkit and the Message Passing Interface library to achieve high performance at a relatively low cost. Although a cluster may consist of just a few personal computers connected by a simple network, the cluster architecture may also be used to achieve very high levels of performance.
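To make the message-passing model concrete, here is a minimal sketch using the Message Passing Interface mentioned above, written with the mpi4py Python bindings (an assumption for illustration; Beowulf-style clusters more commonly use C or Fortran MPI). Each process computes a partial sum and the results are combined on one node.

# Minimal MPI sketch using the mpi4py bindings (assumed installed alongside an MPI
# implementation); launch with something like: mpirun -n 4 python pi_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # index of this process within the job
size = comm.Get_size()   # total number of cooperating processes

# Each process integrates its own slice of f(x) = 4 / (1 + x^2) over [0, 1];
# the slices sum to an approximation of pi.
n = 1_000_000
local_sum = 0.0
for i in range(rank, n, size):
    x = (i + 0.5) / n
    local_sum += 4.0 / (1.0 + x * x)

# Combine the partial results on the root process.
pi_estimate = comm.reduce(local_sum / n, op=MPI.SUM, root=0)
if rank == 0:
    print("pi is approximately", pi_estimate)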
The TOP500 organization's semiannual list of the 500 fastest supercomputers includes many clusters; e.g. the world's fastest machine in 2011 was the K computer, which has a distributed-memory cluster architecture. Greg Pfister has stated that clusters were not invented by any specific vendor but by customers who could not fit all their work on one computer, or needed a backup. Pfister estimates the date as some time in the 1960s. The formal engineering basis of cluster computing as a means of doing parallel work of any sort was arguably invented by Gene Amdahl of IBM, who in 1967 published what has come to be regarded as the seminal paper on parallel processing: Amdahl's Law. The history of early computer clusters is more or less directly tied into the history of early networks, as one of the primary motivations for the development of a network was to link computing resources, creating a de facto computer cluster. The first production system designed as a cluster was the Burroughs B5700 in the mid-1960s.
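For reference, the standard formulation of Amdahl's Law (supplied here, not quoted from the text above) is: if a fraction p of a workload can be parallelized across N processors while the remainder stays serial, the overall speedup is

S(N) = \frac{1}{(1 - p) + \frac{p}{N}}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p},

so the serial fraction 1 - p bounds what even an arbitrarily large cluster can achieve; with p = 0.95, for example, the speedup can never exceed 20.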
This allowed up to four computers, each with either one or two processors, to be coupled to a common disk storage subsystem in order to distribute the workload. Unlike standard multiprocessor systems, each computer could be restarted without disrupting overall operation. The first commercial loosely coupled clustering product was Datapoint Corporation's "Attached Resource Computer" (ARC) system, developed in 1977, using ARCnet as the cluster interface. Clustering per se did not take off until Digital Equipment Corporation released their VAXcluster product in 1984 for the VAX/VMS operating system; the ARC and VAXcluster products not only supported parallel computing, but also shared file systems and peripheral devices. The idea was to provide the advantages of parallel processing, while maintaining data reliability and uniqueness. Two other noteworthy early commercial clusters were the Tandem Himalayan and the IBM S/390 Parallel Sysplex. Within the same time frame, while computer clusters used parallelism outside the computer on a commodity network, supercomputers began to use parallelism within a single computer.
Following the success of the CDC 6600 in 1964, the Cray 1 was delivered in 1976 and introduced internal parallelism via vector processing. While early supercomputers excluded clusters and relied on shared memory, in time some of the fastest supercomputers (e.g. the K computer) came to rely on cluster architectures.
A Linux distribution is an operating system made from a software collection, based upon the Linux kernel and a package management system. Linux users obtain their operating system by downloading one of the Linux distributions, which are available for a wide variety of systems ranging from embedded devices and personal computers to powerful supercomputers. A typical Linux distribution comprises a Linux kernel, GNU tools and libraries, additional software, documentation, a window system, a window manager, and a desktop environment. Most of the included software is free and open-source software made available both as compiled binaries and in source code form, allowing modifications to the original software. Linux distributions optionally include some proprietary software that may not be available in source code form, such as binary blobs required for some device drivers. A Linux distribution may be described as a particular assortment of application and utility software, packaged together with the Linux kernel in such a way that its capabilities meet the needs of many users.
The software is adapted to the distribution and packaged into software packages by the distribution's maintainers. The software packages are available online in so-called repositories, which are storage locations distributed around the world. Beside glue components, such as the distribution installers or the package management systems, there are only a few packages that are written from the ground up by the maintainers of a Linux distribution. Roughly six hundred Linux distributions exist, with close to five hundred of those in active development. Because of the huge availability of software, distributions have taken a wide variety of forms, including those suitable for use on desktops, laptops, mobile phones and tablets, as well as minimal environments for use in embedded systems. There are commercially backed distributions, such as Fedora, openSUSE and Ubuntu, and community-driven distributions, such as Debian, Slackware and Arch Linux. Most distributions come ready to use and pre-compiled for a specific instruction set, while some distributions are distributed in source code form and compiled locally during installation.
Linus Torvalds developed the Linux kernel and distributed its first version, 0.01, in 1991. Linux was initially distributed as source code only, and later as a pair of downloadable floppy disk images – one bootable and containing the Linux kernel itself, the other with a set of GNU utilities and tools for setting up a file system. Since the installation procedure was complicated, especially in the face of growing amounts of available software, distributions sprang up to simplify this. Early distributions included H. J. Lu's "Boot-root", the aforementioned disk image pair with the kernel and the absolute minimal tools to get started, in late 1991; MCC Interim Linux, made available to the public for download in February 1992; Softlanding Linux System (SLS), released in 1992, which was the most comprehensive distribution for a short time and included the X Window System; and Yggdrasil Linux/GNU/X, a commercial distribution first released in December 1992. The two oldest and still active distribution projects started in 1993. The SLS distribution was not well maintained, so in July 1993 a new distribution, called Slackware and based on SLS, was released by Patrick Volkerding.
Dissatisfied with SLS, Ian Murdock set out to create a free distribution by founding Debian, which had its first release in December 1993. Users were attracted to Linux distributions as alternatives to the DOS and Microsoft Windows operating systems on IBM PC compatible computers, Mac OS on the Apple Macintosh, and proprietary versions of Unix. Most early adopters were familiar with Unix from work or school, and they embraced Linux distributions for their low cost and the availability of the source code for most or all of the software included. The distributions were originally a convenience, offering a free alternative to proprietary versions of Unix, but they later became the usual choice even for Unix and Linux experts. To date, Linux has become more popular in the server and embedded devices markets than in the desktop market. For example, Linux is used on over 50% of web servers, whereas its desktop market share is about 3.7%. Many Linux distributions provide an installation system akin to that provided with other modern operating systems. On the other hand, some distributions, including Gentoo Linux, provide only the binaries of a basic kernel, compilation tools, and an installer; the remaining software is compiled locally from source.
Distributions are segmented into packages. Each package contains a specific application or service; examples of packages are a library for handling the PNG image format, a collection of fonts, or a web browser. The package is typically provided as compiled code, with installation and removal of packages handled by a package management system (PMS) rather than a simple file archiver. Each package intended for such a PMS contains meta-information such as a package description, version, and "dependencies". The package management system can evaluate this meta-information to allow package searches, to perform an automatic upgrade to a newer version, to check that all dependencies of a package are fulfilled, and/or to fulfill them automatically. Alth
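As a rough, hypothetical illustration of how dependency metadata can be used (this is not the actual algorithm of any real package manager such as APT or RPM), a manager can walk each package's declared dependencies and produce an installation order in which every dependency is installed before the package that needs it:

# Hypothetical sketch: ordering package installation from dependency metadata.
# Real package managers also handle versions, conflicts and dependency cycles.
def install_order(package, metadata, installed=None):
    # Return a list of packages ordered so each one's dependencies come first.
    if installed is None:
        installed = []
    for dependency in metadata[package]["dependencies"]:
        if dependency not in installed:
            install_order(dependency, metadata, installed)
    if package not in installed:
        installed.append(package)
    return installed

# Toy repository metadata: a description and dependency list per package.
repository = {
    "libpng":  {"description": "PNG image library", "dependencies": []},
    "gtk":     {"description": "GUI toolkit", "dependencies": ["libpng"]},
    "browser": {"description": "Web browser", "dependencies": ["gtk", "libpng"]},
}
print(install_order("browser", repository))   # ['libpng', 'gtk', 'browser']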
The Fastra II is a desktop supercomputer designed for tomography. It was built in late 2009 by the ASTRA group of researchers at the IBBT VisionLab of the University of Antwerp and by the Belgian computer shop Tones, in collaboration with Asus, a Taiwanese multinational computer product manufacturer, as the successor to the Fastra I. The Fastra II was determined to be over three times faster than the Fastra I, which in turn was faster than a 512-core cluster. However, because of the number of GPUs in the computer, the system suffered from several issues, such as refusing to reboot and overheating due to a lack of space between the video cards. The computer was built as a research and demonstration project by the ASTRA group of researchers at the Vision Lab of the University of Antwerp in Belgium, one of the researchers being Joost Batenburg. Unlike other modern supercomputers such as the Cray Jaguar and the IBM Roadrunner, which cost millions of euros, the Fastra II uses only consumer hardware, costing €6,000 in total.
The Fastra II's predecessor, the Fastra I, has four dual-GPU GeForce 9800 GX2 video cards, for a total of 8 GPUs. At that time, the ASTRA group needed a motherboard that had four PCI Express x16 slots with double spacing between each of them; the only such motherboard the ASTRA group could find at that time was the MSI K9A2 Platinum, which has four such slots. In 2009, the Asus P6T7 WS Supercomputer motherboard, which the Fastra II uses, was released; it has seven PCI Express x16 slots. The Fastra II has six faster dual-GPU GeForce GTX 295 video cards and a single-GPU GeForce GTX 275, for a total of 13 GPUs. In the Fastra II, the GPUs perform tomographic reconstruction; the technique that allows GPUs, rather than CPUs, to perform general-purpose tasks like this outside of gaming is called GPGPU, general-purpose computing on graphics processing units. Overheating caused by the lack of space between the video cards forces researchers using the Fastra II to keep the side panel door open so that the video cards can get fresh air, decreasing the overall temperature inside the case.
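To illustrate why tomographic reconstruction maps well onto GPUs, here is a minimal CPU-side sketch (NumPy, with hypothetical names, not the Fastra II's actual code) of unfiltered back projection: every pixel of the output image can be updated independently for each projection angle, which is exactly the kind of massively data-parallel work that GPGPU frameworks such as CUDA accelerate.

# Hypothetical NumPy sketch of unfiltered back projection; the per-pixel work in
# the loop body is independent, which is what makes it a good fit for GPUs.
import numpy as np

def back_project(sinogram, angles):
    # sinogram: (num_angles, num_detectors) array of projection measurements.
    num_angles, num_detectors = sinogram.shape
    size = num_detectors
    center = size // 2
    ys, xs = np.mgrid[0:size, 0:size] - center
    image = np.zeros((size, size))
    for i, theta in enumerate(angles):
        # Detector bin that each pixel projects onto at this angle.
        t = xs * np.cos(theta) + ys * np.sin(theta)
        bins = np.clip(np.round(t).astype(int) + center, 0, num_detectors - 1)
        image += sinogram[i, bins]    # every pixel updated independently
    return image / num_angles

angles = np.linspace(0, np.pi, 180, endpoint=False)
sinogram = np.random.rand(len(angles), 128)    # placeholder projection data
print(back_project(sinogram, angles).shape)    # (128, 128)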
Due to the number of GPUs in the system, its initial boot was unsuccessful. This was because its motherboard uses a 32-bit BIOS, which only had 3 GB of address space for the video cards. However, Asus managed to provide the researchers with a specialized BIOS that skipped the address space allocation of the GTX 295 video cards; the BIOS replacement coreboot was not tested. All seven PCI Express x16 slots in the Asus P6T7 motherboard were used in building the Fastra II. However, the video cards in the Fastra II are wide enough to require two such slots each. To solve this issue, the researchers used flexible PCI Express cables, and Tones developed a custom cage which allowed the video cards to be suspended over the motherboard. Like the Fastra I, the Fastra II uses a Lian Li PC-P80 Armorsuit case; the motherboard in the Fastra II was at that time the only workstation motherboard that had seven full-sized PCI Express x16 slots. The memory modules were originally six 2 GB modules, but were upgraded to 4 GB each, for a total of 24 GB.
Instead of a seventh dual-GPU video card, the single-GPU GTX 275 is in the computer because, out of all the video cards in the Fastra II, the GTX 275 is the only one the Fastra II's BIOS can initialize. The total number of GPUs is thus 13; together, the video cards deliver about 12 teraflops of computing power. Four of the six GTX 295 video cards have two PCBs, while the other two have only one PCB. According to the benchmarks on its official website, the Fastra II is faster and more power efficient than its competitors, including the Fastra I and the Tesla C1060 video card. The benchmarks were performed on the Fastra II, the Fastra I, a 512-core cluster, an Nvidia Tesla C1060 workstation card in an Intel Core i7 940 system, and on an Intel Core i7 940 CPU itself. The Fastra II is over three times faster than the Fastra I. Although the Fastra II consumes more power than the Fastra I, it is nearly three times as energy efficient as the Fastra I, and over 300 times as energy efficient as the 512-core cluster. The video cards run at 37 degrees Celsius when idle and at 60 degrees Celsius at full load.
The operating system is CentOS, a community-driven Linux distribution and Red Hat Enterprise Linux clone. The Fastra II received a positive public reception. Techie.com called it the "world's most powerful desktop-sized supercomputer", describing it as a computer with "so much power in such a small space." ITech News Net called it "the Most Powerful Desktop Supercomputer". The Fastra II relies on Nvidia's Scalable Link Interface and is therefore limited to the number of GPUs supported by it and by the device drivers, both the vendor's and the free and open-source ones. The Fastra II's motherboard is designed for workstations, and the system is being used in hospitals for medical imaging. It remains to be seen whether another Fastra featuring NVLink, first available with Pascal-based GPUs, will be built.
The Berkeley Software Distribution (BSD) was an operating system based on Research Unix and distributed by the Computer Systems Research Group (CSRG) at the University of California, Berkeley. Today, "BSD" refers to its descendants, such as FreeBSD, OpenBSD, NetBSD, or DragonFly BSD. BSD was also called Berkeley Unix because it was based on the source code of the original Unix developed at Bell Labs. In the 1980s, BSD was widely adopted by workstation vendors in the form of proprietary Unix variants such as DEC Ultrix and Sun Microsystems SunOS, due to its permissive licensing and familiarity to many technology company founders and engineers. Although these proprietary BSD derivatives were largely superseded in the 1990s by UNIX SVR4 and OSF/1, later BSD releases provided the basis for several open-source operating systems, including FreeBSD, OpenBSD, NetBSD, DragonFly BSD and TrueOS. These, in turn, have been used by proprietary operating systems, including Apple's macOS and iOS, which derived from them, and Microsoft Windows, which used a part of BSD's TCP/IP code.
The earliest distributions of Unix from Bell Labs in the 1970s included the source code to the operating system, allowing researchers at universities to modify and extend Unix. The operating system arrived at Berkeley in 1974, at the request of computer science professor Bob Fabry, who had been on the program committee for the Symposium on Operating Systems Principles where Unix was first presented. A PDP-11/45 was bought to run the system, but for budgetary reasons this machine was shared with the mathematics and statistics groups at Berkeley, who used RSTS, so that Unix only ran on the machine eight hours per day. A larger PDP-11/70 was installed at Berkeley the following year, using money from the Ingres database project. In 1975, Ken Thompson came to Berkeley as a visiting professor, where he started working on a Pascal implementation for the system. Graduate students Chuck Haley and Bill Joy improved Thompson's Pascal and implemented an improved text editor, ex. Other universities became interested in the software at Berkeley, so in 1977 Joy started compiling the first Berkeley Software Distribution (1BSD), which was released on March 9, 1978.
1BSD was an add-on to Version 6 Unix rather than a complete operating system in its own right; some thirty copies were sent out. The second Berkeley Software Distribution (2BSD), released in May 1979, included updated versions of the 1BSD software as well as two new programs by Joy that persist on Unix systems to this day: the vi text editor and the C shell. Some 75 copies of 2BSD were sent out by Bill Joy. A VAX computer was installed at Berkeley in 1978, but the port of Unix to the VAX architecture, UNIX/32V, did not take advantage of the VAX's virtual memory capabilities. The kernel of 32V was largely rewritten by Berkeley students to include a virtual memory implementation, and a complete operating system, including the new kernel, ports of the 2BSD utilities to the VAX and the utilities from 32V, was released as 3BSD at the end of 1979. 3BSD was alternatively called Virtual VAX/UNIX or VMUNIX, and BSD kernel images were normally called /vmunix until 4.4BSD. After 4.3BSD was released in June 1986, it was determined that BSD would move away from the aging VAX platform.
The Power 6/32 platform developed by Computer Consoles Inc. seemed promising at the time, but was abandoned by its developers shortly thereafter. Nonetheless, the 4.3BSD-Tahoe port proved valuable, as it led to a separation of machine-dependent and machine-independent code in BSD which would improve the system's future portability. In addition to portability, the CSRG worked on an implementation of the OSI network protocol stack, improvements to the kernel virtual memory system, and new TCP/IP algorithms to accommodate the growth of the Internet. Until then, all versions of BSD used proprietary AT&T Unix code and were therefore subject to an AT&T software license. Source code licenses had become expensive, and several outside parties had expressed interest in a separate release of the networking code, which had been developed outside AT&T and would not be subject to the licensing requirement. This led to Networking Release 1 (Net/1), made available to non-licensees of AT&T code and redistributable under the terms of the BSD license.
It was released in June 1989. After Net/1, BSD developer Keith Bostic proposed that more non-AT&T sections of the BSD system be released under the same license as Net/1. To this end, he started a project to reimplement most of the standard Unix utilities without using the AT&T code. Within eighteen months, all of the AT&T utilities had been replaced, and it was determined that only a few AT&T files remained in the kernel. These files were removed, and the result was the June 1991 release of Networking Release 2 (Net/2), a nearly complete operating system that was freely distributable. Net/2 was the basis for two separate ports of BSD to the Intel 80386 architecture: the free 386BSD by William Jolitz and the proprietary BSD/386 by Berkeley Software Design (BSDi). 386BSD itself was short-lived, but became the initial code base of the NetBSD and FreeBSD projects that were started shortly thereafter. BSDi soon found itself in legal trouble with AT&T's Unix System Laboratories (USL) subsidiary, then the owners of the System V copyright and the Unix trademark.
The USL v. BSDi lawsuit was filed in 1992 and led to an injunction on the distribution of Net/2 until the validity of USL's copyright claims on the source could be determined; the lawsuit slowed development of the free-software descendants of BSD for nearly two years while their legal status was in question.