The PDP-11 is a series of 16-bit minicomputers sold by Digital Equipment Corporation from 1970 into the 1990s, one of a succession of products in the PDP series. In total, around 600,000 PDP-11s of all models were sold, making it one of DEC's most successful product lines; the PDP-11 is considered by some experts to be the most popular minicomputer ever. The PDP-11 included a number of innovative features in its instruction set, and its additional general-purpose registers made it much easier to program than earlier models in the series. Additionally, the innovative Unibus system allowed external devices to be interfaced to the system using direct memory access, opening the system to a wide variety of peripherals. The PDP-11 replaced the PDP-8 in many real-time applications, although both product lines lived in parallel for more than 10 years. The PDP-11's ease of programming also made it popular for general-purpose computing, and its design inspired late-1970s microprocessors including the Intel x86 and the Motorola 68000.
Design features of PDP-11 operating systems, as well as other operating systems from Digital Equipment, influenced the design of operating systems such as CP/M and hence MS-DOS. The first named version of Unix ran on the PDP-11/20 in 1970. It is stated that the C programming language took advantage of several low-level PDP-11–dependent programming features, albeit not by design. An effort to expand the PDP-11 from 16- to 32-bit addressing led to the VAX-11 design, which took part of its name from the PDP-11. In 1963, DEC introduced what is considered to be the first commercial minicomputer in the form of the PDP-5; this was a 12-bit design adapted from the 1962 LINC machine, intended to be used in a lab setting. DEC simplified the LINC system and instruction set, aiming the PDP-5 at smaller settings that did not need the power of its larger 18-bit PDP-4; the PDP-5 was a success, selling about 50,000 examples. During this period, the computer market was moving from word lengths based on units of 6 bits to units of 8 bits, following the introduction of the 7-bit ASCII standard.
In 1967–68, DEC engineers designed a 16-bit machine, the PDP-X, but management cancelled the project. Several of the engineers from the PDP-X left to form Data General; the next year they introduced the 16-bit Data General Nova. The Nova was a major success, selling tens of thousands of units and launching what would become one of DEC's major competitors through the 1970s and 80s. A subsequent effort, code-named "Desk Calculator", looked at a variety of options before choosing what became the 16-bit PDP-11. DEC sold over 170,000 PDP-11s in the 1970s. Originally manufactured from small-scale transistor–transistor logic, a single-board large-scale integration version of the processor was developed in 1975, and a two- or three-chip processor, the J-11, was developed in 1979. The last models of the PDP-11 line were the PDP-11/94 and -11/93, introduced in 1990. The PDP-11 processor architecture has a mostly orthogonal instruction set. For example, instead of instructions such as load and store, the PDP-11 has a move instruction for which either operand can be memory or register.
There are no dedicated input/output instructions. More complex instructions such as add can likewise have memory or register as source or destination. Most operands can apply any of eight addressing modes to eight registers; the addressing modes provide register, absolute, relative and indexed addressing, and can specify autoincrementation and autodecrementation of a register by one or two. Use of relative addressing lets a machine-language program be position-independent. Early models of the PDP-11 had no dedicated bus for input/output, only a system bus called the Unibus, as input and output devices were mapped to memory addresses. An input/output device determined the memory addresses to which it would respond, and specified its own interrupt vector and interrupt priority; this flexible framework provided by the processor architecture made it unusually easy to invent new bus devices, including devices to control hardware that had not been contemplated when the processor was designed. DEC published the basic Unibus specifications and offered prototyping bus interface circuit boards, encouraging customers to develop their own Unibus-compatible hardware.
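The orthogonality described above is visible in the instruction encoding itself: a double-operand instruction such as MOV packs a 4-bit opcode with two independent 6-bit operand fields, each a 3-bit addressing mode plus a 3-bit register. As a rough illustration (a Python sketch, not DEC tooling), the fields can be decoded like this:

```python
# Decode the operand fields of a PDP-11 double-operand instruction word.
# Each operand is six bits: a 3-bit addressing mode plus a 3-bit register,
# and any mode can be paired with any register for either operand.

MODES = [
    "register",                # mode 0: Rn
    "register deferred",       # mode 1: (Rn)
    "autoincrement",           # mode 2: (Rn)+
    "autoincrement deferred",  # mode 3: @(Rn)+
    "autodecrement",           # mode 4: -(Rn)
    "autodecrement deferred",  # mode 5: @-(Rn)
    "index",                   # mode 6: X(Rn)
    "index deferred",          # mode 7: @X(Rn)
]

def decode_double_operand(word):
    """Split a 16-bit instruction word into opcode, source and destination."""
    opcode = (word >> 12) & 0o17
    src = (MODES[(word >> 9) & 0o7], (word >> 6) & 0o7)
    dst = (MODES[(word >> 3) & 0o7], word & 0o7)
    return opcode, src, dst

# MOV R2, R3 assembles to 0o010203: opcode 01 (MOV), source mode 0
# register 2, destination mode 0 register 3.
print(decode_double_operand(0o010203))
# → (1, ('register', 2), ('register', 3))
```

The same decoder handles any mode/register combination, e.g. `MOV (R0)+, -(R6)` (word `0o012046`) decodes to autoincrement source and autodecrement destination, which is precisely what "orthogonal" means here.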
The Unibus made the PDP-11 suitable for custom peripherals. One of the predecessors of Alcatel-Lucent, the Bell Telephone Manufacturing Company, developed the BTMC DPS-1500 packet-switching network and used PDP-11s in the regional and national network management system, with the Unibus directly connected to the DPS-1500 hardware. Higher-performance members of the PDP-11 family, starting with the PDP-11/45 Unibus and 11/83 Q-bus systems, departed from the single-bus approach. Instead, memory was interfaced by dedicated circuitry and space in the CPU cabinet, while the Unibus continued to be used for I/O only. In the PDP-11/70, this was taken a step further, with the addition of a dedicated interface between disks and tapes and memory, via the Massbus. Although input/output devices continued to be mapped into memory addresses, some additional programming was necessary to set up the added bus interfaces; the PDP-11 supports hardware interrupts at four priority levels. Interrupts are serviced by software service routines, which could specify
Solaris (operating system)
Solaris is a Unix operating system developed by Sun Microsystems. It superseded their earlier SunOS in 1993. In 2010, after the Sun acquisition by Oracle, it was renamed Oracle Solaris. Solaris is known for its scalability on SPARC systems, and for originating many innovative features such as DTrace, ZFS and Time Slider. Solaris supports SPARC and x86-64 servers from Oracle and other vendors. Solaris is registered as compliant with the Single UNIX Specification. Solaris was developed as proprietary software. In June 2005, Sun Microsystems released most of the codebase under the CDDL license and founded the OpenSolaris open-source project. With OpenSolaris, Sun wanted to build a user community around the software. After the acquisition of Sun Microsystems in January 2010, Oracle decided to discontinue the OpenSolaris distribution and the development model. In August 2010, Oracle stopped providing public updates to the source code of the Solaris kernel, effectively turning Solaris 11 back into a closed-source, proprietary operating system.
Following that, in 2011 the Solaris 11 kernel source code leaked to BitTorrent. However, through the Oracle Technology Network, industry partners can still gain access to the in-development Solaris source code. Source code for the open-source components of Solaris 11 is available for download from Oracle. In 1987, AT&T Corporation and Sun announced that they were collaborating on a project to merge the most popular Unix variants on the market at that time: Berkeley Software Distribution, UNIX System V, and Xenix; this became Unix System V Release 4. On September 4, 1991, Sun announced that it would replace its existing BSD-derived Unix, SunOS 4, with one based on SVR4; this was identified internally as SunOS 5, but a new marketing name was introduced at the same time: Solaris 2. The justification for this new overbrand was that it encompassed not only SunOS, but also the OpenWindows graphical user interface and Open Network Computing functionality. Although SunOS 4.1.x micro releases were retroactively named Solaris 1 by Sun, the Solaris name is used to refer only to the releases based on SVR4-derived SunOS 5.0 and later.
For releases based on SunOS 5, the SunOS minor version is included in the Solaris release number. For example, Solaris 2.4 incorporates SunOS 5.4. After Solaris 2.6, the "2." was dropped from the release name, so Solaris 7 incorporates SunOS 5.7, and the latest release, SunOS 5.11, forms the core of Solaris 11.4. Although SunSoft stated in its initial Solaris 2 press release its intent to support both SPARC and x86 systems, the first two Solaris 2 releases, 2.0 and 2.1, were SPARC-only. An x86 version of Solaris 2.1 was released in June 1993, about six months after the SPARC version, as a desktop and uniprocessor workgroup server operating system. It included the Wabi emulator to support Windows applications. At the time, Sun also offered the Interactive Unix system that it had acquired from Interactive Systems Corporation. In 1994, Sun released Solaris 2.4, supporting both SPARC and x86 systems from a unified source code base. On September 2, 2017, Simon Phipps, a former Sun Microsystems employee not hired by Oracle in the acquisition, reported on Twitter that Oracle had laid off the Solaris core development staff, which many interpreted as a sign that Oracle no longer intended to support future development of the platform.
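The numbering rule described above is mechanical enough to express in a few lines. The sketch below (a hypothetical helper, written in Python purely for illustration) maps a SunOS 5.x minor version to its Solaris marketing name:

```python
def solaris_name(sunos_minor):
    """Marketing name for a SunOS 5.x release (illustrative helper).

    Releases up to SunOS 5.6 were sold as "Solaris 2.x"; from SunOS 5.7
    onward the leading "2." was dropped, so SunOS 5.7 became Solaris 7.
    """
    if sunos_minor <= 6:
        return f"Solaris 2.{sunos_minor}"
    return f"Solaris {sunos_minor}"

for minor in (4, 6, 7, 11):
    print(f"SunOS 5.{minor} -> {solaris_name(minor)}")
```

So SunOS 5.4 maps to Solaris 2.4, while SunOS 5.11 maps to Solaris 11, matching the examples in the text.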
While Oracle did lay off a large number of Solaris development engineers, development continues: Solaris 11.4 was released in 2018. Solaris uses a common code base for the platforms it supports: SPARC and i86pc. Solaris has a reputation for being well-suited to symmetric multiprocessing, supporting a large number of CPUs. It has been integrated with Sun's SPARC hardware, with which it is marketed as a combined package. This has led to more reliable systems, but at a cost premium compared to commodity PC hardware. However, it has supported x86 systems since Solaris 2.1 and 64-bit x86 applications since Solaris 10, allowing Sun to capitalize on the availability of commodity 64-bit CPUs based on the x86-64 architecture. Sun has marketed Solaris for use with both its own "x64" workstations and servers based on AMD Opteron and Intel Xeon processors, as well as x86 systems manufactured by companies such as Dell, Hewlett-Packard, and IBM. As of 2009, the following vendors supported Solaris for their x86 server systems: Dell, which would "test and optimize Solaris and OpenSolaris on its rack and blade servers and offer them as one of several choices in the overall Dell software menu"; Intel; Hewlett Packard Enterprise, which distributes and provides software technical support for Solaris on its BL, DL, and SL platforms; and Fujitsu Siemens. As of July 2010, Dell and HP certify and resell Oracle Solaris, Oracle Enterprise Linux and Oracle VM on their respective x86 platforms, while IBM has stopped direct support for Solaris on x64 kit.
Solaris 2.5.1 included support for the PowerPC platform, but the port was canceled before the Solaris 2.6 release. In January 2006, a community of developers at Blastwave began work on a PowerPC port which they named Polaris. In October 2006, an OpenSolaris community project based on the Blastwave efforts and Sun Labs' Project Pulsar, which re-integrated the relevant parts from Solaris 2.5.1 into OpenSolaris, announced its first official source code release. A port of Solaris to the Intel Itanium architecture was announced in 1997 but never brought to market. On November 28, 2007, IBM and Sine Nomine Associates demonstrated a preview of OpenSolaris for System z, called Sirius, running on an IBM System z mainframe under z/VM.
Unix is a family of multitasking, multiuser computer operating systems that derive from the original AT&T Unix, whose development started in the 1970s at the Bell Labs research center by Ken Thompson, Dennis Ritchie, and others. Initially intended for use inside the Bell System, Unix was licensed by AT&T to outside parties in the late 1970s, leading to a variety of both academic and commercial Unix variants from vendors including the University of California, Microsoft, IBM, and Sun Microsystems. In the early 1990s, AT&T sold its rights in Unix to Novell, which sold its Unix business to the Santa Cruz Operation in 1995; the UNIX trademark passed to The Open Group, a neutral industry consortium, which allows the use of the mark for certified operating systems that comply with the Single UNIX Specification. As of 2014, the Unix version with the largest installed base is Apple's macOS. Unix systems are characterized by a modular design, sometimes called the "Unix philosophy"; this concept entails that the operating system provides a set of simple tools that each performs a limited, well-defined function, with a unified filesystem as the main means of communication, and a shell scripting and command language to combine the tools to perform complex workflows.
Unix distinguishes itself from its predecessors as the first portable operating system: almost the entire operating system is written in the C programming language, allowing Unix to reach numerous platforms. Unix was meant to be a convenient platform for programmers developing software to be run on it and on other systems, rather than for non-programmers; the system grew larger as the operating system started spreading in academic circles, as users added their own tools to the system and shared them with colleagues. At first, Unix was not designed to be multi-tasking; it later gained portability, multi-tasking and multi-user capabilities in a time-sharing configuration. Unix systems are characterized by various concepts, such as the use of plain text for storing data; these concepts are collectively known as the "Unix philosophy". Brian Kernighan and Rob Pike summarize this in The Unix Programming Environment as "the idea that the power of a system comes more from the relationships among programs than from the programs themselves".
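The "relationships among programs" idea is easiest to see in a pipeline, where small tools are chained through pipes with plain text as the interchange format. The following Python sketch emulates the classic `sort | uniq -c | sort -rn` word-frequency pipeline by running each stage as a separate process; it assumes the standard POSIX `sort` and `uniq` utilities are available on the system:

```python
import subprocess

def run_stage(cmd, text):
    """Run one pipeline stage, feeding `text` to stdin and capturing stdout."""
    return subprocess.run(cmd, input=text, capture_output=True,
                          text=True, check=True).stdout

# Three small tools, each doing one job, composed through plain text --
# the equivalent of the shell pipeline `sort | uniq -c | sort -rn`.
text = "apple\nbanana\napple\ncherry\napple\n"
out = run_stage(["sort"], text)        # group identical lines together
out = run_stage(["uniq", "-c"], out)   # collapse runs, prefix with a count
out = run_stage(["sort", "-rn"], out)  # order by count, most frequent first
print(out)
```

The most frequent word ("apple", count 3) ends up on the first line; no single stage knows anything about word frequencies, which is the point of the philosophy.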
In an era when a standard computer consisted of a hard disk for storage and a data terminal for input and output, the Unix file model worked quite well, as I/O was linear. In the 1980s, non-blocking I/O and the set of inter-process communication mechanisms were augmented with Unix domain sockets, shared memory, message queues, and semaphores, and network sockets were added to support communication with other hosts. As graphical user interfaces developed, the file model proved inadequate to the task of handling asynchronous events such as those generated by a mouse. By the early 1980s, users began seeing Unix as a potential universal operating system, suitable for computers of all sizes; the Unix environment and the client–server program model were essential elements in the development of the Internet and the reshaping of computing as centered in networks rather than in individual computers. Both Unix and the C programming language were developed by AT&T and distributed to government and academic institutions, which led to both being ported to a wider variety of machine families than any other operating system.
Under Unix, the operating system consists of many libraries and utilities along with the master control program, the kernel. The kernel provides services to start and stop programs, handles the file system and other common "low-level" tasks that most programs share, and schedules access to avoid conflicts when programs try to access the same resource or device simultaneously. To mediate such access, the kernel has special rights, reflected in the division between user space and kernel space, although in microkernel implementations, like MINIX or Redox, functions such as network protocols may run in user space. The origins of Unix date back to the mid-1960s, when the Massachusetts Institute of Technology, Bell Labs, and General Electric were developing Multics, a time-sharing operating system for the GE-645 mainframe computer. Multics featured several innovations, but presented severe problems. Frustrated by the size and complexity of Multics, but not by its goals, individual researchers at Bell Labs started withdrawing from the project.
The last to leave were Ken Thompson, Dennis Ritchie, Douglas McIlroy, and Joe Ossanna, who decided to apply their experience to a new project of smaller scale. This new operating system initially had neither organizational backing nor a name, and it was a single-tasking system. In 1970, the group coined the name Unics, for Uniplexed Information and Computing Service, as a pun on Multics, which stood for Multiplexed Information and Computer Services. Brian Kernighan takes credit for the idea, but adds that "no one can remember" the origin of the final spelling Unix. Dennis Ritchie, Doug McIlroy, and Peter G. Neumann credit Kernighan. The operating system was originally written in assembly language, but in 1973, Version 4 Unix was rewritten in C. Version 4 Unix, however, still had much PDP-11-dependent code and was not suitable for porting; the first port to another platform was made five years f
A screensaver is a computer program that blanks the screen or fills it with moving images or patterns when the computer is not in use. The original purpose of screensavers was to prevent phosphor burn-in on CRT and plasma computer monitors. Though modern monitors are not susceptible to this issue, screensavers are still used for other purposes. Screensavers are often set up to offer a basic layer of security by requiring a password to re-access the device; some screensavers use the otherwise unused computer resources to do useful work, such as processing for distributed computing projects. As well as computers, modern television operating systems, media players and other digital entertainment systems include optional screensavers. Before the advent of LCD screens, most computer screens were based on cathode ray tubes; when the same image is displayed on a CRT screen for long periods, the properties of the exposed areas of the phosphor coating on the inside of the screen gradually and permanently change, leading to a darkened shadow or "ghost" image on the screen, called screen burn-in.
Cathode ray televisions and other devices that use CRTs are all susceptible to phosphor burn-in, as are plasma displays to some extent. Screen-saver programs were designed to help avoid these effects by automatically changing the images on the screen during periods of user inactivity. For CRTs used in public, such as ATMs and railway ticketing machines, the risk of burn-in is high because a stand-by display is shown whenever the machine is not in use. Older machines designed without taking burn-in into consideration often display evidence of screen damage, with images or text such as "Please insert your card" visible even when the display changes while the machine is in use. Blanking the screen is out of the question, as the machine would appear to be out of service. In these applications, burn-in can be prevented by shifting the position of the display contents every few seconds, or by having a number of different images that are changed regularly. Modern CRTs are much less susceptible to burn-in than older models due to improvements in phosphor coatings, and because modern computer images are lower contrast than the stark green- or white-on-black text and graphics of earlier machines.
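The position-shifting technique described above can be sketched as a simple cycle of pixel offsets: a kiosk that redraws its standby screen at the next offset every few seconds spreads wear across neighbouring phosphor dots. This is an illustrative sketch only, not code from any real device firmware:

```python
import itertools

def shift_offsets(max_shift=2):
    """Yield an endless cycle of (dx, dy) offsets for the display content.

    Cycling the standby image through a small set of positions means no
    single phosphor area holds a static image indefinitely, which is the
    burn-in countermeasure used by ATM- and kiosk-style displays.
    """
    positions = [(dx, dy)
                 for dx in range(max_shift + 1)
                 for dy in range(max_shift + 1)]
    return itertools.cycle(positions)

offsets = shift_offsets(1)
# Six consecutive redraws: the cycle wraps around after four positions.
print([next(offsets) for _ in range(6)])
# → [(0, 0), (0, 1), (1, 0), (1, 1), (0, 0), (0, 1)]
```

A real implementation would redraw the framebuffer at each offset on a timer; the essential idea is only the bounded, repeating displacement.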
LCD computer monitors, including the display panels used in laptop computers, are not susceptible to burn-in because the image is not directly produced by phosphors. While modern screens are not susceptible to the issues discussed above, screensavers are still used, chiefly for decorative or entertainment purposes, or for password protection. They feature moving images or patterns and sometimes sound effects. As screensavers are expected to activate when users are away from their machines, many screensavers can be configured to ask users for a password before permitting the user to resume work. This is a basic security measure against another person accessing the machine while the user is absent. Some screensavers activate a useful background task, such as a virus scan or a distributed computing application; this allows such applications to use resources only while the computer would otherwise be idle. Decades before the first computers using this technology were invented, Robert A. Heinlein gave an example of how they might be used in his novel Stranger In A Strange Land: Opposite his chair was a stereovision tank disguised as an aquarium.
The first screensaver was written for the original IBM PC by John Socha, best known for creating the Norton Commander. The screensaver, named scrnsave, was published in the December 1983 issue of Softalk magazine; it blanked the screen after three minutes of inactivity. By 1983, a Zenith Data Systems executive included "screen-saver" among the new Z-29 computer terminal's features, telling InfoWorld that it "blanks out the display after 15 minutes of nonactivity, preventing burned-in character displays". The first screensaver that allowed users to change the activating time was released on Apple's Lisa in 1983. The Atari 400 and 800's screens would go through random screensaver-like color changes if they were left inactive for about 8 minutes, and normal users had no control over this; these computers, released in 1979, thus technically included earlier "screen savers". Prior to these computers, the 1977 Atari VCS/2600 gaming console included color cycling in games like Combat or Breakout, in order to prevent burn-in of game images to 1970s-era televisions.
In addition, the first model of the TI-30 calculator from 1976 featured a screensaver, which consisted of a decimal point running across the display after 30 seconds of inactivity. This was chiefly used to save battery power, as the LED display was more power-intensive than LCD models; these are examples of screensavers implemented in a device's firmware. Today, with the help of modern graphics technologies, there is a wide variety of screensavers, including 3D screensavers that use 3D computer graphics to render realistic environments. Screensavers are designed and coded using a variety of programming languages as well as graphics interfaces; authors of screensavers often use the C or C++ programming languages, along with the Graphics Device Interface, DirectX, or OpenGL, to craft their final products. Several OS X screensavers are designed using Quartz Composer; the scree
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is executed directly by the hardware and makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows, with a market share of around 82.74%; macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, Google's Android accounted for up to 70% of use in 2017, and according to third-quarter 2016 data, Android on smartphones is dominant with 87.5 percent and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a yearly decrease in market share of 5.2 percent, while other operating systems amount to just 0.3 percent.
Linux distributions are dominant in supercomputing sectors. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently; this is achieved by time-sharing, where the available processor time is divided between multiple processes. These processes are each interrupted in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized in preemptive and co-operative types. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux—as well as non-Unix-like, such as AmigaOS—support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking.
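Cooperative multitasking can be sketched with Python generators: each "process" does a unit of work and then voluntarily yields control back to a round-robin scheduler, the same discipline 16-bit Windows relied on. This is an illustrative sketch only; note that a task that never yields would starve all the others, which is the weakness preemptive designs remove:

```python
log = []

def task(name, steps):
    """A cooperative 'process': does one unit of work, then yields the CPU."""
    for i in range(steps):
        log.append(f"{name}{i}")
        yield  # voluntary context switch back to the scheduler

def run_cooperative(tasks):
    """Round-robin scheduler: runs each task until it yields or finishes."""
    queue = list(tasks)
    while queue:
        current = queue.pop(0)
        try:
            next(current)
            queue.append(current)  # re-queue for its next time slice
        except StopIteration:
            pass                   # task completed; drop it from the queue

run_cooperative([task("A", 2), task("B", 2)])
print(log)
# → ['A0', 'B0', 'A1', 'B1']
```

The interleaved log shows the tasks taking turns; in a preemptive system the interleaving would instead be enforced by timer interrupts, with no cooperation required from the tasks.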
32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer; the development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
In an OS, distributed and cloud computing context, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for creating multiple running virtual machines. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems; they are designed to operate on small machines like PDAs with less autonomy. They are able to operate with a limited number of resources, and they are compact and efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single-address-space machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled use of runtime libraries and parallel processing; when personal computers became popular in the 1980s, operating systems were made for them, similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the pri
NetBSD is a free and open-source Unix-like operating system based on the Berkeley Software Distribution. It was the first open-source BSD descendant released after 386BSD was forked, and it continues to be developed and is available for many platforms, including servers, handheld devices and embedded systems. The NetBSD project focuses on code clarity, careful design and portability across many computer architectures; its source code is permissively licensed. NetBSD was derived from the 4.3BSD-Reno release of the Berkeley Software Distribution from the Computer Systems Research Group of the University of California, via their Net/2 source code release and the 386BSD project. The NetBSD project began as a result of frustration within the 386BSD developer community with the pace and direction of the operating system's development; the four founders of the NetBSD project, Chris Demetriou, Theo de Raadt, Adam Glass and Charles Hannum, felt that a more open development model would benefit the project: one centered on portable, correct code.
They aimed to produce a multi-platform, production-quality, BSD-based operating system. The name "NetBSD" was suggested by De Raadt, based on the importance and growth of networks such as the Internet at that time, and on the distributed, collaborative nature of its development. The NetBSD source code repository was established on 21 March 1993, and the first official release, NetBSD 0.8, was made on 19 April 1993. This was derived from 386BSD 0.1 plus the version 0.2.2 unofficial patchkit, with several programs from the Net/2 release missing from 386BSD re-integrated, and various other improvements. The first multi-platform release, NetBSD 1.0, was made in October 1994; updated with 4.4BSD-Lite sources, it was free of all encumbered 4.3BSD Net/2 code. In 1994, for disputed reasons, one of the founders, Theo de Raadt, was removed from the project; he went on to found OpenBSD from a forked version of NetBSD 1.0 near the end of 1995. In 1998, NetBSD 1.3 introduced the pkgsrc packages collection.
Until 2004, NetBSD 1.x releases were made at annual intervals, with minor "patch" releases in between. From release 2.0 onwards, NetBSD uses semantic versioning: each major NetBSD release corresponds to an incremented major version number, i.e. the major releases following 2.0 are 3.0, 4.0 and so on. The previous minor releases are now divided into two categories: x.y "stable" maintenance releases and x.y.z releases containing only security and critical fixes. As the project's motto suggests, NetBSD has been ported to a large number of 32- and 64-bit architectures; these range from VAX minicomputers to Pocket PC PDAs. As of 2009, NetBSD supports 57 hardware platforms; the kernel and userland for these platforms are all built from a central, unified source-code tree managed by CVS. Unlike other kernels such as μClinux, the NetBSD kernel requires the presence of an MMU in any given target architecture. NetBSD's portability is aided by the use of hardware abstraction layer interfaces for low-level hardware access such as bus input/output or DMA.
Using this portability layer, device drivers can be split into "machine-independent" and "machine-dependent" components. This makes a single driver usable on several platforms by hiding hardware access details, and reduces the work needed to port it to a new system; it permits a particular device driver for a PCI card to work without modifications, whether it is in a PCI slot on an IA-32, PowerPC, SPARC, or other architecture with a PCI bus. A single driver for a specific device can also operate via several different buses, like ISA, PCI, or PC Card. In comparison, Linux device driver code must often be reworked for each new architecture; as a consequence, in porting efforts by NetBSD and Linux developers, NetBSD has taken much less time to port to new hardware. This platform independence has aided the development of embedded systems since NetBSD 1.6, when the entire toolchain of compilers, assemblers and other tools began to support cross-compiling. In 2005, as a demonstration of NetBSD's portability and suitability for embedded applications, Technologic Systems, a vendor of embedded systems hardware, designed and demonstrated a NetBSD-powered kitchen toaster.
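The machine-independent/machine-dependent split can be illustrated with a toy model (hypothetical classes sketched in Python, not NetBSD's actual bus_space API): the driver core talks only to an abstract bus object, so swapping in a different machine-dependent layer requires no driver changes at all:

```python
class MachineDependentBus:
    """Machine-dependent layer: knows how this architecture reaches the bus.

    In a real kernel this would issue architecture-specific memory-mapped
    or port I/O; here a dictionary stands in for the device registers.
    """
    def __init__(self, arch):
        self.arch = arch
        self.registers = {}

    def write(self, offset, value):
        self.registers[offset] = value

    def read(self, offset):
        return self.registers.get(offset, 0)

class NicDriver:
    """Machine-independent driver core: it only uses the bus abstraction,
    so the identical code runs unchanged on any architecture whose
    machine-dependent layer implements read/write."""
    CTRL_REG = 0x00

    def __init__(self, bus):
        self.bus = bus

    def reset(self):
        self.bus.write(self.CTRL_REG, 1)      # request a device reset
        return self.bus.read(self.CTRL_REG) == 1

# The same driver class works over different machine-dependent layers.
for arch in ("ia32-pci", "sparc-pci"):
    print(arch, NicDriver(MachineDependentBus(arch)).reset())
```

Porting the driver to a new architecture then means writing only the small machine-dependent bus layer, which is the source of the porting-time advantage described above.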
Commercial ports to embedded platforms, including the AMD Geode LX800, Freescale PowerQUICC processors, Marvell Orion, the AMCC 405 family of PowerPC processors, and the Intel XScale IOP and IXP series, were available from and supported by Wasabi Systems. The NetBSD cross-compiling framework lets a developer build a complete NetBSD system for an architecture from a more powerful system of a different architecture, including on a different operating system. Several embedded systems using NetBSD have required no additional software development other than the toolchain and the target rehost. NetBSD features pkgsrc, a framework for building and managing third-party application software packages; the pkgsrc collection consists of more than 18,000 packages as of April 2018. Building and installing packages such as KDE, GNOME, the Apache HTTP Server or Perl is performed through the use of a system of makefiles; this can automatically fetch the source code, then patch, configure and install the package such that it can be removed again later.
An alternative to compiling from source is to use a precompiled binary package. In either case, any prerequisites and dependencies are installed automatically by the package system, without need for manual intervention. Pkgsrc supports not only NetBSD, but several other BSD variants like
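A typical pkgsrc session for the two installation routes described above might look like the following. The package path and version shown are illustrative (www/apache24 for the Apache HTTP Server), and the commands assume a pkgsrc tree checked out at /usr/pkgsrc, the conventional location:

```shell
# Build from source: each package lives in a category/name directory
# whose Makefile drives fetch, patch, configure, build, and install.
cd /usr/pkgsrc/www/apache24
make install clean

# Alternatively, install a precompiled binary package:
pkg_add apache

# List installed packages, or remove one again later:
pkg_info
pkg_delete apache
```

In both cases the package system records what was installed, which is what makes the later clean removal with pkg_delete possible.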
John Alan Lasseter is an American animator and former chief creative officer of Walt Disney Animation Studios and the defunct Disneytoon Studios. He was also the Principal Creative Advisor for Walt Disney Imagineering. Lasseter began his career as an animator with The Walt Disney Company. After being fired from Disney for promoting computer animation, he joined Lucasfilm, where he worked on the then-groundbreaking use of CGI animation; the Graphics Group of the Computer Division of Lucasfilm was sold to Steve Jobs and became Pixar in 1986. At Pixar, Lasseter oversaw its films and associated projects as executive producer. In addition, he directed Toy Story, A Bug's Life, Toy Story 2, and Cars 2. From 2006 to 2018, Lasseter also oversaw all of Walt Disney Animation Studios' films and associated projects as executive producer. The films he has made have grossed more than $19 billion, making him one of the most successful filmmakers of all time. Of the seven animated films that have grossed more than $1 billion, five were executive-produced by Lasseter.
The films include Toy Story 3, the first animated film to pass $1 billion and, at its release, the highest-grossing animated film of all time, as well as Zootopia, Finding Dory, and Incredibles 2. He has won two Academy Awards: one for Best Animated Short Film and a Special Achievement Award. In November 2017, Lasseter took a six-month sabbatical from Pixar and Disney Animation after acknowledging "missteps" in his behavior with employees. According to various news outlets, Lasseter had a history of alleged sexual misconduct toward employees. In June 2018, Disney announced that he would be leaving the company at the end of the year when his contract expired, but would take on a consulting role until then. On January 9, 2019, Lasseter was hired to head Skydance Animation. Lasseter was born and grew up in California. His mother, Jewell Mae, was an art teacher at Bell Gardens High School, and his father, Paul Eual Lasseter, was a parts manager at a Chevrolet dealership. Lasseter is a fraternal twin.
His mother's profession contributed to his growing preoccupation with animation. He drew cartoons during services at the Church of Christ his family attended, and as a child he would race home from school to watch Chuck Jones cartoons on television. While in high school, he read The Art of Animation by Bob Thomas; the book covered the history of Disney animation and explored the making of Disney's 1959 film Sleeping Beauty, and it made Lasseter realize he wanted to do animation himself. When he saw Disney's 1963 film The Sword in the Stone, he decided he should become an animator. Lasseter heard of a new character animation program at the California Institute of the Arts and resolved to follow his dream of becoming an animator. His mother further encouraged him to take up a career in animation, and in 1975 he enrolled as the second student in the CalArts Character Animation program created by Disney animators Jack Hannah and T. Hee. Lasseter was taught by three members of Disney's Nine Old Men team of veteran animators, Eric Larson, Frank Thomas, and Ollie Johnston, and his classmates included future animators and directors such as Brad Bird, John Musker, Henry Selick, Tim Burton, and Chris Buck.
During his time there, he produced two animated shorts, Lady and the Lamp and Nitemare, each of which won the Student Academy Award for Animation. While at CalArts, Lasseter first started working for the Walt Disney Company at Disneyland in Anaheim during summer breaks, where a job as a Jungle Cruise skipper taught him the basics of comedy and comic timing for entertaining the captive audiences on the ride. Upon graduating in 1979, Lasseter obtained a job as an animator at Walt Disney Productions on the strength of Lady and the Lamp. To put this into perspective, the studio had reviewed 10,000 portfolios in the late 1970s in search of talent, selected only about 150 candidates as apprentices, and kept only about 45 of those on permanently. In the fall of 1979, Disney animator Mel Shaw told the Los Angeles Times that "John's got an instinctive feel for character and movement and shows every indication of blossoming here at our studios... In time, he'll make a fine contribution." At that same time, Lasseter worked on a sequence titled "The Emperor and the Nightingale" for a Disney project called Musicana.
Musicana was never released but led to the development of Fantasia 2000. However, Lasseter soon realized something was missing: after 101 Dalmatians, which in his opinion was the film where Disney had reached its highest plateau, the studio had lost momentum and was criticized for repeating itself without adding any new ideas or innovations. Between 1980 and 1981, he coincidentally came across some videotapes from one of the new computer-graphics conferences, which showed some of the beginnings of computer animation, floating spheres and the like, and he experienced this as a revelation. But it wasn't until shortly afterward, while he was working on Mickey's Christmas Carol, when his friends Jerry Rees and Bill Kroyer invited him to come and see the first light-cycle sequences for an upcoming film entitled Tron, featuring state-of-the-art computer-generated imagery, that he saw the huge potential of this new technology in animation. Up to tha