A minicomputer, or colloquially mini, is a class of smaller computers that was developed in the mid-1960s and sold for much less than mainframe and mid-size computers from IBM and its direct competitors. In a 1970 survey, The New York Times suggested a consensus definition of a minicomputer as a machine costing less than US$25,000, with an input-output device such as a teleprinter and at least four thousand words of memory, capable of running programs in a higher-level language such as Fortran or BASIC; the class formed a distinct group with its own software architectures and operating systems. Minis were designed for control, human interaction, and communication switching, as distinct from calculation and record keeping. Many were sold indirectly to original equipment manufacturers for final end-use applications. During the two-decade lifetime of the minicomputer class, about 100 companies formed and only a half dozen remained. When single-chip CPU microprocessors appeared, beginning with the Intel 4004 in 1971, the term "minicomputer" came to mean a machine that lies in the middle range of the computing spectrum, between the smallest mainframe computers and the microcomputers.
The term "minicomputer" is little used today. The term "minicomputer" developed in the 1960s to describe the smaller computers that became possible with the use of transistors and core memory technologies, minimal instructions sets and less expensive peripherals such as the ubiquitous Teletype Model 33 ASR, they took up one or a few 19-inch rack cabinets, compared with the large mainframes that could fill a room. The definition of minicomputer is vague with the consequence that there are a number of candidates for the first minicomputer. An early and successful minicomputer was Digital Equipment Corporation's 12-bit PDP-8, built using discrete transistors and cost from US$16,000 upwards when launched in 1964. Versions of the PDP-8 took advantage of small-scale integrated circuits; the important precursors of the PDP-8 include the PDP-5, LINC, the TX-0, the TX-2, the PDP-1. DEC gave rise to a number of minicomputer companies along Massachusetts Route 128, including Data General, Wang Laboratories, Apollo Computer, Prime Computer.
Minicomputers also became known as midrange computers. They grew to have relatively high processing power and capacity, and they were used in manufacturing process control, telephone switching, and the control of laboratory equipment. In the 1970s, they were the hardware used to launch the computer-aided design industry and other similar industries where a smaller dedicated system was needed. The 7400 series of TTL integrated circuits started appearing in minicomputers in the late 1960s. The 74181 arithmetic logic unit was used in the CPU data paths; each 74181 had a bus width of four bits, hence the popularity of bit-slice architecture. Some scientific computers, such as the Nicolet 1080, used the 7400 series in groups of five ICs for their uncommon 20-bit architecture (five 4-bit slices). The 7400 series offered data selectors, three-state buffers, and similar functions in dual in-line packages with one-tenth-inch pin spacing, making major system components and architecture evident to the naked eye. Starting in the 1980s, many minicomputers used VLSI circuits.
At the launch of the MITS Altair 8800 in 1975, Radio-Electronics magazine referred to the system as a "minicomputer", although the term microcomputer soon became the usual name for personal computers based on single-chip microprocessors. At the time, microcomputers were simple 8-bit, single-user machines running simple program-launcher operating systems like CP/M or MS-DOS, while minis were much more powerful systems that ran full multi-user, multitasking operating systems such as VMS and Unix; although the classic mini was a 16-bit computer, the emerging higher-performance superminis were 32-bit. The decline of the minis was due to the lower cost of microprocessor-based hardware, the emergence of inexpensive and easily deployable local area network systems, the emergence of the 68020, 80286 and 80386 microprocessors, and the desire of end-users to be less reliant on inflexible minicomputer manufacturers and IT departments or "data centers". The result was that minicomputers and computer terminals were replaced by networked workstations, file servers and PCs in some installations, beginning in the latter half of the 1980s.
During the 1990s, the change from minicomputers to inexpensive PC networks was cemented by the development of several versions of Unix and Unix-like systems that ran on the Intel x86 microprocessor architecture, including Solaris, FreeBSD, NetBSD and OpenBSD. The Microsoft Windows series of operating systems, beginning with Windows NT, also included server versions that supported preemptive multitasking and other features required for servers. As microprocessors have become more powerful, the CPUs built up from multiple components – once the distinguishing feature differentiating mainframes and midrange systems from microcomputers – have become obsolete even in the largest mainframe computers. Digital Equipment Corporation was once the leading minicomputer manufacturer and at one time the second-largest computer company after IBM, but as the minicomputer declined in the face of generic Unix servers and Intel-based PCs, not only DEC but every other minicomputer company, including Data General, Computervision and Wang Laboratories, many of them based in New England, eventually collapsed or merged.
Mainframe computers, or mainframes, are computers used by large organizations for critical applications. They are larger and have more processing power than some other classes of computers, such as minicomputers, servers and personal computers. The term referred originally to the large cabinets, called "main frames", that housed the central processing unit and main memory of early computers; it was later used to distinguish high-end commercial machines from less powerful units. Most large-scale computer system architectures were established in the 1960s but continue to evolve. Mainframe computers are often used as servers. Modern mainframe design is characterized less by raw computational speed and more by redundant internal engineering resulting in high reliability and security; extensive input-output facilities with the ability to offload work to separate engines; strict backward compatibility with older software; high hardware and computational utilization rates through virtualization to support massive throughput; and hot-swapping of hardware, such as processors and memory.
Their high stability and reliability enable these machines to run uninterrupted for very long periods of time, with mean time between failures measured in decades. Mainframes have high availability, one of the primary reasons for their longevity, since they are typically used in applications where downtime would be costly or catastrophic; the term reliability, availability and serviceability (RAS) is a defining characteristic of mainframe computers. Proper planning and implementation are required to realize these features. In addition, mainframes are more secure than other computer types: the NIST vulnerabilities database, US-CERT, rates traditional mainframes such as IBM Z, Unisys Dorado and Unisys Libra as among the most secure, with vulnerabilities in the low single digits as compared with thousands for Windows, Unix and Linux. Software upgrades usually require setting up the operating system or portions thereof, and are non-disruptive only when using virtualizing facilities such as IBM z/OS and Parallel Sysplex, or Unisys XPCL, which support workload sharing so that one system can take over another's application while it is being refreshed.
In the late 1950s, mainframes had only a rudimentary interactive interface and used sets of punched cards, paper tape, or magnetic tape to transfer data and programs. They operated in batch mode to support back-office functions such as payroll and customer billing, most of which were based on repeated tape-based sorting and merging operations followed by line printing to preprinted continuous stationery. When interactive user terminals were introduced, they were used exclusively for applications rather than program development. Typewriter and Teletype devices were common control consoles for system operators through the early 1970s, although they were ultimately supplanted by keyboard/display devices. By the early 1970s, many mainframes had acquired interactive user terminals operating as timesharing computers, supporting hundreds of users along with batch processing. Users gained access through keyboard/typewriter terminals and specialized text terminal CRT displays with integral keyboards, or from personal computers equipped with terminal emulation software.
By the 1980s, many mainframes supported graphic display terminals and terminal emulation, but not graphical user interfaces. This form of end-user computing became obsolete in the 1990s due to the advent of personal computers provided with GUIs. After 2000, modern mainframes partially or entirely phased out classic "green screen" and color display terminal access for end-users in favour of Web-style user interfaces. Infrastructure requirements were drastically reduced during the mid-1990s, when CMOS mainframe designs replaced the older bipolar technology. IBM claimed that its newer mainframes reduced data center energy costs for power and cooling and reduced physical space requirements compared to server farms. Modern mainframes can run multiple different instances of operating systems at the same time; this technique of virtual machines allows applications to run as if they were on physically distinct computers. In this role, a single mainframe can replace higher-functioning hardware services available to conventional servers.
While mainframes pioneered this capability, virtualization is now available on most families of computer systems, though not always to the same degree or level of sophistication. Mainframes can add or hot swap system capacity without disrupting system function, with a specificity and granularity not available with most server solutions. Modern mainframes, notably the IBM zSeries, System z9 and System z10 servers, offer two levels of virtualization: logical partitions (LPARs) and virtual machines. Many mainframe customers run two machines: one in their primary data center and one in their backup data center – fully active, partially active, or on standby – in case there is a catastrophe affecting the first building. Test, development and production workloads for applications and databases can run on a single machine, except for very large demands where the capacity of one machine might be limiting; such a two-mainframe installation can support continuous business service, avoiding both planned and unplanned outages.
In practice, many customers use multiple mainframes linked either by Parallel Sysplex and shared DASD, or with shared, geographically dispersed storage provided by EMC.
A cult film or cult movie, commonly referred to as a cult classic, is a film that has acquired a cult following. Cult films are known for their dedicated, passionate fanbase, which forms an elaborate subculture that engages in repeated viewings, quoting dialogue, and audience participation. Inclusive definitions allow for major studio productions, especially box office bombs, while exclusive definitions focus more on obscure, transgressive films shunned by the mainstream; the difficulty in defining the term and the subjectivity of what qualifies as a cult film mirror classificatory disputes about art. The term cult film itself was first used in the 1970s to describe the culture that surrounded underground films and midnight movies, though cult was in common use in film analysis for decades prior to that. Cult films trace their origin back to controversial and suppressed films kept alive by dedicated fans. In some cases, reclaimed or rediscovered films have acquired cult followings decades after their original release for their camp value.
Other cult films have since been reassessed as classics. After failing in the cinema, some cult films have become regular fixtures on cable television or profitable sellers on home video. Others have inspired their own film festivals. Cult films can both appeal to specific subcultures and form their own subcultures. Other media that reference cult films can identify which demographics they desire to attract and offer savvy fans an opportunity to demonstrate their knowledge. Cult films often break cultural taboos, and many feature excessive displays of violence, sexuality, profanity, or combinations thereof; this can lead to controversy and outright bans. Films that fail to attract the requisite amount of controversy may face resistance when labeled as cult films. Mainstream films and big-budget blockbusters have also attracted cult followings similar to those of more underground and lesser-known films. Fans who like the films for the wrong reasons, such as perceived elements that represent mainstream appeal and marketing, may be ostracized or ridiculed.
Fans who stray from accepted subcultural scripts may experience similar rejection. Since the late 1970s, cult films have become increasingly popular. Films that once would have been limited to obscure cult followings are now capable of breaking into the mainstream, and showings of cult films have proved to be a profitable business venture. Overbroad usage of the term has resulted in controversy, as purists state it has become a meaningless descriptor applied to any film that is the slightest bit weird or unconventional. Films are now declared to be an "instant cult classic" before they are even released. Fickle fans on the Internet have latched on to unreleased films only to abandon them on release. At the same time, other films have acquired massive, quick cult followings owing to their spreading virally through social media. Easy access to cult films via video on demand and peer-to-peer file sharing has led some critics to pronounce the death of cult films. A cult film is any film that has a cult following, although the term is not easily defined and can be applied to a wide variety of films.
Some definitions exclude films that have been released by major studios or have big budgets, that try to become cult films, or that become accepted by mainstream audiences and critics. Cult films are defined by audience reaction as much as by their content; this may take the form of elaborate and ritualized audience participation, film festivals, or cosplay. Over time, the definition has become more vague and inclusive as it drifts away from earlier, stricter views. Increasing use of the term by mainstream publications has resulted in controversy, as cinephiles argue that the term has become meaningless or "elastic, a catchall for anything maverick or strange". Academic Mark Shiel has criticized the term itself as being reliant on subjectivity. According to feminist scholar Joanne Hollows, this subjectivity causes films with large female cult followings to be perceived as too mainstream and not transgressive enough to qualify as cult films. Academic Mike Chopra-Gant says that cult films become decontextualized when studied as a group, and Shiel criticizes this recontextualization as cultural commodification.
In 2008, Cineaste asked a range of academics for their definitions of a cult film. Several defined cult films in terms of their opposition to mainstream films and conformism, explicitly requiring a transgressive element, though others disputed the transgressive potential, given the demographic appeal of cult films to conventional moviegoers and their mainstreaming. Jeffrey Andrew Weinstock instead called them mainstream films with transgressive elements. Most definitions required a strong community aspect, such as obsessed fans or ritualistic behavior. Citing misuse of the term, Mikel J. Koven took a self-described hard-line stance that rejected definitions that use any other criteria. Matt Hills instead stressed the need for an open-ended definition rooted in structuration, where the film and the audience reaction are interrelated and neither is prioritized. Ernest Mathijs focused on the accidental nature of cult followings, arguing that cult film fans consider themselves too savvy to be marketed to.
A microprocessor is a computer processor that incorporates the functions of a central processing unit on a single integrated circuit, or at most a few integrated circuits. The microprocessor is a multipurpose, clock-driven, register-based, digital integrated circuit that accepts binary data as input, processes it according to instructions stored in its memory, and provides results as output. Microprocessors contain sequential digital logic and operate on symbols represented in the binary number system. The integration of a whole CPU onto a single chip or a few integrated circuits reduced the cost of processing power. Integrated circuit processors are produced in large numbers by automated processes, resulting in a low unit price. Single-chip processors also increase reliability because there are many fewer electrical connections that could fail. As microprocessor designs improve, the cost of manufacturing a chip generally stays the same, according to Rock's law. Before microprocessors, small computers had been built using racks of circuit boards with many medium- and small-scale integrated circuits.
Microprocessors combined this into a few large-scale ICs. Continued increases in microprocessor capacity have since rendered other forms of computers almost completely obsolete, with one or more microprocessors used in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers. The complexity of an integrated circuit is bounded by physical limitations on the number of transistors that can be put onto one chip, the number of package terminations that can connect the processor to other parts of the system, the number of interconnections it is possible to make on the chip, and the heat that the chip can dissipate. Advancing technology makes more powerful chips feasible to manufacture. A minimal hypothetical microprocessor might include only an arithmetic logic unit (ALU) and a control logic section. The ALU performs addition and logical operations such as AND or OR; each operation of the ALU sets one or more flags in a status register, which indicate the results of the last operation.
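As a rough illustration of that minimal structure, the sketch below models a tiny ALU in C that performs ADD, AND and OR on 8-bit operands and records zero and carry flags in a status register. The operation set, flag layout and names are illustrative assumptions, not a description of any particular processor.

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed status-register flag bits for this toy model. */
#define FLAG_ZERO  0x01  /* set when the result is zero        */
#define FLAG_CARRY 0x02  /* set when addition overflows 8 bits */

typedef enum { OP_ADD, OP_AND, OP_OR } alu_op;

/* A toy ALU: computes the result and updates the flags register. */
static uint8_t alu(alu_op op, uint8_t a, uint8_t b, uint8_t *flags) {
    uint16_t wide = 0;
    switch (op) {
    case OP_ADD: wide = (uint16_t)a + b; break;
    case OP_AND: wide = a & b;           break;
    case OP_OR:  wide = a | b;           break;
    }
    uint8_t result = (uint8_t)wide;
    *flags = 0;
    if (result == 0) *flags |= FLAG_ZERO;
    if (wide > 0xFF) *flags |= FLAG_CARRY;
    return result;
}

int main(void) {
    uint8_t flags;
    uint8_t r = alu(OP_ADD, 200, 100, &flags);   /* 300 overflows 8 bits */
    printf("result=%u carry=%d zero=%d\n",
           (unsigned)r, !!(flags & FLAG_CARRY), !!(flags & FLAG_ZERO));
    return 0;
}
```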
The control logic retrieves instruction codes from memory and initiates the sequence of operations required for the ALU to carry out the instruction; a single operation code might affect many individual data paths and other elements of the processor. As integrated circuit technology advanced, it became feasible to manufacture more and more complex processors on a single chip. The size of data objects became larger, and additional features were added to the processor architecture. Floating-point arithmetic, for example, was not available on 8-bit microprocessors but had to be carried out in software. Integration of the floating-point unit, first as a separate integrated circuit and then as part of the same microprocessor chip, sped up floating-point calculations. Physical limitations of integrated circuits made such practices as the bit-slice approach necessary: instead of processing all of a long word on one integrated circuit, multiple circuits in parallel processed subsets of each data word. While this required extra logic to handle, for example, carry and overflow within each slice, the result was a system that could handle, for example, 32-bit words using integrated circuits with a capacity for only four bits each.
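To make the bit-slice idea concrete, the hedged sketch below composes a 32-bit addition in C from eight cascaded 4-bit "slices", propagating the carry from each slice to the next, which is the extra logic referred to above. The slice width and function names are chosen purely for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* One 4-bit slice: adds two nibbles plus carry-in, returns the nibble
 * result and the carry-out passed to the next, more significant slice. */
static uint8_t add_slice(uint8_t a, uint8_t b, uint8_t cin, uint8_t *cout) {
    uint8_t sum = (uint8_t)((a & 0xF) + (b & 0xF) + (cin & 1));
    *cout = (sum >> 4) & 1;     /* carry into the next slice */
    return sum & 0xF;
}

/* A 32-bit add built from eight cascaded 4-bit slices. */
static uint32_t add32_sliced(uint32_t a, uint32_t b) {
    uint32_t result = 0;
    uint8_t carry = 0;
    for (int slice = 0; slice < 8; slice++) {
        uint8_t nibble = add_slice((a >> (4 * slice)) & 0xF,
                                   (b >> (4 * slice)) & 0xF,
                                   carry, &carry);
        result |= (uint32_t)nibble << (4 * slice);
    }
    return result;
}

int main(void) {
    uint32_t a = 0x89ABCDEF, b = 0x12345678;
    /* Both values should match, confirming the sliced add is equivalent. */
    printf("sliced: %08X  direct: %08X\n",
           (unsigned)add32_sliced(a, b), (unsigned)(a + b));
    return 0;
}
```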
The ability to put large numbers of transistors on one chip makes it feasible to integrate memory on the same die as the processor. This CPU cache has the advantage of faster access than off-chip memory and increases the processing speed of the system for many applications. Processor clock frequency has increased more rapidly than external memory speed, so cache memory is necessary if the processor is not to be delayed by slower external memory. A microprocessor is a general-purpose device. Several specialized processing devices have followed: a digital signal processor is specialized for signal processing; graphics processing units are processors designed for real-time rendering of images; other specialized units exist for video processing and machine vision. Microcontrollers integrate a microprocessor with peripheral devices in embedded systems, and systems on chip integrate one or more microprocessor or microcontroller cores. Microprocessors can be selected for differing applications based on their word size, which is a measure of their complexity.
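As a rough, hypothetical illustration of why the cache matters: if a cache hit takes 1 processor cycle, a miss costs 100 cycles of external memory latency, and 95% of accesses hit, the average access time is 1 + 0.05 × 100 = 6 cycles, versus 100 cycles if every access went off-chip. The figures are assumed for illustration only, but they show how even a modest hit rate hides most of the external memory delay.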
Longer word sizes allow each clock cycle of a processor to carry out more computation, but correspond to physically larger integrated circuit dies with higher standby and operating power consumption. 4-, 8- or 12-bit processors are widely integrated into microcontrollers operating embedded systems. Where a system is expected to handle larger volumes of data or require a more flexible user interface, 16-, 32- or 64-bit processors are used. An 8- or 16-bit processor may be selected over a 32-bit processor for system-on-a-chip or microcontroller applications that require low-power electronics, or are part of a mixed-signal integrated circuit with noise-sensitive on-chip analog electronics such as high-resolution analog-to-digital converters, or both. Running 32-bit arithmetic on an 8-bit chip could end up using more power, as the chip must execute software with multiple instructions. Thousands of items that were traditionally not computer-related now include microprocessors.
A microcomputer is a small, inexpensive computer with a microprocessor as its central processing unit. It includes a microprocessor and minimal input/output circuitry mounted on a single printed circuit board. Microcomputers became popular in the 1970s and 1980s with the advent of powerful microprocessors; the predecessors to these computers, mainframes and minicomputers, were comparatively much larger and more expensive. Many microcomputers are also personal computers. The abbreviation micro was common during the 1970s and 1980s, but has now fallen out of common usage. The term microcomputer came into popular use after the introduction of the minicomputer, although Isaac Asimov had used the term in his short story "The Dying Night" as early as 1956. Most notably, the microcomputer replaced the many separate components that made up the minicomputer's CPU with one integrated microprocessor chip. The French developers of the Micral N filed their patents with the term "Micro-ordinateur", a literal equivalent of "Microcomputer", to designate a solid-state machine designed with a microprocessor.
In the USA, the earliest models such as the Altair 8800 were sold as kits to be assembled by the user, and came with as little as 256 bytes of RAM and no input/output devices other than indicator lights and switches, but were useful as a proof of concept to demonstrate what such a simple device could do. However, as microprocessors and semiconductor memory became less expensive, microcomputers in turn grew cheaper and easier to use: increasingly inexpensive logic chips such as the 7400 series allowed cheap dedicated circuitry for improved user interfaces such as keyboard input, instead of a row of switches to toggle bits one at a time; the use of audio cassettes for inexpensive data storage replaced manual re-entry of a program every time the device was powered on; and large, cheap arrays of silicon logic gates in the form of read-only memory and EPROMs allowed utility programs and self-booting kernels to be stored within microcomputers. These stored programs could automatically load further, more complex software from external storage devices without user intervention, forming an inexpensive turnkey system that did not require a computer expert to understand or to use the device.
Random-access memory became cheap enough to afford dedicating 1-2 kilobytes of memory to a video display controller frame buffer, giving a 40×25 or 80×25 text display or blocky color graphics on a common household television. This replaced the slow and expensive teletypewriter that was previously common as an interface to minicomputers and mainframes. All these improvements in cost and usability resulted in an explosion in their popularity during the late 1970s and early 1980s, and a large number of computer makers packaged microcomputers for use in small business applications. By 1979, many companies such as Cromemco, Processor Technology, IMSAI, North Star Computers, Southwest Technical Products Corporation, Ohio Scientific, Altos Computer Systems, Morrow Designs and others produced systems designed either for a resourceful end user or a consulting firm to deliver business systems such as accounting, database management and word processing to small businesses. This allowed businesses unable to afford the leasing of a minicomputer or time-sharing service the opportunity to automate business functions, without hiring a full-time staff to operate the computers.
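The arithmetic behind those figures is simple: a 40×25 text screen has 1,000 character cells and an 80×25 screen has 2,000, so at one byte per cell the buffer fits in roughly 1-2 KB. The hypothetical C sketch below writes a character into such a memory-mapped text frame buffer at a given row and column; the buffer, dimensions and helper names are illustrative assumptions, not any specific machine's layout.

```c
#include <stdint.h>
#include <string.h>

/* Assumed text-mode geometry: 40 columns x 25 rows, one byte per cell. */
#define COLS 40
#define ROWS 25

/* In a real system this would be a fixed memory-mapped address; here it
 * is an ordinary array so the sketch is self-contained. */
static uint8_t framebuffer[ROWS * COLS];

/* Place one character at (row, col); the display hardware would scan
 * this buffer continuously to generate the video signal. */
static void put_char(int row, int col, char c) {
    if (row < 0 || row >= ROWS || col < 0 || col >= COLS) return;
    framebuffer[row * COLS + col] = (uint8_t)c;
}

int main(void) {
    memset(framebuffer, ' ', sizeof framebuffer);   /* clear the screen */
    const char *msg = "READY";
    for (int i = 0; msg[i] != '\0'; i++)
        put_char(0, i, msg[i]);                     /* top-left corner  */
    return 0;
}
```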
A representative system of this era would have used an S-100 bus, an 8-bit processor such as an Intel 8080 or Zilog Z80, and either the CP/M or MP/M operating system. The increasing availability and power of desktop computers for personal use attracted the attention of more software developers. In time, as the industry matured, the market for personal computers standardized around IBM PC compatibles running DOS, and later Windows. Modern desktop computers, video game consoles, tablet PCs, and many types of handheld devices, including mobile phones, pocket calculators, and industrial embedded systems, may all be considered examples of microcomputers according to the definition given above. Everyday use of the expression "microcomputer" declined from the mid-1980s and has further declined in commonplace usage since 2000; the term is most associated with the first wave of all-in-one 8-bit home computers and small-business microcomputers. Although, or perhaps because, a diverse range of modern microprocessor-based devices fit the definition of "microcomputer", they are no longer referred to as such in everyday speech.
In common usage, "microcomputer" has been supplanted by the term "personal computer" or "PC", which specifies a computer designed to be used by one individual at a time, a term first coined in 1959. IBM first promoted the term "personal computer" to differentiate the IBM PC from other microcomputers, often called "home computers", as well as from IBM's own mainframes and minicomputers. However, following its release, the IBM PC itself was widely imitated, as was the term: the component parts were available to other producers and the BIOS was reverse engineered through cleanroom design techniques. IBM PC compatible "clones" became commonplace, and the terms "personal computer" and "PC" stuck with the general public specifically for a DOS- or Windows-compatible computer. Monitors and other devices for input and output may be integrated or separate.
NCUBE was a series of parallel computers from the company of the same name. Early generations of the hardware used a custom microprocessor. With its final generations of servers, nCUBE no longer designed custom microprocessors for its machines, but instead used server-class chips manufactured by a third party in massively parallel hardware deployments for on-demand video. NCUBE was founded in 1983 in Beaverton, Oregon, by a group of Intel employees frustrated by Intel's reluctance to enter the parallel computing market, though Intel released its iPSC/1 in the same year the first nCUBE was released. In December 1985, the first generation of nCUBE's hypercube machines was released; the second generation was launched in June 1989, the third generation in 1995, and the fourth generation in 1999. In 1988, Larry Ellison invested in nCUBE and became the company's majority shareholder; the company's headquarters was relocated to Foster City, California, to be closer to the Oracle Corporation.
In 1994, Ronald Dilbeck became chief executive officer and set nCUBE on a fast track to an initial public offering. In 1996, Ellison downsized nCUBE and Dilbeck departed; Ellison took over as acting CEO and redirected the company to become Oracle's Network Computer division. After the network computer diversion, nCUBE resumed development of video servers, deploying its first VOD video server in the Burj Al Arab hotel in Dubai. In 1999, nCUBE announced it was acquiring SkyConnect, Inc., a seven-year-old Louisville, Colorado software company that developed digital advertising and VOD software for cable television and was a partner in the Burj Al Arab hotel deployment. In the 1990s, nCUBE shifted its focus away from the parallel computing market and, by 1999, had identified itself as a video-on-demand solutions provider, shipping over 100 VOD systems delivering 17,000 streams and establishing a relationship with Microsoft TV. The company was once again on an IPO fast track, only to be halted again by the bursting of the dot-com bubble.
In 2000, SeaChange International filed a suit against nCUBE, alleging that its nCUBE MediaCube-4 product infringed on a SeaChange patent. A jury awarded damages, but the U.S. Court of Appeals for the Federal Circuit subsequently overturned the ruling on June 29, 2005. As fallout from the bursting of the dot-com bubble, the recession and the lawsuit, in April 2001 nCUBE laid off 17% of its workforce and began closing offices to downsize and consolidate the company around the Beaverton manufacturing office. In 2001, after acquiring patents from Oracle's interactive television division, nCUBE filed a patent infringement suit against SeaChange, claiming that its competitor's video server offering violated its VOD patent on delivery to set-top boxes; nCUBE was awarded over $2 million in damages. In 2002, Ellison stepped down as CEO and named Michael J. Pohl, the company's president since 1999, as CEO. In January 2005, nCUBE was acquired by C-COR for $89.5 million, with an SEC filing for the purchase in October 2004.
In December 2007, C-COR was acquired by ARRIS. One of the first nCUBE machines to be released was the nCUBE 10 of late 1985; it was originally called the NCUBE/ten, but the name morphed over time. These machines were based on a set of custom chips, where each compute node had a processor chip with a 32-bit ALU, a 64-bit IEEE 754 FPU, special communication instructions, and 128 KB of RAM. A node delivered 2 MIPS, and 500 kiloFLOPS single precision or 300 kiloFLOPS double precision. There were 64 nodes per board. The host board, based on an Intel 80286, ran Axis, a custom Unix-like operating system, and each compute node ran a 4 KB kernel, Vertex. The name nCUBE 10 referred to the machine's ability to be configured as an order-ten hypercube, supporting 1024 CPUs in a single machine. Some of the modules were used for input/output, which included the nChannel storage control card, frame buffers, and the InterSystem card that allowed nCUBEs to be attached to each other. At least one host board was needed, acting as the terminal driver; it could partition the machine into "sub-cubes" and allocate them separately to different users.
For the second series the naming was changed, and nCUBE created the single-chip nCUBE 2 processor. This was otherwise similar to the nCUBE 10's CPU, but ran faster, at 25 MHz, to provide about 7 MIPS and 3.5 megaFLOPS; this was improved to 30 MHz in the 2S model. RAM was increased as well, with 4 to 16 MB of RAM on a "single wide" 1 inch × 3.5 inch module, double that on a "double wide" module, and quadruple that on a double-wide, double-sided module. The I/O cards had less RAM, with different backend interfaces to support SCSI, HIPPI and other protocols. Each nCUBE 2 CPU included 13 I/O channels running at 20 Mbit/s; one of these was dedicated to I/O duties, while the other twelve were used as the interconnect system between CPUs, with each channel using wormhole routing to forward messages along. The machines themselves were wired up as order-twelve hypercubes, allowing for up to 4096 CPUs in a single machine. Each module ran a 200 KB microkernel called nCX, but the system now used a Sun Microsystems workstation as the front end and no longer needed the Host Controller.
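A hypercube interconnect of order n links 2^n nodes, and two nodes are directly connected exactly when their binary addresses differ in a single bit, so each node in an order-twelve machine has twelve neighbours, one per interconnect channel. The short C sketch below computes a node's neighbours this way; it is a generic illustration of hypercube addressing, not nCUBE's actual routing logic.

```c
#include <stdio.h>

/* Print the neighbours of `node` in an order-`order` hypercube: each
 * neighbour's address differs from `node` in exactly one bit. */
static void print_neighbours(unsigned node, unsigned order) {
    printf("node %u neighbours:", node);
    for (unsigned dim = 0; dim < order; dim++)
        printf(" %u", node ^ (1u << dim));   /* flip one address bit */
    printf("\n");
}

int main(void) {
    /* An order-12 hypercube has 2^12 = 4096 nodes, matching the text. */
    print_neighbours(5, 12);
    return 0;
}
```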
NCX included a parallel filesystem. C and C++ languages were available, as were NQS and Parasoft's Express; these were supported by an in-house compiler team. The largest nCUBE 2 system installed was at Sandia National Laboratories, a 1024-CPU system that reached 1.91 gigaFLOPS in testing. In addition to the nCX operating system, it ran the SUNMOS lightweight kernel.
Lawrence Livermore National Laboratory
Lawrence Livermore National Laboratory (LLNL) is a federal research facility in Livermore, California, United States, founded by the University of California, Berkeley, in 1952. A Federally Funded Research and Development Center, it is funded by the U.S. Department of Energy and managed and operated by Lawrence Livermore National Security, LLC, a partnership of the University of California, Bechtel, BWX Technologies, AECOM and the Battelle Memorial Institute, in affiliation with the Texas A&M University System. In 2012, the laboratory had the synthetic chemical element livermorium named after it. LLNL is self-described as "a premier research and development institution for science and technology applied to national security." Its principal responsibility is ensuring the safety and reliability of the nation's nuclear weapons through the application of advanced science and technology. The laboratory applies its special expertise and multidisciplinary capabilities to preventing the proliferation and use of weapons of mass destruction, bolstering homeland security and solving other nationally important problems, including energy and environmental security, basic science and economic competitiveness.
The laboratory is located on a one-square-mile site at the eastern edge of Livermore. It also operates a 7,000-acre remote experimental test site, called Site 300, situated about 15 miles southeast of the main lab site. LLNL has an annual budget of about $1.5 billion and a staff of 5,800 employees. LLNL was established in 1952 as the University of California Radiation Laboratory at Livermore, an offshoot of the existing UC Radiation Laboratory at Berkeley. It was intended to spur innovation and provide competition to the nuclear weapon design laboratory at Los Alamos in New Mexico, home of the Manhattan Project that developed the first atomic weapons. Edward Teller and Ernest Lawrence, director of the Radiation Laboratory at Berkeley, are regarded as the co-founders of the Livermore facility. The new laboratory was sited at a former World War II naval air station, and it became home to several UC Radiation Laboratory projects that were too large for the laboratory's location in the Berkeley Hills above the UC campus, including one of the first experiments in the magnetic approach to confined thermonuclear reactions.
About half an hour southeast of Berkeley, the Livermore site provided much greater security for classified projects than an urban university campus. Lawrence tapped Herbert York, then age 32, to run Livermore. Under York, the Lab had four main programs: Project Sherwood, Project Whitney, diagnostic weapon experiments, and a basic physics program. York and the new lab embraced the Lawrence "big science" approach, tackling challenging projects with physicists, chemists and computational scientists working together in multidisciplinary teams. Lawrence died in August 1958, and shortly afterwards the university's board of regents named both laboratories for him as the Lawrence Radiation Laboratory. The Berkeley and Livermore laboratories have had close relationships on research projects, business operations and staff ever since. The Livermore Lab was established as a branch of the Berkeley laboratory and was not severed administratively from it until 1971. To this day, in official planning documents and records, Lawrence Berkeley National Laboratory is designated as Site 100, Lawrence Livermore National Lab as Site 200, and LLNL's remote test location as Site 300.
The laboratory was renamed the Lawrence Livermore Laboratory in 1971. On October 1, 2007, LLNS assumed management of LLNL from the University of California, which had managed and operated the laboratory since its inception 55 years before. The laboratory was honored in 2012 by having the synthetic chemical element livermorium named after it. The LLNS takeover of the laboratory has been controversial. In May 2013, an Alameda County jury awarded over $2.7 million to five former laboratory employees who were among 430 employees LLNS laid off during 2008. The jury found that LLNS breached a contractual obligation to terminate the employees only for "reasonable cause." The five plaintiffs also have pending age discrimination claims against LLNS, which will be heard by a different jury in a separate trial, and there are 125 co-plaintiffs awaiting trial on similar claims against LLNS. The May 2008 layoff was the first layoff at the laboratory in nearly 40 years. On March 14, 2011, the City of Livermore expanded the city's boundaries to annex LLNL and move it within the city limits.
The unanimous vote by the Livermore city council expanded Livermore's southeastern boundaries to cover 15 land parcels, comprising 1,057 acres, that make up the LLNL site. The site had previously been an unincorporated area of Alameda County; the LLNL campus continues to be owned by the federal government. From its inception, Livermore focused on new weapon design concepts; although its earliest designs were unsuccessful, the lab persevered and its subsequent designs proved successful. In 1957, the Livermore Lab was selected to develop the warhead for the Navy's Polaris missile; this warhead required numerous innovations to fit a nuclear warhead into the small confines of the missile nosecone. During the Cold War, many Livermore-designed warheads entered service; these were used in missiles ranging in size from the Lance surface-to-surface tactical missile to the megaton-class Spartan antiballistic missile. Over the years, LLNL designed a number of warheads, including the W27 (Regulus cruise missile).