An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage, and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although application code is executed directly by the hardware and makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer, from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows, with a market share of around 82.74%; macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, Google's Android accounted for up to 70% of use in 2017; according to third-quarter 2016 data, Android is dominant on smartphones with 87.5 percent of the market and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a yearly decrease in market share of 5.2 percent, while other operating systems amount to just 0.3 percent.
Linux distributions are dominant in the supercomputing sector. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently; this is achieved by time-sharing, where the available processor time is divided between multiple processes. These processes are each interrupted repeatedly in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized in preemptive and co-operative types. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux, as well as non-Unix-like systems such as AmigaOS, support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to yield time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking.
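The cooperative model described above can be sketched in a few lines of Python: generators yield control voluntarily, and a simple round-robin loop plays the role of the scheduler. The task names and the scheduler itself are illustrative, not taken from any real operating system.

```python
# A minimal sketch of cooperative multitasking using Python generators.
# Each "task" voluntarily yields control back to the scheduler, just as
# processes under cooperative multitasking must yield time to one another.

from collections import deque

def task(name, steps):
    """A toy process: does one unit of work, then yields control."""
    for i in range(steps):
        yield f"{name}:{i}"  # cooperative yield point

def run_round_robin(tasks):
    """Scheduler: cycle through runnable tasks until all have finished."""
    queue = deque(tasks)
    trace = []
    while queue:
        current = queue.popleft()
        try:
            trace.append(next(current))  # let the task run one slice
            queue.append(current)        # requeue it after its slice
        except StopIteration:
            pass                         # task finished; drop it
    return trace

print(run_round_robin([task("A", 2), task("B", 3)]))
# tasks interleave: ['A:0', 'B:0', 'A:1', 'B:1', 'B:2']
```

Note that a misbehaving task that never yields would starve the others, which is exactly the weakness that preemptive multitasking removes.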
32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify the processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with it at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage, and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer; the development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
In an OS, distributed- and cloud-computing context, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and in cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems; they are designed to operate on small machines, like PDAs, with less autonomy. They are able to operate with a limited number of resources and are compact and efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single-address-space machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled use of runtime libraries and parallel processing; when personal computers became popular in the 1980s, operating systems were made for them that were similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the printing of payroll checks from data on punched cards.
The Apache Software Foundation describes the current iteration of Apache Royale as an open-source frontend technology that allows a developer to code in ActionScript 3 and MXML and to target the web, mobile devices, and desktop devices on Apache Cordova all at once. Apache Royale is in the beta development stage. Flex uses MXML to define UI layout and other non-visual static aspects, and ActionScript to address dynamic aspects and as code-behind; it requires Adobe AIR or Flash Player at runtime to run the application. Macromedia targeted the enterprise application development market with its initial releases of Flex 1.0 and 1.5, offering the technology at a price of around US$15,000 per CPU. Required for deployment, the Java EE application server compiled MXML and ActionScript on the fly into Flash applications; each server license included 5 licenses for the Flex Builder IDE. Adobe changed the licensing model for the Flex product line with the release of Flex 2: the core Flex 2 SDK, consisting of the command-line compilers and the complete class library of user interface components and utilities, was made available as a free download.
Complete Flex applications can be built and deployed with the Flex 2 SDK, which contains no limitations or restrictions compared to the same SDK included with the Flex Builder IDE. Adobe based the new version of Flex Builder on the open-source Eclipse platform; the company released two versions, Standard and Professional. The Professional version includes the Flex Charting Components library. Enterprise-oriented services remain available through Flex Data Services 2; this server component provides data synchronization, data push, publish-subscribe, and automated testing. Unlike Flex 1.0 and 1.5, Flex Data Services is not required for the deployment of Flex applications. Coinciding with the release of Flex 2, Adobe introduced a new version of the ActionScript programming language, known as ActionScript 3, reflecting the latest ECMAScript specification; the use of ActionScript 3 and Flex 2 requires version 9 or later of the Flash Player runtime. Flash Player 9 incorporated a new and more robust virtual machine for running the new ActionScript 3.
Flex was the first Macromedia product to be re-branded under the Adobe name. On April 26, 2007, Adobe announced their intent to release the Flex 3 SDK under the terms of the Mozilla Public License. Adobe released the first beta of Flex 3, codenamed Moxie, in June 2007. Major enhancements include integration with the new versions of Adobe's Creative Suite products, support for AIR, and the addition of profiling and refactoring tools to the Flex Builder IDE. Adobe released Flex 4.0 on March 22, 2010. The Flex 4 development environment is called Adobe Flash Builder, formerly known as Adobe Flex Builder. Some themes that have been mentioned by Adobe and incorporated into Flex 4 are as follows: Design in Mind: the framework has been designed for continuous collaboration between designers and developers. Accelerated Development: being able to take application development from conception to reality quickly. Horizontal Platform Improvements: compiler performance, language enhancements, bidirectional components, enhanced text.
Full support for Adobe Flash Player 10 and above. Broadening Horizons: finding ways to make the framework lighter, supporting more deployment runtimes, and adding runtime MXML. Simpler skinning than in previous versions. Integration with Adobe Flash Catalyst. Custom templates. Flash Builder is available in two versions, Standard and Premium; the Premium version adds further features. An update to Flash Builder 4.5 and Flex 4.5 adds support for building Flex applications for BlackBerry Tablet OS and Apple iOS. The Flex 4.5 SDK delivers many new components and capabilities, along with integrated support in Flash Builder 4.5 and Flash Catalyst CS 5.5. Development of the Adobe Flex 4.5 SDK was governed by three main goals, including allowing developers to use Flex for multiscreen application development and further maturing the Spark architecture and component set introduced in Flex 4. In November 2011, Adobe released Flex SDK update 4.6, with changes including more Spark mobile components.
Network File System
Network File System (NFS) is a distributed file system protocol developed by Sun Microsystems in 1984, allowing a user on a client computer to access files over a computer network much like local storage is accessed. NFS, like many other protocols, builds on the Open Network Computing Remote Procedure Call (ONC RPC) system; NFS is an open standard defined in Request for Comments (RFC) documents, allowing anyone to implement the protocol. Sun used version 1 only for in-house experimental purposes; when the development team added substantial changes to NFS version 1 and released it outside of Sun, they decided to release the new version as v2, so that version interoperation and RPC version fallback could be tested. Version 2 of the protocol operated only over the User Datagram Protocol; its designers meant to keep the server side stateless, with locking implemented outside of the core protocol. People involved in the creation of NFS version 2 include Russel Sandberg, Bob Lyon, Bill Joy, Steve Kleiman, and others. The Virtual File System interface allows a modular implementation, reflected in a simple protocol.
By February 1986, implementations were demonstrated for operating systems such as System V release 2, DOS, and VAX/VMS using Eunice. NFSv2 only allows the first 2 GB of a file to be read due to 32-bit limitations. Version 3 added support for 64-bit file offsets, to handle files larger than 2 gigabytes. The first NFS Version 3 proposal within Sun Microsystems was created not long after the release of NFS Version 2. The principal motivation was an attempt to mitigate the performance issue of the synchronous write operation in NFS Version 2. By July 1992, implementation practice had solved many shortcomings of NFS Version 2, leaving only the lack of large-file support as a pressing issue; this became an acute pain point for Digital Equipment Corporation with the introduction of a 64-bit version of Ultrix to support their newly released 64-bit RISC processor, the Alpha 21064. At the time of the introduction of Version 3, vendor support for TCP as a transport-layer protocol began increasing. While several vendors had added support for NFS Version 2 with TCP as a transport, Sun Microsystems added support for TCP as a transport for NFS at the same time it added support for Version 3.
Using TCP as a transport made using NFS over a WAN more feasible and allowed the use of larger read and write transfer sizes beyond the 8 KB limit imposed by the User Datagram Protocol. Version 4, influenced by the Andrew File System and Server Message Block, includes performance improvements, mandates strong security, and introduces a stateful protocol. Version 4 became the first version developed with the Internet Engineering Task Force after Sun Microsystems handed over the development of the NFS protocols. NFS version 4.1 aims to provide protocol support to take advantage of clustered server deployments, including the ability to provide scalable parallel access to files distributed among multiple servers. NFS version 4.2 was published in November 2016 with new features including: server-side clone and copy, application I/O advise, sparse files, space reservation, application data block, labeled NFS with sec_label that accommodates any MAC security system, and two new operations for pNFS. WebNFS, an extension to Version 2 and Version 3, allows NFS to integrate more easily into web browsers and enables operation through firewalls.
In 2007, Sun Microsystems open-sourced their client-side WebNFS implementation. Various side-band protocols have become associated with NFS, including: the byte-range advisory Network Lock Manager protocol; the remote quota-reporting protocol, which allows NFS users to view their data-storage quotas on NFS servers; NFS over RDMA, an adaptation of NFS that uses remote direct memory access as a transport; NFS-Ganesha, an NFS server running in user space and supporting the CephFS FSAL via libcephfs; and Trusted NFS. NFS is used with Unix operating systems, Apple's macOS, and other Unix-like operating systems; it is also available for operating systems such as Acorn RISC OS, the classic Mac OS, OpenVMS, MS-DOS, Microsoft Windows, Novell NetWare, and IBM AS/400. Alternative remote file access protocols include Server Message Block, Apple Filing Protocol, NetWare Core Protocol, and the OS/400 File Server file system. SMB and NetWare Core Protocol occur more often than NFS on systems running Microsoft Windows. Haiku added NFSv4 support as part of a Google Summer of Code project.
Assuming a Unix-style scenario in which one machine needs access to data stored on another machine: the server implements NFS daemon processes, running by default as nfsd, to make its data generically available to clients. The server administrator determines what to make available, exporting the names and parameters of directories using the /etc/exports configuration file and the exportfs command; the server security administration ensures that it can recognize and approve validated clients.
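As a sketch of the export step described above, assuming a typical Linux NFS server (the path, subnet, and options are illustrative, not prescriptive):

```shell
# /etc/exports on the server: export /srv/data read-write to one client
# subnet, mapping remote root to an unprivileged user (example options).
/srv/data  192.168.1.0/24(rw,sync,root_squash)

# Apply the current export table without restarting the NFS daemons:
exportfs -ra

# On a client, mount the exported directory over NFS:
mount -t nfs server.example.com:/srv/data /mnt/data
```

Both commands require root privileges, and the client-side mount assumes the server is reachable and its NFS services are running.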
Thrift is an interface definition language and binary communication protocol used for defining and creating services for numerous languages. It forms a remote procedure call (RPC) framework and was developed at Facebook for "scalable cross-language services development"; it combines a software stack with a code-generation engine to build cross-platform services that can connect applications written in a variety of languages and frameworks, including ActionScript, C, C++, C#, Cocoa, Erlang, Go, Java, Node.js, Objective-C, OCaml, Perl, PHP, Ruby, Rust, and Swift. Although developed at Facebook, it is now an open-source project of the Apache Software Foundation; the implementation was described in an April 2007 technical paper released by Facebook, now hosted on Apache. Thrift includes a complete stack for creating clients and servers; the top part of the stack is code generated from the Thrift definition file. From this file, the services generate processor code. In contrast to built-in types, created data structures are sent as a result in generated code.
The protocol and transport layers are part of the runtime library. With Thrift, it is possible to define a service and change the protocol and transport without recompiling the code. Besides the client part, Thrift includes server infrastructure to tie protocols and transports together, such as blocking, non-blocking, and multi-threaded servers; the underlying I/O part of the stack is implemented differently for different languages. Thrift supports a number of protocols: TBinaryProtocol – a straightforward binary format; not optimized for space efficiency, but faster to process than a text protocol, though more difficult to debug. TCompactProtocol – a more compact binary format. TSimpleJSONProtocol – a write-only protocol using JSON; because it drops metadata, it cannot be parsed back by Thrift, but it is suitable for parsing by scripting languages. The supported transports include: TSimpleFileTransport – writes to a file. TFramedTransport – required when using a non-blocking server; it sends data in frames.
TMemoryTransport – uses memory for I/O; the Java implementation uses a simple ByteArrayOutputStream internally. TSocket – uses blocking socket I/O for transport. TZlibTransport – performs compression using zlib; used in conjunction with another transport. Thrift provides a number of servers: TNonblockingServer – a multi-threaded server using non-blocking I/O; TFramedTransport must be used with this server. TSimpleServer – a single-threaded server using standard blocking I/O; useful for testing. TThreadedServer – a multi-threaded server using a thread-per-connection model and standard blocking I/O. TThreadPoolServer – a multi-threaded server using a thread pool and standard blocking I/O. Some stated benefits of Thrift include: cross-language serialization with lower overhead than alternatives such as SOAP, due to use of a binary format; a lean and clean library, with no framework to code against and no XML configuration files; and language bindings that feel natural – for example, Java uses ArrayList<String> and C++ uses std::vector<std::string>.
The application-level wire format and the serialization-level wire format are cleanly separated and can be modified independently. The predefined serialization styles include HTTP-friendly and compact binary, and the format doubles as a cross-language file serialization. Versioning of the protocol is soft: Thrift does not require an explicit mechanism like major-version/minor-version, so loosely coupled teams can evolve their RPC calls. There are no build dependencies or non-standard software, and no mix of incompatible software licenses. Thrift can generate code for a number of languages. To create a Thrift service, one writes Thrift files that describe it, generates the code in the destination language, writes some code to start the server, and calls the service from the client. Thrift generates code from such description files; for instance, in Java, the PhoneType will be a simple enum inside the Phone class. See also: comparison of data serialization formats, Apache Avro, Abstract Syntax Notation One, Hessian, Protocol Buffers, External Data Representation, Internet Communications Engine, SDXF, GraalVM.
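A Thrift description file of the kind referred to above might look like the following sketch. The PhoneType enum and Phone struct names come from the text; the field layout and the service definition are hypothetical illustrations of the IDL syntax.

```thrift
// phone.thrift – a hypothetical service description

enum PhoneType {
  HOME,
  WORK,
  MOBILE,
  OTHER
}

struct Phone {
  1: i32 id,
  2: string number,
  3: PhoneType type
}

service PhoneService {
  Phone findById(1: i32 id),
  list<Phone> findAll()
}
```

One would then run the Thrift compiler (for example, `thrift --gen java phone.thrift`) to produce the per-language bindings for clients and servers.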
Xerox Corporation is an American global corporation that sells print and digital document products and services in more than 160 countries. Xerox is headquartered in Norwalk, Connecticut, though its largest population of employees is based around Rochester, New York, the area in which the company was founded. The company purchased Affiliated Computer Services for $6.4 billion in early 2010, and it is placed on the Fortune 500 list of large companies. On December 31, 2016, Xerox separated its business process service operations into a new publicly traded company, Conduent. Xerox focuses on its document technology and document outsourcing business and continues to trade on the NYSE. On January 31, 2018, Xerox announced that it would sell a controlling stake to Fujifilm, which has maintained a joint venture with Xerox in the Asia-Pacific region known as Fuji Xerox. Researchers at Xerox and its Palo Alto Research Center invented several important elements of personal computing, such as the desktop metaphor GUI, the computer mouse, and desktop computing.
These concepts were frowned upon by the board of directors, who ordered the Xerox engineers to share them with Apple technicians. The concepts were adopted by Apple and later Microsoft. With the help of these innovations, Apple and Microsoft came to dominate the personal computing revolution of the 1980s, whereas Xerox was not a major player. Xerox was founded in 1906 in Rochester as The Haloid Photographic Company, which manufactured photographic paper and equipment. In 1938, Chester Carlson, a physicist working independently, invented a process for printing images using an electrically charged photoconductor-coated metal plate and dry powder "toner". However, it would take more than 20 years of refinement before the first automated machine to make copies was commercialized, using a document feeder, scanning light, and a rotating drum. Joseph C. Wilson, credited as the "founder of Xerox", took over Haloid from his father; he saw the promise of Carlson's invention and, in 1946, signed an agreement to develop it as a commercial product.
Wilson remained as President/CEO of Xerox until 1967 and served as Chairman until his death in 1971. Looking for a term to differentiate its new system, Haloid coined the term xerography from two Greek roots meaning "dry writing". Haloid subsequently changed its name to Haloid Xerox in 1958 and to Xerox Corporation in 1961. Before releasing the 914, Xerox tested the market by introducing a developed version of the prototype hand-operated equipment known as the Flat-plate 1385. The 1385 was not a viable copier because of its slow speed of operation. As a consequence, it was sold as a platemaker for the Addressograph-Multigraph Multilith 1250 and related sheet-fed offset printing presses in the offset lithography market; it was little more than a high-quality, commercially available plate camera mounted as a horizontal rostrum camera, complete with photo-flood lighting and timer. The glass film/plate had been replaced with a selenium-coated aluminum plate, and clever electrics turned this into a reusable substitute for film.
A skilled user could quickly produce metal printing plates of a higher quality than any other method. Having started as a supplier to the offset lithography duplicating industry, Xerox now set its sights on capturing some of offset's market share. The 1385 was followed by the first automatic xerographic printer, the Copyflo, in 1955. The Copyflo was a large microfilm printer which could produce positive prints on roll paper from any type of microfilm negative. Following the Copyflo, the process was scaled down to produce the 1824 microfilm printer. At about half the size and weight, this still sizable machine printed onto hand-fed, cut-sheet paper, pulled through the process by one of two gripper bars. A scaled-down version of this gripper feed system was to become the basis for the 813 desktop copier. The company came to prominence in 1959 with the introduction of the Xerox 914, "the most successful single product of all time." The 914, the first plain-paper photocopier, was developed by John H. Dessauer.
The product was sold by an innovative ad campaign showing that monkeys could make copies at the touch of a button; such simplicity would become the foundation of future Xerox products and user interfaces. Revenues leaped to over $500 million by 1965. In the 1960s, Xerox held a dominant position in the photocopier market, and the company expanded, making millionaires of some long-suffering investors who had nursed the company through the slow research and development phase of the product. In 1960, a xerography research facility called the Wilson Center for Research and Technology was opened in Webster, New York. In 1961, the company changed its name to Xerox Corporation. Xerox common stock was listed on the New York Stock Exchange in 1961 and on the Chicago Stock Exchange in 1990. In 1963, Xerox introduced the Xerox 813, the first desktop plain-paper copier, realizing Carlson's vision of a copier that could fit on anyone's office desk. Ten years later, in 1973, a basic color copier, based on the 914, followed.
The 914 itself was sped up to become the 420 and 720. The 813 was developed into the 330 and 660 products, and also the 740 desktop microfiche printer. Xerox's first foray into duplicating, as distinct from copying, was with the Xerox 2400, introduced in 1966; the model number denoted the number of prints produced in an hour. Although not as fast as offset printing, this machine introduced the industry's first automatic document feeder, paper slitter and perforator, and collator.
Idempotence is the property of certain operations in mathematics and computer science whereby they can be applied multiple times without changing the result beyond the initial application. The concept of idempotence arises in a number of places in abstract algebra and functional programming. The term was introduced by Benjamin Peirce in the context of elements of algebras that remain invariant when raised to a positive integer power; it means "(the quality of having) the same power", from idem + potence. An element x of a magma (M, •) is said to be idempotent if x • x = x. If all elements are idempotent with respect to •, then • itself is called idempotent; the formula ∀x, x • x = x is called the idempotency law for •. The natural number 1 is an idempotent element with respect to multiplication, and so is 0, but no other natural number is. For the latter reason, multiplication of natural numbers is not an idempotent operation. More formally, in the monoid (ℕ, ×), the idempotent elements are just 0 and 1. In a magma, an identity element e or an absorbing element a, if it exists, is idempotent.
Indeed, e • e = e and a • a = a. In a group (G, •), the identity element e is the only idempotent element. Indeed, if x is an element of G such that x • x = x, then x • x = x • e, and x = e follows by multiplying on the left by the inverse element of x. Taking the intersection x ∩ y of two sets x and y is an idempotent operation, since x ∩ x always equals x; this means that the idempotency law ∀x, x ∩ x = x is true. Taking the union of two sets is likewise an idempotent operation. Formally, in the monoids (P(E), ∪) and (P(E), ∩) of the power set of the set E with set union ∪ and set intersection ∩, all elements are idempotent. In the monoids ({0, 1}, ∨) and ({0, 1}, ∧) of the Boolean domain with logical disjunction ∨ and logical conjunction ∧, all elements are idempotent. In a Boolean ring, multiplication is idempotent. In the monoid of the functions from a set E to a subset F of E with function composition ∘, the idempotent elements are the functions f: E → F such that f ∘ f = f, in other words such that for all x in E, f(f(x)) = f(x). For example: taking the absolute value abs of an integer x is an idempotent function for the following reason: abs(abs(x)) = abs(x) is true for each integer x.
This means that abs ∘ abs = abs holds; that is, abs is an idempotent element in the set of all such functions with respect to function composition. Therefore, abs satisfies the above definition of an idempotent function. Other examples include: the identity function is idempotent. If the set E has n elements, we can partition it into k chosen fixed points and n − k non-fixed points under f, and then C(n, k)·k^(n−k) is the number of different idempotent functions with exactly those k fixed points. Hence, taking into account all possible partitions, ∑_{k=0}^{n} C(n, k)·k^(n−k) is the total number of possible idempotent functions on the set. The integer sequence of the number of idempotent functions as given by the sum above for n = 0, 1, 2, 3, 4, 5, 6, 7, 8, … starts with 1, 1, 3, 10, 41, 196, 1057, 6322, 41393, …. Neither the property of being idempotent nor that of not being idempotent is preserved under function composition. As an example for the former, f(x) = x mod 3 and g(x) = max(x, 5) are both idempotent, but f ∘ g is not, although g ∘ f happens to be. As an example for the latter, the negation function ¬ on the Boolean domain is not idempotent, but ¬ ∘ ¬ is.
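The counting formula above is easy to verify mechanically. The sketch below (not from the article) enumerates all functions on a small finite set by brute force, checks f(f(x)) = f(x), and compares the count with the closed form ∑ C(n, k)·k^(n−k):

```python
# Verify the idempotent-function count two ways: brute-force enumeration
# of all n^n functions on {0, ..., n-1}, and the closed-form sum.
from itertools import product
from math import comb

def count_idempotent_bruteforce(n):
    """Count functions f: {0..n-1} -> {0..n-1} with f(f(x)) == f(x)."""
    count = 0
    for f in product(range(n), repeat=n):  # f represented as a tuple of images
        if all(f[f[x]] == f[x] for x in range(n)):
            count += 1
    return count

def count_idempotent_formula(n):
    """Closed form: choose k fixed points, map the other n-k points onto them."""
    return sum(comb(n, k) * k ** (n - k) for k in range(n + 1))

print([count_idempotent_formula(n) for n in range(6)])
# matches the sequence in the text: [1, 1, 3, 10, 41, 196]
```

Both counts agree because every idempotent function is the identity on its image, so it is determined by choosing its k fixed points and an assignment of the remaining n − k elements to those fixed points.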
Unary negation − of real numbers is not idempotent, but − ∘ − is. In computer science, the term idempotence may have a different meaning depending on the context in which it is applied. In imperative programming, a subroutine with side effects is idempotent if the system state remains the same after one or several calls; in other words, if the function from the system state space to itself associated with the subroutine is idempotent in the mathematical sense given in the definition. This is a useful property in many situations, as it means that an operation can be repeated or retried as often as necessary without causing unintended effects. With non-idempotent operations, the algorithm may have to keep track of whether the operation was performed or not. A function looking up a customer's name and address in a database is idempotent, since this will not cause the database to change. Changing a customer's address is also idempotent, because the final address will be the same no matter how many times it is submitted.
However, placing an order for a cart for the customer is not idempotent, since running the call several times typically places several orders.
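The customer-database distinction above can be sketched with an in-memory dictionary standing in for the database (the function and field names are illustrative):

```python
# Idempotent vs. non-idempotent operations on a toy "database".
# set_address can be retried safely; place_order cannot.

db = {"address": "old address", "orders": []}

def set_address(db, address):
    """Idempotent: repeating the call leaves the final state unchanged."""
    db["address"] = address

def place_order(db, item):
    """Not idempotent: every retry appends another order."""
    db["orders"].append(item)

for _ in range(3):               # simulate a client retrying three times
    set_address(db, "42 Main St")
    place_order(db, "widget")

print(db["address"])             # "42 Main St" – same after any number of calls
print(len(db["orders"]))         # 3 – each retry created a new order
```

This is why retry logic, for example in HTTP clients, can blindly repeat idempotent requests but must deduplicate non-idempotent ones.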
Sun Microsystems, Inc. was an American company that sold computers, computer components, and information technology services, and created the Java programming language, the Solaris operating system, ZFS, the Network File System, and SPARC. Sun contributed to the evolution of several key computing technologies, among them Unix, RISC processors, thin client computing, and virtualized computing. Sun was founded on February 24, 1982. At its height, the Sun headquarters were in Santa Clara, California, on the former west campus of the Agnews Developmental Center. On April 20, 2009, it was announced that Oracle Corporation would acquire Sun; the deal was completed on January 27, 2010. Sun products included computer servers and workstations built on its own RISC-based SPARC processor architecture, as well as on x86-based AMD Opteron and Intel Xeon processors. Sun developed its own storage systems and a suite of software products, including the Solaris operating system, developer tools, Web infrastructure software, and identity management applications. Other technologies included the Java platform and NFS.
In general, Sun was a proponent of open systems, particularly Unix. It was a major contributor to open-source software, as evidenced by its $1 billion purchase, in 2008, of MySQL, an open-source relational database management system. At various times, Sun had manufacturing facilities in several locations worldwide, including Newark, California. However, by the time the company was acquired by Oracle, it had outsourced most manufacturing responsibilities. The initial design for what became Sun's first Unix workstation, the Sun-1, was conceived by Andy Bechtolsheim when he was a graduate student at Stanford University in Palo Alto, California. Bechtolsheim designed the SUN workstation for the Stanford University Network communications project as a personal CAD workstation; it was designed around the Motorola 68000 processor with an advanced memory management unit to support the Unix operating system with virtual memory support. He built the first ones from spare parts obtained from Stanford's Department of Computer Science and Silicon Valley supply houses.
On February 24, 1982, Vinod Khosla, Andy Bechtolsheim, and Scott McNealy, all Stanford graduate students, founded Sun Microsystems. Bill Joy of Berkeley, a primary developer of the Berkeley Software Distribution, joined soon after and is counted as one of the original founders. The Sun name is derived from the initials of the Stanford University Network. Sun was profitable from its first quarter in July 1982. By 1983, Sun was known for producing 68k-based systems with high-quality graphics that were the only computers other than DEC's VAX to run 4.2BSD. It licensed the computer design to other manufacturers, which used it to build Multibus-based systems running Unix from UniSoft. Sun's initial public offering was in 1986 under the stock symbol SUNW, for Sun Workstations; the symbol was changed in 2007 to JAVA. Sun's logo, which features four interleaved copies of the word sun in the form of a rotationally symmetric ambigram, was designed by professor Vaughan Pratt of Stanford. The initial version of the logo was orange and had the sides oriented horizontally and vertically, but it was subsequently rotated to stand on one corner and re-colored purple, and later blue.
In the dot-com bubble, Sun began making much more money, and its shares rose dramatically. It began spending much more, hiring workers and building itself out. Some of this was because of genuine demand, but much was from web start-up companies anticipating business that would never happen. In 2000, the bubble burst. Sales in Sun's important hardware division went into free-fall as customers closed shop and auctioned off high-end servers. Several quarters of steep losses led to executive departures, rounds of layoffs, and other cost cutting. In December 2001, the stock fell to the 1998, pre-bubble level of about $100, but it kept falling, faster than many other tech companies. A year later, it had dipped below $10 but bounced back to $20. In mid-2004, Sun closed their Newark, California, factory and consolidated all manufacturing to Hillsboro, Oregon. In 2006, the rest of the Newark campus was put on the market. In 2004, Sun canceled two major processor projects which emphasized high instruction-level parallelism and operating frequency.
Instead, the company chose to concentrate on processors optimized for multi-threading and multiprocessing, such as the UltraSPARC T1 processor. The company also announced a collaboration with Fujitsu to use the Japanese company's processor chips in mid-range and high-end Sun servers; these servers were announced on April 17, 2007, as the M-Series, part of the SPARC Enterprise series. In February 2005, Sun announced the Sun Grid, a grid computing deployment on which it offered utility computing services priced at US$1 per CPU/hour for processing and US$1 per GB/month for storage. This offering built upon an existing 3,000-CPU server farm used for internal R&D for over 10 years, which Sun marketed as being able to achieve 97% utilization. In August 2005, the first commercial use of this grid was announced for financial risk simulations, launched as Sun's first software-as-a-service product. In January 2005, Sun reported a net profit of $19 million for its fiscal 2005 second quarter, the first quarterly profit in three years.
This was followed by a net loss of $9 million on a GAAP basis for the third quarter of fiscal 2005, as reported on April 14, 2005. In January 2007, Sun reported a net GAAP profit of $126 million.