Storage area network
A storage area network (SAN) is a network which provides access to consolidated, block-level data storage. A SAN typically has its own network of devices that are generally not accessible through the local area network by other devices. The cost and complexity of SANs dropped in the early 2000s to levels allowing wider adoption across both enterprise and small to medium-sized business environments. A SAN does not provide file abstraction, only block-level operations; however, file systems built on top of SANs do provide file-level access. Essentially, a SAN consolidates storage islands together using a high-speed network. Operating systems maintain their own file systems on their own dedicated, non-shared LUNs. If multiple systems were simply to attempt to share a LUN, they would interfere with each other and quickly corrupt the data; any planned sharing of data on different computers within a LUN requires software, such as SAN file systems or clustered computing. Despite such issues, SANs help to increase capacity utilization. Common uses of a SAN include provision of transactionally accessed data that requires high-speed block-level access to the hard drives, such as email servers and databases.
Network-attached storage (NAS) was designed independently of SAN systems. Both NAS and SAN have the potential to reduce the amount of excess storage that must be purchased and provisioned as spare space. In a DAS-only architecture, each computer must be provisioned with enough storage to ensure that it does not run out of space at an untimely moment, and the spare storage on one computer cannot be utilized by another. In a NAS, the storage devices are directly connected to a file server that makes the storage available at a file level to the other computers; the client computers use the LAN to communicate with the file server, potentially creating bandwidth as well as performance bottlenecks. In a SAN, the storage is available at a lower, block level. SAN protocols include Fibre Channel, iSCSI and ATA over Ethernet. To understand their differences, a comparison of SAN, DAS and NAS architectures may be helpful. Sharing storage usually simplifies storage administration and adds flexibility, since cables and storage devices do not have to be physically moved to shift storage from one server to another.
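To make the block-level distinction concrete, the following minimal sketch reads a raw logical block from a block device, which is how an operating system sees a SAN LUN. The device path and block size are illustrative assumptions, not tied to any particular SAN product; on a real host, a LUN presented over Fibre Channel or iSCSI simply appears as another local block device.

```python
# A minimal sketch of block-level access, the kind of I/O a SAN exposes.
# The device path below is hypothetical; a SAN LUN typically shows up on a
# host as an ordinary block device.
import os

DEVICE = "/dev/sdb"      # hypothetical LUN presented by a SAN
BLOCK_SIZE = 512         # classic logical block size

def read_block(lba: int) -> bytes:
    """Read one logical block by its logical block address (LBA)."""
    fd = os.open(DEVICE, os.O_RDONLY)
    try:
        os.lseek(fd, lba * BLOCK_SIZE, os.SEEK_SET)
        return os.read(fd, BLOCK_SIZE)
    finally:
        os.close(fd)

# There is no notion of files or directories here: the initiator addresses
# raw blocks, and any file semantics come from a file system layered on top.
if __name__ == "__main__":
    print(read_block(0)[:16].hex())   # first 16 bytes of block 0
```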
Other benefits include the ability to allow servers to boot from the SAN itself. This allows for quick and easy replacement of faulty servers, since the SAN can be reconfigured so that a replacement server can use the LUN of the faulty server. SANs also tend to enable more effective disaster recovery processes. A SAN could span a distant location containing a secondary storage array, enabling storage replication implemented by disk array controllers, by server software, or by specialized SAN devices. Since IP WANs are often the least costly method of long-distance transport, protocols such as FCIP and iSCSI have been developed to allow SAN extension over IP networks; the traditional physical SCSI layer could support only a few meters of distance, not nearly enough to ensure business continuance in a disaster.
Small Computer System Interface (SCSI) is a set of standards for physically connecting and transferring data between computers and peripheral devices. The SCSI standards define commands, protocols, and electrical and optical interfaces. SCSI is derived from SASI, the Shugart Associates System Interface, developed circa 1978 and publicly disclosed in 1981. A SASI controller provided a bridge between a hard disk drive's low-level interface and a host computer, which needed to read blocks of data. SASI controller boards were typically the size of a hard disk drive and were usually physically mounted to the drive's chassis. SASI, which was used in mini- and early microcomputers, defined the interface as using a 50-pin flat ribbon connector, which was adopted as the first-generation SCSI connector. SASI is a fully compliant subset of SCSI-1, so that many, if not all, SASI controllers were SCSI-1 compatible. Larry Boucher is considered to be the father of SASI and SCSI due to his pioneering work first at Shugart Associates and then at Adaptec. A number of companies, such as NCR Corporation and Optimem, were early supporters of the SCSI standard.
The NCR facility in Wichita, Kansas is widely thought to have developed the industry's first SCSI chip. The "small" part of the name is historical; since the mid-1990s, SCSI has been available on even the largest of computer systems. Since its standardization in 1986, SCSI has been used in the Amiga, Apple Macintosh and Sun Microsystems computer lines. Apple started using Parallel ATA for its low-end machines with the Macintosh Quadra 630 in 1994, and dropped on-board SCSI completely with the Power Mac G3 in 1999, while still offering a PCI controller card as an option up to the Power Macintosh G4 models. Sun switched its lower-end range to Serial ATA. Commodore included a SCSI interface on the Amiga 3000/3000T systems, and it was an add-on to the previous Amiga 500/2000 models; starting with the Amiga 600/1200/4000 systems, Commodore switched to the IDE interface. Atari included a SCSI interface as standard in its Atari MEGA STE, Atari TT and Atari Falcon computer models. SCSI has never been popular in the low-priced IBM PC world, owing to the lower cost and adequate performance of the ATA hard disk standard; however, SCSI drives and even SCSI RAIDs became common in PC workstations for video or audio production.
Although much of the SCSI documentation talks about the parallel interface, serial interfaces have a number of advantages over parallel SCSI, including higher data rates, simplified cabling, longer reach, and improved fault isolation. iSCSI preserves the basic SCSI paradigm, especially the command set, almost unchanged, through embedding of SCSI-3 over TCP/IP. SCSI is popular on high-performance workstations and storage appliances, and almost all RAID subsystems on servers have used some kind of SCSI hard disk drives for decades. Moreover, SAS offers compatibility with SATA devices, creating a much broader range of options for RAID subsystems together with the existence of nearline SAS drives. SCSI is available in a variety of interfaces; the first was parallel SCSI (SPI), which uses a parallel bus design. Since 2005, SPI has gradually been replaced by Serial Attached SCSI (SAS). iSCSI, for example, uses TCP/IP as a transport mechanism, which is most often carried over Gigabit Ethernet or faster network links.
With the advent of SAS and SATA drives, provision for parallel SCSI on motherboards was discontinued; before that, the SCSI Parallel Interface had been the only interface using the SCSI protocol.
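Because every SCSI transport, from the parallel bus to SAS, FCP and iSCSI, ultimately carries the same command set, a small example of a command descriptor block (CDB) illustrates what is actually transported. The sketch below builds the standard 6-byte INQUIRY CDB; the helper name and default allocation length are illustrative choices, not part of any library API.

```python
# A minimal sketch of building a SCSI INQUIRY command descriptor block (CDB).
# SCSI commands such as this are what transports like SPI, SAS, FCP and iSCSI
# ultimately carry; this shows only the 6-byte CDB itself, not the transport framing.
import struct

INQUIRY = 0x12  # standard SCSI INQUIRY opcode

def inquiry_cdb(allocation_length: int = 96, evpd: bool = False, page_code: int = 0) -> bytes:
    """Return a 6-byte INQUIRY CDB asking the target to describe itself."""
    return struct.pack(
        ">BBBHB",
        INQUIRY,
        0x01 if evpd else 0x00,   # EVPD bit selects a vital product data page
        page_code,
        allocation_length,        # how many bytes of response the initiator accepts
        0x00,                     # control byte
    )

print(inquiry_cdb().hex())  # -> '120000006000'
```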
Fibre Channel over Ethernet
Fibre Channel over Ethernet (FCoE) is a computer network technology that encapsulates Fibre Channel frames over Ethernet networks. This allows Fibre Channel to use 10 Gigabit Ethernet networks while preserving the Fibre Channel protocol. The specification was part of the International Committee for Information Technology Standards T11 FC-BB-5 standard published in 2009. FCoE transports Fibre Channel directly over Ethernet while being independent of the Ethernet forwarding scheme. The FCoE protocol specification replaces the FC0 and FC1 layers of the Fibre Channel stack with Ethernet. By retaining the native Fibre Channel constructs, FCoE was meant to integrate with existing Fibre Channel networks. Data centers used Ethernet for TCP/IP networks and Fibre Channel for storage area networks; with FCoE, Fibre Channel becomes another network protocol running on Ethernet. FCoE operates directly above Ethernet in the network protocol stack, in contrast to iSCSI which runs on top of TCP and IP; as a consequence, FCoE is not routable at the IP layer. Since classical Ethernet had no priority-based flow control, unlike Fibre Channel, FCoE required enhancements to the Ethernet standard to support a priority-based flow control mechanism.
The IEEE standards body added these priorities in the Data Center Bridging Task Group. Delivering the capabilities of Fibre Channel over Ethernet networks required three primary extensions: encapsulation of native Fibre Channel frames into Ethernet frames; extensions to the Ethernet protocol itself to enable an Ethernet fabric in which frames are not routinely lost during periods of congestion; and a mapping between Fibre Channel N_Port IDs and Ethernet MAC addresses. Computers can connect to FCoE with converged network adapters (CNAs), which contain both Fibre Channel host bus adapter and Ethernet network interface controller functionality on the same physical card. CNAs have one or more physical Ethernet ports. The main application of FCoE is in data center storage area networks; with FCoE, network and storage data traffic can be consolidated using a single network. In the FCoE frame format, a single 4-bit version field satisfies the IEEE sub-type requirements. The 802.1Q tag is optional but may be necessary in a given implementation, and the SOF and EOF are encoded as specified in RFC 3643.
Reserved bits are present to guarantee that the FCoE frame meets the minimum length requirement of Ethernet. Inside the encapsulated Fibre Channel frame, the header is retained so as to allow connecting to a storage network by passing on the Fibre Channel frame directly after de-encapsulation. The FCoE Initialization Protocol (FIP) is an integral part of FCoE; its main goal is to discover and initialize FCoE-capable entities connected to an Ethernet cloud. FIP uses a dedicated Ethertype of 0x8914. In October 2003, Azul Technology developed an early version and applied for a patent. In April 2007, the FCoE standardization activity started. In October 2007, the first public end-to-end FCoE demonstration occurred at Storage Networking World, including adapters from QLogic and switches from Nuova Systems. In April 2008, an early implementer, Nuova Systems, a subsidiary of Cisco Systems, announced a switch.
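As a rough illustration of the encapsulation described above, the sketch below wraps an already-built Fibre Channel frame in an Ethernet II header carrying the FCoE Ethertype (0x8906); FIP, as noted, uses the separate Ethertype 0x8914. This is a simplified, illustrative layout rather than the exact FC-BB-5 header bit layout, and the MAC addresses shown are hypothetical.

```python
# A simplified sketch of encapsulating a Fibre Channel frame in an Ethernet
# frame, in the spirit of FCoE. Illustrative only: it does not reproduce the
# exact FC-BB-5 header fields (version, reserved bits, SOF/EOF codes).
import struct

FCOE_ETHERTYPE = 0x8906   # registered Ethertype for FCoE data frames
FIP_ETHERTYPE = 0x8914    # registered Ethertype for the FCoE Initialization Protocol

def encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap an already-built Fibre Channel frame in an Ethernet II header."""
    assert len(dst_mac) == 6 and len(src_mac) == 6
    header = dst_mac + src_mac + struct.pack(">H", FCOE_ETHERTYPE)
    return header + fc_frame      # the FCS is normally appended by the NIC

# Hypothetical addresses: in FCoE the MAC addresses are derived from the
# mapping between Fibre Channel N_Port IDs and Ethernet MAC addresses.
frame = encapsulate(bytes.fromhex("0efc00010203"),
                    bytes.fromhex("0efc000a0b0c"),
                    b"\x00" * 36)  # placeholder Fibre Channel frame body
print(len(frame), "bytes")
```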
Sun Fire is a series of server computers introduced in 2001 by Sun Microsystems. The Sun Fire branding coincided with the introduction of the UltraSPARC III processor. In 2003, Sun broadened the Sun Fire brand, introducing Sun Fire servers using the Intel Xeon processor. In 2004, these early Intel Xeon models were superseded by models powered by AMD Opteron processors, and in the same year Sun introduced Sun Fire servers powered by the UltraSPARC IV dual-core processor. In 2007, Sun again introduced Intel Xeon Sun Fire servers. SPARC-based Sun Fire systems were produced until 2010, while x86-64 based machines were marketed until mid-2012, when Oracle Corporation ceased to use the Sun Fire brand for new server models. UltraSPARC-based Sun Fire models are licensed to run the Solaris operating system versions 8, 9, and 10; although not officially supported, some Linux versions are available from third parties. The z suffix was used previously to differentiate the V880z Visualization Server variant of the V880 server.
Sun's first-generation blade server platform was the Sun Fire B1600 chassis; later Sun blade systems were sold under the Sun Blade brand. In 2007, Sun and Fujitsu Siemens introduced the common SPARC Enterprise brand for server products; the first SPARC Enterprise models were the Fujitsu-developed successors to the midrange and high-end Sun Fire E-series. Later T-series servers have been badged SPARC Enterprise rather than Sun Fire. Some servers were produced in two versions, the original version and a RoHS-compliant version. As of 2012, the x86 server range continued under the Sun Server or Oracle Server names.
Sun-1 was the first generation of UNIX computer workstations and servers produced by Sun Microsystems, launched in May 1982. These were based on a CPU board designed by Andy Bechtolsheim while he was a student at Stanford University. The Sun-1 systems ran SunOS 0.9, a port of UniSoft's UniPlus V7 port of Seventh Edition UNIX to the Motorola 68000 microprocessor, with no window system. Early Sun-1 workstations and servers used the original Sun logo, a series of red U's laid out in a square. The first Sun-1 workstation was sold to Solo Systems in May 1982, and the Sun-1/100 was used in the original Lucasfilm EditDroid non-linear editing system. The Sun-1 workstation was based on the Stanford University SUN workstation designed by Andy Bechtolsheim, a graduate student and co-founder of Sun Microsystems. At the heart of this design was the Multibus CPU card; the cards used in the Sun-1 workstation were a second-generation design with a private memory bus allowing memory to be expanded to 2 MB without performance degradation.
The Sun 68000 board introduced in 1982 was a powerful single-board computer built in the 6.75-inch-deep Multibus form factor. By using the Motorola 68000 processor tightly coupled with the Sun-1 MMU, the Sun 68000 CPU board was able to support a multi-tasking operating system such as UNIX. It included an advanced Sun-designed multi-process two-level memory management unit with facilities for memory protection and code sharing. The Sun-1 MMU was necessary because the Motorola 68451 MMU did not always work correctly with the 68000. The CPU board included 256 KB of memory, which could be replaced or augmented with two additional memory cards for a total of 2 MB. Although the memory cards used the Multibus form factor, they used the Multibus interface only for power; data transfers went over a private memory bus, which allowed for simultaneous memory and input/output transfers and for full-performance, zero-wait-state operation of the memory. When installing the first 1 MB expansion board, either the 256 KB of memory on the CPU board or the first 256 KB on the expansion board had to be disabled.
On-board I/O included serial port UARTs and a 16-bit parallel port. The serial ports were implemented with an Intel 8274 or NEC D7201C UART. Serial port A was wired as a Data Communications Equipment (DCE) port and had full modem control; it was the console port if no graphical display was installed in the system. Serial port B was wired as a Data Terminal Equipment (DTE) port and had no modem control. Both serial ports could be used as terminal ports, and quite often were, allowing three people to use one workstation, although two of them did not have graphical displays. The 16-bit parallel port was a special-purpose port for connecting an 8-bit parallel keyboard; it was never used as a general-purpose parallel printer port. The CPU board included a fully compatible Multibus interface, an asynchronous bus that accommodated devices with various transfer rates while maintaining maximum throughput.
Fibre Channel, or FC, is a high-speed network technology primarily used to connect computer data storage to servers. Fibre Channel is mainly used in storage area networks in commercial data centers. Fibre Channel networks form a switched fabric because the switches in a network operate in unison as one big switch. Fibre Channel typically runs on optical fiber cables within and between data centers. Most block storage runs over Fibre Channel fabrics, which support many upper-level protocols. Fibre Channel Protocol (FCP) is a protocol that predominantly transports SCSI commands over Fibre Channel networks. Mainframe computers run the FICON command set over Fibre Channel because of its high reliability, and Fibre Channel can also be used for flash memory transported over the NVMe interface protocol. The British English spelling "fibre" was adopted to promote the fiber-optic aspects of the technology and to make the name unique. Fibre Channel started in 1988, with ANSI standard approval in 1994, to merge the benefits of multiple physical-layer implementations including SCSI, HIPPI and ESCON.
Fibre Channel was designed as an interface to overcome the limitations of SCSI, and was developed with leading-edge multi-mode fiber technologies that overcame the limitations of the ESCON protocol. Initially, the standard ratified lower-speed Fibre Channel versions at 132.8125 Mbit/s and 265.625 Mbit/s, but Fibre Channel saw wide adoption at 1 Gbit/s, and its success grew with each successive speed. Fibre Channel has doubled in speed every few years since 1996 and has seen active development since its inception, with numerous speed improvements on a variety of underlying transport media. There are three major Fibre Channel topologies, describing how a number of ports are connected together. A port in Fibre Channel terminology is any entity that actively communicates over the network, not necessarily a hardware port; this port is usually implemented in a device such as disk storage or a host bus adapter on a server. In the point-to-point topology, two devices are connected directly to each other; this is the simplest topology, with limited connectivity.
In the arbitrated loop design, all devices are in a loop or ring. Adding or removing a device from the loop causes all activity on the loop to be interrupted, and the failure of one device causes a break in the ring. Fibre Channel hubs exist to connect multiple devices together and may bypass failed ports. A loop may also be made by cabling each port to the next in a ring; a minimal loop containing only two ports, while appearing to be similar to point-to-point, differs considerably in terms of the protocol. Only one pair of ports can communicate concurrently on a loop, and Arbitrated Loop has been rarely used after 2010. In the switched fabric design, all devices are connected to Fibre Channel switches. Advantages of this topology over point-to-point or Arbitrated Loop include that the fabric can scale to tens of thousands of ports.
Illumos is a free and open-source Unix operating system. It derives from OpenSolaris, which in turn derives from SVR4 UNIX. illumos comprises a kernel, device drivers, system libraries, and utility software for system administration, and this core is now the base for many different open-source OpenSolaris-derived distributions. The maintainers write illumos in lowercase, since some computer fonts do not clearly distinguish a lowercase L from an uppercase i; the name is derived from the Latin illuminare, meaning to enlighten. The original plan explicitly stated that illumos would not be a distribution or a fork. However, after Oracle announced it was discontinuing OpenSolaris, plans were made to fork the final version of the Solaris ON kernel, allowing illumos to evolve into a kernel of its own. As of 2010, efforts focused on open replacements for closed components such as libc and the NFS lock manager. The illumos Foundation has been incorporated in the State of California as a 501(c)(6) trade association, with founding board members Jason Hoffman, Evan Powell, and Garrett D'Amore.
As of August 2012, the foundation was in the process of formalizing its by-laws. At OpenStorage Summit 2010, the new logo for illumos was revealed, with official type and branding to follow. illumos was originally dependent on OpenSolaris OS/Net, but a fork was made after Oracle silently decided to close the development of Solaris. Notable technologies include: ZFS, a combined file system and logical volume manager providing high data integrity for very large storage capacities; Solaris Containers, a low-overhead implementation of operating-system-level virtualization technology for x86; DTrace, a comprehensive dynamic tracing framework for troubleshooting kernel and application problems on production systems in real time; and Kernel-based Virtual Machine (KVM), a virtualization infrastructure that supports native virtualization on processors with hardware virtualization extensions. Distributions based on illumos include: DilOS, with Debian package manager and virtualization support, available for x86-64 and SPARC; Dyson, derived from Debian but using the illumos libc and SMF init system; and napp-it, a ready-to-use ZFS storage server with a free web GUI that includes all functions for a sophisticated NAS or SAN appliance.
Other distributions are: NexentaStor, a distribution optimized for virtualization, storage area networks, and network-attached storage; OmniOS, a self-hosting, minimalist illumos-based release suitable for production deployment; OpenIndiana, a distribution that is a continuation and fork in the spirit of the OpenSolaris operating system; OpenSXCE, a distribution for developers and system administrators on IA-32/x86-64 and SPARC platforms; SmartOS, a distribution for cloud computing with Kernel-based Virtual Machine integration; Tribblix, a retro-style distribution with modern components, available for x86-64 and SPARC; XStreamOS, a distribution for infrastructure and web development; and v9os, a server-only, IPS-based minimal SPARC distribution.
Object storage manages data as objects; each object typically includes the data itself, a variable amount of metadata, and a globally unique identifier. Object storage can be implemented at multiple levels, including the device level, the system level, and the interface level. Object storage systems allow retention of massive amounts of unstructured data; object storage is used for purposes such as storing photos on Facebook, songs on Spotify, or files in online collaboration services such as Dropbox. Object storage was proposed at Garth Gibson's Carnegie Mellon University lab as a research project in 1996, and in 1995 a Belgian company, FilePool, had been established to build the basis for archiving functions. Another key concept was abstracting the writes and reads of data to more flexible data containers. Fine-grained access control through object storage architecture was further described by one of the NASD team, Howard Gobioff, who was also one of the inventors of the Google File System. Other related work includes the Coda filesystem project at Carnegie Mellon, which started in 1987, and the OceanStore project at UC Berkeley, which started in 1999.
Content-addressable storage was developed at FilePool, which was acquired by EMC Corporation in 2001. From 1999 to 2013, at least $300 million of venture financing was related to object storage, including vendors like Amplidata, Cleversafe, Cloudian and Scality; an article illustrating the products' timeline was published in July 2016. One of the design principles of object storage is to abstract some of the lower layers of storage away from the administrators and applications. Thus, data is exposed and managed as objects instead of files or blocks, and objects contain additional descriptive properties which can be used for better indexing or management. Administrators do not have to perform lower-level storage functions, like constructing and managing logical volumes to utilize disk capacity or setting RAID levels to deal with disk failure. Object storage also allows the addressing and identification of objects by more than just file name: object storage adds a unique identifier within a bucket, or across the entire system, to support much larger namespaces.
At the base level, this includes create, read, update and delete (CRUD) functions for basic read, write and delete operations. Some object storage implementations go further, supporting additional functionality like object versioning, object replication, and movement of objects between different tiers and types of storage. Most API implementations are REST-based, allowing the use of many standard HTTP calls. (T10, the technical committee responsible for all SCSI standards, also standardized an object-based storage device command set.) Some distributed file systems use an object-based architecture, where file metadata is stored in metadata servers and file data in object storage servers; file system client software interacts with the distinct servers and abstracts them to present a full file system to users and applications. Examples include IBM Spectrum Scale, Dell EMC Elastic Cloud Storage, and XtreemFS. Some early incarnations of object storage were used for archiving, as implementations were optimized for data services like immutability, not performance. EMC Centera and Hitachi HCP are two commonly cited object storage products for archiving; another example is the Quantum Lattus Object Storage Platform.
The vast majority of cloud storage available in the market leverages an object storage architecture.
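The object abstraction described above, data plus metadata plus a globally unique identifier, addressed by key within a bucket rather than by a path, can be modeled in a few lines. The sketch below is an illustrative in-memory model of that abstraction, not any particular vendor's API.

```python
# A minimal in-memory model of an object store: each object carries its data,
# a variable amount of metadata, and a globally unique identifier, and is
# addressed by key within a bucket. Names here are illustrative only.
import uuid

class ObjectStore:
    def __init__(self):
        self._buckets = {}

    def put(self, bucket: str, key: str, data: bytes, **metadata) -> str:
        """Create or replace an object; returns its unique identifier."""
        obj = {"id": str(uuid.uuid4()), "data": data, "metadata": metadata}
        self._buckets.setdefault(bucket, {})[key] = obj
        return obj["id"]

    def get(self, bucket: str, key: str) -> bytes:
        return self._buckets[bucket][key]["data"]

    def delete(self, bucket: str, key: str) -> None:
        del self._buckets[bucket][key]

store = ObjectStore()
oid = store.put("photos", "cat.jpg", b"...", content_type="image/jpeg", owner="alice")
print(oid, store.get("photos", "cat.jpg"))
```

A real object store exposes the same create/read/delete operations over a REST-style HTTP API rather than as local method calls, which is why the paragraph above notes that most API implementations allow standard HTTP verbs.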
The Sun-2 series of UNIX workstations and servers was launched by Sun Microsystems in November 1983. As the name suggests, the Sun-2 represented the second generation of Sun systems, superseding the original Sun-1 series; Sun-2 machines shipped with SunOS 1.0, based on 4.1BSD. Early Sun-2 models were based on the Intel Multibus architecture, with later models using VMEbus. Sun-2 systems were supported in SunOS until version 4.0.3. A port to support Multibus Sun-2 systems in NetBSD was begun in January 2001 from the Sun-3 support in the NetBSD 1.5 release; code supporting the Sun-2 began to be merged into the NetBSD tree in April 2001, and sun2 is considered a tier 2 support platform as of NetBSD 7.0.1. Models are listed in approximately chronological order. A desktop disk and tape sub-system was introduced for the Sun-2/50 desktop workstation; it could hold a 5¼-inch disk drive and a 5¼-inch tape drive, and used DD-50 connectors (a Sun-specific design) for its SCSI cables. The unit was often referred to as a Sun Shoebox. Sun-1 systems upgraded with Sun-2 Multibus CPU boards were sometimes referred to as the 2/100U or 2/150U. A typical configuration of a monochrome 2/120 with 4 MB of memory, a 71 MB SCSI disk and a 20 MB ¼-inch SCSI tape drive cost $29,300.
A color 2/160 with 8 MB of memory, two 71 MB SCSI disks and a 60 MB ¼-inch SCSI tape drive cost $48,800. A Sun 2/170 server could be configured with 4 MB of memory, no display, two Fujitsu Eagle 380 MB disk drives, one Xylogics 450 SMD disk controller, and a 6250 bpi ½-inch tape drive. Sun 2/120 and 2/170 systems were based on the Multibus architecture. The CPU board was based on a 10 MHz 68010 processor with a proprietary Sun memory management unit and could address 8 MB of physical memory; the top 1 MB of the physical memory address space was reserved for the monochrome frame buffer. The Multibus CPU board supported the Sun-1 parallel keyboard and mouse as well as two serial ports. The Sun 2/50, Sun 2/130 and Sun 2/160 used quad-depth, triple-height Eurocard VMEbus CPU boards. The VMEbus CPU board was based on the same design as the Multibus CPU board but included 2 MB or 4 MB of memory and the Sun-2 monochrome frame buffer. Sun provided 1 MB Multibus memory boards and 1 MB and 4 MB VMEbus memory boards; companies such as Helios Systems made 4 MB memory boards that would work in Sun systems.
A common frame buffer was the Sun-2 Prime Monochrome Video board, which provided a 1152×900 monochrome display with TTL or ECL video signals, and keyboard and mouse ports. It normally occupied the top 1 MB of the physical address space. There was also a Sun-2 Color Video board available that provided a 1152×900 8-bit color display; this board occupied the top 4 MB of address space. 42 MB MFM disks were used for storage.
Solaris (operating system)
Solaris is a Unix operating system originally developed by Sun Microsystems. It superseded their earlier SunOS in 1993. Oracle Solaris, so named as of 2010, has been owned by Oracle Corporation since the Sun acquisition by Oracle in January 2010. Solaris is known for its scalability, especially on SPARC systems; it supports SPARC-based and x86-based workstations and servers from Oracle and other vendors, with efforts underway to port it to additional platforms. Solaris is registered as compliant with the Single UNIX Specification. Solaris was developed as proprietary software. In June 2005, Sun Microsystems released most of the codebase under the CDDL license as OpenSolaris; with OpenSolaris, Sun wanted to build a developer and user community around the software. After the acquisition of Sun Microsystems in January 2010, Oracle decided to discontinue the OpenSolaris distribution, and in August 2010 Oracle discontinued providing public updates to the source code of the Solaris kernel, effectively turning Solaris 11 back into a closed-source proprietary operating system.
Following that, in 2011 the Solaris 11 kernel source code leaked to BitTorrent. Through the Oracle Technology Network, industry partners can still gain access to the in-development Solaris source code, and source code for the open-source components of Solaris 11 is available for download from Oracle. In 1987, AT&T Corporation and Sun announced that they were collaborating on a project to merge the most popular Unix variants on the market at that time: BSD, System V, and Xenix. This became Unix System V Release 4 (SVR4). On September 4, 1991, Sun announced that it would replace its existing BSD-derived Unix, SunOS 4, with one based on SVR4. This was identified internally as SunOS 5, but a new marketing name was introduced at the same time: Solaris 2. The justification for this new overbrand was that it encompassed not only SunOS, but also the OpenWindows graphical user interface and Open Network Computing functionality. Although SunOS 4.1.x micro releases were retroactively named Solaris 1 by Sun, for releases based on SunOS 5 the SunOS minor version is included in the Solaris release number.
For example, Solaris 2.4 incorporates SunOS 5.4. After Solaris 2.6, the "2." was dropped from the name, so Solaris 7 incorporates SunOS 5.7. Although SunSoft stated in its initial Solaris 2 press release its intent to support both SPARC and x86 systems, the first two Solaris 2 releases, 2.0 and 2.1, were SPARC-only. An x86 version of Solaris 2.1 was released in June 1993, about six months after the SPARC version, as a desktop and uniprocessor workstation operating system; it included the Wabi emulator to support Windows applications. At the time, Sun also offered the Interactive Unix system that it had acquired from Interactive Systems Corporation. In 1994, Sun released Solaris 2.4, supporting both SPARC and x86 systems from a unified source code base. Solaris uses a common code base for the platforms it supports: SPARC and x86.
The Scalable Processor Architecture (SPARC) is a reduced instruction set computing (RISC) instruction set architecture originally developed by Sun Microsystems. Since its establishment in 1989, SPARC International, Inc. has been responsible for licensing and promoting the SPARC architecture, managing SPARC trademarks, and providing conformance testing; as a result, SPARC is fully open and non-proprietary. Later SPARC processors were used in SMP and CC-NUMA servers produced by Sun and Fujitsu, among others, and were designed for 64-bit operation. As of July 2016, the latest commercial high-end SPARC processors are Fujitsu's SPARC64 X+ and SPARC64 XIfx. The SPARC architecture was heavily influenced by the earlier RISC designs, including the RISC I and II from the University of California, Berkeley, and the IBM 801. These original RISC designs were minimalist, including as few features or op-codes as possible, which made them similar to the MIPS architecture in many ways, including the lack of instructions such as multiply or divide.
Another feature of SPARC influenced by this early RISC movement is the delay slot. A SPARC processor usually contains as many as 160 general-purpose registers; according to the Oracle SPARC Architecture 2015 specification, an implementation may contain from 72 to 640 general-purpose 64-bit registers. At any point, only 32 of them are visible to software: 8 are a set of global registers, and the other 24 form the current register window, which slides across the register file at function call and return. Each window has 8 local registers and shares 8 registers with each of the adjacent windows; the shared registers are used for passing function parameters and returning values, while the local registers are used for retaining local values across function calls. Other architectures that include similar register file features include the Intel i960 and IA-64. The architecture has gone through several revisions: it gained hardware multiply and divide functionality in Version 8, and 64-bit operations and addressing were added in the Version 9 SPARC specification published in 1994.
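The register-window arrangement can be illustrated with a small model. The sketch below assumes the common layout of 8 global registers plus a 24-register window split into 8 "in", 8 "local" and 8 "out" registers, with the caller's outs overlapping the callee's ins; it is a didactic model, not a cycle-accurate description of any particular SPARC implementation (real CPUs also spill and fill windows to the stack when the window set is exhausted).

```python
class RegisterWindows:
    """8 globals plus overlapping 24-register windows (8 in / 8 local / 8 out)."""

    def __init__(self, n_windows: int = 8):
        self.g = [0] * 8                          # global registers, shared by all windows
        self.regs = [0] * (16 * n_windows + 8)    # window i starts at offset 16*i
        self.cwp = 0                              # current window pointer

    def _index(self, group: str, i: int) -> int:
        base = 16 * self.cwp
        offsets = {"in": 0, "local": 8, "out": 16}
        return base + offsets[group] + i

    def get(self, group: str, i: int) -> int:
        return self.g[i] if group == "g" else self.regs[self._index(group, i)]

    def set(self, group: str, i: int, value: int) -> None:
        if group == "g":
            self.g[i] = value
        else:
            self.regs[self._index(group, i)] = value

    def save(self) -> None:      # executed on a function call
        self.cwp += 1            # the caller's "out" registers become the callee's "in"

    def restore(self) -> None:   # executed on return
        self.cwp -= 1


w = RegisterWindows()
w.set("out", 0, 42)      # caller places an argument in %o0
w.save()                 # enter the callee's window
print(w.get("in", 0))    # 42: the callee sees the argument in %i0 without any copying
```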
In SPARC Version 8, the floating-point register file has 16 double-precision registers; each of them can be used as two single-precision registers, providing a total of 32 single-precision registers. An odd-even number pair of double-precision registers can be used as a quad-precision register. SPARC Version 9 added 16 more double-precision registers, but these additional registers cannot be accessed as single-precision registers; no SPARC CPU implements quad-precision operations in hardware as of 2004. Tagged add and subtract instructions perform adds and subtracts on values, checking that the bottom two bits of both operands are 0 and reporting overflow if they are not. This can be useful in the implementation of the run time for ML, Lisp and similar languages that use a tagged integer format. The endianness of the 32-bit SPARC V8 architecture is purely big-endian; the 64-bit SPARC V9 architecture uses big-endian instructions but can access data in either big-endian or little-endian byte order, the latter being used for accessing data from inherently little-endian devices. There have been three major revisions of the architecture.
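The check performed by the tagged arithmetic instructions is simple enough to express directly. The sketch below models the semantics described above, treating the low two bits of each 32-bit operand as a type tag and flagging the operation when either tag is non-zero; it is an illustrative model, not an emulation of the full condition-code behavior.

```python
def tagged_add(a: int, b: int):
    """Return (result, tag_overflow) for 32-bit tagged operands.

    The low two bits of each operand act as a type tag; a non-zero tag on
    either operand is reported, mirroring the overflow condition raised by
    the tagged add and subtract instructions.
    """
    tag_overflow = (a & 0x3) != 0 or (b & 0x3) != 0
    result = (a + b) & 0xFFFFFFFF
    return result, tag_overflow


print(tagged_add(8, 12))   # (20, False): both tags are 00, an ordinary integer add
print(tagged_add(8, 13))   # (21, True): the second operand's tag is non-zero
```

A language runtime that keeps small integers with a tag of 00 can therefore use the plain tagged add for the common fast path and fall back to boxed arithmetic only when the overflow condition is reported.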
Sun-3 is a series of UNIX computer workstations and servers produced by Sun Microsystems, launched on September 9, 1985. Sun-3 systems were supported in SunOS versions 3.0 to 4.1.1_U1 and have current support in NetBSD. Models are listed in approximately chronological order. In 1989, coincident with the launch of the SPARCstation 1, Sun introduced a final group of Sun-3 models; unlike previous Sun-3s, these use a Motorola 68030 processor, a 68882 floating-point unit, and the 68030's integral MMU. This 68030-based architecture is called Sun-3x. Sun 3/260s upgraded with Sun 3/400-series CPU boards are known as Sun 3/460s.