IBM System i
The IBM System i is IBM's previous generation of midrange computer systems for IBM i users; it was subsequently replaced by IBM Power Systems in April 2008. The platform was first introduced as the AS/400 on June 21, 1988, renamed the eServer iSeries in 2000, and renamed again to System i as part of IBM's Systems branding initiative in 2006. The codename of the AS/400 project was "Silver Lake", named for the lake in downtown Rochester, where development of the system took place. In April 2008, IBM announced its integration with the System p platform; the unified product line is called IBM Power Systems and features support for the IBM i, AIX and GNU/Linux operating systems. The predecessor of the AS/400, the IBM System/38, was first made available in August 1979 and was marketed as a minicomputer for general business and departmental use; it was sold alongside other IBM product lines, each with a different architecture. Recognizing the importance of compatibility with the thousands of programs written for those legacy systems, IBM launched the AS/400 midrange computer line in 1988.
AS stands for "Application System". Great effort was made during development of the AS/400 to enable programs written for the System/34 and System/36 to be moved to the AS/400; programs written for the System/38 were directly compatible with the new AS/400. In 2000, in accordance with IBM's eServer initiative, the AS/400 series was rebranded as the eServer iSeries. In 2006, it was again rebranded as the IBM System i. In 2008, 20 years after its introduction, the System i and IBM System p product lines were combined into a new product line, IBM Power Systems. The AS/400's operating system was named OS/400. The operating system's name has changed along with the rebranding of IBM's server lineup: it was rebranded as i5/OS to correspond with the introduction of POWER5 processors and the renaming of the hardware to eServer i5, and for the 6.1 release it was renamed IBM i. The operating system is object-based. Features include a relational database management system, a menu-driven interface, support for multiple users, block-oriented terminal support, and printer support.
IBM i has built-in security and support for communications, and can run web-based applications, which can be executed inside the optional IBM WebSphere Application Server or as PHP/MySQL applications inside a native port of the Apache web server. In contrast to the "everything is a file" philosophy of Unix and its derivatives, on IBM i everything is an object. IBM i offers Unix-like file directories through the Integrated File System. Java compatibility is implemented through a native port of the Java virtual machine. Like IBM's mainframe operating systems, IBM i uses EBCDIC as its inherent character encoding. OS/400 Version 4 Release 4 introduced logical partitions (LPARs), allowing multiple virtual systems to run on a single hardware footprint. The IBM System i platform extended the System/38 architecture of an object-based system with an integrated DB2 relational database. Its virtual machine and single-level storage concepts are important, as they established the platform as an advanced business computer. One feature that has contributed to the longevity of the IBM System i platform is its high-level instruction set, the Technology Independent Machine Interface (TIMI), which allows application programs to take advantage of advances in hardware and software without recompilation.
TIMI is a virtual instruction set independent of the underlying machine instruction set of the CPU. User-mode programs contain both TIMI instructions and the machine instructions of the CPU, which ensures hardware independence. This is conceptually similar to the virtual machine architecture of programming environments such as Smalltalk, Java and .NET; the key difference is that TIMI is embedded so deeply into the AS/400's design that applications are binary-compatible across different processor families. Unlike some other virtual-machine architectures, in which the virtual instructions are interpreted at run time, TIMI instructions are never interpreted; they constitute an intermediate compile-time step and are translated into the processor's instruction set as the final compilation step. The TIMI instructions are stored within the final program object in addition to the executable machine instructions; this is how application objects compiled on one processor family can be moved to a new processor without recompilation.
An application saved on the older 48-bit platform can be restored onto the newer 64-bit platform, where the operating system discards the old machine instructions and re-translates the TIMI instructions into 64-bit instructions for the new processor. The system's instruction set defines all pointers as 128-bit. This was an original design feature of the System/38 in the mid-1970s, planning for future processors with greater speed, more memory and an expanded address space: if 128-bit general-purpose processors appear at some point in the future, IBM i will already be 128-bit enabled. The original AS/400 CISC models used the same 48-bit address space as the System/38. The address space was expanded in 1995 when 64-bit RISC PowerPC (RS64) processors replaced the 48-bit CISC processors. On 64-bit PowerPC processors, the virtual address resides in the rightmost 64 bits of a pointer, whereas it occupied 48 bits on the System/38 and CISC AS/400. The 64-bit address space references both main memory and disk.
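The compile-time translation and restore-time re-translation described above can be sketched as a toy model. All names here are hypothetical illustrations, not the actual TIMI encoding or OS/400 internals:

```python
# Toy model of TIMI-style program objects: the portable intermediate
# instructions are kept inside the object forever, so moving to a new
# ISA only requires re-translating them, never recompiling the source.

def translate(timi_instructions, isa):
    """Stand-in for the final translation step to native machine code."""
    return [f"{isa}:{op}" for op in timi_instructions]

class ProgramObject:
    def __init__(self, timi_instructions, isa):
        self.timi = list(timi_instructions)      # kept in the object
        self.native_isa = isa
        self.native = translate(self.timi, isa)  # cached machine code

def load(program, host_isa):
    """On restore, discard stale machine code and re-translate."""
    if program.native_isa != host_isa:
        program.native = translate(program.timi, host_isa)
        program.native_isa = host_isa
    return program.native

prog = ProgramObject(["add", "mul"], isa="cisc48")
print(load(prog, "ppc64"))   # re-translated for the new processor family
```

The essential point the sketch captures is that the intermediate form travels with the executable, so the "port" to a new processor happens at load time rather than at the source level.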
The IBM 3090 family was a high-end successor to the IBM System/370 series, and thus indirectly a successor to the IBM System/360 launched 25 years earlier. Although the February 12, 1985 initial announcement of the family's first two members, the Model 200 and Model 400, mentioned neither the name System/370 nor the term backward compatibility, that pair and the subsequently announced Models 120E, 150, 150E, 180, 180E, 200, 200E, 300, 300E, 400, 400E, 600E, 600J and 600S were described as using ideas from the IBM 3033 and extending those of the IBM 308X. The 400 and 600 models were two 200s or 300s coupled together as one complex, and could run either in single-system-image mode or partitioned into two systems. By the late 1970s and early 1980s, patented technology allowed Amdahl mainframes of this era to be air-cooled, unlike IBM systems that required chilled water and its supporting infrastructure; the eight largest of the 18 models of the ES/9000 systems introduced in 1990 were water-cooled.
A modem for remote service capabilities was standard. In October 1985, IBM introduced an optional vector facility for the IBM 3090. IBM entered into partnerships with several universities to promote the use of the 3090 in scientific applications, and efforts were made to convert code traditionally run on Cray computers. Along with the vector unit, IBM introduced its Engineering and Scientific Subroutine Library and a facility to run programs written for the discontinued 3838 array processor.
Microsoft Hyper-V, codenamed Viridian and formerly known as Windows Server Virtualization, is a native hypervisor. Starting with Windows 8, Hyper-V superseded Windows Virtual PC as the hardware virtualization component of the client editions of Windows. A server computer running Hyper-V can be configured to expose individual virtual machines to one or more networks. Hyper-V was first released alongside Windows Server 2008 and has been available without additional charge since Windows Server 2012 and Windows 8; a standalone Windows Hyper-V Server is free. A beta version of Hyper-V was shipped with certain x86-64 editions of Windows Server 2008; the finalized version was delivered through Windows Update. Hyper-V has since been released with every version of Windows Server. Microsoft provides Hyper-V through two channels: as part of Windows, where Hyper-V is an optional component of Windows Server 2008 and later and is available in the x64 SKUs of the Pro and Enterprise editions of Windows 8, Windows 8.1 and Windows 10; and as Hyper-V Server, a freeware edition of Windows Server with limited functionality and the Hyper-V component.
Hyper-V Server 2008 was released on October 1, 2008. It consists of the Hyper-V role. Hyper-V Server 2008 is limited to a command-line interface used to configure the host OS, physical hardware and software. A menu-driven CLI and some downloadable script files simplify configuration. In addition, Hyper-V Server supports remote access via Remote Desktop Connection; however, administration and configuration of the host OS and the guest virtual machines is generally done over the network, using either Microsoft Management Consoles on another Windows computer or System Center Virtual Machine Manager. This allows much easier "point and click" configuration and monitoring of the Hyper-V Server. Hyper-V Server 2008 R2 was made available in September 2009 and includes Windows PowerShell v2 for greater CLI control. Remote access to Hyper-V Server requires CLI configuration of network interfaces and Windows Firewall. Using a Windows Vista PC to administer Hyper-V Server 2008 R2 is not supported. Hyper-V implements isolation of virtual machines in terms of a partition.
A partition is a logical unit of isolation, supported by the hypervisor, in which each guest operating system executes. There must be at least one parent partition in a hypervisor instance, running a supported version of Windows Server. The virtualization software runs in the parent partition and has direct access to the hardware devices. The parent partition creates child partitions, which host the guest operating systems, using the hypercall API, the application programming interface exposed by Hyper-V. A child partition does not have access to the physical processor, nor does it handle its real interrupts. Instead, it has a virtual view of the processor and runs in a guest virtual address space which, depending on the configuration of the hypervisor, might not be the entire virtual address space. Depending on VM configuration, Hyper-V may expose only a subset of the processors to each partition. The hypervisor handles the interrupts to the processor and redirects them to the respective partition using a logical Synthetic Interrupt Controller.
Hyper-V can hardware-accelerate the address translation of guest virtual address spaces by using second-level address translation provided by the CPU, referred to as EPT on Intel and RVI on AMD. Child partitions do not have direct access to hardware resources; instead they have a virtual view of the resources, in terms of virtual devices. Any request to the virtual devices is redirected via the VMBus, a logical inter-partition channel, to the devices in the parent partition, which manages the requests; the response is redirected back via the VMBus. If the devices in the parent partition are also virtual devices, the request is redirected further until it reaches a parent partition where it gains access to the physical devices. Parent partitions run a Virtualization Service Provider (VSP), which connects to the VMBus and handles device access requests from child partitions. Child partition virtual devices internally run a Virtualization Service Client (VSC), which redirects requests to VSPs in the parent partition via the VMBus.
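The request redirection described above can be sketched as a small conceptual model. The class and method names below are illustrative only, not the real VMBus or VSP/VSC interfaces:

```python
# Conceptual sketch of Hyper-V's split-driver model: a child partition's
# Virtualization Service Client (VSC) forwards I/O requests over a
# logical channel (VMBus) to the parent's Virtualization Service
# Provider (VSP), which alone touches the physical device.

class PhysicalDisk:                  # hardware, visible only to the parent
    def read(self, block):
        return f"data@{block}"

class VSP:                           # runs in the parent partition
    def __init__(self, device):
        self.device = device
    def handle(self, request):
        return self.device.read(request["block"])

class VMBus:                         # logical channel between partitions
    def __init__(self, vsp):
        self.vsp = vsp
    def send(self, request):
        return self.vsp.handle(request)   # redirect to the parent side

class VSC:                           # runs inside the child partition
    def __init__(self, bus):
        self.bus = bus
    def read(self, block):
        return self.bus.send({"block": block})

bus = VMBus(VSP(PhysicalDisk()))
guest_disk = VSC(bus)
print(guest_disk.read(7))            # -> data@7
```

The guest's driver never sees the physical disk, only the channel, which is exactly why the process is transparent to the guest OS.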
This entire process is transparent to the guest OS. Virtual devices can take advantage of a Windows Server Virtualization feature, named Enlightened I/O, for storage and graphics subsystems, among others. Enlightened I/O is a specialized virtualization-aware implementation of high-level communication protocols, such as SCSI, that bypasses any device emulation layer and takes advantage of the VMBus directly; this makes the communication more efficient, but requires the guest OS to support Enlightened I/O. Only the following operating systems support Enlightened I/O, allowing them to run faster as guest operating systems under Hyper-V than operating systems that must use slower emulated hardware: Windows Server 2008 and later, Windows Vista and later, Linux with a 3.4 or later kernel, and FreeBSD. The Hyper-V role is only available in the x86-64 variants of the Standard and Datacenter editions of Windows Server 2008 and later, as well as the Pro and Education editions of Windows 8 and later. On Windows Server, it can be installed regardless of whether the installation is a full or core installation.
In addition, Hyper-V can be made available as part of
Cooperative Linux, abbreviated as coLinux, is software which allows Microsoft Windows and the Linux kernel to run in parallel on the same machine. Cooperative Linux utilizes the concept of a Cooperative Virtual Machine (CVM). In contrast to traditional virtual machines, the CVM shares resources that already exist in the host OS. In traditional VM hosts, resources are virtualized for every guest OS; the CVM instead gives both OSs complete control of the host machine, while a traditional VM sets every guest OS in an unprivileged state to access the real machine. The term "cooperative" is used to describe two entities working in parallel. In effect, Cooperative Linux turns the two different operating system kernels into two big coroutines: each kernel has its own complete CPU context and address space, and each kernel decides when to give control back to its partner. However, while both kernels theoretically have full access to the real hardware, modern PC hardware is not designed to be controlled by two different operating systems at the same time.
Therefore, the host kernel is left in control of the real hardware, and the guest kernel contains special drivers that communicate with the host and provide various important devices to the guest OS. The host can be any OS kernel that exports basic primitives that allow the Cooperative Linux portable driver to run in CPL0 mode (ring 0) and allocate memory. Dan Aloni started the development of Cooperative Linux based on similar work with User-mode Linux; he announced the development on 25 January 2004. In July 2004 he presented a paper at the Linux Symposium. The source was released under the GNU General Public License, and other developers have since contributed various additions to the software. Cooperative Linux is different from full x86 virtualization, which works by running the guest OS in a less privileged mode than that of the host kernel, with all resources delegated by the host kernel. In contrast, Cooperative Linux runs a specially modified Linux kernel that is cooperative in that it takes responsibility for sharing resources with the NT kernel and not instigating race conditions.
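The coroutine hand-off between the two kernels can be mimicked in miniature with Python generators. This is purely illustrative of the control-flow pattern, not of any coLinux mechanism:

```python
# Toy illustration of "two kernels as coroutines": each side runs for a
# while, then voluntarily yields control to its partner, the way the
# host and coLinux kernels hand the CPU back and forth.
log = []

def kernel(name, steps):
    for i in range(steps):
        log.append(f"{name} step {i}")   # "run" for a while
        yield                            # voluntarily hand control back

# zip() alternates between the two generators, so control strictly
# ping-pongs between the "host" and "guest" kernels.
for _ in zip(kernel("host", 2), kernel("guest", 2)):
    pass

print(log)
```

The key property, as in coLinux, is that neither side is preempted by the other: control changes hands only at explicit yield points.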
Most of the changes in the Cooperative Linux patch are in the i386 tree, the only supported architecture for Cooperative Linux at the time of this writing. The other changes are additions of virtual drivers: cobd and cocon. Most of the changes in the i386 tree involve the setup code; it is a goal of the Cooperative Linux kernel design to remain as close as possible to the standalone i386 kernel, so all changes are localized and minimized as much as possible. The coLinux package installs a port of the Linux kernel and a virtual network device and can run under a version of the Windows operating system such as Windows 2000 or Windows XP; it does not use a virtual machine such as VMware. Debian, Ubuntu and Gentoo are popular with coLinux users. Due to the rather unusual structure of the virtual hardware, installing Linux distributions under coLinux is difficult, so users in most cases use either an existing Linux installation on a real partition or a ready-made filesystem image distributed by the project.
The filesystem images are made by a variety of methods, including taking images of a normal Linux system, finding ways to make installers run with the unusual hardware, building up installations by hand using the package manager, or upgrading existing images using tools like yum and apt. An easier way to get an up-to-date filesystem image is to use QEMU to install Linux and "convert" the image by stripping off the first 63 512-byte blocks, as described in the coLinux wiki. Since coLinux does not have access to native graphics hardware, an X server will not run under coLinux directly, but one can install an X server under Windows, such as Cygwin/X or Xming, and use KDE, GNOME and almost any other Linux application and distribution. These issues are addressed by coLinux-based distributions such as andLinux, based on Ubuntu, and TopologiLinux, based on Slackware. Ethernet networking is supported via TAP, PCAP, NDIS and SLiRP. coLinux does not yet support 64-bit Windows or Linux, though a port is under development by the community.
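The QEMU-image conversion mentioned above can be sketched in a few lines. The file names are hypothetical; the 63-sector offset is the figure given in the coLinux wiki:

```python
# Strip the partition-table offset (the first 63 512-byte sectors) from
# a raw QEMU disk image, leaving a bare filesystem image usable as a
# coLinux block device.
BLOCK = 512
SKIP = 63

def strip_offset(src_path, dst_path, skip=SKIP, block=BLOCK):
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        src.seek(skip * block)           # skip the MBR/partition gap
        while True:
            chunk = src.read(1 << 20)    # copy in 1 MiB pieces
            if not chunk:
                break
            dst.write(chunk)

# Example (hypothetical file names):
# strip_offset("qemu-disk.img", "colinux-root.img")
```

This only works for the common case of a single partition starting at sector 63; images partitioned differently would need a different offset.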
A sponsor was willing to fund completion of the port. There is no multi-processor support: Linux applications and the underlying kernel are able to use only one CPU.
International Business Machines Corporation (IBM) is an American multinational information technology company headquartered in Armonk, New York, with operations in over 170 countries. The company began in 1911, founded in Endicott, New York, as the Computing-Tabulating-Recording Company, and was renamed "International Business Machines" in 1924. IBM produces and sells computer hardware and software, and provides hosting and consulting services in areas ranging from mainframe computers to nanotechnology. IBM is also a major research organization, holding the record for most U.S. patents generated by a business for 26 consecutive years. Inventions by IBM include the automated teller machine, the floppy disk, the hard disk drive, the magnetic stripe card, the relational database, the SQL programming language, the UPC barcode and dynamic random-access memory. The IBM mainframe, exemplified by the System/360, was the dominant computing platform during the 1960s and 1970s. IBM has continually shifted business operations by focusing on higher-value, more profitable markets.
This includes spinning off printer manufacturer Lexmark in 1991 and selling its personal computer and x86-based server businesses to Lenovo, while acquiring companies such as PwC Consulting, SPSS, The Weather Company and Red Hat. In 2014, IBM announced that it would go "fabless", continuing to design semiconductors but offloading manufacturing to GlobalFoundries. Nicknamed Big Blue, IBM is one of 30 companies included in the Dow Jones Industrial Average and one of the world's largest employers, with over 380,000 employees, known as "IBMers". At least 70% of IBMers are based outside the United States, and the country with the largest number of IBMers is India. IBM employees have been awarded five Nobel Prizes, six Turing Awards, ten National Medals of Technology and five National Medals of Science. In the 1880s, technologies emerged that would form the core of International Business Machines. Julius E. Pitrap patented the computing scale in 1885. On June 16, 1911, four companies were amalgamated in New York State by Charles Ranlett Flint, forming a fifth company, the Computing-Tabulating-Recording Company, based in Endicott, New York.
The five companies had offices and plants in Endicott and Binghamton, New York, and elsewhere. They manufactured machinery for sale and lease, ranging from commercial scales, industrial time recorders and cheese slicers to tabulators and punched cards. Thomas J. Watson, Sr., fired from the National Cash Register Company by John Henry Patterson, called on Flint and, in 1914, was offered a position at CTR. Watson joined CTR as General Manager and, 11 months later, was made President when court cases relating to his time at NCR were resolved. Having learned Patterson's pioneering business practices, Watson proceeded to put the stamp of NCR onto CTR's companies: he implemented sales conventions, "generous sales incentives, a focus on customer service, an insistence on well-groomed, dark-suited salesmen and had an evangelical fervor for instilling company pride and loyalty in every worker". His favorite slogan, "THINK", became a mantra for each company's employees. During Watson's first four years, revenues reached $9 million and the company's operations expanded to Europe, South America and Australia.
Watson never liked the clumsy hyphenated name "Computing-Tabulating-Recording Company" and on February 14, 1924 chose to replace it with the more expansive title "International Business Machines". By 1933 most of the subsidiaries had been merged into one company, IBM. In 1937, IBM's tabulating equipment enabled organizations to process unprecedented amounts of data. Its clients included the U.S. Government, during its first effort to maintain the employment records for 26 million people pursuant to the Social Security Act, and Hitler's Third Reich, which used IBM equipment supplied through the German subsidiary Dehomag to track persecuted groups. In 1949, Thomas Watson, Sr. created IBM World Trade Corporation, a subsidiary of IBM focused on foreign operations. In 1952, he stepped down after 40 years at the company helm, and his son Thomas Watson, Jr. was named president. In 1956, the company demonstrated the first practical example of artificial intelligence when Arthur L. Samuel of IBM's Poughkeepsie, New York, laboratory programmed an IBM 704 not merely to play checkers but to "learn" from its own experience.
In 1957, the FORTRAN scientific programming language was developed. In 1961, IBM developed the SABRE reservation system for American Airlines and introduced the successful Selectric typewriter. In 1963, IBM employees and computers helped NASA track the orbital flights of the Mercury astronauts. A year later, it moved its corporate headquarters from New York City to Armonk, New York. The latter half of the 1960s saw IBM continue its support of space exploration, participating in the 1965 Gemini flights, 1966 Saturn flights and 1969 lunar mission. On April 7, 1964, IBM announced the first computer system family, the IBM System/360. It spanned the complete range of commercial and scientific applications from large to small, allowing companies for the first time to upgrade to models with greater computing capability without having to rewrite their applications. It was followed by the IBM System/370 in 1970. Together the
PikeOS is a commercial, hard real-time operating system that offers a separation-kernel-based hypervisor with multiple partition types for many other operating systems and applications. It enables users to build certifiable smart devices for the Internet of Things according to the high quality and security standards of different industries. PikeOS combines a real-time operating system with a virtualization platform and an Eclipse-based integrated development environment for embedded systems. It is a commercial clone of the L4 microkernel family. The PikeOS real-time operating system has been developed for safety- and security-critical applications with certification needs in the fields of aerospace and defense, automotive and transportation, industrial automation and medical, network infrastructure, and consumer electronics. One of the key features of PikeOS is the capability to safely execute applications with different safety and security levels concurrently on the same platform; this is achieved by strict spatial and temporal segregation of these applications by means of software partitions.
A software partition can be seen as a container with pre-allocated privileges: it can have access to memory, CPU time, I/O and a predefined list of PikeOS services. With PikeOS, the term application refers to an executable linked against the PikeOS API library and running as a process inside a partition. Due to the nature of the PikeOS API, applications can range from simple control loops up to complete paravirtualized guest operating systems like Linux, or hardware-virtualized guests. Software partitions are also called virtual machines, because it is possible to implement a complete guest operating system inside a partition which executes independently from other partitions and can therefore address use cases with mixed criticality. PikeOS can be seen as a type 1 hypervisor. The Eclipse-based IDE CODEO supports system architects with graphical configuration tools and provides all the components that software engineers need to develop embedded applications, including comprehensive wizards that help embedded project development in a time-saving and cost-efficient way: guided configuration, remote debugging, target monitoring, remote application deployment and timing analysis. Several dedicated graphical editing views help the system integrator keep an overview of important aspects of the PikeOS system configuration, showing partition types, communication channels, shared memory and I/O device configuration within partitions.
Projects can be defined with the help of reusable templates and distributed to the development groups. Users can configure pre-defined components for their project and can define and add other components during the development process. Key features of PikeOS include:
- Real-time operating system including a type 1 hypervisor designed for flexible configuration
- Support for fast or secure boot
- Support for mixed criticality via a separation kernel in one system
- Configuration of partitions with time and hardware resources
- Support for kernel drivers and user-space drivers
- Hardware independence across processor types and families
- Easy migration and high portability on single-core and multi-core systems
- Developed to support certification according to multiple safety and security standards
- Reduced time to market via standard development and verification tools
- Wide range of supported guest OS types
- No export restriction
Safety certification standards supported include RTCA DO-178B/C, ISO 26262, IEC 62304, EN 50128 and IEC 61508; security certification standards include Common Criteria (SAR). SYSGO is committed to establishing the technological and business partnerships that will help software engineers achieve their goals.
SYSGO is working with about 100 partners worldwide. An excerpt of partners per category:
- Board vendors: Curtiss-Wright Controls Embedded Computing, Kontron, MEN, ABACO
- Silicon vendors: NXP, Renesas, TI, Infineon, NVidia, Intel
- Software partners: CoreAVI, wolfSSL, AdaCore, Esterel, RTI, PrismTech, Systerel, Imagination Technologies, RAPITA
- Tool partners: Lauterbach, Vector Software, Rapita, iSYSTEM
Supported architectures include ARM, PPC, x86 and SPARC. Supported guest environments include Linux, Android, POSIX PSE51 with PSE52 extensions, ARINC 653, RTEMS, Java, AUTOSAR, Ada and others.
Central processing unit
A central processing unit (CPU), also called a central processor or main processor, is the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logic and input/output operations specified by the instructions. The computer industry has used the term "central processing unit" at least since the early 1960s. Traditionally, the term "CPU" refers to a processor, and more specifically to its processing unit and control unit, distinguishing these core elements of a computer from external components such as main memory and I/O circuitry. The form and implementation of CPUs have changed over the course of their history, but their fundamental operation remains unchanged. Principal components of a CPU include the arithmetic logic unit (ALU) that performs arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that orchestrates the fetching and execution of instructions by directing the coordinated operations of the ALU, registers and other components.
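The interplay of control unit, registers and ALU can be illustrated with a toy fetch-decode-execute loop. The instruction set here is entirely hypothetical, not that of any real CPU:

```python
# A toy stored-program machine: the control unit fetches instructions,
# the "ALU" performs arithmetic, and an accumulator register holds
# intermediate results. Instructions and data share one memory, as in
# the von Neumann design.
def run(memory):
    acc = 0        # accumulator register
    pc = 0         # program counter (control unit state)
    while True:
        op, arg = memory[pc]       # fetch the next instruction
        pc += 1
        if op == "LOAD":           # decode and execute
            acc = memory[arg]
        elif op == "ADD":          # ALU operation
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return acc

# Program: compute memory[4] + memory[5] and store the sum in memory[6].
program = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0), 2, 3, 0]
print(run(program))   # -> 5
```

Because the program lives in the same memory it manipulates, changing the computer's task means changing memory contents, the defining property of the stored-program computers discussed below.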
Most modern CPUs are microprocessors, meaning they are contained on a single integrated circuit (IC) chip. An IC that contains a CPU may also contain memory, peripheral interfaces and other components of a computer. Some computers employ a multi-core processor, a single chip containing two or more CPUs called "cores". Array processors or vector processors have multiple processors that operate in parallel, with no unit considered central. There also exists the concept of virtual CPUs, which are an abstraction of dynamically aggregated computational resources. Early computers such as the ENIAC had to be physically rewired to perform different tasks, which caused these machines to be called "fixed-program computers". Since the term "CPU" is defined as a device for software execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer. The idea of a stored-program computer had been present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so that the machine could be finished sooner.
On June 30, 1945, before ENIAC was completed, mathematician John von Neumann distributed the paper entitled First Draft of a Report on the EDVAC. It was the outline of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a certain number of instructions of various types. Significantly, the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program that EDVAC ran could be changed simply by changing the contents of the memory. EDVAC, however, was not the first stored-program computer. Early CPUs were custom designs used as part of a larger and sometimes distinctive computer. However, this method of designing custom CPUs for a particular application has given way to the development of multi-purpose processors produced in large quantities. This standardization began in the era of discrete transistor mainframes and minicomputers and has accelerated with the popularization of the integrated circuit.
The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in electronic devices ranging from automobiles to cellphones, and sometimes even in toys. While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, and the design became known as the von Neumann architecture, others before him, such as Konrad Zuse, had suggested and implemented similar ideas. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also used a stored-program design, using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both.
Most modern CPUs are primarily von Neumann in design, but CPUs with the Harvard architecture are seen as well, especially in embedded applications. Relays and vacuum tubes were commonly used as switching elements in early computers; the overall speed of a system is dependent on the speed of the switches. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the Harvard Mark I failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages they afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs. Clock signal frequencies ranging from 100 kHz to 4 MHz were common at this time, limited largely by the speed of the switching devices