Java virtual machine
A Java virtual machine (JVM) is a virtual machine that enables a computer to run Java programs, as well as programs written in other languages that are compiled to Java bytecode. The JVM is defined by a specification that formally describes what is required of a JVM implementation. Having a specification ensures interoperability of Java programs across different implementations, so that program authors using the Java Development Kit need not worry about idiosyncrasies of the underlying hardware platform. The JVM reference implementation is developed by the OpenJDK project as open source code and includes a JIT compiler called HotSpot. The commercially supported Java releases available from Oracle Corporation are based on the OpenJDK runtime. Eclipse OpenJ9 is another open source JVM for OpenJDK. The Java virtual machine is an abstract computer defined by a specification: the garbage-collection algorithm used and any internal optimization of the Java virtual machine instructions are not specified, mainly so as not to unnecessarily constrain implementers.
Any Java application can be run only inside some concrete implementation of the abstract specification of the Java virtual machine. Starting with Java Platform, Standard Edition 5.0, changes to the JVM specification have been developed under the Java Community Process as JSR 924. As of 2006, changes to the specification to support changes proposed to the class file format are being done as a maintenance release of JSR 924. The specification for the JVM was published as the so-called blue book, whose preface states: "We intend that this specification should sufficiently document the Java Virtual Machine to make possible compatible clean-room implementations." Oracle provides tests that verify the proper operation of implementations of the Java Virtual Machine. One of Oracle's JVMs is named HotSpot; the other, inherited from BEA Systems, is JRockit. Clean-room Java implementations include Kaffe, IBM J9 and Skelmir's CEE-J. Oracle owns the Java trademark and may allow its use to certify implementation suites as compatible with Oracle's specification.
One of the organizational units of JVM bytecode is a class. A class loader implementation must be able to recognize and load anything that conforms to the Java class file format. Any implementation is free to recognize other binary forms besides class files, but it must recognize class files. The class loader performs three basic activities in this strict order:

Loading: finds and imports the binary data for a type
Linking: performs verification, preparation, and (optionally) resolution
  Verification: ensures the correctness of the imported type
  Preparation: allocates memory for class variables and initializes the memory to default values
  Resolution: transforms symbolic references from the type into direct references
Initialization: invokes Java code that initializes class variables to their proper starting values

In general, there are two types of class loader: the bootstrap class loader and user-defined class loaders. Every Java virtual machine implementation must have a bootstrap class loader, capable of loading trusted classes.
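The delegation behavior described above can be sketched in Java itself. The sketch below is illustrative, not a production loader: the class name `DemoClassLoader` is hypothetical, and its `findClass` deliberately finds nothing, so every lookup falls back to the parent chain and ultimately the bootstrap class loader.

```java
// Minimal sketch of a user-defined class loader. loadClass() delegates to
// the parent chain first, so trusted core classes always come from the
// bootstrap loader; findClass() is where a real loader would locate class
// file bytes and call defineClass(name, bytes, 0, bytes.length), which in
// turn triggers linking (verification, preparation, resolution) lazily.
public class DemoClassLoader extends ClassLoader {
    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        // A real loader would read the .class bytes for `name` here.
        throw new ClassNotFoundException(name);
    }

    public static void main(String[] args) throws Exception {
        DemoClassLoader loader = new DemoClassLoader();
        // Delegation resolves java.lang.String via the bootstrap loader,
        // whose getClassLoader() is represented as null.
        Class<?> c = loader.loadClass("java.lang.String");
        System.out.println(c.getName() + " loaded by bootstrap loader: "
                + (c.getClassLoader() == null));
    }
}
```

Running this prints that `java.lang.String` came from the bootstrap loader, demonstrating that a user-defined loader never gets the chance to redefine trusted classes.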
The Java virtual machine specification leaves many such implementation details unspecified. The JVM operates on primitive values (integers and floating-point numbers) and references, and is fundamentally a 32-bit machine. Long and double types, which are 64 bits, are supported natively, but consume two units of storage in a frame's local variables or operand stack, since each unit is 32 bits. Boolean, byte, short and char types are all sign-extended (except char, which is zero-extended) and operated on as 32-bit integers, the same as int types; the smaller types only have a few type-specific instructions for loading, storing and type conversion. Boolean is operated on with 0 representing false and 1 representing true. The JVM has a garbage-collected heap for storing objects and arrays. Code, constants and other class data are stored in the "method area"; the method area is logically part of the heap, but implementations may treat it separately from the heap and, for example, might not garbage collect it. Each JVM thread has its own call stack, which stores frames. A new frame is created each time a method is called, and the frame is destroyed when that method exits.
Each frame provides an "operand stack" and an array of "local variables". The operand stack is used for operands to computations and for receiving the return value of a called method, while local variables serve the same purpose as registers and are also used to pass method arguments. Thus, the JVM is both a stack machine and a register machine. The JVM has instructions for groups of tasks such as load and store, arithmetic, type conversion, object creation and manipulation, operand stack management, control transfer, and method invocation and return. The aim is binary compatibility: each particular host operating system needs its own implementation of the JVM and runtime. These JVMs interpret the bytecode semantically the same way, but the actual implementation may be different. More complex than just emulating bytecode is compatibly and efficiently implementing the Java core API that must be mapped to each host operating system.
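The interplay of local variables and the operand stack can be seen in any compiled method. The small program below is a sketch; the bytecode shown in its comments is what `javap -c` typically reports for this method under javac, though exact output can vary by compiler version.

```java
// A small method whose compiled form illustrates the frame layout:
// arguments arrive in the local-variable array (slots 0 and 1 for a
// static method), while all computation happens on the operand stack.
public class FrameDemo {
    static int addAndDouble(int a, int b) {
        // javap -c typically shows:
        //   iload_0    // push local variable 0 (a) onto the operand stack
        //   iload_1    // push local variable 1 (b)
        //   iadd       // pop two ints, push their sum
        //   iconst_2   // push the constant 2
        //   imul       // pop two ints, push their product
        //   ireturn    // pop the result and return it to the caller
        return (a + b) * 2;
    }

    public static void main(String[] args) {
        System.out.println(addAndDouble(3, 4)); // prints 14
    }
}
```

Note that a long or double argument would occupy two consecutive local-variable slots, as described above, shifting the slot numbers of any later parameters.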
In electronic systems and computing, firmware is a specific class of computer software that provides the low-level control for a device's specific hardware. Firmware can either provide a standardized operating environment for the device's more complex software or, for less complex devices, act as the device's complete operating system, performing all control and data manipulation functions. Typical examples of devices containing firmware are embedded systems, consumer appliances, computers and computer peripherals, among others. All electronic devices beyond the simplest contain some firmware. Firmware is held in non-volatile memory devices such as ROM, EPROM, or flash memory. Changing the firmware of a device was rarely or never done during its lifetime in the past, but is nowadays a common procedure. Common reasons for updating firmware include fixing bugs or adding features to the device; this may require ROM integrated circuits to be physically replaced or flash memory to be reprogrammed through a special procedure. Firmware such as the ROM BIOS of a personal computer may contain only elementary basic functions of a device and may only provide services to higher-level software.
Firmware such as the program of an embedded system may be the only program that will run on the system and provide all of its functions. Before the inclusion of integrated circuits, other firmware devices included a discrete semiconductor diode matrix; the Apollo guidance computer had firmware consisting of a specially manufactured core memory plane, called "core rope memory", where data was stored by physically threading wires through or around the core storing each data bit. Ascher Opler coined the term "firmware" in a 1967 Datamation article. It meant the contents of a writable control store, containing microcode that defined and implemented the computer's instruction set, and that could be reloaded to specialize or modify the instructions that the central processing unit could execute. As originally used, firmware contrasted with hardware and software: it was not composed of CPU machine instructions, but of lower-level microcode involved in the implementation of machine instructions. It existed on the boundary between hardware and software.
Over time, popular usage extended the word "firmware" to denote any computer program tightly linked to hardware, including processor machine instructions for BIOS, bootstrap loaders, or the control systems for simple electronic devices such as a microwave oven, remote control, or computer peripheral. In some respects, the various firmware components are as important as the operating system in a working computer. However, unlike most modern operating systems, firmware rarely has a well-evolved automatic mechanism for updating itself to fix any functionality issues detected after shipping the unit; the BIOS may be "manually" updated by a user. In contrast, firmware in storage devices rarely gets updated, even when flash storage is used for the firmware. Most computer peripherals are themselves special-purpose computers. Devices such as printers, cameras and USB flash drives have internally stored firmware; some low-cost peripherals no longer contain non-volatile memory for firmware, and instead rely on the host system to transfer the device control program from a disk file or CD.
As of 2010, most portable music players support firmware upgrades. Some companies use firmware updates to add new playable file formats; other features that may change with firmware updates include the GUI or the battery life. Most mobile phones have a Firmware Over The Air firmware upgrade capability for much the same reasons. Since 1996, most automobiles have employed an on-board computer and various sensors to detect mechanical problems; as of 2010, modern vehicles also employ computer-controlled anti-lock braking systems and computer-operated transmission control units. The driver can get in-dash information while driving in this manner, such as real-time fuel economy and tire pressure readings. Local dealers can update most vehicle firmware. Examples of firmware include:

In consumer products:
  Timing and control systems for washing machines
  Controlling sound and video attributes, as well as the channel list, in modern TVs
  EPROM chips used in the Eventide H-3000 series of digital music processors
In computers:
  The BIOS found in IBM-compatible personal computers
  The EFI-compliant firmware used on Itanium systems, Intel-based computers from Apple, and many Intel desktop computer motherboards
  Open Firmware, used in SPARC-based computers from Sun Microsystems and Oracle Corporation, PowerPC-based computers from Apple, and computers from Genesi
  ARCS, used in computers from Silicon Graphics
  Kickstart, used in the Amiga line of computers
  RTAS, used in computers from IBM
  The Common Firmware Environment
In routers and firewalls:
  LibreCMC – a 100% free software router distribution based on the Linux-libre kernel
The Zend Engine is the open source scripting engine that interprets the PHP programming language. It was originally developed by Andi Gutmans and Zeev Suraski while they were students at the Technion – Israel Institute of Technology; they later founded a company called Zend Technologies in Ramat Gan, Israel. The name Zend is a combination of their forenames, Zeev and Andi. The first version of the Zend Engine appeared in 1999 in PHP version 4. It was written in C as a highly optimized modular back-end, which for the first time could be used in applications outside of PHP. The Zend Engine provides memory and resource management and other standard services for the PHP language. Its performance and extensibility played a significant role in PHP's increasing popularity. This was followed by Zend Engine II at the heart of PHP 5. The newest version is Zend Engine III, codenamed phpng, which was developed for PHP 7 and improves performance. The source code for the Zend Engine has been available under the Zend Engine License since 2001, as part of the official releases from php.net, as well as from the official git repository and the GitHub mirror.
Various volunteers contribute to the PHP/Zend Engine codebase. Zend Engine is used internally by PHP as a runtime engine. PHP scripts are compiled into Zend opcodes; these opcodes are executed and the HTML generated is sent to the client. To implement a Web script interpreter, you need three parts: the interpreter part analyzes the input code, translates it, and executes it; the functionality part implements the functionality of the language; the interface part talks to the Web server, etc. Zend provides part 1 and a bit of part 2: Zend itself forms only the language core, implementing PHP at its basics with some predefined functions.

External links:
  Zend Engine Homepage
  Zend Engine 2.0 Design document
  The Zend Engine License, version 2.00
  Official git repository
  GitHub repository mirror
  Zend Engine 1 section in PHP manual
  Zend Engine 2 API reference in PHP manual
  Zend Engine 3 section in PHP manual
  Documentation on the PHP development wiki
Machine code is a computer program written in machine language instructions that can be executed directly by a computer's central processing unit (CPU). Each instruction causes the CPU to perform a specific task, such as a load, a store, a jump, or an arithmetic logic unit (ALU) operation on one or more units of data in CPU registers or memory. Machine code is a strictly numerical language, designed to run as fast as possible, and may be regarded as the lowest-level representation of a compiled or assembled computer program or as a primitive and hardware-dependent programming language. While it is possible to write programs directly in machine code, managing individual bits and calculating numerical addresses and constants manually is tedious and error-prone. For this reason, programs are rarely written directly in machine code in modern contexts, though this may be done for low-level debugging, program patching, and assembly language disassembly. The overwhelming majority of practical programs today are written in higher-level languages or assembly language.
The source code is translated to executable machine code by utilities such as compilers, assemblers and linkers, with the important exception of interpreted programs, which are not translated into machine code. However, the interpreter itself, which may be seen as an executor or processor performing the instructions of the source code, typically consists of directly executable machine code. Machine code is by definition the lowest level of programming detail visible to the programmer, but internally many processors use microcode or optimise and transform machine code instructions into sequences of micro-ops; this is not considered to be machine code per se. Every processor or processor family has its own instruction set. Instructions are patterns of bits that by physical design correspond to different commands to the machine. Thus, the instruction set is specific to a class of processors using the same architecture. Successor or derivative processor designs often include all the instructions of a predecessor and may add additional instructions.
A successor design may also discontinue or alter the meaning of some instruction code, affecting code compatibility to some extent. Systems may also differ in other details, such as memory arrangement, operating systems, or peripheral devices; because a program normally relies on such factors, different systems will typically not run the same machine code, even when the same type of processor is used. A processor's instruction set may have all instructions of the same length, or it may have variable-length instructions. How the patterns are organized varies with the particular architecture and also with the type of instruction. Most instructions have one or more opcode fields, which specify the basic instruction type and the actual operation, and other fields that may give the type of the operand, the addressing mode, the addressing offset or index, or the actual value itself. Not all machines or individual instructions have explicit operands. An accumulator machine has a combined left operand and result in an implicit accumulator for most arithmetic instructions.
Other architectures have accumulator versions of common instructions, with the accumulator regarded as one of the general registers by longer instructions. A stack machine has most or all of its operands on an implicit stack. Special purpose instructions also often lack explicit operands. This distinction between explicit and implicit operands is important in code generators, especially in the register allocation and live range tracking parts. A good code optimizer can track implicit as well as explicit operands, which may allow more frequent constant propagation, constant folding of registers and other code enhancements. A computer program is a list of instructions. A program is executed in order for the CPU that executes it to solve a specific problem and thus accomplish a specific result. While simple processors execute instructions one after another, superscalar processors are capable of executing a variety of different instructions at once. Program flow may be influenced by special 'jump' instructions that transfer execution to an instruction other than the numerically following one.
Conditional jumps are taken (execution continues at a different address) or not (execution continues at the next instruction), depending on some condition. A much more readable rendition of machine language, called assembly language, uses mnemonic codes to refer to machine code instructions, rather than using the instructions' numeric values directly. For example, on the Zilog Z80 processor, the machine code 00000101, which causes the CPU to decrement the B processor register, would be represented in assembly language as DEC B. The MIPS architecture provides a specific example of a machine code whose instructions are always 32 bits long. The general type of instruction is given by the op field; J-type (jump) and I-type (immediate) instructions are fully specified by op, while R-type (register) instructions include an additional field, funct, to determine the exact operation. The remaining fields identify source and destination registers, a shift amount, an immediate value, or a jump target, depending on the instruction type.
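The fixed-width MIPS encoding makes field extraction a matter of shifting and masking. The following sketch decodes the classic MIPS I R-type layout described above (op in bits 31-26, then rs, rt, rd, shamt, and funct); the class name and sample encoding are illustrative.

```java
// Decoding the 32-bit MIPS instruction fields by shift-and-mask.
// R-type layout: op (6 bits) | rs (5) | rt (5) | rd (5) | shamt (5) | funct (6)
public class MipsDecode {
    static int op(int insn)    { return (insn >>> 26) & 0x3F; }
    static int rs(int insn)    { return (insn >>> 21) & 0x1F; }
    static int rt(int insn)    { return (insn >>> 16) & 0x1F; }
    static int rd(int insn)    { return (insn >>> 11) & 0x1F; }
    static int shamt(int insn) { return (insn >>> 6)  & 0x1F; }
    static int funct(int insn) { return insn & 0x3F; }

    public static void main(String[] args) {
        // "add $t0, $t1, $t2" as an R-type word: op=0, rs=9 ($t1),
        // rt=10 ($t2), rd=8 ($t0), shamt=0, funct=0x20 (add)
        int insn = (0 << 26) | (9 << 21) | (10 << 16) | (8 << 11) | (0 << 6) | 0x20;
        System.out.printf("op=%d rs=%d rt=%d rd=%d funct=0x%x%n",
                op(insn), rs(insn), rt(insn), rd(insn), funct(insn));
    }
}
```

A real disassembler would first inspect op: a zero op means R-type, so funct selects the operation; otherwise op alone identifies an I-type or J-type instruction.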
Android Runtime (ART) is an application runtime environment used by the Android operating system. Replacing Dalvik, the process virtual machine originally used by Android, ART performs the translation of the application's bytecode into native instructions that are later executed by the device's runtime environment. Android 2.2 "Froyo" brought trace-based just-in-time (JIT) compilation into Dalvik, optimizing the execution of applications by continually profiling applications each time they run and dynamically compiling frequently executed short segments of their bytecode into native machine code. While Dalvik interprets the rest of the application's bytecode, native execution of those short bytecode segments, called "traces", provides significant performance improvements. Unlike Dalvik, ART introduces the use of ahead-of-time (AOT) compilation by compiling entire applications into native machine code upon their installation. By eliminating Dalvik's interpretation and trace-based JIT compilation, ART improves the overall execution efficiency and reduces power consumption, which results in improved battery autonomy on mobile devices.
At the same time, ART brings faster execution of applications, improved memory allocation and garbage collection mechanisms, new application debugging features, and more accurate high-level profiling of applications. To maintain backward compatibility, ART uses the same input bytecode as Dalvik, supplied through standard .dex files as part of APK files, while the .odex files are replaced with Executable and Linkable Format (ELF) executables. Once an application is compiled by using ART's on-device dex2oat utility, it is run solely from the compiled ELF executable. As a downside, ART requires additional time for the compilation when an application is installed, and applications take up larger amounts of secondary storage to store the compiled code. Android 4.4 "KitKat" brought a technology preview of ART, including it as an alternative runtime environment and keeping Dalvik as the default virtual machine. In the subsequent major Android release, Android 5.0 "Lollipop", Dalvik was entirely replaced by ART. Android 7.0 "Nougat" introduced a JIT compiler with code profiling to ART, which lets it improve the performance of Android apps as they run.
The JIT compiler complements ART's ahead-of-time compiler and helps improve runtime performance.

See also:
  Android software development – various concepts and software development utilities used for the creation of Android applications
  Android version history – a history and descriptions of Android releases, listed by their official API levels
  Comparison of application virtualization software – various portable and scripting language virtual machines
  Virtual machine – an emulation of a particular computer system, with different degrees of implemented functionality

External links:
  Official website
  Android Basics 101: Understanding ART, the Android Runtime on YouTube, XDA Developers, February 12, 2014
  ART: Android's Runtime Evolved on YouTube, Google I/O 2014, by Anwar Ghuloum, Brian Carlstrom and Ian Rogers
  A JIT Compiler for Android's Dalvik VM on YouTube, Google I/O 2010, by Ben Cheng and Bill Buzbee
  Delivering Highly Optimized Android Runtime and Web Runtime on Intel Architecture, August 4, 2015, by Haitao Feng and Jonathan Ding
  Android 7.1 for Developers: Profile-guided JIT/AOT compilation, Android Developers, describes ART changes in Android 7
Central processing unit
A central processing unit (CPU), also called a central processor or main processor, is the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logic and input/output (I/O) operations specified by the instructions. The computer industry has used the term "central processing unit" at least since the early 1960s. Traditionally, the term "CPU" refers to a processor, more specifically to its processing unit and control unit, distinguishing these core elements of a computer from external components such as main memory and I/O circuitry. The form and implementation of CPUs have changed over the course of their history, but their fundamental operation remains almost unchanged. Principal components of a CPU include the arithmetic logic unit (ALU) that performs arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that orchestrates the fetching and execution of instructions by directing the coordinated operations of the ALU, registers and other components.
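The coordination among control unit, registers, and ALU sketched above amounts to a fetch-decode-execute loop. The toy simulator below is purely illustrative (it models no real instruction set; the opcode names and encoding are invented for the sketch), but the loop structure mirrors what a control unit does.

```java
// Illustrative fetch-decode-execute loop: a program counter selects the
// next instruction (fetch), a switch on the opcode plays the role of the
// control unit (decode), and the register updates stand in for ALU work
// (execute). The 3-field instruction encoding here is hypothetical.
public class TinyCpu {
    static final int LOADI = 0; // reg[dst] = immediate
    static final int ADD   = 1; // reg[dst] += reg[src]  (the "ALU" op)
    static final int HALT  = 2; // stop; result is in reg[0]

    static int run(int[][] program) {
        int[] reg = new int[4]; // processor registers
        int pc = 0;             // program counter
        while (true) {
            int[] insn = program[pc++];                      // fetch
            switch (insn[0]) {                               // decode
                case LOADI: reg[insn[1]] = insn[2]; break;   // execute
                case ADD:   reg[insn[1]] += reg[insn[2]]; break;
                case HALT:  return reg[0];
            }
        }
    }

    public static void main(String[] args) {
        int[][] prog = {
            {LOADI, 0, 5},   // r0 = 5
            {LOADI, 1, 7},   // r1 = 7
            {ADD,   0, 1},   // r0 = r0 + r1
            {HALT,  0, 0},
        };
        System.out.println(run(prog)); // prints 12
    }
}
```

Real CPUs pipeline and overlap these stages, but each instruction still conceptually passes through the same fetch, decode, and execute steps.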
Most modern CPUs are microprocessors, meaning they are contained on a single integrated circuit (IC) chip. An IC that contains a CPU may also contain memory, peripheral interfaces, and other components of a computer. Some computers employ a multi-core processor, a single chip containing two or more CPUs called "cores". Array processors or vector processors have multiple processors that operate in parallel, with no unit considered central. There also exists the concept of virtual CPUs, which are an abstraction of dynamically aggregated computational resources. Early computers such as the ENIAC had to be physically rewired to perform different tasks, which caused these machines to be called "fixed-program computers". Since the term "CPU" is defined as a device for software execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer. The idea of a stored-program computer had been present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so that it could be finished sooner.
On June 30, 1945, before ENIAC was made, mathematician John von Neumann distributed the paper entitled First Draft of a Report on the EDVAC. It was the outline of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a certain number of instructions of various types. Significantly, the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, which was the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program that EDVAC ran could be changed simply by changing the contents of the memory. EDVAC, however, was not the first stored-program computer. Early CPUs were custom designs used as part of a larger and sometimes distinctive computer. However, this method of designing custom CPUs for a particular application has largely given way to the development of multi-purpose processors produced in large quantities. This standardization began in the era of discrete transistor mainframes and minicomputers and has rapidly accelerated with the popularization of the integrated circuit.
The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in electronic devices ranging from automobiles to cellphones, and sometimes even in toys. While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, and the design became known as the von Neumann architecture, others before him, such as Konrad Zuse, had suggested and implemented similar ideas. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also used a stored-program design using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both.
Most modern CPUs are primarily von Neumann in design, but CPUs with the Harvard architecture are seen as well, especially in embedded applications. In early computers, relays and vacuum tubes were commonly used as switching elements; the overall speed of a system is dependent on the speed of the switches. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the Harvard Mark I failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs. Clock signal frequencies ranging from 100 kHz to 4 MHz were common at this time, limited largely by the speed of the switching devices they were built with.
Computing is any activity that uses computers. It includes developing hardware and software, and using computers to manage and process information, communicate and entertain. Computing is a critically important, integral component of modern industrial technology. Major computing disciplines include computer engineering, software engineering, computer science, information systems, and information technology. The ACM Computing Curricula 2005 defined "computing" as follows: "In a general way, we can define computing to mean any goal-oriented activity requiring, benefiting from, or creating computers. Thus, computing includes designing and building hardware and software systems for a wide range of purposes; the list is endless, and the possibilities are vast." It defines five sub-disciplines of the computing field: computer science, computer engineering, information systems, information technology, and software engineering. However, Computing Curricula 2005 also recognizes that the meaning of "computing" depends on the context, having other meanings that are more specific, based on the context in which the term is used.
For example, an information systems specialist will view computing somewhat differently from a software engineer. Regardless of the context, doing computing well can be complicated and difficult. Because society needs people to do computing well, we must think of computing not only as a profession but also as a discipline. The term "computing" has sometimes been narrowly defined, as in a 1989 ACM report on Computing as a Discipline: "The discipline of computing is the systematic study of algorithmic processes that describe and transform information: their theory, design, efficiency and application. The fundamental question underlying all computing is 'What can be automated?'" The term "computing" is also synonymous with counting and calculating. In earlier times, it was used in reference to the action performed by mechanical computing machines, and before that, to human computers. The history of computing is longer than the history of computing hardware and modern computing technology, and includes the history of methods intended for pen and paper or for chalk and slate, with or without the aid of tables.
Computing is intimately tied to the representation of numbers. But long before abstractions like the number arose, there were mathematical concepts to serve the purposes of civilization; these concepts include one-to-one correspondence, comparison to a standard, and the 3-4-5 right triangle. The earliest known tool for use in computation was the abacus; it is thought to have been invented in Babylon circa 2400 BC, and its original style of usage was by lines drawn in sand with pebbles. Abaci of a more modern design are still used as calculation tools today; the abacus was the first known calculation aid, preceding Greek methods by 2,000 years. The first recorded idea of using digital electronics for computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by C. E. Wynn-Williams. Claude Shannon's 1938 paper "A Symbolic Analysis of Relay and Switching Circuits" then introduced the idea of using electronics for Boolean algebraic operations. A computer is a machine that manipulates data according to a set of instructions called a computer program.
The program has an executable form that the computer can use directly to execute the instructions. The same program in its human-readable source code form enables a programmer to study and develop a sequence of steps known as an algorithm. Because the instructions can be carried out in different types of computers, a single set of source instructions converts to machine instructions according to the central processing unit type. The execution process carries out the instructions in a computer program. Instructions express the computations performed by the computer: they trigger sequences of simple actions on the executing machine, and those actions produce effects according to the semantics of the instructions. Computer software, or just "software", is a collection of computer programs and related data that provides the instructions for telling a computer what to do and how to do it. Software refers to one or more computer programs and data held in the storage of the computer for some purposes. In other words, software is a set of programs, procedures and its documentation concerned with the operation of a data processing system.
Program software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software. The term was coined to contrast with the old term hardware; in contrast to hardware, software is intangible. Software is also sometimes used in a more narrow sense, meaning application software only. Application software, also known as an "application" or an "app", is computer software designed to help the user perform specific tasks. Examples include enterprise software, accounting software, office suites, graphics software and media players. Many application programs deal principally with documents. Apps may be published separately; some users need never install one. Application software is contrasted with system software and middleware, which manage and integrate a computer's capabilities but typically do not directly apply them in the performance of tasks that benefit the user.