The megabyte is a multiple of the unit byte for digital information. Its recommended unit symbol is MB; the unit prefix mega is a multiplier of 1000000 in the International System of Units (SI). Therefore, one megabyte is one million bytes of information; this definition has been incorporated into the International System of Quantities. However, in the computer and information technology fields, several other definitions are used that arose for historical reasons of convenience. A common usage has been to designate one megabyte as 1048576 bytes, a measurement that conveniently expresses the binary multiples inherent in digital computer memory architectures. However, most standards bodies have deprecated this usage in favor of a set of binary prefixes, in which this quantity is designated by the unit mebibyte (MiB). Less common is a convention that uses the megabyte to mean 1000×1024 bytes; in practice, then, the megabyte is used to mean either 1000² bytes or 1024² bytes. The base-1024 interpretation originated as compromise technical jargon: byte multiples that needed to be expressed as powers of 2 lacked a convenient name.
As 1024 approximates 1000, corresponding to the SI prefix kilo-, it was a convenient term to denote the binary multiple. In 1998 the International Electrotechnical Commission (IEC) proposed standards for binary prefixes, requiring the use of megabyte to denote 1000² bytes and mebibyte to denote 1024² bytes. By the end of 2009, the IEC standard had been adopted by the IEEE, EU, ISO and NIST; nevertheless, the term megabyte continues to be used with different meanings. Base 10: 1 MB = 1000000 bytes is the definition recommended by the International System of Units and the IEC. This definition is used in networking contexts and for most storage media (hard drives, flash-based storage, DVDs), and is consistent with the other uses of the SI prefix in computing, such as CPU clock speeds or measures of performance. The Mac OS X 10.6 file manager is a notable example of this usage in software: since Snow Leopard, file sizes are reported in decimal units. In this convention, one thousand megabytes is equal to one gigabyte, where 1 GB is one billion bytes.
Base 2: 1 MB = 1048576 bytes is the definition used by Microsoft Windows in reference to computer memory, such as RAM. This definition is synonymous with the unambiguous binary prefix mebibyte. In this convention, one thousand and twenty-four megabytes is equal to one gigabyte, where 1 GB is 1024³ bytes. Mixed: 1 MB = 1024000 bytes is the definition used to describe the formatted capacity of the 1.44 MB 3.5-inch HD floppy disk, which has a capacity of 1474560 bytes. Semiconductor memory doubles in size for each address line added to an integrated circuit package, which favors counts that are powers of two; the capacity of a disk drive, by contrast, is the product of the sector size, the number of sectors per track, the number of tracks per side, and the number of disk platters in the drive. Changing any one of these factors would not typically double the size. Sector sizes were set as powers of two for convenience in processing, and it was a natural extension to give the capacity of a disk drive in multiples of the sector size, giving a mix of decimal and binary multiples when expressing total disk capacity.
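To make the three conventions concrete, the following minimal Python sketch (constant names invented for this illustration, not taken from any standard) converts a byte count under each definition:

```python
# Illustrative sketch of the three "megabyte" conventions described above.
# The constant names below are chosen for this example only.

MB_DECIMAL = 1000 ** 2        # SI / IEC "megabyte": 1,000,000 bytes
MIB_BINARY = 1024 ** 2        # "mebibyte" (the binary definition): 1,048,576 bytes
MB_FLOPPY  = 1000 * 1024      # mixed definition used for "1.44 MB" floppies: 1,024,000 bytes

def describe(n_bytes: int) -> str:
    """Report a byte count under each convention."""
    return (f"{n_bytes:,} bytes = "
            f"{n_bytes / MB_DECIMAL:.3f} MB (decimal), "
            f"{n_bytes / MIB_BINARY:.3f} MiB (binary), "
            f"{n_bytes / MB_FLOPPY:.3f} 'floppy MB' (mixed)")

# The HD floppy disk mentioned above: 1,474,560 bytes formatted capacity.
print(describe(1_474_560))   # ~1.475 MB, ~1.406 MiB, exactly 1.44 "floppy MB"
```

The same 1,474,560-byte capacity thus reads as roughly 1.47 MB, 1.41 MiB, or exactly 1.44 "megabytes" in the mixed convention.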
Depending on compression methods and file format, a megabyte of data can be, for example: a 1-megapixel bitmap image with 256 colors stored without any compression; a 4-megapixel JPEG image with normal compression; 1 minute of 128 kbit/s MP3 compressed music; 6 seconds of uncompressed CD audio; or a typical English book volume in plain text format. The human genome consists of DNA representing 800 MB of data; the parts that differentiate one person from another can be compressed to 4 MB.
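These figures follow from simple arithmetic; the short Python sketch below reproduces a few of them under stated assumptions (1 byte per pixel for a 256-color bitmap, 44.1 kHz 16-bit stereo for CD audio, 128 kbit/s for MP3):

```python
# Back-of-the-envelope checks for the examples above (assumptions noted inline).

MB = 1_000_000  # decimal megabyte

# 1 megapixel, 256 colors => 1 byte per pixel, no compression
bitmap_bytes = 1_000_000 * 1
print(f"1 MP, 256-color bitmap: {bitmap_bytes / MB:.2f} MB")

# Uncompressed CD audio: 44,100 samples/s * 2 channels * 2 bytes per sample
cd_bytes_per_second = 44_100 * 2 * 2
print(f"1 MB of CD audio lasts about {MB / cd_bytes_per_second:.1f} s")   # ~5.7 s, i.e. roughly 6 seconds

# 128 kbit/s MP3: 128,000 bits/s = 16,000 bytes/s
mp3_bytes_per_second = 128_000 / 8
print(f"1 MB of 128 kbit/s MP3 lasts about {MB / mp3_bytes_per_second / 60:.1f} min")  # ~1 minute
```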
Random-access memory (RAM) is a form of computer data storage that stores data and machine code currently being used. A random-access memory device allows data items to be read or written in the same amount of time irrespective of the physical location of the data inside the memory. In contrast, with other direct-access data storage media such as hard disks, CD-RWs, DVD-RWs and the older magnetic tapes and drum memory, the time required to read and write data items varies depending on their physical locations on the recording medium, due to mechanical limitations such as media rotation speeds and arm movement. RAM contains multiplexing and demultiplexing circuitry to connect the data lines to the addressed storage for reading or writing the entry. Usually more than one bit of storage is accessed by the same address, so RAM devices have multiple data lines and are said to be "8-bit" or "16-bit", etc., devices. In today's technology, random-access memory takes the form of integrated circuits. RAM is normally associated with volatile types of memory, where stored information is lost if power is removed, although non-volatile RAM has been developed.
Other types of non-volatile memories exist that allow random access for read operations, but either do not allow write operations or have other kinds of limitations on them. These include most types of ROM and a type of flash memory called NOR-Flash. Integrated-circuit RAM chips came onto the market in the early 1970s, with the first commercially available DRAM chip, the Intel 1103, introduced in October 1970. Early computers used relays, mechanical counters or delay lines for main memory functions. Ultrasonic delay lines could only reproduce data in the order in which it was written. Drum memory could be expanded at low cost, but efficient retrieval of memory items required knowledge of the physical layout of the drum to optimize speed. Latches built out of vacuum tube triodes, and later out of discrete transistors, were used for smaller and faster memories such as registers; such registers were relatively large and too costly to use for large amounts of data. The first practical form of random-access memory was the Williams tube, starting in 1947.
It stored data as electrically charged spots on the face of a cathode ray tube (CRT). Since the electron beam of the CRT could read and write the spots on the tube in any order, memory was random access. The capacity of the Williams tube was a few hundred to around a thousand bits, but it was much smaller and more power-efficient than using individual vacuum tube latches. Developed at the University of Manchester in England, the Williams tube provided the medium on which the first electronically stored program was implemented in the Manchester Baby computer, which first ran a program on 21 June 1948. In fact, rather than the Williams tube memory being designed for the Baby, the Baby was a testbed to demonstrate the reliability of the memory. Magnetic-core memory, developed up until the mid-1970s, became a widespread form of random-access memory. By changing the sense of each ring's magnetization, data could be stored with one bit stored per ring. Since every ring had a combination of address wires to select and read or write it, access to any memory location in any sequence was possible.
Magnetic core memory was the standard form of memory system until displaced by solid-state memory in integrated circuits, starting in the early 1970s. Dynamic random-access memory (DRAM) allowed replacement of a 4 or 6-transistor latch circuit by a single transistor for each memory bit, increasing memory density at the cost of volatility. Data was stored in the tiny capacitance of each transistor and had to be periodically refreshed every few milliseconds before the charge could leak away. The Toshiba Toscal BC-1411 electronic calculator, introduced in 1965, used a form of DRAM built from discrete components. DRAM was developed by Robert H. Dennard in 1968. Prior to the development of integrated read-only memory circuits, permanent random-access memory was constructed using diode matrices driven by address decoders, or specially wound core rope memory planes. The two widely used forms of modern RAM are static RAM (SRAM) and dynamic RAM (DRAM). In SRAM, a bit of data is stored using the state of a six-transistor memory cell.
This form of RAM is more expensive to produce, but is generally faster and requires less dynamic power than DRAM. In modern computers, SRAM is used as cache memory for the CPU. DRAM stores a bit of data using a transistor and capacitor pair, which together comprise a DRAM cell. The capacitor holds a high or low charge, and the transistor acts as a switch that lets the control circuitry on the chip read the capacitor's state of charge or change it. As this form of memory is less expensive to produce than static RAM, it is the predominant form of computer memory used in modern computers. Both static and dynamic RAM are considered volatile, as their state is lost or reset when power is removed from the system. By contrast, read-only memory (ROM) stores data by permanently enabling or disabling selected transistors, such that the memory cannot be altered. Writeable variants of ROM share properties of both ROM and RAM, enabling data to persist without power and to be updated without requiring special equipment; these persistent forms of semiconductor ROM include USB flash drives, memory cards for cameras and portable devices, and solid-state drives.
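As a purely illustrative toy model of the dynamic RAM behaviour described above (the leak rate and threshold are invented numbers, not real device characteristics), the need for periodic refresh can be sketched in Python:

```python
# Toy illustration of why a DRAM cell needs periodic refresh.
# Real DRAM is analog circuitry; this only models "charge that leaks over time".

class ToyDRAMCell:
    LEAK_PER_TICK = 0.2          # fraction of charge lost per time step (made up)
    READ_THRESHOLD = 0.5         # charge above this reads back as a 1

    def __init__(self):
        self.charge = 0.0

    def write(self, bit: int) -> None:
        self.charge = 1.0 if bit else 0.0

    def tick(self) -> None:       # time passes, charge leaks away
        self.charge *= (1.0 - self.LEAK_PER_TICK)

    def read(self) -> int:
        return 1 if self.charge > self.READ_THRESHOLD else 0

    def refresh(self) -> None:    # read the bit and rewrite it at full strength
        self.write(self.read())

cell = ToyDRAMCell()
cell.write(1)
for t in range(10):
    if t % 3 == 0:                # refresh every 3 ticks; without this, the 1 decays to 0
        cell.refresh()
    cell.tick()
print("bit after 10 ticks with refresh:", cell.read())   # 1
```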
ECC memory includes special circuitry to detect and/or correct random faults (memory errors) in the stored data, using parity bits or error correction codes.
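A classical technique such circuitry can use to correct a single flipped bit is a Hamming code. The following Python sketch is a minimal software illustration of the idea; real ECC memory implements comparable logic in hardware, typically over wider words:

```python
# Minimal Hamming(7,4) sketch: encode 4 data bits with 3 parity bits,
# then detect and correct any single flipped bit. Illustrative only.

def encode(d):                     # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # codeword positions 1..7

def correct(c):
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the flipped bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return c, syndrome

word = encode([1, 0, 1, 1])
corrupted = word[:]
corrupted[4] ^= 1                    # simulate a random fault in bit position 5
fixed, pos = correct(corrupted)
print("fault detected at position:", pos)                  # 5
print("corrected word matches original:", fixed == word)   # True
```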
In computing, multitasking is the concurrent execution of multiple tasks over a certain period of time. New tasks can interrupt already started ones before they finish, instead of waiting for them to end; as a result, a computer executes segments of multiple tasks in an interleaved manner, while the tasks share common processing resources such as central processing units (CPUs) and main memory. Multitasking automatically interrupts the running program, saving its state and loading the saved state of another program and transferring control to it; this "context switch" may be initiated at fixed time intervals, or the running program may be coded to signal to the supervisory software when it can be interrupted. Multitasking does not require parallel execution of multiple tasks at exactly the same time; even on multiprocessor computers, multitasking allows many more tasks to be run than there are CPUs. Multitasking is a common feature of computer operating systems, as it allows more efficient use of the computer hardware. In a time-sharing system, multiple human operators use the same processor as if it were dedicated to their use, while behind the scenes the computer is serving many users by multitasking their individual programs.
In multiprogramming systems, a task runs until it must wait for an external event or until the operating system's scheduler forcibly swaps the running task out of the CPU. Real-time systems, such as those designed to control industrial robots, require timely processing. Multitasking operating systems include measures to change the priority of individual tasks, so that important jobs receive more processor time than those considered less significant. Depending on the operating system, a task might be as large as an entire application program, or might be made up of smaller threads that carry out portions of the overall program. A processor intended for use with multitasking operating systems may include special hardware to securely support multiple tasks, such as memory protection and protection rings that ensure the supervisory software cannot be damaged or subverted by user-mode program errors. The term "multitasking" has become an international term, as the same word is used in many other languages such as German, Dutch and Norwegian.
In the early days of computing, CPU time was expensive and peripherals were slow. When the computer ran a program that needed access to a peripheral, the central processing unit would have to stop executing program instructions while the peripheral processed the data; this was very inefficient. The first computer using a multiprogramming system was the British Leo III owned by J. Lyons and Co. During batch processing, several different programs were loaded in the computer memory, and the first one began to run; when the first program reached an instruction waiting for a peripheral, the context of this program was stored away, and the second program in memory was given a chance to run. The process continued until all programs finished running. The use of multiprogramming was enhanced by the arrival of virtual memory and virtual machine technology, which enabled individual programs to make use of memory and operating system resources as if other concurrently running programs were, for all practical purposes, non-existent and invisible to them. Multiprogramming gives no guarantee that a program will run in a timely manner.
Indeed, the first program may well run for hours without needing access to a peripheral. As there were no users waiting at an interactive terminal, this was no problem: users handed in a deck of punched cards to an operator, and came back a few hours later for printed results. Multiprogramming reduced wait times when multiple batches were being processed. Early multitasking systems used applications that voluntarily ceded time to one another; this approach, supported by many computer operating systems, is known today as cooperative multitasking. Although it is now rarely used in larger systems except for specific applications such as CICS or the JES2 subsystem, cooperative multitasking was once the only scheduling scheme employed by Microsoft Windows and Classic Mac OS to enable multiple applications to run simultaneously. Cooperative multitasking is still used today on RISC OS systems. As a cooperatively multitasked system relies on each process regularly giving up time to other processes on the system, one poorly designed program can consume all of the CPU time for itself, either by performing extensive calculations or by busy waiting.
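The cooperative scheme can be sketched in a few lines of Python, using generators as stand-in "tasks" that explicitly yield control (real systems switch machine-level contexts, not generators; the names here are illustrative):

```python
# Cooperative multitasking sketch: each "task" is a generator that must
# yield control voluntarily. A task that never yields starves the others.
from collections import deque

def well_behaved(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                      # voluntarily give up the CPU

def scheduler(tasks):
    ready = deque(tasks)
    while ready:
        task = ready.popleft()
        try:
            next(task)             # run the task until its next yield
            ready.append(task)     # it cooperated, so requeue it
        except StopIteration:
            pass                   # the task finished

scheduler([well_behaved("A", 3), well_behaved("B", 3)])
# Output interleaves A and B. A task written as `while True: compute()`
# with no yield would never return control, hogging the system exactly as
# described above for a poorly designed program under cooperative multitasking.
```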
In a server environment, this is a hazard. Preemptive multitasking allows the computer system to more reliably guarantee to each process a regular "slice" of operating time. It also allows the system to deal promptly with important external events like incoming data, which might require the immediate attention of one or another process. Operating systems were developed to take advantage of hardware capabilities such as interrupts, and to run multiple processes preemptively. Preemptive multitasking was implemented in the PDP-6 Monitor and MULTICS in 1964, in OS/360 MFT in 1967, and in Unix in 1969, and was available in some operating systems for computers as small as DEC's PDP-8.
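By contrast, a preemptive scheduler forcibly switches tasks after a fixed time slice. The toy simulation below (task names and quantum invented for the example) shows that even a long-running task cannot monopolize the processor:

```python
# Toy preemptive scheduling: the "scheduler" charges each task a fixed
# quantum of simulated time and then switches, regardless of the task's wishes.
from collections import deque

tasks = deque([("editor", 5), ("compiler", 12), ("network", 3)])  # (name, work units)
QUANTUM = 4
clock = 0

while tasks:
    name, remaining = tasks.popleft()
    used = min(QUANTUM, remaining)
    clock += used
    remaining -= used
    print(f"t={clock:2}: ran {name} for {used} units, {remaining} left")
    if remaining:
        tasks.append((name, remaining))   # preempted: back to the end of the queue

# Even the long-running "compiler" task cannot hog the processor: every task
# gets a regular slice, which is the guarantee described above.
```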
Graphical user interface
The graphical user interface (GUI) is a form of user interface that allows users to interact with electronic devices through graphical icons and visual indicators such as secondary notation, instead of text-based user interfaces, typed command labels or text navigation. GUIs were introduced in reaction to the perceived steep learning curve of command-line interfaces, which require commands to be typed on a computer keyboard. The actions in a GUI are performed through direct manipulation of the graphical elements. Beyond computers, GUIs are used in many handheld mobile devices such as MP3 players, portable media players, gaming devices, and smaller household and industrial controls. The term GUI tends not to be applied to other lower-display-resolution types of interfaces, such as video games, or to interfaces not using flat screens, like volumetric displays, because the term is restricted to the scope of two-dimensional display screens able to describe generic information, in the tradition of the computer science research at the Xerox Palo Alto Research Center.
Designing the visual composition and temporal behavior of a GUI is an important part of software application programming in the area of human–computer interaction. Its goal is to enhance the efficiency and ease of use for the underlying logical design of a stored program, a design discipline named usability. Methods of user-centered design are used to ensure that the visual language introduced in the design is well-tailored to the tasks. The visible graphical interface features of an application are sometimes referred to as chrome or GUI. Users interact with information by manipulating visual widgets that allow for interactions appropriate to the kind of data they hold. The widgets of a well-designed interface are selected to support the actions necessary to achieve the goals of users. A model–view–controller design allows flexible structures in which the interface is independent of, and indirectly linked to, application functions, so the GUI can be customized easily; this allows users to select or design a different skin at will, and eases the designer's work to change the interface as user needs evolve.
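A minimal sketch of that separation, with class and function names invented for the example, might look as follows in Python:

```python
# Minimal model-view-controller sketch: the model knows nothing about how it
# is displayed, so the "view" (the GUI's skin) can be swapped without touching it.

class CounterModel:                      # application state and logic
    def __init__(self):
        self.value = 0
    def increment(self):
        self.value += 1

def plain_view(model):                   # one possible presentation
    return f"count = {model.value}"

def fancy_view(model):                   # another skin for the same model
    return f"*** {model.value} ***"

class Controller:                        # routes user actions to the model,
    def __init__(self, model, view):     # then asks the view to re-render
        self.model, self.view = model, view
    def click(self):
        self.model.increment()
        return self.view(self.model)

ui = Controller(CounterModel(), plain_view)
print(ui.click())                        # count = 1
ui.view = fancy_view                     # change the skin; the model is untouched
print(ui.click())                        # *** 2 ***
```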
Good user interface design relates more to users, and less to system architecture. Large widgets, such as windows, provide a frame or container for the main presentation content such as a web page, email message or drawing. Smaller ones act as user-input tools. A GUI may be designed for the requirements of a vertical market as an application-specific graphical user interface. Examples include automated teller machines, point-of-sale touchscreens at restaurants, self-service checkouts used in retail stores, airline self-ticketing and check-in, information kiosks in a public space like a train station or a museum, and monitors or control screens in embedded industrial applications which employ a real-time operating system. Cell phones and handheld game systems also employ application-specific touchscreen GUIs. Newer automobiles use GUIs in their navigation systems and multimedia centers, or navigation multimedia center combinations. A GUI uses a combination of technologies and devices to provide a platform that users can interact with, for the tasks of gathering and producing information.
A series of elements conforming to a visual language have evolved to represent information stored in computers. This makes it easier for people with few computer skills to use computer software. The most common combination of such elements in GUIs is the windows, icons, menus, pointer (WIMP) paradigm in personal computers. The WIMP style of interaction uses a virtual input device to represent the position of a pointing device, most commonly a mouse, and presents information organized in windows and represented with icons. Available commands are compiled together in menus, and actions are performed by making gestures with the pointing device. A window manager facilitates the interactions between windows and the windowing system. The windowing system handles hardware devices such as pointing devices and graphics hardware, and the positioning of the pointer. In personal computers, all these elements are modeled through a desktop metaphor to produce a simulation called a desktop environment in which the display represents a desktop, on which documents and folders of documents can be placed.
Window managers and other software combine to simulate the desktop environment with varying degrees of realism. Smaller mobile devices such as personal digital assistants and smartphones use the WIMP elements with different unifying metaphors, due to constraints in space and available input devices. Applications for which WIMP is not well suited may use newer interaction techniques, collectively termed post-WIMP user interfaces. As of 2011, some touchscreen-based operating systems such as Apple's iOS and Android use the class of GUIs named post-WIMP. These support styles of interaction using more than one finger in contact with a display, which allows actions such as pinching and rotating, which are unsupported by one pointer and mouse. Human interface devices for efficient interaction with a GUI include a computer keyboard, used together with keyboard shortcuts, and pointing devices for cursor control: mouse, pointing stick, trackball, as well as virtual keyboards and head-up displays. There are also actions performed by programs that affect the GUI.
For example, there are components like inotify or D-Bus to facilitate communication between computer programs. Ivan Sutherland developed Sketchpad in 1963, widely held as the first graphical computer-aided design program.
An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is executed directly by the hardware and makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows with a market share of around 82.74%. MacOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, use of Google's Android in 2017 was up to 70%; according to third-quarter 2016 data, Android on smartphones is dominant with 87.5 percent and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a per-year decrease in market share of 5.2 percent, while other operating systems amount to just 0.3 percent.
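Returning to the role of the operating system as an intermediary: even a high-level program ultimately reaches the hardware through system calls. The following Python sketch is illustrative only; os.open, os.write and os.close are thin wrappers over the underlying OS calls on most platforms:

```python
# Illustration of the "intermediary" idea: a program asks the operating system
# to touch hardware via system calls rather than driving the disk itself.
import os

fd = os.open("example.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"written via OS system calls\n")   # the kernel drives the storage device
os.close(fd)

# The convenient built-in below does the same work, but it still ends up in the
# same system calls; no ordinary program talks to the disk controller directly.
with open("example.txt") as f:
    print(f.read())
```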
Linux distributions are dominant in the supercomputing sector. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently. This is achieved by time-sharing, where the available processor time is divided between multiple processes. These processes are each interrupted in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized in preemptive and co-operative types. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux—as well as non-Unix-like, such as AmigaOS—support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking.
32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer; the development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
In an OS, distributed and cloud computing context, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and in cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems; they are designed to operate on small machines like PDAs with less autonomy. They are able to operate with a limited number of resources, and they are compact and efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single-address-space machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled use of runtime libraries and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the printing of payroll checks from data on punched paper cards.
A computer program is a collection of instructions that performs a specific task when executed by a computer. A computer requires programs to function. A computer program is written by a computer programmer in a programming language. From the program in its human-readable form of source code, a compiler can derive machine code—a form consisting of instructions that the computer can directly execute. Alternatively, a computer program may be executed with the aid of an interpreter. A collection of computer programs and related data is referred to as software. Computer programs may be categorized along functional lines, such as application software and system software. The underlying method used for some calculation or manipulation is known as an algorithm. The earliest programmable machines preceded the invention of the digital computer. In 1801, Joseph-Marie Jacquard devised a loom that would weave a pattern by following a series of perforated cards. Patterns could be repeated by arranging the cards.
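To make the compile-versus-interpret distinction above concrete in a small way, CPython can be asked to show its own translation step: source code is first compiled to lower-level bytecode (not machine code), which the interpreter then executes. The sketch below illustrates the general pipeline only:

```python
# A small illustration of "source code -> lower-level instructions -> execution".
# CPython compiles source to bytecode, which its interpreter then executes;
# the two steps mirror the compile/interpret distinction described above.
import dis

source = "total = sum(n * n for n in range(4))"

code_object = compile(source, "<example>", "exec")  # translation step
dis.dis(code_object)                                 # inspect the lower-level instructions

namespace = {}
exec(code_object, namespace)                         # execution step
print(namespace["total"])                            # 0 + 1 + 4 + 9 = 14
```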
In 1837, Charles Babbage was inspired by Jacquard's loom to attempt to build the Analytical Engine. The names of the components of the calculating device were borrowed from the textile industry, in which yarn was brought from the store to be milled. The device would have had a "store"—memory to hold 1,000 numbers of 40 decimal digits each. Numbers from the "store" would have been transferred to the "mill" for processing, with a "thread" being the execution of programmed instructions by the device. It was programmed using two sets of perforated cards—one to direct the operation and the other for the input variables. However, after more than 17,000 pounds of the British government's money had been spent, the thousands of cogged wheels and gears never worked together. During a nine-month period in 1842–43, Ada Lovelace translated the memoir of Italian mathematician Luigi Menabrea, which covered the Analytical Engine. The translation contained Note G, which detailed a method for calculating Bernoulli numbers using the Analytical Engine.
This note is recognized by some historians as the world's first written computer program. In 1936, Alan Turing introduced the Universal Turing machine—a theoretical device that can model every computation that can be performed on a Turing-complete computing machine; at its heart is a finite-state machine operating on a tape. The machine can move the tape back and forth, changing its contents as it performs an algorithm; the machine starts in the initial state, goes through a sequence of steps, and halts when it encounters the halt state. This machine is considered by some to be the origin of the stored-program computer—used by John von Neumann for the "Electronic Computing Instrument" that now bears the von Neumann architecture name. The Z3 computer, invented by Konrad Zuse in Germany, was a programmable computer. A digital computer uses electricity as the calculating component; the Z3 contained 2,400 relays to create the circuits. The circuits provided a floating-point, nine-instruction computer. Programming the Z3 was through a specially designed keyboard and punched tape.
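A tiny simulator makes the Turing-machine description above concrete. The machine below, with states, symbols and rules invented for the example, inverts the bits on its tape and halts when it reaches a blank:

```python
# Minimal Turing machine sketch: a finite set of states, a tape, and a
# transition table (state, symbol) -> (write, move, next state). This toy
# machine flips every bit on the tape and halts at the blank that ends it.
BLANK = "_"
rules = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", BLANK): (BLANK, 0, "halt"),
}

def run(tape_str, state="flip"):
    tape = dict(enumerate(tape_str))       # sparse tape: position -> symbol
    head = 0
    while state != "halt":                 # sequence of steps until the halt state
        symbol = tape.get(head, BLANK)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape))

print(run("0110" + BLANK))                 # -> 1001_
```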
The Electronic Numerical Integrator And Computer (ENIAC) was a Turing-complete, general-purpose computer that used 17,468 vacuum tubes to create the circuits. At its core, it was a series of Pascalines wired together. Its 40 units weighed 30 tons, occupied 1,800 square feet, and consumed $650 per hour in electricity when idle. It had 20 base-10 accumulators. Programming the ENIAC took up to two months. Three function tables needed to be rolled to fixed function panels. Function tables were connected to function panels using heavy black cables; each function table had 728 rotating knobs. Programming the ENIAC also involved setting some of the 3,000 switches. Debugging a program took a week. The programmers of the ENIAC were women who were known collectively as the "ENIAC girls." The ENIAC featured parallel operations: different sets of accumulators could work on different algorithms. It used punched card machines for input and output, and it was controlled with a clock signal. It ran for eight years, calculating hydrogen bomb parameters, predicting weather patterns, and producing firing tables to aim artillery guns.
The Manchester Baby was a stored-program computer. Programming transitioned away from setting dials. Only three bits of memory were available to store each instruction, so it was limited to eight instructions. 32 switches were available for programming. On such machines, the computer program was written on paper for reference. An instruction was represented by a configuration of on/off settings. After setting the configuration, an execute button was pressed; this process was repeated. Computer programs were also manually input via paper tape or punched cards. After the medium was loaded, the starting address was set via switches and the execute button was pressed. In 1961, the Burroughs B5000 was built to be programmed in the ALGOL 60 language; the hardware featured circuits to ease the compile phase. Introduced in 1964, the IBM System/360 was a line of six computers each having the same instruction set architecture; the Model 30 was the least expensive. Customers could retain the same application software. Each System/360 model featured multiprogramming.
With operating system support, multiple programs could be in memory at once. When one was waiting for input/output, another could compute. Each model could emulate other computers. Customers could upgrade to the System/360 and retain their application software.
Byte was an American microcomputer magazine, influential in the late 1970s and throughout the 1980s because of its wide-ranging editorial coverage. Whereas many magazines were dedicated to specific systems or to the home or business users' perspective, Byte covered developments in the entire field of "small computers and software," and sometimes other computing fields such as supercomputers and high-reliability computing. Coverage was in-depth with much technical detail, rather than user-oriented. Byte started in 1975, shortly after the first personal computers appeared as kits advertised in the back of electronics magazines. Byte was published monthly, with an initial yearly subscription price of $10. Print publication ceased in 1998 and online publication in 2013. In 1975 Wayne Green was the editor and publisher of 73, and his ex-wife, Virginia Londner Green, was the Business Manager of 73 Inc. In the August 1975 issue of 73 magazine, Wayne's editorial column started with this item: The response to computer-type articles in 73 has been so enthusiastic that we here in Peterborough got carried away.
On May 25th we made a deal with the publisher of a small computer hobby magazine to take over as editor of a new publication which would start in August... Byte. Carl Helmers published a series of six articles in 1974 that detailed the design and construction of his "Experimenter's Computer System", a personal computer based on the Intel 8008 microprocessor. In January 1975 this became the monthly ECS magazine, with 400 subscribers; the last issue was published on May 12, 1975, and in June the subscribers were mailed a notice announcing Byte magazine. Carl wrote to another hobbyist newsletter, the Micro-8 Computer User Group Newsletter, and described his new job as editor of Byte magazine. I got a note in the mail about two weeks ago from Wayne Green, publisher of '73 Magazine' saying hello and why don't you come up and talk a bit; the net result of a follow up is the decision to create BYTE magazine using the facilities of Green Publishing Inc. I will end up with the editorial focus for the magazine. Virginia Londner Green had returned to 73 in the December 1974 issue and incorporated Green Publishing in March 1975.
The first five issues of Byte were published by Green Publishing, and the name was changed to Byte Publications starting with the February 1976 issue. Carl Helmers was a co-owner of Byte Publications. The first four issues were produced in the offices of 73, and Wayne Green was listed as the publisher. One day in November 1975 Wayne came to work and found that the Byte magazine staff had moved out and taken the January issue with them; the February 1976 issue of Byte has a short story about the move: "After a start which reads like a romantic light opera with an episode or two reminiscent of the Keystone Cops, Byte magazine has moved into separate offices of its own." Wayne Green was not happy about losing Byte magazine, so he was going to start a new one called Kilobyte. Byte, however, trademarked KILOBYTE as the title of a cartoon series in Byte magazine, so the new magazine was called Kilobaud instead. There was competition and animosity between Byte Publications and 73 Inc., but both remained in the small town of Peterborough, New Hampshire.
Articles in the first issue included Which Microprocessor For You? by Hal Chamberlin, Write Your Own Assembler by Dan Fylstra and Serial Interface by Don Lancaster. Advertisements from Godbout, MITS, Processor Technology, SCELBI and Sphere, among others, appeared. Early articles in Byte were do-it-yourself electronic or software projects to improve small computers. A continuing feature was Ciarcia's Circuit Cellar, a column in which electronic engineer Steve Ciarcia described small projects to modify or attach to a computer. Significant articles in this period included the "Kansas City" standard for data storage on audio tape, insertion of disk drives into S-100 computers, publication of source code for various computer languages, and coverage of the first microcomputer operating system, CP/M. Byte ran Microsoft's first advertisement, as "Micro-Soft", to sell a BASIC interpreter for 8080-based computers. In spring of 1979, owner/publisher Virginia Williamson sold Byte to McGraw-Hill, and she became a vice president of McGraw-Hill Publications Company.
Shortly after the IBM PC was introduced in 1981, the magazine changed editorial policies. It de-emphasized the do-it-yourself electronics and software articles and began running product reviews. It continued its wide-ranging coverage of hardware and software, but now it reported "what it does" and "how it works", not "how to do it". The editorial focus remained on home and personal computers. By the early 1980s Byte had become an "elite" magazine, seen as a peer of Rolling Stone and Playboy; others, such as David Bunnell of PC Magazine, aspired to emulate its reputation and success. It was the only computer publication on the 1981 Folio 400 list of largest magazines. Byte's 1982 average number of pages was 543; its number of paid advertising pages grew by more than 1,000 while most magazines' amount of advertising did not change, and its circulation of 420,000 was the third highest of all computer magazines. Byte earned $9 million from revenue of $36.6 million in 1983, twice the average profit margin for the magazine industry.
It remained successful while many other magazines failed in 1984 during economic weakness in the computer industry. The October 1984 issue had about 300 pages of ads sold at an average of $6,000 per page. From 1975 to 1986 Byte covers featured the artwork of Robert Tinney.