In UNIX computing, the system load is a measure of the amount of computational work that a computer system performs. The load average represents the average system load over a period of time; it conventionally appears in the form of three numbers which represent the system load during the last one-, five-, and fifteen-minute periods. All Unix and Unix-like systems generate a dimensionless metric of three "load average" numbers in the kernel. Users can query the current result from a Unix shell by running the uptime command; the w and top commands show the same three load average numbers, as do a range of graphical user interface utilities. In Linux, they can also be read from the /proc/loadavg file. An idle computer has a load number of 0; each process using or waiting for the CPU increments the load number by 1, and each process that terminates decrements it by 1. Most UNIX systems count only processes in the running or runnable states. However, Linux also includes processes in uninterruptible sleep states, which can lead to markedly different results if many processes remain blocked in I/O due to a busy or stalled I/O system.
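On Linux, the same three numbers can be read programmatically from /proc/loadavg. The following is a minimal C sketch (assuming a Linux system, where the file's first three fields are the 1-, 5- and 15-minute averages), not a robust utility:

```c
#include <stdio.h>

int main(void)
{
    double one, five, fifteen;
    FILE *f = fopen("/proc/loadavg", "r");   /* Linux-specific pseudo-file */

    if (f == NULL || fscanf(f, "%lf %lf %lf", &one, &five, &fifteen) != 3) {
        fprintf(stderr, "could not read /proc/loadavg\n");
        return 1;
    }
    fclose(f);

    printf("load averages: %.2f (1 min), %.2f (5 min), %.2f (15 min)\n",
           one, five, fifteen);
    return 0;
}
```

Portable programs would more commonly call the getloadavg() library function, which is available on Linux and the BSDs.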
Processes blocked on slow I/O media, for example, are included in this count. Such circumstances can result in an elevated load average which does not reflect an actual increase in CPU use. Systems calculate the load average as the exponentially damped/weighted moving average of the load number; the three values of load average refer to the past one, five, and fifteen minutes of system operation. Mathematically speaking, all three values always average all the system load since the system started up. They all decay exponentially, but they decay at different speeds: they decay by a factor of e after 1, 5, and 15 minutes respectively. Hence, the 1-minute load average consists of 63% of the load from the last minute and 37% of the average load since start-up, excluding the last minute. For the 5- and 15-minute load averages, the same 63%/37% split is computed over 5 minutes and 15 minutes respectively. Therefore, it is not technically accurate that the 1-minute load average only includes the last 60 seconds of activity, as it also includes 37% of the activity from before that, but it is correct to state that it includes mostly the last minute.
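Stated as a recurrence (with the roughly five-second sampling interval used by Linux; the notation below is illustrative, not drawn from a particular source), the 1-minute average is updated as:

$$
\mathrm{load}_{1\,\mathrm{min}}(t) \;=\; \mathrm{load}_{1\,\mathrm{min}}(t - 5\,\mathrm{s}) \cdot e^{-5/60} \;+\; n(t)\,\bigl(1 - e^{-5/60}\bigr)
$$

where n(t) is the instantaneous load number. Over a full minute the old value is therefore weighted by (e^{-5/60})^{12} = e^{-1} ≈ 0.37, which is where the 63%/37% split comes from; the 5- and 15-minute averages use the factors e^{-5/300} and e^{-5/900} instead.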
For single-CPU systems that are CPU-bound, one can think of the load average as a measure of system utilization during the respective time period. For systems with multiple CPUs, one must divide the load by the number of processors in order to get a comparable measure. For example, one can interpret a load average of "1.73 0.60 7.98" on a single-CPU system as follows: during the last minute, the system was overloaded by 73% on average, meaning it could have handled all of the work scheduled for that minute only if it were 1.73 times as fast; during the last 5 minutes, the CPU was idle 40% of the time on average; and during the last 15 minutes, the system was overloaded by 698% on average. In a system with four CPUs, a load average of 3.73 would indicate that there were, on average, 3.73 processes ready to run, and each one could be scheduled onto a CPU. On modern UNIX systems, the treatment of threading with respect to load averages varies; some systems treat threads as processes for the purposes of load average calculation: each thread waiting to run adds 1 to the load.
However, other systems, especially those implementing so-called M:N threading, use different strategies, such as counting the process only once for the purpose of load, or counting only the threads exposed by the user-thread scheduler to the kernel, which may depend on the level of concurrency set on the process. Linux appears to count each thread separately as adding 1 to the load. A comparative study of different load indices carried out by Ferrari et al. reported that CPU load information based upon the CPU queue length does much better in load balancing than CPU utilization. The reason CPU queue length did better is that when a host is heavily loaded, its CPU utilization is likely to be close to 100%, and utilization is then unable to reflect the exact load level. In contrast, CPU queue lengths can directly reflect the amount of load on a CPU; as an example, two systems, one with 3 and the other with 6 processes in the queue, will both most likely have utilizations close to 100%, although they obviously differ. On Linux systems, the load average is not calculated on each clock tick, but driven by a variable value that is based on the HZ frequency setting and tested on each clock tick.
Although the HZ value can be configured in some versions of the kernel, it is typically set to 100. The calculation code uses the HZ value to determine the CPU load calculation frequency; the timer.c::calc_load function runs the algorithm every 5 * HZ ticks (the LOAD_FREQ interval), or roughly every five seconds, rather than on every tick. The countdown is over a LOAD_FREQ of 5 * HZ; the avenrun array contains the 1-minute, 5-minute and 15-minute averages, and the CALC_LOAD macro with its associated values is defined in sched.h. A simplified sketch of this calculation is shown after the list of related commands below. Other commands for assessing system performance include uptime (system uptime and load averages), top (an overall system view) and vmstat (reports on processes, memory, paging and CPU activity).
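As referenced above, the following is a simplified, user-space sketch of the fixed-point arithmetic behind calc_load and CALC_LOAD. It is modelled on classic 2.6-era kernel sources but is illustrative rather than a verbatim copy; the number of active tasks is passed in as a parameter instead of being read from the scheduler, and the five-second countdown logic is omitted.

```c
#include <stdio.h>

#define HZ        100              /* assumed kernel tick rate              */
#define FSHIFT    11               /* bits of fixed-point precision         */
#define FIXED_1   (1 << FSHIFT)    /* 1.0 in fixed point                    */
#define LOAD_FREQ (5 * HZ + 1)     /* recompute interval: about 5 seconds   */
#define EXP_1     1884             /* FIXED_1 * exp(-5 sec / 1 min)         */
#define EXP_5     2014             /* FIXED_1 * exp(-5 sec / 5 min)         */
#define EXP_15    2037             /* FIXED_1 * exp(-5 sec / 15 min)        */

/* Exponentially damp the old value and blend in the new task count. */
#define CALC_LOAD(load, exp, n)        \
    (load) *= (exp);                   \
    (load) += (n) * (FIXED_1 - (exp)); \
    (load) >>= FSHIFT;

static unsigned long avenrun[3];   /* 1-, 5- and 15-minute averages */

/* In the kernel, n_fixed would be the count of runnable plus
 * uninterruptible tasks scaled by FIXED_1; here it is a parameter. */
static void calc_load(unsigned long n_fixed)
{
    CALC_LOAD(avenrun[0], EXP_1,  n_fixed);
    CALC_LOAD(avenrun[1], EXP_5,  n_fixed);
    CALC_LOAD(avenrun[2], EXP_15, n_fixed);
}

int main(void)
{
    /* Simulate one minute (12 five-second intervals) with 2 runnable tasks. */
    for (int i = 0; i < 12; i++)
        calc_load(2 * FIXED_1);
    printf("1-minute average after one minute: %.2f\n",
           avenrun[0] / (double)FIXED_1);
    return 0;
}
```

The toy simulation at the end prints a 1-minute value just above 1.2 after one simulated minute with two runnable tasks, in line with the 63%/37% weighting described earlier (2 × 0.63 ≈ 1.26).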
In computing, a process is the instance of a computer program that is being executed. It contains the program code and its current activity. Depending on the operating system, a process may be made up of multiple threads of execution that execute instructions concurrently. While a computer program is a passive collection of instructions, a process is the actual execution of those instructions. Several processes may be associated with the same program. Multitasking is a method to allow multiple processes to share processors and other system resources; each CPU executes a single task at a time. However, multitasking allows each processor to switch between tasks that are being executed without having to wait for each task to finish. Depending on the operating system implementation, switches could be performed when tasks perform input/output operations, when a task indicates that it can be switched, or on hardware interrupts. A common form of multitasking is time-sharing. Time-sharing is a method to allow high responsiveness for interactive user applications.
In time-sharing systems, context switches are performed rapidly, which makes it seem like multiple processes are being executed on the same processor. This seeming execution of multiple processes is called concurrency. For security and reliability, most modern operating systems prevent direct communication between independent processes, providing mediated and controlled inter-process communication functionality. In general, a computer system process consists of the following resources: an image of the executable machine code associated with the program; memory (typically some region of virtual memory), which includes the executable code, process-specific data, a call stack and a heap for intermediate computation data; operating system descriptors of resources that are allocated to the process, such as file descriptors or handles, and data sources and sinks; security attributes, such as the process's set of permissions; and processor state, such as the content of registers and physical memory addressing. The state is stored in computer registers when the process is executing, and in memory otherwise. The operating system holds most of this information about active processes in data structures called process control blocks.
Any subset of the resources, typically at least the processor state, may be associated with each of the process's threads in operating systems that support threads or child processes. The operating system keeps its processes separate and allocates the resources they need, so that they are less likely to interfere with each other and cause system failures; the operating system may also provide mechanisms for inter-process communication to enable processes to interact in safe and predictable ways. A multitasking operating system may just switch between processes to give the appearance of many processes executing simultaneously, though in fact only one process can be executing at any one time on a single CPU. It is usual to associate a single process with a main program, and child processes with any spin-off, parallel processes, which behave like asynchronous subroutines. A process is said to own resources, of which an image of its program in memory is one such resource. However, in multiprocessing systems many processes may run off of, or share, the same reentrant program at the same location in memory, but each process is said to own its own image of the program.
Processes are often called "tasks" in embedded operating systems. The sense of "process" is "something that takes up time", as opposed to "memory", which is "something that takes up space". The above description applies both to processes managed by an operating system and to processes as defined by process calculi. If a process requests something for which it must wait, it will be blocked; when the process is in the blocked state, it is eligible for swapping to disk, but this is transparent in a virtual memory system, where regions of a process's memory may be on disk and not in main memory at any time. Note that even portions of active processes/tasks are eligible for swapping to disk, if the portions have not been used recently. Not all parts of an executing program and its data have to be in physical memory for the associated process to be active. An operating system kernel that allows multitasking needs processes to have certain states. Names for these states are not standardised. First, the process is "created" by being loaded from a secondary storage device into main memory.
After that, the process scheduler assigns it the "waiting" state. While the process is "waiting", it waits for the scheduler to do a so-called context switch and load the process into the processor; the process state then becomes "running", and the processor executes the process's instructions. If a process needs to wait for a resource, it is assigned the "blocked" state; the process state is changed back to "waiting" when the process no longer needs to wait. Once the process finishes execution, or is terminated by the operating system, it is no longer needed; the process is removed immediately or is moved to the "terminated" state, in which it waits to be removed from main memory.
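As a rough illustration of this life cycle, the states might be modelled as the following C enumeration (a sketch only; the names are not standardised and real kernels track more states than these):

```c
/* Illustrative process states; actual names and transitions vary by OS. */
typedef enum {
    PROC_CREATED,    /* loaded from secondary storage into main memory     */
    PROC_WAITING,    /* ready to run, waiting for the scheduler            */
    PROC_RUNNING,    /* instructions currently executing on a processor    */
    PROC_BLOCKED,    /* waiting for a resource such as I/O or user input   */
    PROC_TERMINATED  /* finished or killed; awaiting removal from memory   */
} proc_state;
```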
A CPU cache is a hardware cache used by the central processing unit of a computer to reduce the average cost of accessing data from the main memory. A cache is a smaller, faster memory, located closer to a processor core, which stores copies of the data from frequently used main memory locations. Most CPUs have multiple independent caches, including instruction and data caches, where the data cache is usually organized as a hierarchy of more cache levels. All modern CPUs have multiple levels of CPU caches; the first CPUs that used a cache had only one level of cache. All current CPUs with caches have a split L1 cache; they also have L2 caches and, for larger processors, L3 caches as well. The L2 cache is usually not split and acts as a common repository for the already split L1 cache; each core of a multi-core processor typically has a dedicated L2 cache that is not shared between the cores. The L3 cache, and higher-level caches, are shared between the cores and are not split. An L4 cache is currently uncommon, and is generally implemented on dynamic random-access memory, rather than on static random-access memory, on a separate die or chip.
That was also historically the case with L1, while bigger chips have allowed integration of it and generally all cache levels, with the possible exception of the last level. Each extra level of cache tends to be optimized differently. Other types of caches exist, such as the translation lookaside buffer, which is part of the memory management unit that most CPUs have. Caches are generally sized in powers of two: 4, 8, 16, etc. KiB or MiB sizes. When trying to read from or write to a location in main memory, the processor checks whether the data from that location is already in the cache. If so, the processor will read from or write to the cache instead of the much slower main memory. Most modern desktop and server CPUs have at least three independent caches: an instruction cache to speed up executable instruction fetch, a data cache to speed up data fetch and store, and a translation lookaside buffer (TLB) used to speed up virtual-to-physical address translation for both executable instructions and data. A single TLB can be provided for access to both instructions and data, or a separate instruction TLB and data TLB can be provided.
The data cache is usually organized as a hierarchy of more cache levels. However, the TLB cache is part of the memory management unit and not directly related to the CPU caches. Data is transferred between memory and cache in blocks of fixed size, called cache lines or cache blocks; when a cache line is copied from memory into the cache, a cache entry is created. The cache entry will include the copied data as well as the requested memory location (called a tag). When the processor needs to read or write a location in memory, it first checks for a corresponding entry in the cache. The cache checks for the contents of the requested memory location in any cache lines that might contain that address. If the processor finds that the memory location is in the cache, a cache hit has occurred. However, if the processor does not find the memory location in the cache, a cache miss has occurred. In the case of a cache hit, the processor immediately reads or writes the data in the cache line. For a cache miss, the cache allocates a new entry and copies the data from main memory, and then the request is fulfilled from the contents of the cache.
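To make the hit/miss decision concrete, the following is a minimal sketch of a lookup in a direct-mapped cache; the line size, number of sets and address arithmetic are illustrative and not tied to any particular CPU:

```c
#include <stdbool.h>
#include <stdint.h>

#define LINE_SIZE 64    /* illustrative 64-byte cache lines        */
#define NUM_SETS  256   /* 256 sets * 64 bytes = 16 KiB of storage */

typedef struct {
    bool     valid;             /* does this entry hold anything?   */
    uint64_t tag;               /* identifies which line is cached  */
    uint8_t  data[LINE_SIZE];   /* the cached copy of the line      */
} cache_line;

static cache_line cache[NUM_SETS];

/* Returns true on a cache hit; on a miss the caller would allocate an
 * entry (possibly evicting another) and copy the line from main memory. */
bool cache_lookup(uint64_t addr)
{
    uint64_t index = (addr / LINE_SIZE) % NUM_SETS;  /* which set to check */
    uint64_t tag   = (addr / LINE_SIZE) / NUM_SETS;  /* which line it is   */
    return cache[index].valid && cache[index].tag == tag;
}
```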
To make room for the new entry on a cache miss, the cache may have to evict one of the existing entries. The heuristic it uses to choose the entry to evict is called the replacement policy; the fundamental problem with any replacement policy is that it must predict which existing cache entry is least likely to be used in the future. Predicting the future is difficult, so there is no perfect method to choose among the variety of replacement policies available. One popular replacement policy, least-recently used (LRU), replaces the least recently accessed entry. Marking some memory ranges as non-cacheable can improve performance, by avoiding caching of memory regions that are rarely re-accessed; this avoids the overhead of loading something into the cache without having any reuse. Cache entries may also be disabled or locked depending on the context. If data is written to the cache, at some point it must also be written to main memory. In a write-through cache, every write to the cache causes a write to main memory. Alternatively, in a write-back or copy-back cache, writes are not immediately mirrored to the main memory; the cache instead tracks which locations have been written over, marking them as dirty.
The data in these locations is written back to the main memory only when that data is evicted from the cache. For this reason, a read miss in a write-back cache may sometimes require two memory accesses to service: one to first write the dirty location to main memory, and another to read the new location from memory. A write to a main memory location that is not yet mapped in a write-back cache may evict an already dirty location, thereby freeing that cache space for the new memory location. There are intermediate policies as well; the cache may be write-through, but the writes may be held in a store data queue temporarily, usually so multiple stores can be processed together. Cached data from the main memory may also be changed by other entities, such as peripherals using direct memory access or another core in a multi-core processor, in which case the copy in the cache may become out of date or stale.
Random-access memory (RAM) is a form of computer data storage that stores data and machine code currently being used. A random-access memory device allows data items to be read or written in almost the same amount of time irrespective of the physical location of data inside the memory. In contrast, with other direct-access data storage media such as hard disks, CD-RWs, DVD-RWs and the older magnetic tapes and drum memory, the time required to read and write data items varies depending on their physical locations on the recording medium, due to mechanical limitations such as media rotation speeds and arm movement. RAM contains multiplexing and demultiplexing circuitry to connect the data lines to the addressed storage for reading or writing the entry. Usually more than one bit of storage is accessed by the same address, and RAM devices often have multiple data lines and are said to be "8-bit" or "16-bit", etc. devices. In today's technology, random-access memory takes the form of integrated circuits. RAM is normally associated with volatile types of memory, where stored information is lost if power is removed, although non-volatile RAM has also been developed.
Other types of non-volatile memories exist that allow random access for read operations, but either do not allow write operations or have other kinds of limitations on them. These include most types of ROM and a type of flash memory called NOR-Flash. Integrated-circuit RAM chips came into the market in the early 1970s, with the first commercially available DRAM chip, the Intel 1103, introduced in October 1970. Early computers used relays, mechanical counters or delay lines for main memory functions. Ultrasonic delay lines could only reproduce data in the order it was written. Drum memory could be expanded at relatively low cost, but efficient retrieval of memory items required knowledge of the physical layout of the drum to optimize speed. Latches built out of vacuum tube triodes, and later out of discrete transistors, were used for smaller and faster memories such as registers; such registers were relatively large and too costly to use for large amounts of data. The first practical form of random-access memory was the Williams tube, starting in 1947.
It stored data as electrically charged spots on the face of a cathode ray tube. Since the electron beam of the CRT could read and write the spots on the tube in any order, memory was random access; the capacity of the Williams tube was a few hundred to around a thousand bits, but it was much smaller and more power-efficient than using individual vacuum tube latches. Developed at the University of Manchester in England, the Williams tube provided the medium on which the first electronically stored program was implemented in the Manchester Baby computer, which first ran a program on 21 June 1948. In fact, rather than the Williams tube memory being designed for the Baby, the Baby was a testbed to demonstrate the reliability of the memory. Magnetic-core memory was developed up until the mid-1970s, and it became a widespread form of random-access memory, relying on an array of magnetized rings. By changing the sense of each ring's magnetization, data could be stored with one bit stored per ring. Since every ring had a combination of address wires to select and read or write it, access to any memory location in any sequence was possible.
Magnetic core memory was the standard form of memory system until displaced by solid-state memory in integrated circuits, starting in the early 1970s. Dynamic random-access memory (DRAM) allowed replacement of a 4- or 6-transistor latch circuit by a single transistor for each memory bit, greatly increasing memory density at the cost of volatility. Data was stored in the tiny capacitance of each transistor, and had to be periodically refreshed every few milliseconds before the charge could leak away; the Toshiba Toscal BC-1411 electronic calculator, introduced in 1965, used a form of DRAM built from discrete components. DRAM was later developed by Robert H. Dennard in 1968. Prior to the development of integrated read-only memory circuits, permanent random-access memory was often constructed using diode matrices driven by address decoders, or specially wound core rope memory planes. The two widely used forms of modern RAM are static RAM (SRAM) and dynamic RAM (DRAM). In SRAM, a bit of data is stored using the state of a six-transistor memory cell.
This form of RAM is more expensive to produce, but is generally faster and requires less dynamic power than DRAM. In modern computers, SRAM is often used as cache memory for the CPU. DRAM stores a bit of data using a transistor and capacitor pair, which together comprise a DRAM cell; the capacitor holds a high or low charge, and the transistor acts as a switch that lets the control circuitry on the chip read the capacitor's state of charge or change it. As this form of memory is less expensive to produce than static RAM, it is the predominant form of computer memory used in modern computers. Both static and dynamic RAM are considered volatile, as their state is lost or reset when power is removed from the system. By contrast, read-only memory (ROM) stores data by permanently enabling or disabling selected transistors, such that the memory cannot be altered. Writeable variants of ROM share properties of both ROM and RAM, enabling data to persist without power and to be updated without requiring special equipment; these persistent forms of semiconductor ROM include USB flash drives, memory cards for cameras and portable devices, and solid-state drives.
ECC memory includes special circuitry to detect and/or correct random faults (memory errors) in the stored data, using parity bits or error correction codes.
Translation lookaside buffer
A translation lookaside buffer (TLB) is a memory cache that is used to reduce the time taken to access a user memory location. It is a part of the chip's memory-management unit; the TLB stores the recent translations of virtual memory to physical memory and can be called an address-translation cache. A TLB may reside between the CPU and the CPU cache, between the CPU cache and the main memory, or between the different levels of a multi-level cache. The majority of desktop and server processors include one or more TLBs in the memory-management hardware, and a TLB is nearly always present in any processor that utilizes paged or segmented virtual memory. The TLB is sometimes implemented as content-addressable memory (CAM); the CAM search key is the virtual address, and the search result is a physical address. If the requested address is present in the TLB, the CAM search yields a match and the retrieved physical address can be used to access memory; this is called a TLB hit. If the requested address is not in the TLB, it is a miss, and the translation proceeds by looking up the page table in a process called a page walk.
The page walk is time-consuming when compared to the processor speed, as it involves reading the contents of multiple memory locations and using them to compute the physical address. After the physical address is determined by the page walk, the virtual-address-to-physical-address mapping is entered into the TLB; the PowerPC 604, for example, has a two-way set-associative TLB for data loads and stores. Some processors have separate instruction and data address TLBs. A TLB has a fixed number of slots containing page-table entries and segment-table entries; the virtual memory is the memory space as seen from a process. The page table, stored in main memory, keeps track of where the virtual pages are stored in the physical memory; this method uses two memory accesses to access a byte. First, the page table is looked up for the frame number. Second, the frame number combined with the page offset gives the actual address. Thus, any straightforward virtual memory scheme would have the effect of doubling the memory access time. Hence, the TLB is used to reduce the time taken to access the memory locations in the page-table method.
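The two-access scheme can be sketched as follows; the 4 KiB page size and the flat, single-level page table are illustrative simplifications, not a description of any particular MMU:

```c
#include <stdint.h>

#define PAGE_SIZE 4096u   /* illustrative 4 KiB pages */

/* First memory access: read the frame number from the page table.
 * The physical address returned is then used for the second access,
 * the one that actually touches the requested byte. */
uint64_t translate(const uint64_t *page_table, uint64_t vaddr)
{
    uint64_t page   = vaddr / PAGE_SIZE;   /* virtual page number      */
    uint64_t offset = vaddr % PAGE_SIZE;   /* offset within the page   */
    uint64_t frame  = page_table[page];    /* memory access number 1   */
    return frame * PAGE_SIZE + offset;     /* used for access number 2 */
}
```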
The TLB is a cache of the page table, representing only a subset of the page-table contents. Referencing the physical memory addresses, a TLB may reside between the CPU and the CPU cache, between the CPU cache and primary storage memory, or between levels of a multi-level cache; the placement determines whether the cache uses physical or virtual addressing. If the cache is virtually addressed, requests are sent directly from the CPU to the cache, and the TLB is accessed only on a cache miss. If the cache is physically addressed, the CPU does a TLB lookup on every memory operation, and the resulting physical address is sent to the cache. In a Harvard architecture or modified Harvard architecture, a separate virtual address space or memory-access hardware may exist for instructions and data; this can lead to distinct TLBs for each access type, an instruction translation lookaside buffer and a data translation lookaside buffer. Various benefits have been demonstrated with separate data and instruction TLBs; the TLB can be used as a fast lookup hardware cache.
Each entry in the TLB consists of two parts: a tag and a value. If the tag of the incoming virtual address matches the tag in the TLB, the corresponding value is returned. Since the TLB lookup is usually a part of the instruction pipeline, searches are fast and cause essentially no performance penalty. However, to be able to search within the instruction pipeline, the TLB has to be small. A common optimization for physically addressed caches is to perform the TLB lookup in parallel with the cache access. Upon each virtual-memory reference, the hardware checks the TLB to see whether the page number is held therein. If yes, it is a TLB hit and the translation is made; the frame number is then used to access the memory. If the page number is not in the TLB, the page table must be checked. Depending on the CPU, this can be done automatically in hardware or by using an interrupt to the operating system; when the frame number is obtained, it can be used to access the memory. In addition, the page number and frame number are added to the TLB, so that they will be found quickly on the next reference.
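The TLB-first flow just described might be sketched as below. The tiny fully associative TLB, the round-robin replacement and the page_walk() helper are all illustrative assumptions, not features of any specific processor:

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 16
#define PAGE_SIZE   4096u

typedef struct { bool valid; uint64_t page, frame; } tlb_entry;

static tlb_entry tlb[TLB_ENTRIES];
static unsigned  next_victim;               /* simple round-robin replacement */

/* Hypothetical helper: performs the page walk (or faults) on a TLB miss. */
extern uint64_t page_walk(uint64_t page);

uint64_t tlb_translate(uint64_t vaddr)
{
    uint64_t page   = vaddr / PAGE_SIZE;
    uint64_t offset = vaddr % PAGE_SIZE;

    for (int i = 0; i < TLB_ENTRIES; i++)   /* TLB hit: no page walk needed */
        if (tlb[i].valid && tlb[i].page == page)
            return tlb[i].frame * PAGE_SIZE + offset;

    uint64_t frame = page_walk(page);       /* TLB miss: walk the page table */
    tlb[next_victim] = (tlb_entry){ true, page, frame };
    next_victim = (next_victim + 1) % TLB_ENTRIES;   /* cache it for next time */
    return frame * PAGE_SIZE + offset;
}
```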
If the TLB is full, a suitable entry must be selected for replacement. There are different replacement methods, such as least recently used (LRU) and first in, first out (FIFO). The CPU has to access main memory for an instruction-cache miss, a data-cache miss, or a TLB miss; the third case is where the desired information itself actually is in a cache, but the information for virtual-to-physical translation is not in a TLB. These are all slow, due to the need to access a slower level of the memory hierarchy, so a well-functioning TLB is important. Indeed, a TLB miss can be more expensive than an instruction or data cache miss, due to the need for not just a load from main memory, but a page walk, requiring several memory accesses. On a TLB miss, the CPU checks the page table for the page-table entry; if the entry is present in memory, the TLB is refilled and the access is retried.
Peter J. Denning
Peter James Denning is an American computer scientist and writer. He is best known for pioneering work in virtual memory, especially for inventing the working-set model for program behavior, which addressed thrashing in operating systems and became the reference standard for memory management policies. He is also known for his works on principles of operating systems, operational analysis of queueing network systems, the implementation of CSNET, the ACM digital library, and codifying the great principles of computing, and most recently for the book The Innovator's Way, on innovation as a set of learnable practices. Denning was born on January 6, 1942, in Queens, NY, and raised in Darien, CT. He took an early interest in science, pursuing astronomy, botany and electronics while in grade school. At Fairfield Prep, he submitted home-designed computers to the science fair in 1958, 1959 and 1960; the second computer, which solved linear equations using pinball machine parts, won the grand prize. He attended Manhattan College for a bachelor's degree in electrical engineering and MIT for a PhD.
At MIT he contributed to the design of Multics. His PhD thesis, "Resource allocation in multiprocess computer systems", introduced seminal ideas in working sets, locality and system balance. At Princeton University from 1968 to 1972, he wrote his classic book, Operating Systems Theory, with E. G. Coffman, Jr. He collaborated with Alfred Aho and Jeffrey Ullman on optimality proofs for paging algorithms and on a simple proof that compilers based on precedence parsing do not need to backtrack. At Purdue University he supervised numerous PhD theses validating locality-based theories of memory management and extending the new mathematics of operational analysis of queueing networks, and he co-founded CSNET. He became department head in 1979 and completed another book on computational models, Machines, Languages, and Computation, with Jack Dennis and Joe Qualitz. At NASA Ames from 1983 to 1991 he founded the Research Institute for Advanced Computer Science and turned it into one of the first centers for interdisciplinary research in computational and space science.
At George Mason University from 1991 to 2002 he headed the Computer Science Department, was an associate dean and vice provost, and founded the Center for the New Engineer, which was a pioneer in web-based learning. He created a design course for engineers, called Sense 21, the basis of his project to understand innovation as a skill. He also created a course on the Core of Information Technology, the basis of his Great Principles of Computing project. Since 2002, at the Naval Postgraduate School, he has headed the Computer Science Department, directed the Cebrowski Institute for Innovation and Information Superiority, and chaired the faculty council. Denning has served continuously as a volunteer in the Association for Computing Machinery since 1967. In that time he has served as president, vice president, chair of three boards, Member-at-Large, editor of ACM Computing Surveys, and editor of the monthly Communications of the ACM. He has received seven ACM awards for service, technical contribution and education. ACM presented him with a special award in June 2007 recognizing 40 years of continuous service.
Denning has received 26 awards for service and technical contribution. These include one quality customer service award, three professional society fellowships, three honorary degrees, six awards for technical contribution, six for distinguished service, and seven for education. He married Dorothy E. Denning in 1974; she went on to become a noted computer security expert. Denning's career has been a search for fundamental principles in subfields of computing, and he writes prolifically. From 1980 to 1982 he wrote 24 columns as ACM President, focusing on technical and political issues of the field. From 1985 to 1993 he wrote 47 columns on "The Science of Computing" for American Scientist magazine, focusing on scientific principles from across the field. Beginning in 2001 he has written quarterly "IT Profession" columns for Communications of the ACM, focusing on principles of value to practicing professionals. In 1970 he published a classic paper that displayed a scientific framework for virtual memory and the validating scientific evidence, putting to rest a controversy over virtual memory stability and performance.
In 1966 he proposed the working set as a dynamic measure of memory demand and explained why it worked using the locality idea introduced by Les Belady of IBM. His working-set paper became a classic; it received an ACM best paper award in 1968 and a SIGOPS Hall of Fame Award in 2005. In the early 1970s he collaborated with Ed Coffman, Jr. on Operating Systems Theory, which became a classic textbook used in graduate courses and stayed in print until 1995. That book helped to erase doubts. In the middle 1970s he collaborated with Jeffrey Buzen on operational analysis, extending Buzen's basic operational laws to deal with all queueing networks; the operational framework explained why computer performance models work so well despite violating the traditional stochastic Markovian assumptions. It has become the preferred method for teaching performance prediction in computing courses. In the early 1980s, he was one of the four founding principal investigators of the Computer Science Network (CSNET), sponsored by the National Science Foundation. The other three were Dave Farber, Larry Landweber and Tony Hearn.
They led the development of a self-supporting CS community network that by 1986 included 165 sites and 50,000 users. CSNET was the key transitional stepping stone from the original ARPANET to the NSFNET and then the Internet. In 2009, the Internet Society awarded CSNET its prestigious Jon Postel award, recognizing its role in the early development of the Internet.
Computer science is the study of processes that interact with data and that can be represented as data in the form of programs. It enables the use of algorithms to manipulate and communicate digital information. A computer scientist studies the theory of computation and the practice of designing software systems. Its fields can be divided into theoretical and practical disciplines: computational complexity theory is highly abstract, while computer graphics emphasizes real-world applications. Programming language theory considers approaches to the description of computational processes, while computer programming itself involves the use of programming languages and complex systems. Human–computer interaction considers the challenges in making computers useful and accessible. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division.
Algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner; he may be considered the first computer scientist and information theorist, among other reasons for documenting the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry when he released his simplified arithmometer, the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his Difference Engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine. He started developing this machine in 1834, and "in less than two years, he had sketched out many of the salient features of the modern computer".
"A crucial step was the adoption of a punched card system derived from the Jacquard loom" making it infinitely programmable. In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, considered to be the first computer program. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, making all kinds of punched card equipment and was in the calculator business to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit; when the machine was finished, some hailed it as "Babbage's dream come true". During the 1940s, as new and more powerful computing machines were developed, the term computer came to refer to the machines rather than their human predecessors.
As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. In 1945, IBM founded the Watson Scientific Computing Laboratory at Columbia University in New York City; the renovated fraternity house on Manhattan's West Side was IBM's first laboratory devoted to pure science. The lab is the forerunner of IBM's Research Division, which today operates research facilities around the world; the close relationship between IBM and the university was instrumental in the emergence of a new scientific discipline, with Columbia offering one of the first academic-credit courses in computer science in 1946. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s; the world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science degree program in the United States was formed at Purdue University in 1962.
Since practical computers became available, many applications of computing have become distinct areas of study in their own rights. Although many initially believed it was impossible that computers themselves could be a scientific field of study, in the late fifties it gradually became accepted among the greater academic population. It is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and later the IBM 709 computers, which were widely used during the exploration period of such devices. "Still, working with the IBM [computer] was frustrating ... if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again". During the late 1950s, the computer science discipline was very much in its developmental stages, and such issues were commonplace. Time has seen significant improvements in the effectiveness of computing technology. Modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals, to a near-ubiquitous user base.
Initially, computers were quite costly, and some degree of human assistance was needed for efficient use—in part from professional computer operators. As computer adoption became more widespread and affordable, less human assistance was needed for common usage. Despite its short history as a formal academic discipline, computer science has made a number of fundamental contributions to science and society—in fact, along with electronics, it is a founding science of the current epoch of human history, the Information Age.