Linux is a family of free and open-source operating systems based on the Linux kernel, an operating system kernel first released on September 17, 1991 by Linus Torvalds. Linux is typically packaged in a Linux distribution. Distributions include the Linux kernel and supporting system software and libraries, many of which are provided by the GNU Project. Many Linux distributions use the word "Linux" in their name, but the Free Software Foundation uses the name GNU/Linux to emphasize the importance of GNU software, causing some controversy. Popular Linux distributions include Debian and Ubuntu. Commercial distributions include SUSE Linux Enterprise Server. Desktop Linux distributions include a windowing system such as X11 or Wayland and a desktop environment such as GNOME or KDE Plasma. Distributions intended for servers may omit graphics altogether or include a solution stack such as LAMP; because Linux is freely redistributable, anyone may create a distribution for any purpose. Linux was originally developed for personal computers based on the Intel x86 architecture, but has since been ported to more platforms than any other operating system.
Linux is the leading operating system on servers and other big-iron systems such as mainframe computers, and it is the only OS used on TOP500 supercomputers. It is used by around 2.3 percent of desktop computers. The Chromebook, which runs the Linux kernel-based Chrome OS, dominates the US K–12 education market and represents nearly 20 percent of sub-$300 notebook sales in the US. Linux also runs on embedded systems, i.e. devices whose operating system is built into the firmware and is tailored to the system; this includes routers, automation controls, digital video recorders, video game consoles, and smartwatches. Many smartphones and tablet computers run Android and other Linux derivatives; because of the dominance of Android on smartphones, Linux has the largest installed base of all general-purpose operating systems. Linux is one of the most prominent examples of open-source software collaboration; the source code may be used and distributed, commercially or non-commercially, by anyone under the terms of its respective licenses, such as the GNU General Public License.
The Unix operating system was conceived and implemented in 1969 at AT&T's Bell Laboratories in the United States by Ken Thompson, Dennis Ritchie, Douglas McIlroy, and Joe Ossanna. First released in 1971, Unix was written in assembly language, as was common practice at the time. In a key pioneering step, it was rewritten in the C programming language in 1973 by Dennis Ritchie; the availability of a high-level language implementation of Unix made its porting to different computer platforms easier. Because an earlier antitrust case forbade it from entering the computer business, AT&T was required to license the operating system's source code to anyone who asked; as a result, Unix grew and became widely adopted by academic institutions and businesses. In 1984, AT&T divested itself of Bell Labs. Meanwhile, the GNU Project, started in 1983 by Richard Stallman, had the goal of creating a "complete Unix-compatible software system" composed of free software; work began in 1984. In 1985, Stallman started the Free Software Foundation, and in 1989 he wrote the GNU General Public License.
By the early 1990s, many of the programs required in an operating system were completed, although low-level elements such as device drivers and the kernel, called GNU Hurd, were stalled and incomplete. Linus Torvalds has stated that if the GNU kernel had been available at the time, he would not have decided to write his own. Although not released until 1992 due to legal complications, development of 386BSD, from which NetBSD, OpenBSD, and FreeBSD descended, predated that of Linux. Torvalds has also stated that if 386BSD had been available at the time, he would not have created Linux. MINIX, created by Andrew S. Tanenbaum, a computer science professor, was released in 1987 as a minimal Unix-like operating system aimed at students and others who wanted to learn operating system principles. Although the complete source code of MINIX was available, its licensing terms prevented it from being free software until the licensing changed in April 2000. In 1991, while attending the University of Helsinki, Torvalds became curious about operating systems.
Frustrated by the licensing of MINIX, which at the time limited it to educational use only, he began to work on his own operating system kernel, which became the Linux kernel. Torvalds began the development of the Linux kernel on MINIX, and applications written for MINIX were also used on Linux. As Linux matured, further Linux kernel development took place on Linux systems. GNU applications replaced all MINIX components, because it was advantageous to use the freely available code from the GNU Project with the fledgling operating system. Torvalds initiated a switch from his original license, which prohibited commercial redistribution, to the GNU GPL. Developers worked to integrate GNU components with the Linux kernel, producing a fully functional and free operating system. Torvalds had originally wanted to call his invention "Freax", a portmanteau of "free", "freak", and "x" (as an allusion to Unix).
Load balancing (computing)
In computing, load balancing improves the distribution of workloads across multiple computing resources, such as computers, a computer cluster, network links, central processing units, or disk drives. Load balancing aims to optimize resource use, maximize throughput, minimize response time, and avoid overload of any single resource. Using multiple components with load balancing instead of a single component may also increase reliability and availability through redundancy. Load balancing usually involves dedicated software or hardware, such as a multilayer switch or a Domain Name System server process. Load balancing differs from channel bonding in that load balancing divides traffic between network interfaces on a per-network-socket basis, while channel bonding implies a division of traffic between physical interfaces at a lower level, either per packet or on a data-link basis with a protocol like shortest path bridging. One of the most common applications of load balancing is to provide a single Internet service from multiple servers, sometimes known as a server farm.
Load-balanced systems include popular web sites, large Internet Relay Chat networks, high-bandwidth File Transfer Protocol sites, Network News Transfer Protocol servers, Domain Name System servers, and databases. An alternate method of load balancing, which does not require a dedicated software or hardware node, is called round-robin DNS. In this technique, multiple IP addresses are associated with a single domain name, and each address is handed out to clients in turn for a limited time quantum. Another, more effective technique for load balancing using DNS is to delegate www.example.org as a sub-domain whose zone is served by each of the same servers that are serving the web site. This technique works particularly well where individual servers are spread geographically on the Internet. For example:

one.example.org A 192.0.2.1
two.example.org A 203.0.113.2
www.example.org NS one.example.org
www.example.org NS two.example.org

However, the zone file for www.example.org on each server is different, such that each server resolves its own IP address as the A-record.
On server one the zone file for www.example.org reports:

@ IN A 192.0.2.1

On server two the same zone file contains:

@ IN A 203.0.113.2

This way, when a server is down, its DNS will not respond and the web service does not receive any traffic. If the line to one server is congested, the unreliability of DNS ensures less HTTP traffic reaches that server. Furthermore, the quickest DNS response to the resolver is nearly always the one from the network's closest server, ensuring geo-sensitive load balancing. A short TTL on the A-record helps to ensure traffic is diverted when a server goes down. Consideration must be given to the possibility that this technique may cause individual clients to switch between individual servers in mid-session. Another approach to load balancing is to deliver a list of server IPs to the client and have the client randomly select an IP from the list on each connection; this relies on all clients generating similar loads, and on the Law of Large Numbers, to achieve a reasonably flat load distribution across servers.
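To see the DNS-based distribution described above in action, the short Python sketch below resolves a hostname repeatedly and tallies which address the resolver returns first on each attempt. The hostname is a placeholder, and resolver caching may skew the observed distribution.

import socket
from collections import Counter

def tally_a_records(hostname, attempts=20):
    """Resolve `hostname` repeatedly and count which IPv4 address comes back first."""
    counts = Counter()
    for _ in range(attempts):
        # Each entry is (family, type, proto, canonname, sockaddr); sockaddr is (ip, port).
        infos = socket.getaddrinfo(hostname, 80, socket.AF_INET, socket.SOCK_STREAM)
        counts[infos[0][4][0]] += 1
    return counts

# With round-robin DNS, the counts should spread across the configured A-records.
print(tally_a_records("www.example.org"))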
It has been claimed that client-side random load balancing tends to provide better load distribution than round-robin DNS. With this approach, the method of delivering the list of IPs to the client can vary; it may be implemented as a DNS list or hardcoded into the client. If a "smart client" is used, one that detects that a randomly selected server is down and connects again at random, this also provides fault tolerance. For Internet services, a server-side load balancer is usually a software program listening on the port where external clients connect to access services; the load balancer forwards requests to one of the "backend" servers, which replies to the load balancer. This allows the load balancer to reply to the client without the client ever knowing about the internal separation of functions. It also prevents clients from contacting back-end servers directly, which may have security benefits by hiding the structure of the internal network and preventing attacks on the kernel's network stack or unrelated services running on other ports.
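A minimal sketch of such a smart client follows, assuming a hardcoded list of backend addresses (the addresses below are illustrative): the client picks a server at random and, if the connection fails, picks again, giving both load spreading and crude fault tolerance.

import random
import socket

SERVERS = [("192.0.2.1", 80), ("203.0.113.2", 80)]  # illustrative backend list

def connect_random(servers, timeout=2.0, max_tries=5):
    """Randomly pick a server; on connection failure, pick again (smart client)."""
    for _ in range(max_tries):
        host, port = random.choice(servers)
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError:
            continue  # server presumed down; retry with another random choice
    raise ConnectionError("no backend server reachable")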
Some load balancers provide a mechanism for doing something special in the event that all backend servers are unavailable. This might include forwarding to a backup load balancer or displaying a message regarding the outage. It is important that the load balancer itself does not become a single point of failure; load balancers are usually implemented in high-availability pairs, which may also replicate session persistence data if required by the specific application. Numerous scheduling algorithms, called load-balancing methods, are used by load balancers to determine which back-end server to send a request to. Simple algorithms include round robin and least connections; the sketch below illustrates both. More sophisticated load balancers may take additional factors into account, such as a server's reported load, recent response times, up/down status, number of active connections, geographic location, capabilities, or how much traffic it has recently been assigned.
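The two simple methods just named can be stated compactly. The following sketch uses a hypothetical Backend type; round robin cycles through the pool in a fixed order, while least connections always picks the backend currently serving the fewest connections.

import itertools
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    active_connections: int = 0

backends = [Backend("a"), Backend("b"), Backend("c")]

# Round robin: hand out backends in a fixed rotating order.
_cycle = itertools.cycle(backends)
def pick_round_robin():
    return next(_cycle)

# Least connections: choose the backend with the fewest active connections.
def pick_least_connections():
    return min(backends, key=lambda b: b.active_connections)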
An important issue when operating a load-balanced service is how to handle information that must be kept across the multiple requests in a user's session. If this information is stored locally on one backend server, then subsequent requests going to different backend servers would not be able to find it. One solution is to send all requests in a user session consistently to the same backend server, a technique known as persistence or affinity.
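One simple affinity scheme, sketched below under the assumption of a stable, hypothetical backend pool, is to hash the client's IP address onto the list of backends, so the same client is deterministically routed to the same server. If the pool changes, this mapping reshuffles, which is why production systems often use consistent hashing or cookies instead.

import hashlib

BACKENDS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # illustrative backend pool

def backend_for(client_ip, backends=BACKENDS):
    """Deterministically map a client address to a backend (IP-hash affinity)."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    return backends[int.from_bytes(digest[:4], "big") % len(backends)]

# The same client always lands on the same backend:
assert backend_for("198.51.100.7") == backend_for("198.51.100.7")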
Computing is any activity that uses computers. It includes developing hardware and software, and using computers to manage and process information and for entertainment. Computing is a critically important, integral component of modern industrial technology. Major computing disciplines include computer engineering, software engineering, computer science, information systems, and information technology. The ACM Computing Curricula 2005 defined "computing" as follows: "In a general way, we can define computing to mean any goal-oriented activity requiring, benefiting from, or creating computers. Thus, computing includes designing and building hardware and software systems for a wide range of purposes; the list is endless, the possibilities are vast." It defines five sub-disciplines of the computing field: computer science, computer engineering, information systems, information technology, and software engineering. However, Computing Curricula 2005 also recognizes that the meaning of "computing" depends on the context: computing has other meanings that are more specific, based on the context in which the term is used.
For example, an information systems specialist will view computing somewhat differently from a software engineer. Regardless of the context, doing computing well can be complicated and difficult; because society needs people to do computing well, we must think of computing not only as a profession but also as a discipline. The term "computing" has sometimes been defined more narrowly, as in a 1989 ACM report on Computing as a Discipline: the discipline of computing is the systematic study of algorithmic processes that describe and transform information, encompassing their theory, design, efficiency, and application; the fundamental question underlying all computing is "What can be automated?" The term "computing" is also synonymous with counting and calculating. In earlier times it was used in reference to the action performed by mechanical computing machines, and before that, to human computers. The history of computing is thus longer than the history of computing hardware and modern computing technology; it includes the history of methods intended for pen and paper or for chalk and slate, with or without the aid of tables.
Computing is intimately tied to the representation of numbers. But long before abstractions like number arose, there were mathematical concepts serving the purposes of civilization; these concepts include one-to-one correspondence, comparison to a standard, and the 3-4-5 right triangle. The earliest known tool for use in computation was the abacus, thought to have been invented in Babylon circa 2400 BC; its original style of usage was by lines drawn in sand with pebbles. Abaci of a more modern design are still used as calculation tools today; the abacus was the first known calculation aid, preceding Greek methods by 2,000 years. The first recorded idea of using digital electronics for computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by C. E. Wynn-Williams. Claude Shannon's 1938 paper "A Symbolic Analysis of Relay and Switching Circuits" then introduced the idea of using electronics for Boolean algebraic operations. A computer is a machine that manipulates data according to a set of instructions called a computer program.
The program has an executable form that the computer can use directly to execute the instructions. The same program in its human-readable source code form enables a programmer to study and develop a sequence of steps known as an algorithm; because the instructions can be carried out in different types of computers, a single set of source instructions is converted to machine instructions according to the type of central processing unit. The execution process carries out the instructions in a computer program: each instruction triggers sequences of simple actions on the executing machine, and those actions produce effects according to the semantics of the instructions. Computer software, or just "software", is a collection of computer programs and related data that provides the instructions for telling a computer what to do and how to do it. Software refers to one or more computer programs and data held in the storage of the computer for some purpose. In other words, software is a set of programs and procedures, together with documentation, concerned with the operation of a data processing system.
Program software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software. The term was coined to contrast with the older term hardware; in contrast to hardware, software is intangible. Software is sometimes used in a more narrow sense, meaning application software only. Application software, also known as an "application" or an "app", is computer software designed to help the user perform specific tasks. Examples include enterprise software, accounting software, office suites, graphics software, and media players. Many application programs deal principally with documents. Apps may be bundled with the computer and its system software or published separately; some users need never install one. Application software is contrasted with system software and middleware, which manage and integrate a computer's capabilities but typically do not directly apply them in the performance of tasks that benefit the user.
Michael John Muuss was an American computer scientist and the author of the freeware network tool ping. A graduate of Johns Hopkins University, Muuss was, at the time of his death, a senior scientist specializing in geometric solid modeling, ray-tracing, MIMD architectures, and digital computer networks at the United States Army Research Laboratory at Aberdeen Proving Ground, Maryland, and he contributed to many other programs. However, the thousand-line ping, which he wrote in December 1983 while working at the Ballistic Research Laboratory, is the program for which he is most remembered. Due to its usefulness, ping has been implemented on a large number of operating systems, first BSD and other Unix variants, but later others including Windows and Mac OS X. In 1993, the USENIX Association gave a Lifetime Achievement Award to the Computer Systems Research Group at the University of California, Berkeley, honoring 180 individuals, including Muuss, who contributed to the CSRG's 4.4BSD-Lite release. Muuss is mentioned in two books, The Cuckoo's Egg and Cyberpunk: Outlaws and Hackers on the Computer Frontier, for his role in tracking down crackers.
He is also mentioned in Peter Salus's A Quarter Century of UNIX. Muuss died in an automobile collision on Interstate 95 on November 20, 2000. The Michael J. Muuss Research Award, set up by friends and family of Muuss, memorializes him at Johns Hopkins University.
Cisco Systems, Inc. is an American multinational technology conglomerate headquartered in San Jose, California, in the center of Silicon Valley. Cisco develops and sells networking hardware, telecommunications equipment, and other high-technology services and products. Through its numerous acquired subsidiaries, such as OpenDNS, WebEx, Jabber, and Jasper, Cisco specializes in specific tech markets, such as the Internet of Things, domain security, and energy management. Cisco stock was added to the Dow Jones Industrial Average on June 8, 2009, and is also included in the S&P 500 Index, the Russell 1000 Index, the NASDAQ-100 Index, and the Russell 1000 Growth Stock Index. Cisco Systems was founded in December 1984 by Leonard Bosack and Sandy Lerner, two Stanford University computer scientists who pioneered the concept of a local area network being used to connect geographically disparate computers over a multiprotocol router system. By the time the company went public in 1990, Cisco had a market capitalization of $224 million.
By the end of the dot-com bubble in the year 2000, Cisco had a market capitalization of more than $500 billion. Cisco Systems was founded in December 1984 by Sandy Lerner, a director of computer facilities for the Stanford University Graduate School of Business, who partnered with her husband, Leonard Bosack, in charge of the Stanford University computer science department's computers. Cisco's initial product has roots in Stanford University's campus technology: in the early 1980s, students and staff at Stanford had connected the campus's computer systems using a router known as the Blue Box, which ran software written at Stanford by research engineer William Yeager. In 1985, Bosack and Stanford employee Kirk Lougheed began a project to formally network Stanford's campus, and they adapted Yeager's software into what became the foundation for Cisco IOS, despite Yeager's claims that he had been denied permission to sell the Blue Box commercially. On July 11, 1986, Bosack and Lougheed were forced to resign from Stanford, and the university contemplated filing criminal complaints against Cisco and its founders for the theft of its software, hardware designs, and other intellectual property.
In 1987, Stanford licensed the router software and two computer boards to Cisco. In addition to Bosack, Lerner, and Lougheed, Greg Satz and Richard Troiano completed the early Cisco team. The company's first CEO was Bill Graves, who held the position from 1987 to 1988; in 1988, John Morgridge was appointed CEO. The name "Cisco" was derived from the city name San Francisco, which is why the company's engineers insisted on using the lower-case "cisco" in its early years; the logo is intended to depict the two towers of the Golden Gate Bridge. On February 16, 1990, Cisco Systems went public with a market capitalization of $224 million and was listed on the NASDAQ stock exchange. On August 28, 1990, Lerner was fired; upon hearing the news, her husband Bosack resigned in protest. The couple walked away from Cisco with $170 million, 70% of which was committed to their own charity. Although Cisco was not the first company to develop and sell dedicated network nodes, it was one of the first to sell commercially successful routers supporting multiple network protocols.
The classical, CPU-based architecture of early Cisco devices, coupled with the flexibility of the IOS operating system, allowed the company to keep up with evolving technology needs by means of frequent software upgrades. Some popular models of that time managed to stay in production for a decade virtually unchanged. The company was quick to capture the emerging service provider environment, entering the SP market with product lines such as the Cisco 7000 and Cisco 8500. Between 1992 and 1994, Cisco acquired several companies in Ethernet switching, such as Kalpana, Grand Junction and, most notably, Mario Mazzola's Crescendo Communications, which together formed the Catalyst business unit. At the time, the company envisioned layer 3 routing and layer 2 switching as complementary functions of different intelligence and architecture: the former was slow and complex, the latter fast but simple. This philosophy dominated the company's product lines throughout the 1990s. In 1995, John Morgridge was succeeded by John Chambers. The Internet Protocol became widely adopted in the mid-to-late 1990s.
Cisco introduced products ranging from modem access shelves to core GSR routers, making it a major player in the market. In late March 2000, at the height of the dot-com bubble, Cisco became the most valuable company in the world, with a market capitalization of more than $500 billion; as of July 2014, with a market cap of about US$129 billion, it was still one of the most valuable companies. The perceived complexity of programming routing functions in silicon led to the formation of several startups determined to find new ways to process IP and MPLS packets in hardware and to blur the boundaries between routing and switching. One of them, Juniper Networks, shipped its first product in 1999 and by 2000 had chipped away about 30% of Cisco's SP market share. In response, Cisco developed homegrown ASICs and fast processing cards for GSR routers and Catalyst 6500 switches. In 2004, Cisco also started migrating to new high-end hardware, the CRS-1, and a new software architecture, IOS XR. As part of a rebranding campaign in 2006, Cisco Systems adopted the shortened name "Cisco" and created "The Human Network" advertising campaign.
These efforts were meant to make Cisco a "household" brand, a strategy designed to support the low-end Linksys products and future consumer products. On the more traditional business side, Cisco continued to develop its routing and switching portfolio.
In computer networking, a hop is one portion of the path between source and destination. Data packets pass through bridges and gateways as they travel between source and destination; each time a packet is passed to the next network device, a hop occurs. The hop count refers to the number of intermediate devices through which data must pass between source and destination. Since store-and-forward and other latencies are incurred at each hop, a large number of hops between source and destination implies lower real-time performance. Hop count is thus a rough measure of distance between two hosts: a hop count of n means that n gateways separate the source host from the destination host. On a layer 3 network such as the Internet Protocol, each router along the data path constitutes a hop. By itself, this metric is not useful for determining the optimum network path, as it does not take into consideration the speed, reliability, or latency of any particular hop, only the total count.
Some routing protocols, such as the Routing Information Protocol, use hop count as their sole metric. Each time a router receives a packet, it modifies the packet, decrementing its hop-count field, and it discards any packet whose count has run out. This prevents packets from endlessly bouncing around the network in the event of routing errors. Routers manage hop counts through a dedicated field in the IP header: known as time to live (TTL) in IPv4 and hop limit in IPv6, this field specifies a limit on the number of hops a packet is allowed to traverse before being discarded. Routers modify IP packets as they are forwarded, decrementing the respective TTL or hop limit field, and they do not forward packets with a resultant field of 0 or less; this prevents packets from following a loop forever. Next hop is the routing term used for the next gateway to which packets should be forwarded along the path to their final destination. One technique to make the contents of a routing table smaller is called next-hop routing: the routing table contains the IP address of a destination network and the IP address of the next gateway along the path to the final network destination.
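A toy sketch of the decrement-and-discard rule described above follows; the Packet type is hypothetical, and real routers also send an ICMP Time Exceeded message back to the source when they drop a packet.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Packet:
    payload: bytes
    ttl: int  # time to live (IPv4) / hop limit (IPv6)

def forward(packet: Packet) -> Optional[Packet]:
    """Decrement the hop-count field; drop the packet once it reaches zero."""
    packet.ttl -= 1
    if packet.ttl <= 0:
        return None  # discarded instead of being forwarded
    return packet  # handed off to the next hop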
Using a routing table to store a next hop for each "known" destination is called next-hop forwarding. A given gateway therefore only knows one step along the path, not the complete path to a destination. It is also key to note that the next hops listed in a routing table are on networks to which the gateway is directly connected. The ping or traceroute commands can be used to see how many router hops it takes to get from one host to another; hop counts are useful for finding faults in a network, or for discovering whether routing is indeed correct. Network utilities like ping can be used to determine the hop count to a specific destination: ping generates packets whose TTL field bounds the number of hops they may traverse, so varying the TTL reveals how many hops away a destination is.
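The next-hop lookup itself can be illustrated with Python's standard ipaddress module; the routing table below is hypothetical, and real routers perform longest-prefix matching over far larger tables.

import ipaddress

# Hypothetical routing table: destination network -> next-hop gateway
ROUTES = {
    ipaddress.ip_network("192.0.2.0/24"): "10.0.0.1",
    ipaddress.ip_network("203.0.113.0/24"): "10.0.0.2",
    ipaddress.ip_network("0.0.0.0/0"): "10.0.0.254",  # default route
}

def next_hop(destination):
    """Return the next-hop gateway for `destination` using longest-prefix match."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in ROUTES if addr in net]
    return ROUTES[max(matches, key=lambda net: net.prefixlen)]  # most specific wins

print(next_hop("192.0.2.77"))  # -> 10.0.0.1
print(next_hop("8.8.8.8"))     # -> 10.0.0.254 (the default route)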
Van Jacobson is an American computer scientist, renowned for his work on TCP/IP network performance and scaling. He is one of the primary contributors to the TCP/IP protocol stack, the technological foundation of today's Internet. Since 2013, Jacobson has been an adjunct professor at the University of California, Los Angeles, working on Named Data Networking. Jacobson studied modern poetry and mathematics, and received an M.S. in physics and a B.S. in mathematics from the University of Arizona; he did graduate work at Lawrence Berkeley Laboratory. His work redesigning TCP/IP's flow control algorithms to better handle congestion is said to have saved the Internet from collapsing in the late 1980s and early 1990s. He is known for the TCP/IP header compression protocol described in RFC 1144, Compressing TCP/IP Headers for Low-Speed Serial Links, popularly known as Van Jacobson TCP/IP Header Compression. He is also the co-author of several widely used network diagnostic tools, including traceroute and pathchar, and he was a leader in the development of the multicast backbone and the multimedia tools vic and wb.
Jacobson worked at the Lawrence Berkeley Laboratory from 1974 to 1998, as a research scientist in the Real-time Controls Group and as group leader of the Network Research Group. He was Chief Scientist at Cisco Systems from 1998 to 2000. In 2000 he became Chief Scientist for Packet Design, Inc., and in 2002 for a spin-off, Precision I/O. He joined PARC as a research fellow in August 2006. In January 2006, at Linux.conf.au, Jacobson presented another idea about network performance improvement, which has since been referred to as network channels. Jacobson discussed his ideas on Named Data Networking, the focus of his work at PARC, in August 2006 as part of the Google Tech Talks. Van Jacobson is now working with the NDN Consortium, funded by the National Science Foundation, to explore and create the future of the Internet. Together with his colleague at LBL, Steven McCanne, Jacobson won R&D Magazine's 1995 R&D 100 Award for the development of a software toolpack that enables multiparty audio and visual conferencing via the MBone.
For his work, Jacobson received the 2001 ACM SIGCOMM Award for Lifetime Achievement "for contributions to protocol architecture and congestion control" and the 2002 IEEE Koji Kobayashi Computers and Communications Award, and he was elected to the National Academy of Engineering in 2004 for his "contributions to network protocols, including multicasting and the control of congestion." In 2012, Jacobson was inducted into the Internet Hall of Fame by the Internet Society.