A network processor is an integrated circuit with a feature set targeted at the networking application domain. Network processors are software-programmable devices with generic characteristics similar to the general-purpose central processing units used in many different types of equipment and products. In modern telecommunications networks, information is transferred as packet data, in contrast to older telecommunications networks that carried information as analog signals, as in the public switched telephone network or analog TV/radio networks. Processing these packets has driven the creation of integrated circuits optimised for this form of data, and network processors provide specific features and architectures to enhance and optimise packet processing within such networks. Network processors evolved from ICs with fixed, specific functions into more flexible, programmable devices: a single hardware IC design can undertake a number of different functions when the appropriate software is installed.
Network processors are used in the manufacture of many different types of network equipment, such as:
- Routers, software routers and switches
- Firewalls
- Session border controllers
- Intrusion detection devices
- Intrusion prevention devices
- Network monitoring systems
In its generic role as a packet processor, a network processor provides a number of optimised features or functions, including:
- Pattern matching – the ability to find specific patterns of bits or bytes within packets in a packet stream.
- Key lookup – the ability to undertake a database lookup using a key to find a result, typically routing information.
- Computation
- Data bitfield manipulation – the ability to change certain data fields contained in the packet as it is being processed.
- Queue management – as packets are received and scheduled to be sent onwards, they are stored in queues.
- Control processing – the micro operations of processing a packet are controlled at a macro level, which involves communication and orchestration with other nodes in a system.
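The key lookup function can be illustrated in software. The sketch below is a hypothetical longest-prefix-match routing lookup, the kind of database lookup a network processor accelerates in hardware; the table contents and port names are invented for illustration.

```python
import ipaddress

# Illustrative routing table: destination prefix -> egress port.
ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"): "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"): "eth2",  # default route
}

def lookup(dst: str) -> str:
    """Return the egress port for the most specific matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return ROUTES[best]
```

A hardware network processor performs this lookup with dedicated structures such as TCAMs rather than a linear scan, but the key-to-result mapping is the same.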
Quick allocation and re-circulation of packet buffers is another common requirement. In order to deal with high data rates, several architectural paradigms are used:
- A pipeline of processors, with each stage of the pipeline consisting of a processor performing one of the functions listed above.
- Parallel processing with multiple processors, including multithreading.
- Specialized microcoded engines to more efficiently accomplish the tasks at hand.
With the advent of multicore architectures, network processors can also be used for higher-layer processing. Additionally, traffic management, a critical element in L2–L3 network processing that used to be executed by a variety of co-processors, has become an integral part of the network processor architecture, with a substantial part of the silicon area devoted to the integrated traffic manager. Modern network processors are also equipped with low-latency, high-throughput on-chip interconnection networks optimized for the exchange of small messages among cores; such networks can be used as an efficient inter-core communication facility alongside the standard use of shared memory.
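The pipeline paradigm can be sketched in software: each stage is a small function, and a packet flows through the stages in order, just as it would pass through per-stage processors in hardware. The packet representation, field names and classification rule below are invented for illustration.

```python
# Each stage takes a packet (a dict here) and returns it, possibly modified.
def parse(pkt):
    pkt["parsed"] = True          # stage 1: header parsing
    return pkt

def classify(pkt):
    # stage 2: classification (hypothetical rule: SIP port -> voice class)
    pkt["class"] = "voice" if pkt.get("port") == 5060 else "data"
    return pkt

PIPELINE = [parse, classify]

def process(pkt, queues):
    for stage in PIPELINE:        # packet traverses each pipeline stage
        pkt = stage(pkt)
    # final stage: queue management, one queue per traffic class
    queues.setdefault(pkt["class"], []).append(pkt)
    return pkt
```

In a real device the stages run concurrently on separate engines, so a new packet can enter the pipeline while earlier packets are still in later stages.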
Using the generic functions of the network processor, a software program implements an application that the network processor executes, resulting in a piece of physical equipment performing a task or providing a service. Some of the application types implemented as software running on network processors are:
- Packet or frame discrimination and forwarding, that is, the basic operation of a router or switch.
- Quality of service enforcement – identifying different types or classes of packets and providing preferential treatment for some types or classes of packet at the expense of others.
- Access control functions – determining whether a specific packet or stream of packets should be allowed to traverse the piece of network equipment.
- Encryption of data streams – built-in hardware-based encryption engines allow individual data flows to be encrypted by the processor.
- TCP offload processing
See also: content processors, multi-core processors, knowledge-based processors, active networking, the Network Processing Forum, queueing theory and networks on a chip.
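The access control application can be sketched as a first-match rule list, the model most real access control lists follow. The rule fields, addresses and actions below are hypothetical and for illustration only.

```python
# Each ACL entry is a (predicate, action) pair; the first match decides.
ACL = [
    (lambda p: p["proto"] == "tcp" and p["dst_port"] == 22, "deny"),
    (lambda p: p["src"].startswith("192.168."), "permit"),
    (lambda p: True, "deny"),  # implicit final deny, as in most real ACLs
]

def check(packet: dict) -> str:
    """Return 'permit' or 'deny' for a packet per the first matching rule."""
    for predicate, action in ACL:
        if predicate(packet):
            return action
    return "deny"
```

A network processor evaluates such rule sets at line rate, often using the same pattern-matching and key-lookup hardware described earlier.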
A multi-core processor is a single computing component with two or more independent processing units, called cores, which read and execute program instructions. The instructions are ordinary CPU instructions, but the single processor can run multiple instructions on separate cores at the same time, increasing overall speed for programs amenable to parallel computing. Manufacturers integrate the cores onto a single integrated circuit die or onto multiple dies in a single chip package; the microprocessors used in almost all modern personal computers are multi-core. A multi-core processor implements multiprocessing in a single physical package. Designers may couple cores in a multi-core device tightly or loosely. For example, cores may or may not share caches, and they may implement message passing or shared-memory inter-core communication methods. Common network topologies used to interconnect cores include bus, two-dimensional mesh and crossbar. Homogeneous multi-core systems include only identical cores. Just as with single-processor systems, cores in multi-core systems may implement architectures such as VLIW, vector processing or multithreading.
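The message-passing style of inter-core communication can be sketched with OS threads, which the scheduler may place on separate cores. This is an illustrative analogue, not a hardware mechanism: one thread sends work messages through a queue, a worker thread replies through another, and no memory is shared directly between them.

```python
import threading
import queue

def worker(inbox: queue.Queue, outbox: queue.Queue):
    """Receive messages, do some work, send results back."""
    while True:
        msg = inbox.get()
        if msg is None:           # sentinel message: shut the worker down
            break
        outbox.put(msg * msg)     # the "work": square the input

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()
for n in (2, 3):                  # send two work messages
    inbox.put(n)
inbox.put(None)                   # then the shutdown sentinel
t.join()
results = [outbox.get(), outbox.get()]
```

The shared-memory alternative would instead have both threads read and write a common data structure, with locks or atomics guarding access.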
Multi-core processors are used across many application domains, including general-purpose computing, networking, digital signal processing and graphics. The improvement in performance gained by the use of a multi-core processor depends very much on the software algorithms used and their implementation. In particular, possible gains are limited by the fraction of the software that can run in parallel on multiple cores. In the best case, so-called embarrassingly parallel problems may realize speedup factors near the number of cores, or even more if the problem is split up enough to fit within each core's cache, avoiding use of much slower main-system memory. Most applications, however, are not accelerated as much unless programmers invest a prohibitive amount of effort in re-factoring the whole problem; the parallelization of software is a significant ongoing topic of research. The terms multi-core and dual-core most often refer to some sort of central processing unit, but are sometimes also applied to digital signal processors and systems on a chip.
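Amdahl's law makes this limit quantitative: if a fraction p of a program can run in parallel on n cores, the overall speedup is 1 / ((1 - p) + p/n), so the serial fraction (1 - p) dominates as n grows.

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup when fraction p of the work parallelizes across n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# With half the program parallel, even many cores give less than 2x:
four_cores = amdahl_speedup(0.5, 4)       # 1 / (0.5 + 0.125) = 1.6
many_cores = amdahl_speedup(0.5, 1000)    # approaches, never reaches, 2.0
```

This is why re-factoring to raise p, rather than simply adding cores, is often where the engineering effort has to go.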
The terms are sometimes used to refer only to multi-core microprocessors that are manufactured on the same integrated circuit die. This article uses the terms "multi-core" and "dual-core" for CPUs manufactured on the same integrated circuit, unless otherwise noted. In contrast to multi-core systems, the term multi-CPU refers to multiple physically separate processing units; the terms many-core and massively multi-core are sometimes used to describe multi-core architectures with an especially high number of cores. Some systems use many soft microprocessor cores placed on a single FPGA; each "core" can be considered a semiconductor intellectual property core as well as a CPU core. While manufacturing technology improves, reducing the size of individual gates, physical limits of semiconductor-based microelectronics have become a major design concern; these physical limitations can cause significant heat dissipation and data synchronization problems. Various other methods are used to improve CPU performance. Some instruction-level parallelism (ILP) methods such as superscalar pipelining are suitable for many applications, but are inefficient for others that contain difficult-to-predict code.
Many applications are better suited to thread-level parallelism (TLP) methods, in which multiple independent CPUs are used to increase a system's overall TLP. A combination of increased available space and the demand for increased TLP led to the development of multi-core CPUs. Several business motives drive the development of multi-core architectures. For decades, it was possible to improve performance of a CPU by shrinking the area of the integrated circuit, which reduced the cost per device on the IC. Alternatively, for the same circuit area, more transistors could be used in the design, which increased functionality, especially for complex instruction set computing architectures. Clock rates also increased by orders of magnitude in the closing decades of the 20th century, from several megahertz in the 1980s to several gigahertz in the early 2000s. As the rate of clock speed improvements slowed, increased use of parallel computing in the form of multi-core processors has been pursued to improve overall processing performance.
Multiple cores could be used on the same CPU chip, which could lead to better sales of CPU chips with two or more cores. For example, Intel has produced a 48-core processor for research in cloud computing. Since computer manufacturers have long implemented symmetric multiprocessing designs using discrete CPUs, the issues regarding implementing multi-core processor architecture and supporting it with software are well known. Additionally, using a proven processing-core design without architectural changes reduces design risk significantly. For general-purpose processors, much of the motivation for multi-core processors comes from diminished gains in processor performance from increasing the operating frequency; this is due to three primary factors: the memory wall, the ILP wall and the power wall.
A computer network is a digital telecommunications network which allows nodes to share resources. In computer networks, computing devices exchange data with each other using connections between nodes; these data links are established over cable media such as wires or optic cables, or over wireless media such as Wi-Fi. Network computer devices that originate and terminate the data are called network nodes. Nodes are identified by network addresses and can include hosts such as personal computers and servers, as well as networking hardware such as routers and switches. Two such devices can be said to be networked together when one device is able to exchange information with the other, whether or not they have a direct connection to each other. In most cases, application-specific communications protocols are layered over other, more general communications protocols; this formidable collection of information technology requires skilled network management to keep it all running reliably. Computer networks support an enormous number of applications and services, such as access to the World Wide Web, digital video, digital audio, shared use of application and storage servers and fax machines, and use of email and instant messaging applications, as well as many others.
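The basic idea of two nodes exchanging data over a link can be shown in miniature with a connected socket pair, which stands in here for a cable or wireless medium; this is an illustrative sketch, not a real network.

```python
import socket

# Two endpoints ("nodes") joined by a bidirectional in-memory link.
node_a, node_b = socket.socketpair()

node_a.sendall(b"hello")      # node A originates the data
data = node_b.recv(1024)      # node B terminates (receives) it

node_a.close()
node_b.close()
```

Over a real network, the same send/receive pattern runs on sockets bound to network addresses, with protocol layers (e.g. TCP over IP over Ethernet) handling delivery between the nodes.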
Computer networks differ in the transmission medium used to carry their signals, the communications protocols used to organize network traffic, the network's size, the traffic control mechanisms and organizational intent. The best-known computer network is the Internet. The chronology of significant computer-network developments includes: In the late 1950s, early networks of computers included the U.S. military radar system Semi-Automatic Ground Environment (SAGE). In 1959, Anatolii Ivanovich Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organisation of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centres, the OGAS. In 1960, the commercial airline reservation system Semi-Automatic Business Research Environment (SABRE) went online with two connected mainframes. In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users.
In 1964, researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at the Massachusetts Institute of Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections. Throughout the 1960s, Paul Baran and Donald Davies independently developed the concept of packet switching to transfer information between computers over a network. Davies pioneered the implementation of the concept with the NPL network, a local area network at the National Physical Laboratory using a line speed of 768 kbit/s. In 1965, Western Electric introduced the first widely used telephone switch that implemented true computer control. In 1966, Thomas Marill and Lawrence G. Roberts published a paper on an experimental wide area network for computer time sharing. In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara and the University of Utah.
Leonard Kleinrock carried out theoretical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today. In 1972, commercial services using X.25 were deployed, later used as an underlying infrastructure for expanding TCP/IP networks. In 1973, the French CYCLADES network was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself. Also in 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a networking system based on the Aloha network developed in the 1960s by Norman Abramson and colleagues at the University of Hawaii. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and collaborated on several patents received in 1977 and 1978.
In 1979, Robert Metcalfe pursued making Ethernet an open standard. In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of 1 Gbit/s. Subsequently, higher speeds of up to 400 Gbit/s were added; the ability of Ethernet to scale is a contributing factor to its continued use. Computer networking may be considered a branch of electrical engineering, electronics engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of the related disciplines. A computer network facilitates interpersonal communications, allowing users to communicate efficiently via various means: email, instant messaging, online chat, video telephone calls and video conferencing. A network allows sharing of computing resources.
Users may access and use resources provided by devices on the network, such as printing a document on a shared network printer or using a shared storage device. A network allows sharing of files, data and other types of information.
Video on demand
Video on demand (VOD) is a programming system which allows users to select and watch or listen to video or audio content, such as movies and TV shows, whenever they choose rather than at a scheduled broadcast time, the method that prevailed with over-the-air programming during the 20th century. IPTV technology is used to bring VOD to televisions and personal computers. Television VOD systems can stream content through a set-top box, a computer or another device, allowing viewing in real time, or download it to a device such as a computer, digital video recorder or portable media player for viewing at any time. The majority of cable- and telephone-company-based television providers offer VOD streaming, whereby a user selects a video program and it begins to play on the television set; downloading to a digital video recorder rented or purchased from the provider; or downloading onto a PC or portable device for viewing in the future. Internet television, using the Internet, is a popular form of video on demand.
VOD can also be accessed via desktop client applications such as the Samsung iCloud online content store. Some airlines offer VOD as in-flight entertainment to passengers through individually controlled video screens embedded in seatbacks or armrests, or via portable media players. Some video on demand services, such as Netflix, use a subscription model that requires users to pay a monthly fee to access a bundled set of content, including movies and TV shows. Other services, such as YouTube, use an advertising-funded model. Downloading and streaming video on demand systems provide the user with many of the features of portable media players and DVD players. Some VOD systems that store and stream programs from hard disk drives use a memory buffer to allow the user to fast-forward and rewind digital videos. It is possible to put video servers on local area networks, in which case they can provide rapid responses to users. Cable companies have rolled out their own versions of video on demand services through apps, allowing TV access anywhere there is an internet-compatible device.
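The memory buffer behind fast-forward and rewind can be sketched as a bounded window of recently streamed chunks: the server keeps the last few chunks in memory so a rewind can be served without another disk read. The chunk granularity, buffer size and class interface below are invented for illustration.

```python
from collections import deque

class PlaybackBuffer:
    """Keeps the most recent video chunks; supports rewinding within them."""

    def __init__(self, capacity: int = 4):
        self.chunks = deque(maxlen=capacity)  # oldest chunks fall out
        self.pos = 0                          # current playback index

    def append(self, chunk):
        """Stream in a new chunk and advance playback to it."""
        self.chunks.append(chunk)
        self.pos = len(self.chunks) - 1

    def rewind(self, n: int = 1):
        """Step playback back n chunks, clamped to the buffered window."""
        self.pos = max(0, self.pos - n)
        return self.chunks[self.pos]
```

A rewind past the start of the window would, in a real system, fall back to re-reading from disk or re-requesting from the server.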
In addition to launching apps that offer on-demand video, cable services have combined them with live streaming services as well. The recent launches of cable company apps bearing the phrases "go" or "watch" in their names are attempts to compete with subscription video on demand services, which lack live news and similar programming. Streaming video servers can serve a wider community via a WAN, in which case the responsiveness may be reduced. Download VOD services are practical for homes equipped with DSL connections. Servers for traditional cable and telco VOD services are placed at the cable head-end serving a particular market, as well as at cable hubs in larger markets. In the telco world, they are placed in either the central office or a newly created location called a Video Head-End Office. The first video on demand systems used tapes; GTE started a trial in 1990, with AT&T providing all components. By 1992, VOD servers were supplying encoded digital video from disks and DRAM. In the US, the 1982 anti-trust break-up of AT&T resulted in a number of smaller telephone companies known as the Baby Bells.
Following this, the Cable Communications Policy Act of 1984 prohibited telephone companies from providing video services within their operating regions. In 1993, the National Communication and Information Infrastructure legislation was proposed and passed by the US House and Senate, opening the way for the seven Baby Bells (Ameritech, Bell Atlantic, BellSouth, NYNEX, Pacific Telesis, Southwestern Bell and US West) to implement VOD systems. All of these companies and others began holding trials to set up systems for supplying video on demand over telephone and cable lines. In November 1992, Bell Atlantic announced a VOD trial. IBM was developing a video server code-named Tiger Shark; concurrently, Digital Equipment Corporation was developing a scalable video server. Bell Atlantic selected IBM, and in April 1993 the system became the first VOD over ADSL to be deployed outside the lab, serving 50 video streams. In June 1993, US West filed for a system consisting of the Digital Equipment Corporation Interactive Information Server, with Scientific Atlanta providing the network and 3DO the set-top box, with video streams and other information to be deployed to 2,500 homes.
In 1994–1995, US West went on to file for VOD in several cities: 330,000 subscribers in Denver, 290,000 in Minneapolis and 140,000 in Portland. Many VOD trials were held with various combinations of server and set-top box. Of these, the primary players in the US were the telephone companies, using DEC, Oracle, IBM, Hewlett-Packard, USA Video, nCube, SGI and other servers; the DEC server system was used in more of these trials than any other. The DEC VOD server architecture used interactive gateways to set up video streams and other information for delivery from any of a large number of VAX servers, enabling it in 1993 to support more than 100,000 streams with full VCR-like functionality. In 1994, DEC upgraded to an Alpha-based computer for its VOD servers, allowing it to support more than a million users. By 1994, the Oracle scalable VOD system used massively parallel processors to support from 500 to 30,000 users; the SGI system supported 4,000 users. The servers were connected to networks of increasing size to support video stream delivery to whole cities.
In the UK, from September 1994, a VOD service formed a major part of the Cambridge Digital Interactive Television Trial in England. This provided video and data to 250 homes and a number of schools.