The grid plan, grid street plan, or gridiron plan is a type of city plan in which streets run at right angles to each other, forming a grid. The infrastructure cost for regular grid patterns is generally higher than for patterns with discontinuous streets. Costs for streets depend on four variables: street width, street length, block width and pavement width. Two inherent characteristics of the grid plan, frequent intersections and orthogonal geometry, facilitate pedestrian movement: the geometry helps with orientation and wayfinding, while the frequent intersections provide choice and directness of route to desired destinations. In ancient Rome, the grid-based method of land measurement was called centuriation. The grid plan originated in multiple cultures. By 2600 BC, Mohenjo-daro and Harappa, major cities of the Indus Valley Civilization, were built with blocks divided by a grid of straight streets running north–south and east–west; each block was subdivided by small lanes. The cities and monasteries of Gandhara, dating from the 1st millennium BC to the 11th century AD, had grid-based designs.
Islamabad, the planned capital of Pakistan built in the 1960s, was founded on the grid plan of the nearby ruined city of Sirkap. A workers' village at Giza housed a rotating labor force and was laid out in blocks of long galleries separated by streets in a formal grid. Many pyramid-cult cities used a common orientation: a north–south axis from the royal palace and an east–west axis from the temple, meeting at a central plaza where King and God merged and crossed. Hammurabi, king of the Babylonian Empire in the 17th century BC, ordered the rebuilding of Babylon, constructing and restoring temples, city walls, public buildings and irrigation canals; the streets of Babylon were wide and straight, intersected at right angles, and were paved with bricks and bitumen. The tradition of grid plans is continuous in China from the 15th century BC onward in the traditional urban planning of various ancient Chinese states. Guidelines put into written form in the Kaogongji during the Spring and Autumn period stated: "a capital city should be square on plan.
Three gates on each side of the perimeter lead into the nine main streets that crisscross the city and define its grid-pattern. And for its layout the city should have the Royal Court situated in the south, the Marketplace in the north, the Imperial Ancestral Temple in the east and the Altar to the Gods of Land and Grain in the west." Teotihuacan, near modern-day Mexico City, is the largest ancient grid-plan site in the Americas; the city's grid covered 21 square kilometres. The best-known grid system is the one spread through the colonies of the Roman Empire. The archetypal Roman grid was first introduced to Italy by the Greeks, with such knowledge transferred by way of trade and conquest. Although the idea of the grid was present in Hellenic societal and city planning, it was not pervasive prior to the 5th century BC. However, it gained primacy through the work of Hippodamus of Miletus, who planned and replanned many Greek cities in accordance with this form, and the concept of a grid as the ideal method of town planning had become accepted by the time of Alexander the Great.
His conquests were a step in the propagation of the grid plan throughout colonies, some as far-flung as Taxila in Pakistan, a propagation that would later be mirrored by the expansion of the Roman Empire. The Greek grid had its streets aligned in relation to the cardinal points and sought to take advantage of visual cues based on the hilly landscape typical of Greece and Asia Minor; this was best exemplified at Priene, in present-day western Turkey, where the orthogonal city grid was based on the cardinal points and laid over sloping terrain that offered views out towards a river and the city of Miletus. The Etruscan people, whose territories in Italy encompassed what would become Rome, founded what is now the city of Marzabotto at the end of the 6th century BC. Its layout was based on Greek Ionic ideas, and it was here that the main east–west and north–south axes of a town could first be seen in Italy. According to Stanislawski, the Romans did not use grids until the time of the late Republic or early Empire, when they introduced centuriation, a system which they spread around the Mediterranean and, later, into northern Europe.
The military expansion of this period facilitated the adoption of the grid form as standard: the Romans established castra first as military centres. The Roman grid was similar in form to the Greek version, but allowed for practical considerations. For example, Roman castra were sited on flat land, close to or on important nodes such as river crossings or intersections of trade routes; the dimensions of the castra were standard, with each of the four walls having a length of 660 metres. Familiarity was the aim of such standardisation: soldiers could be stationed anywhere around the Empire, and orientation within established towns would be easy if they had a standard layout. Each had a decumanus maximus (the main east–west street) and a cardo maximus (the main north–south street) at its heart; their intersection formed the forum, around which important public buildings were sited. Indeed, such was the degree of similarity between towns that Higgins states that soldiers "would be housed at the same address as they moved from castra to castra".
Pompeii has been cited by Laurence as the best-preserved example of the Roman grid. Outside of the castra, large tra
Parallel computing is a type of computation in which many calculations or the execution of processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level and task parallelism. Parallelism has long been employed in high-performance computing, but it has gained broader interest due to the physical constraints that prevent further frequency scaling; as power consumption by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, in the form of multi-core processors. Parallel computing is closely related to concurrent computing: the two are frequently used together and often conflated, though they are distinct, and it is possible to have parallelism without concurrency and concurrency without parallelism. In parallel computing, a computational task is typically broken down into several, often similar, sub-tasks that can be processed independently and whose results are combined afterwards, upon completion.
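As a minimal sketch of this divide-and-combine pattern, assuming nothing beyond the Python standard library, the following example splits a large summation into independent sub-tasks, processes them in a pool of worker processes, and combines the partial results; the chunking scheme, worker count and summation task are illustrative choices, not drawn from the text.

```python
# Task parallelism sketch: split a big summation into independent sub-tasks,
# run them in a pool of worker processes, then combine the partial results.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    n, workers = 10_000_000, 4
    step = n // workers
    # Non-overlapping chunks that together cover the range [0, n).
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))  # combine sub-results
    print(total == sum(range(n)))                   # True
```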
In contrast, in concurrent computing, the various processes often do not address related tasks. Parallel computers can be classified according to the level at which the hardware supports parallelism: multi-core and multi-processor computers have multiple processing elements within a single machine, while clusters, MPPs and grids use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors for accelerating specific tasks. In some cases parallelism is transparent to the programmer, such as in bit-level or instruction-level parallelism, but explicitly parallel algorithms, particularly those that use concurrency, are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are some of the greatest obstacles to getting good parallel program performance.
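To make the race-condition hazard concrete, here is a small hedged sketch: several threads increment a shared counter, and without a lock the read-modify-write steps can interleave so that updates are lost. The counter, thread count and iteration count below are illustrative only.

```python
# Race-condition sketch: "counter += 1" is not atomic (read, add, write back),
# so concurrent increments can be lost; a lock serializes the critical section.
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        counter += 1            # racy: interleaved read/modify/write

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:              # only one thread updates at a time
            counter += 1

def run(worker):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(unsafe_increment))    # may fall short of 400000 when updates interleave
print(run(safe_increment))      # always 400000
```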
A theoretical upper bound on the speed-up of a single program as a result of parallelization is given by Amdahl's law. Traditionally, computer software has been written for serial computation. To solve a problem, an algorithm is implemented as a serial stream of instructions; these instructions are executed on a central processing unit of one computer. Only one instruction may execute at a time; after that instruction is finished, the next one is executed. Parallel computing, on the other hand, uses multiple processing elements to solve a problem; this is accomplished by breaking the problem into independent parts so that each processing element can execute its part of the algorithm simultaneously with the others. The processing elements can be diverse and include resources such as a single computer with multiple processors, several networked computers, specialized hardware, or any combination of the above. Historically, parallel computing was used for scientific computing and the simulation of scientific problems in the natural and engineering sciences, such as meteorology.
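Amdahl's law states that if a fraction p of a program can be parallelized across N processing elements, the overall speed-up is bounded by 1 / ((1 − p) + p/N). A minimal worked sketch, with made-up values of p and N:

```python
# Amdahl's law: speed-up when a fraction p of the work is parallelized over
# N processing elements; the serial fraction (1 - p) limits the overall gain.
def amdahl_speedup(p: float, N: int) -> float:
    return 1.0 / ((1.0 - p) + p / N)

for N in (2, 8, 64, 1024):
    # With 90% of the work parallelizable, the speed-up can never exceed 10x.
    print(N, round(amdahl_speedup(0.9, N), 2))
```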
This led to the design of parallel software, as well as high-performance computing. Frequency scaling was the dominant reason for improvements in computer performance from the mid-1980s until 2004: the runtime of a program is equal to the number of instructions multiplied by the average time per instruction. Holding everything else constant, increasing the clock frequency decreases the average time it takes to execute an instruction; an increase in frequency thus decreases runtime for all compute-bound programs. However, power consumption P by a chip is given by the equation P = C × V² × F, where C is the capacitance being switched per clock cycle, V is the voltage, and F is the processor frequency. Increases in frequency therefore increase the amount of power used in a processor. Increasing processor power consumption led to Intel's May 8, 2004 cancellation of its Tejas and Jayhawk processors, which is generally cited as the end of frequency scaling as the dominant computer architecture paradigm. To deal with the problem of power consumption and overheating, the major central processing unit manufacturers started to produce power-efficient processors with multiple cores.
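A small numerical sketch of the P = C × V² × F relationship; the capacitance, voltage and frequency figures below are placeholders rather than measured chip data.

```python
# Dynamic power P = C * V^2 * F: doubling the frequency doubles power,
# while voltage enters quadratically, so modest voltage reductions help a lot.
def dynamic_power(C: float, V: float, F: float) -> float:
    return C * V**2 * F

base    = dynamic_power(C=1e-9, V=1.2, F=2.0e9)   # illustrative baseline
faster  = dynamic_power(C=1e-9, V=1.2, F=4.0e9)   # 2x frequency -> 2x power
lower_v = dynamic_power(C=1e-9, V=1.0, F=2.0e9)   # lower voltage -> ~31% less
print(round(faster / base, 2), round(lower_v / base, 2))  # 2.0 0.69
```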
The core is the computing unit of the processor; in multi-core processors each core is independent and can access the same memory concurrently. Multi-core processors have brought parallel computing to desktop computers, and parallelisation of serial programmes has thus become a mainstream programming task. In 2012 quad-core processors became standard for desktop computers, while servers had 10- and 12-core processors. From Moore's law it can be predicted that the number of cores per processor will double every 18–24 months; this could mean that after 2020 a typical processor will have hundreds of cores. An operating system can ensure that different tasks and user programmes are run in parallel on the available cores. However, for a serial software programme to take full advantage of the multi-core architecture, the programmer needs to restructure and parallelise the code. A speed-up of application software runtime will no longer be achieved through frequency scaling; instead, programmers will need to parallelise their software code to take
A tree network, or star-bus network, is a hybrid network topology in which star networks are interconnected via bus networks. Tree networks are hierarchical, and each node can have an arbitrary number of child nodes. A regular tree network's topology is characterized by two parameters: the branching factor d and the number of generations G. The total number of nodes, N, and the number of peripheral nodes, N_p, are given by N = (d^(G+1) − 1) / (d − 1) and N_p = d^G. A group at MIT has developed a set of MATLAB functions that can help in analyzing networks; these tools can be used to study tree networks as well. de Weck, Olivier L. "MIT Strategic Engineering Research Group, Part II". Retrieved May 1, 2018.
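As a quick numerical check of the node-count formulas above, here is a minimal Python sketch; the helper name and the example values are illustrative, assuming the regular-tree definition given (a single root, branching factor d, and G generations below it).

```python
def tree_node_counts(d: int, G: int) -> tuple[int, int]:
    """Return (total nodes N, peripheral nodes N_p) for a regular tree
    with branching factor d and G generations below the root."""
    if d == 1:
        # Degenerate case: a simple chain of G + 1 nodes with a single leaf.
        return G + 1, 1
    N = (d ** (G + 1) - 1) // (d - 1)   # geometric series 1 + d + d^2 + ... + d^G
    N_p = d ** G                        # leaves sit in the last generation
    return N, N_p

# Example: binary branching (d = 2) over 3 generations -> N = 15, N_p = 8.
print(tree_node_counts(2, 3))
```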
Arbitrated Loop, also known as FC-AL, is a Fibre Channel topology in which devices are connected in a one-way loop fashion in a ring topology. It was a lower-cost alternative to a fabric topology, as it allowed connection of many servers and computer storage devices without using very costly Fibre Channel switches. The cost of the switches eventually dropped, so that by 2007 FC-AL had become rare in server-to-storage communication; it is, however, still common within storage systems. It is a serial architecture that can be used as the transport layer in a SCSI network, with up to 127 devices. The loop may connect into a Fibre Channel fabric via one of its ports. The bandwidth on the loop is shared among all ports, and only two ports may communicate at a time on the loop. One port may open one other port in either half- or full-duplex mode. A loop with two ports is valid and has the same physical topology as point-to-point, but still acts as a loop protocol-wise. Fibre Channel ports capable of arbitrated loop communication are NL_port and FL_port, collectively referred to as L_ports.
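As a rough, heavily simplified illustration of the arbitration idea (only one pair of ports holds the loop at a time), the following Python toy model assumes the conventional FC-AL priority rule that a numerically lower AL_PA address wins arbitration; the addresses and function name are illustrative only, not part of any real FC-AL API.

```python
# Toy model of FC-AL arbitration: among ports currently arbitrating for the
# loop, the one with the numerically lowest AL_PA wins (simplified sketch).
def arbitrate(requesting_al_pas):
    """Return the AL_PA that wins the loop, or None if nobody is arbitrating."""
    return min(requesting_al_pas) if requesting_al_pas else None

# Three ports arbitrate at once; the lowest address (0x01) gets the loop and
# may then open exactly one other port for a half- or full-duplex exchange.
winner = arbitrate([0xE8, 0x01, 0x73])
print(hex(winner))  # 0x1
```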
The ports may attach to a hub, with cables running from the hub to the ports. The physical connectors on the hub are not ports in terms of the protocol; a hub does not contain ports. An arbitrated loop with no fabric port is a private loop; an arbitrated loop connected to a fabric is a public loop. An NL_Port must provide fabric logon and name registration facilities to initiate communication with other nodes through the fabric. An arbitrated loop can be physically cabled in a ring fashion or using a hub. With ring cabling, if any single device fails or is removed, the physical ring ceases to work. The hub, on the other hand, while maintaining a logical ring, allows a star topology at the cable level; each receive port on the hub is passed to the next active transmit port, bypassing any inactive or failed ports. Fibre Channel hubs therefore have another function: they provide bypass circuits that prevent the loop from breaking if one device fails or is removed. If a device is removed from a loop, the hub's bypass circuit detects the absence of signal and begins to route incoming data directly to the loop's next port, bypassing the missing device entirely.
This gives loops at least a measure of resiliency: failure of one device in a loop does not cause the entire loop to become inoperable. See also: Storage area network; Fibre Channel; Switched fabric; List of Fibre Channel standards.
A multi-core processor is a single computing component with two or more independent processing units, called cores, which read and execute program instructions. The instructions are ordinary CPU instructions, but the single processor can run multiple instructions on separate cores at the same time, increasing overall speed for programs amenable to parallel computing. Manufacturers integrate the cores onto a single integrated circuit die or onto multiple dies in a single chip package; the microprocessors used in almost all personal computers are multi-core. A multi-core processor implements multiprocessing in a single physical package. Designers may couple cores in a multi-core device tightly or loosely. For example, cores may or may not share caches, and they may implement message passing or shared-memory inter-core communication methods. Common network topologies used to interconnect cores include bus, two-dimensional mesh and crossbar. Homogeneous multi-core systems include only identical cores. Just as with single-processor systems, cores in multi-core systems may implement architectures such as VLIW, vector, or multithreading.
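To make the two inter-core communication styles mentioned above concrete, here is a minimal sketch in Python, at the process level rather than the hardware level, contrasting message passing with shared memory; the worker functions and values are illustrative assumptions.

```python
from multiprocessing import Process, Queue, Value

# Message passing: the worker sends its result over a queue.
def square_to_queue(x, q):
    q.put(x * x)

# Shared memory: the worker writes its result into a shared value.
def square_to_shared(x, shared):
    shared.value = x * x

if __name__ == "__main__":
    q = Queue()
    Process(target=square_to_queue, args=(6, q)).start()
    print("message passing:", q.get())      # 36

    shared = Value("i", 0)                   # shared integer
    p = Process(target=square_to_shared, args=(7, shared))
    p.start()
    p.join()
    print("shared memory:", shared.value)    # 49
```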
Multi-core processors are used across many application domains, including general-purpose, network, digital signal processing and graphics applications. The improvement in performance gained by the use of a multi-core processor depends very much on the software algorithms used and their implementation. In particular, possible gains are limited by the fraction of the software that can run in parallel on multiple cores. In the best case, so-called embarrassingly parallel problems may realize speedup factors near the number of cores, or even more if the problem is split up enough to fit within each core's cache, avoiding the use of much slower main-system memory. Most applications, however, are not accelerated as much unless programmers invest a prohibitive amount of effort in re-factoring the whole problem; the parallelization of software is a significant ongoing topic of research. The terms multi-core and dual-core most often refer to some sort of central processing unit, but are sometimes also applied to digital signal processors and systems on a chip.
The terms are generally used only to refer to multi-core microprocessors that are manufactured on the same integrated circuit die; this article uses the terms "multi-core" and "dual-core" for CPUs manufactured on the same integrated circuit, unless otherwise noted. In contrast to multi-core systems, the term multi-CPU refers to multiple physically separate processing units. The terms many-core and massively multi-core are sometimes used to describe multi-core architectures with an especially high number of cores. Some systems use many soft microprocessor cores placed on a single FPGA; each "core" can be considered a "semiconductor intellectual property core" as well as a CPU core. While manufacturing technology improves, reducing the size of individual gates, physical limits of semiconductor-based microelectronics have become a major design concern; these physical limitations can cause significant heat dissipation and data synchronization problems. Various other methods are used to improve CPU performance. Some instruction-level parallelism methods, such as superscalar pipelining, are suitable for many applications but are inefficient for others that contain difficult-to-predict code.
Many applications are better suited to thread-level parallelism (TLP) methods, in which multiple independent CPUs are used to increase a system's overall TLP. A combination of increased available space and the demand for increased TLP led to the development of multi-core CPUs. Several business motives drive the development of multi-core architectures. For decades, it was possible to improve the performance of a CPU by shrinking the area of the integrated circuit, which reduced the cost per device on the IC; alternatively, for the same circuit area, more transistors could be used in the design, which increased functionality for complex instruction set computing architectures. Clock rates increased by orders of magnitude in the decades of the late 20th century, from several megahertz in the 1980s to several gigahertz in the early 2000s; as the rate of clock-speed improvements slowed, increased use of parallel computing in the form of multi-core processors has been pursued to improve overall processing performance.
Multiple cores were used on the same CPU chip, which could lead to better sales of CPU chips with two or more cores. For example, Intel has produced a 48-core processor for research in cloud computing. Since computer manufacturers have long implemented symmetric multiprocessing designs using discrete CPUs, the issues regarding implementing multi-core processor architecture and supporting it with software are well known. Additionally: Using a proven processing-core design without architectural changes reduces design risk significantly. For general-purpose processors, much of the motivation for multi-core processors comes from diminished gains in processor performance from increasing the operating frequency; this is due to three primary fa
A low-voltage network or secondary network is a part of electric power distribution which carries electric energy from distribution transformers to the electricity meters of end customers. Secondary networks are operated at a low voltage level, equal to the mains voltage of electric appliances. Most modern secondary networks are operated at an AC rated voltage of 100–127 or 220–240 volts, at a frequency of 50 or 60 hertz. Operating voltage, the required number of phases and the required reliability dictate the topology and configuration of the network; the simplest form is radial service drop lines from the transformer to the customer premises. Low-voltage radial feeders supply multiple customers. For increased reliability, so-called spot networks and grid networks provide supply of customers from multiple distribution transformers and supply paths. Electric wiring can be realized by overhead power lines, aerial or underground power cables, or their mixture. Electric power distribution systems are designed to serve their customers with reliable and high-quality power.
The most common distribution system consists of simple radial circuits that can be overhead, underground, or a combination of the two. From the distribution substation, feeders carry the power to the end customers, forming the medium-voltage or primary network, operated at a medium voltage level of 5–35 kV. Feeders range in length from a few kilometers to several tens of kilometers; as they must supply all customers in the designated distribution area, they curve and branch along the assigned corridors. A substation supplies 3–30 feeders. Distribution transformers, or secondary transformers, placed along the feeders convert the voltage from the medium to a low voltage level suitable for direct consumption by end customers. A rural primary feeder supplies up to 50 distribution transformers spread over a wide region, though the figure varies depending on configuration; the transformers are sited in cellars or on designated small plots. From these transformers, the low-voltage or secondary network branches off to the customer connections at customer premises, which are equipped with electricity meters.
Most of the differences in the layout and design of low-voltage networks are dictated by the mains voltage rating. In Europe and most of the world, 220–240 V is the dominant choice, while in North America 120 V is the standard. ANSI standard C84.1 recommends a −2.5% tolerance for the voltage range at a service point. North American LV networks feature much shorter secondary connections, up to 250 feet, while in the European design they can reach up to 1 mile. North American distribution transformers must therefore be placed much closer to consumers and are smaller, while European ones can cover larger areas and thus have higher ratings. As the low-voltage network distributes electric power to the widest class of end users, another main design concern is the safety of consumers who use the electric appliances and their protection against electric shock. An earthing system, in combination with protective devices such as fuses and residual current devices, must ensure that a person does not come into contact with a metallic object whose potential, relative to the person's own, exceeds a "safe" threshold set at about 50 V.
Radial operation is the most economic design of both MV and LV networks, and it provides a sufficiently high degree of service continuity for most customers. In American systems, the customers are supplied directly from the distribution transformers via short service drop lines, in a star-like topology. In 240 V systems, the customers are served by several low-voltage feeders, realized by overhead power lines, aerial or underground power cables, or their mixture. In a cable network, all necessary connections and protection devices are placed in pad-mounted cabinets or manholes. Power-system protection in radial networks is simple to design and implement, since short-circuit currents have only one possible path that needs to be interrupted. Fuses are most often used for both short-circuit and overload protection, while low-voltage circuit breakers may be used in special circumstances. Where higher reliability is required, spot networks are used: the low-voltage network is supplied from two or more distribution transformers at a single site, each fed from a different MV feeder.
The transformers are connected together with a bus or a cable on the secondary side, termed the paralleling bus or collector bus. In some designs the paralleling bus does not have connecting cables to other network units, in which case such networks are termed isolating spot networks. In some cases, fast-acting secondary bus tie breakers may be applied between bus sections to isolate faults in the secondary switchgear and limit the loss of service. Spot systems are applied in high load-density areas such as business districts, large hospitals, small industry and important facilities such as water supply systems. In normal operation, the energy supply is provided by both primary feeders in parallel. In case of an outage of either primary feeder, the network protector device at the secondary of the corresponding spot transformer automatically opens
A computer network is a digital telecommunications network which allows nodes to share resources. In computer networks, computing devices exchange data with each other using connections between nodes; these data links are established over cable media such as wires or optical cables, or over wireless media such as Wi-Fi. Network computer devices that originate and terminate the data are called network nodes. Nodes are identified by network addresses and can include hosts such as personal computers and servers, as well as networking hardware such as routers and switches. Two such devices can be said to be networked together when one device is able to exchange information with the other, whether or not they have a direct connection to each other. In most cases, application-specific communications protocols are layered over other, more general communications protocols; this formidable collection of information technology requires skilled network management to keep it all running reliably. Computer networks support an enormous number of applications and services, such as access to the World Wide Web, digital video, digital audio, shared use of application and storage servers and fax machines, and use of email and instant messaging applications, as well as many others.
Computer networks differ in the transmission medium used to carry their signals, the communications protocols used to organize network traffic, the network's size, the traffic control mechanisms and the organizational intent. The best-known computer network is the Internet. The chronology of significant computer-network developments includes: In the late 1950s, early networks of computers included the U.S. military radar system Semi-Automatic Ground Environment (SAGE). In 1959, Anatolii Ivanovich Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organisation of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centres, the OGAS. In 1960, the commercial airline reservation system Semi-Automatic Business Research Environment (SABRE) went online with two connected mainframes. In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users.
In 1964, researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at the Massachusetts Institute of Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections. Throughout the 1960s, Paul Baran and Donald Davies independently developed the concept of packet switching to transfer information between computers over a network. Davies pioneered the implementation of the concept with the NPL network, a local area network at the National Physical Laboratory using a line speed of 768 kbit/s. In 1965, Western Electric introduced the first widely used telephone switch that implemented true computer control. In 1966, Thomas Marill and Lawrence G. Roberts published a paper on an experimental wide area network for computer time sharing. In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara and the University of Utah.
Leonard Kleinrock carried out theoretical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with his student Farouk Kamoun remains critical to the operation of the Internet today. In 1972, commercial services using X.25 were deployed, later used as an underlying infrastructure for expanding TCP/IP networks. In 1973, the French CYCLADES network was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself. Also in 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a networking system based on the Aloha network developed in the 1960s by Norman Abramson and colleagues at the University of Hawaii. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and collaborated on several patents received in 1977 and 1978.
In 1979, Robert Metcalfe pursued making Ethernet an open standard. In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of a gigabit per second. Subsequently, higher speeds of up to 400 Gbit/s were added; the ability of Ethernet to scale is a contributing factor to its continued use. Computer networking may be considered a branch of electrical engineering, electronics engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of the related disciplines. A computer network facilitates interpersonal communications, allowing users to communicate efficiently via various means: email, instant messaging, online chat, video telephone calls and video conferencing. A network allows sharing of computing resources.
Users may access and use resources provided by devices on the network, such as printing a document on a shared network printer or using a shared storage device. A network allows sharing of files, and