The Internet is the global system of interconnected computer networks that use the Internet protocol suite to link devices worldwide. It is a network of networks that consists of private, academic and government networks of local to global scope, linked by a broad array of electronic and optical networking technologies. The Internet carries a vast range of information resources and services, such as the inter-linked hypertext documents and applications of the World Wide Web, electronic mail and file sharing. The origins of the Internet date back to research commissioned by the federal government of the United States in the 1960s to build robust, fault-tolerant communication via computer networks. The primary precursor network, the ARPANET, initially served as a backbone for the interconnection of regional academic and military networks in the 1980s. The funding of the National Science Foundation Network as a new backbone in the 1980s, as well as private funding for other commercial extensions, led to worldwide participation in the development of new networking technologies and the merger of many networks.
The linking of commercial networks and enterprises by the early 1990s marked the beginning of the transition to the modern Internet and generated sustained exponential growth as generations of institutional, personal and mobile computers were connected to the network. Although the Internet had been widely used by academia since the 1980s, commercialization incorporated its services and technologies into virtually every aspect of modern life. Most traditional communication media, including telephony, television, paper mail and newspapers, are being reshaped, redefined or bypassed by the Internet, giving birth to new services such as email, Internet telephony, Internet television, online music, digital newspapers and video streaming websites. Newspapers and other print publishing are adapting to website technology or are being reshaped into blogging, web feeds and online news aggregators. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums and social networking. Online shopping has grown exponentially both for major retailers and for small businesses and entrepreneurs, as it enables firms to extend their "brick and mortar" presence to serve a larger market or even sell goods and services entirely online.
Business-to-business and financial services on the Internet affect supply chains across entire industries. The Internet has no single centralized governance in either technological implementation or policies for access and usage; only the overarching definitions of the two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System, are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. In November 2006, the Internet was included on USA Today's list of New Seven Wonders. When the term Internet is used to refer to the specific global system of interconnected Internet Protocol networks, the word is a proper noun that should be written with an initial capital letter.
In common use and in the media, it is often not capitalized, viz. the internet, and some publications no longer capitalize the word at all. Some guides specify that the word should be capitalized when used as a noun, but not when used as an adjective. The Internet is also often referred to as the Net, as a short form of network. As early as 1849, the word internetted was used uncapitalized as an adjective, meaning interconnected or interwoven. The designers of early computer networks used internet both as a noun and as a verb, in shorthand for internetwork or internetworking, meaning interconnecting computer networks. The terms Internet and World Wide Web are often used interchangeably in everyday speech; however, the World Wide Web, or the Web, is only one of a large number of Internet services. The Web is a collection of interconnected documents and other web resources, linked by hyperlinks and URLs. As another point of comparison, Hypertext Transfer Protocol, or HTTP, is the language used on the Web for information transfer, yet it is just one of many languages or protocols that can be used for communication on the Internet.
The term Interweb is a portmanteau of Internet and World Wide Web used sarcastically to parody a technically unsavvy user. Research into packet switching, one of the fundamental Internet technologies, started in the early 1960s in the work of Paul Baran and Donald Davies. Packet-switched networks such as the NPL network, the ARPANET, the Merit Network, CYCLADES and Telenet were developed in the late 1960s and early 1970s. The ARPANET project led to the development of protocols for internetworking, by which multiple separate networks could be joined into a network of networks. ARPANET development began with two network nodes, interconnected on 29 October 1969 between the Network Measurement Center at the University of California, Los Angeles Henry Samueli School of Engineering and Applied Science, directed by Leonard Kleinrock, and the NLS system at SRI International in Menlo Park, California, directed by Douglas Engelbart. The third site was the Culler-Fried Interactive Mathematics Center at the University of California, Santa Barbara, followed by the University of Utah.
Bitcoin is a cryptocurrency, a form of electronic cash. It is a decentralized digital currency without a central bank or single administrator that can be sent from user to user on the peer-to-peer bitcoin network without the need for intermediaries. Transactions are verified by network nodes through cryptography and recorded in a public distributed ledger called a blockchain. Bitcoin was invented by an unknown person or group of people using the name Satoshi Nakamoto and released as open-source software in 2009. Bitcoins are created as a reward for a process known as mining; they can be exchanged for other currencies, products and services. Research produced by the University of Cambridge estimates that in 2017 there were 2.9 to 5.8 million unique users of a cryptocurrency wallet, most of them using bitcoin. Bitcoin has been criticized for its use in illegal transactions, its high electricity consumption, price volatility, thefts from exchanges and the possibility that it is an economic bubble. Bitcoin has also been used as an investment, although several regulatory agencies have issued investor alerts about it.
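The mining process mentioned above is, at its core, a brute-force search: vary a nonce until the hash of the block data falls below a difficulty target. A toy sketch in C of that mechanic (bitcoin itself uses double SHA-256 and a 256-bit target; the tiny FNV-1a hash here is a stand-in purely for illustration):

#include <stdint.h>
#include <stdio.h>

/* FNV-1a: a toy non-cryptographic hash standing in for SHA-256. */
static uint64_t fnv1a(const char *data, size_t len) {
    uint64_t h = 1469598103934665603ULL;       /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= (uint8_t)data[i];
        h *= 1099511628211ULL;                  /* FNV prime */
    }
    return h;
}

int main(void) {
    const char *block = "previous-hash|transactions";
    const uint64_t target = UINT64_MAX >> 20;   /* ~1 in 2^20 hashes qualify */
    char buf[128];
    for (uint64_t nonce = 0;; nonce++) {
        int n = snprintf(buf, sizeof buf, "%s|%llu", block,
                         (unsigned long long)nonce);
        if (fnv1a(buf, (size_t)n) < target) {   /* "mined": hash under target */
            printf("nonce=%llu\n", (unsigned long long)nonce);
            return 0;
        }
    }
}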
The domain name "bitcoin.org" was registered on 18 August 2008. On 31 October 2008, a link to a paper authored by Satoshi Nakamoto titled Bitcoin: A Peer-to-Peer Electronic Cash System was posted to a cryptography mailing list. Nakamoto implemented the bitcoin software as open-source code and released it in January 2009. Nakamoto's identity remains unknown. On 3 January 2009, the bitcoin network was created when Nakamoto mined the first block of the chain, known as the genesis block. Embedded in the coinbase of this block was the following text: "The Times 03/Jan/2009 Chancellor on brink of second bailout for banks." This note has been interpreted as both a timestamp and a comment on the instability caused by fractional-reserve banking. The receiver of the first bitcoin transaction was cypherpunk Hal Finney, who had created the first reusable proof-of-work system in 2004. Finney downloaded the bitcoin software on its release date, and on 12 January 2009 he received ten bitcoins from Nakamoto. Other early cypherpunk supporters were the creators of bitcoin's predecessors: Wei Dai, creator of b-money, and Nick Szabo, creator of bit gold.
In 2010, the first known commercial transaction using bitcoin occurred when programmer Laszlo Hanyecz bought two Papa John's pizzas for 10,000 bitcoins. Nakamoto is estimated to have mined one million bitcoins before disappearing in 2010, when he handed the network alert key and control of the code repository over to Gavin Andresen. Andresen later became lead developer at the Bitcoin Foundation. Andresen sought to decentralize control; this left opportunity for controversy to develop over the future development path of bitcoin. After early "proof-of-concept" transactions, the first major users of bitcoin were black markets such as Silk Road. During its 30 months of existence, beginning in February 2011, Silk Road accepted bitcoins as payment, transacting 9.9 million bitcoins, worth about $214 million. In 2011, the price started at $0.30 per bitcoin. It rose to $31.50 on 8 June; within a month the price fell to $11.00, the next month to $7.80 and in another month to $4.77. Litecoin, an early bitcoin spin-off or altcoin, appeared in October 2011.
Many altcoins have been created since then. In 2012, bitcoin prices started at $5.27 and grew to $13.30 over the year. By 9 January the price had risen to $7.38, but crashed by 49% to $3.80 over the next 16 days. The price then rose to $16.41 on 17 August, but fell by 57% to $7.10 over the next three days. The Bitcoin Foundation was founded in September 2012 to promote bitcoin's uptake. In 2013, prices started at $13.30 and rose to $770 by 1 January 2014. In March 2013 the blockchain temporarily split into two independent chains with different rules; the two blockchains operated for six hours, each with its own version of the transaction history. Normal operation was restored when the majority of the network downgraded to version 0.7 of the bitcoin software. The Mt. Gox exchange briefly halted bitcoin deposits and the price dropped by 23% to $37 before recovering to its previous level of approximately $48 in the following hours. The US Financial Crimes Enforcement Network (FinCEN) established regulatory guidelines for "decentralized virtual currencies" such as bitcoin, classifying American bitcoin miners who sell their generated bitcoins as money services businesses, which are subject to registration and other legal obligations.
In April 2013, the exchanges BitInstant and Mt. Gox experienced processing delays due to insufficient capacity, and the bitcoin price dropped from $266 to $76 before returning to $160 within six hours. The price rose to $259 on 10 April, but crashed by 83% to $45 over the next three days. On 15 May 2013, US authorities seized accounts associated with Mt. Gox after discovering that it had not registered as a money transmitter with FinCEN. On 23 June 2013, the US Drug Enforcement Administration listed 11.02 bitcoins as a seized asset in a United States Department of Justice seizure notice pursuant to 21 U.S.C. § 881, marking the first time a government agency claimed to have seized bitcoin. The FBI seized about 26,000 bitcoins in October 2013 from the dark web website Silk Road during the arrest of Ross William Ulbricht. Bitcoin's price crashed by 50% to $378 the same day. On 30 November 2013 the price reached $1,163 before starting a long-term crash, declining by 87% to $152 by January 2015. On 5 December 2013, the People's Bank of China prohibited Chinese financial institutions from using bitcoins.
After the announcement, the value of bitcoin dropped and Baidu no longer accepted bitcoins for certain services. Buying real-world goods with any virtual currency had been illegal in China since at least 2009.
Field-programmable gate array
A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing – hence the term "field-programmable". The FPGA configuration is generally specified using a hardware description language (HDL), similar to that used for an application-specific integrated circuit (ASIC). Circuit diagrams were previously used to specify the configuration, but this is now rare due to the advent of electronic design automation tools. FPGAs contain an array of programmable logic blocks and a hierarchy of "reconfigurable interconnects" that allow the blocks to be "wired together", like many logic gates that can be inter-wired in different configurations. Logic blocks can be configured to perform complex combinational functions, or merely simple logic gates like AND and XOR. In most FPGAs, logic blocks also include memory elements, which may be simple flip-flops or more complete blocks of memory. Many FPGAs can be reprogrammed to implement different logic functions, allowing flexible reconfigurable computing as performed in computer software.
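Internally, the combinational part of a logic block is typically realized with small lookup tables (LUTs), which the configuration bitstream fills with truth-table entries. A minimal C model of a two-input LUT (illustrative only; the type and function names are invented for this sketch):

#include <stdint.h>
#include <stdio.h>

/* A 2-input LUT modeled as a 4-entry truth table. The 4-bit "config"
   plays the role of the FPGA's configuration memory: each bit is the
   output for one input combination. */
typedef struct {
    uint8_t config; /* bits 0..3 = outputs for inputs 00, 01, 10, 11 */
} lut2;

static int lut2_eval(lut2 l, int a, int b) {
    int index = (b << 1) | a;        /* select a row of the truth table */
    return (l.config >> index) & 1;  /* read the configured output bit  */
}

int main(void) {
    lut2 and_gate = { 0x8 };  /* 1000b: output 1 only for inputs 11 -> AND */
    lut2 xor_gate = { 0x6 };  /* 0110b: output 1 for 01 and 10 -> XOR      */
    printf("AND(1,1)=%d XOR(1,0)=%d\n",
           lut2_eval(and_gate, 1, 1), lut2_eval(xor_gate, 1, 0));
    return 0;
}

Reconfiguring the device amounts to loading different config bits, which is why the same silicon can implement AND, XOR or any other function of its inputs.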
Contemporary field-programmable gate arrays have large resources of logic gates and RAM blocks to implement complex digital computations. As FPGA designs employ very fast I/O rates and bidirectional data buses, it becomes a challenge to verify the correct timing of valid data within setup time and hold time. Floor planning enables resource allocation within FPGAs to meet these time constraints. FPGAs can be used to implement any logical function that an ASIC could perform. The ability to update the functionality after shipping, partial re-configuration of a portion of the design and the low non-recurring engineering costs relative to an ASIC design offer advantages for many applications. Some FPGAs have analog features in addition to digital functions. The most common analog feature is a programmable slew rate on each output pin, allowing the engineer to set low rates on lightly loaded pins that would otherwise ring or couple unacceptably, and to set higher rates on heavily loaded pins on high-speed channels that would otherwise run too slowly. Also common are quartz-crystal oscillators, on-chip resistance-capacitance oscillators, and phase-locked loops with embedded voltage-controlled oscillators used for clock generation and management and for high-speed serializer-deserializer transmit clocks and receiver clock recovery.
Also common are differential comparators on input pins designed to be connected to differential signaling channels. A few "mixed signal FPGAs" have integrated peripheral analog-to-digital converters and digital-to-analog converters with analog signal conditioning blocks, allowing them to operate as a system-on-a-chip. Such devices blur the line between an FPGA, which carries digital ones and zeros on its internal programmable interconnect fabric, and a field-programmable analog array (FPAA), which carries analog values on its internal programmable interconnect fabric. The FPGA industry sprouted from programmable read-only memory (PROM) and programmable logic devices (PLDs). PROMs and PLDs both had the option of being programmed in the field; however, their programmable logic was hard-wired between logic gates. Altera was founded in 1983 and delivered the industry's first reprogrammable logic device in 1984 – the EP300 – which featured a quartz window in the package that allowed users to shine an ultra-violet lamp on the die to erase the EPROM cells that held the device configuration.
In December 2015, Intel acquired Altera. Xilinx co-founders Ross Freeman and Bernard Vonderschmitt invented the first commercially viable field-programmable gate array in 1985 – the XC2064. The XC2064 had programmable gates and programmable interconnects between gates, the beginnings of a new technology and market. It had 64 configurable logic blocks, with two three-input lookup tables. More than 20 years later, Freeman was entered into the National Inventors Hall of Fame for his invention. In 1987, the Naval Surface Warfare Center funded an experiment proposed by Steve Casselman to develop a computer that would implement 600,000 reprogrammable gates. Casselman was successful, and a patent related to the system was issued in 1992. Altera and Xilinx continued unchallenged and grew rapidly from 1985 to the mid-1990s, when competitors sprouted up, eroding significant market share. By 1993, Actel was serving about 18 percent of the market. By 2013, Altera, Actel and Xilinx together represented approximately 77 percent of the FPGA market.
The 1990s were a period of rapid growth for FPGAs, both in circuit sophistication and in volume of production. In the early 1990s, FPGAs were primarily used in telecommunications and networking. By the end of the decade, FPGAs found their way into consumer and industrial applications. A recent trend has been to take the coarse-grained architectural approach a step further by combining the logic blocks and interconnects of traditional FPGAs with embedded microprocessors and related peripherals to form a complete "system on a programmable chip". This work mirrors the architecture created by Ron Perlof and Hana Potash of Burroughs Advanced Systems Group in 1982, which combined a reconfigurable CPU architecture on a single chip called the SB24. Examples of such hybrid technologies can be found in the Xilinx Zynq-7000 All Programmable SoC, which includes a 1.0 GHz dual-core ARM Cortex-A9 MPCore processor embedded within the FPGA's logic fabric, or in the Altera Arria V FPGA, which includes an 800 MHz dual-core ARM Cortex-A9 MPCore.
The Atmel FPSLIC is another such device, which uses an AVR processor in combination with Atmel's programmable logic architecture. The Microsemi SmartFusion devices take a similar approach, incorporating an ARM Cortex-M3 hard processor core on a flash-based FPGA fabric.
Central processing unit
A central processing unit (CPU), also called a central processor or main processor, is the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logic and input/output operations specified by the instructions. The computer industry has used the term "central processing unit" at least since the early 1960s. Traditionally, the term "CPU" refers to a processor, more specifically to its processing unit and control unit, distinguishing these core elements of a computer from external components such as main memory and I/O circuitry. The form and implementation of CPUs have changed over the course of their history, but their fundamental operation remains almost unchanged. Principal components of a CPU include the arithmetic logic unit (ALU) that performs arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that orchestrates the fetching and execution of instructions by directing the coordinated operations of the ALU, registers and other components.
Most modern CPUs are microprocessors, meaning they are contained on a single integrated circuit (IC) chip. An IC that contains a CPU may also contain memory, peripheral interfaces and other components of a computer. Some computers employ a multi-core processor, a single chip containing two or more CPUs called "cores". Array processors or vector processors have multiple processors that operate in parallel, with no unit considered central. There also exists the concept of virtual CPUs, which are an abstraction of dynamically aggregated computational resources. Early computers such as the ENIAC had to be physically rewired to perform different tasks, which caused these machines to be called "fixed-program computers". Since the term "CPU" is defined as a device for software execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer. The idea of a stored-program computer had already been present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so that the machine could be finished sooner.
On June 30, 1945, before ENIAC was made, mathematician John von Neumann distributed the paper entitled First Draft of a Report on the EDVAC. It was the outline of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a certain number of instructions of various types. Significantly, the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program that EDVAC ran could be changed simply by changing the contents of the memory. EDVAC, however, was not the first stored-program computer. Early CPUs were custom designs used as part of a larger and sometimes distinctive computer. However, this method of designing custom CPUs for a particular application has largely given way to the development of multi-purpose processors produced in large quantities. This standardization began in the era of discrete transistor mainframes and minicomputers and has rapidly accelerated with the popularization of the integrated circuit.
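The essence of the stored-program design can be sketched in miniature: program and data share one memory, and the processor loops over fetch, decode and execute. A toy illustration in C (the instruction set here is invented for the example):

#include <stdint.h>
#include <stdio.h>

/* A toy accumulator machine: one register (acc), a program counter,
   and a tiny instruction set. Purely illustrative; real CPUs add
   pipelines, caches, interrupts and much more. */
enum { OP_HALT, OP_LOAD, OP_ADD, OP_STORE };

int main(void) {
    uint8_t mem[16] = {       /* program and data in the SAME memory */
        OP_LOAD, 10,          /* acc = mem[10]   */
        OP_ADD,  11,          /* acc += mem[11]  */
        OP_STORE, 12,         /* mem[12] = acc   */
        OP_HALT, 0,
        0, 0, 2, 3, 0, 0, 0, 0
    };
    uint8_t pc = 0, acc = 0;

    for (;;) {
        uint8_t op  = mem[pc++];  /* fetch the opcode   */
        uint8_t arg = mem[pc++];  /* fetch the operand  */
        switch (op) {             /* decode and execute */
            case OP_LOAD:  acc = mem[arg];  break;
            case OP_ADD:   acc += mem[arg]; break;  /* an ALU operation */
            case OP_STORE: mem[arg] = acc;  break;
            case OP_HALT:  printf("mem[12] = %d\n", mem[12]); return 0;
        }
    }
}

Because the program is just bytes in memory, "reprogramming" this machine means rewriting mem, exactly the advance EDVAC offered over the rewiring ENIAC required.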
The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in electronic devices ranging from automobiles to cellphones, and sometimes even in toys. While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, and the design became known as the von Neumann architecture, others before him, such as Konrad Zuse, had suggested and implemented similar ideas. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also used a stored-program design using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both.
Most modern CPUs are primarily von Neumann in design, but CPUs with the Harvard architecture are seen as well, especially in embedded applications. In early computers, relays and vacuum tubes were commonly used as switching elements; a useful computer requires thousands or tens of thousands of switching devices, and the overall speed of a system is dependent on the speed of the switches. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the Harvard Mark I failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs. Clock signal frequencies ranging from 100 kHz to 4 MHz were common at this time, limited largely by the speed of the switching devices they were built with.
OpenCL is a framework for writing programs that execute across heterogeneous platforms consisting of central processing units, graphics processing units, digital signal processors, field-programmable gate arrays and other processors or hardware accelerators. OpenCL specifies programming languages for programming these devices and application programming interfaces (APIs) to control the platform and execute programs on the compute devices. OpenCL provides a standard interface for parallel computing using task- and data-based parallelism. OpenCL is an open standard maintained by the non-profit technology consortium Khronos Group. Conformant implementations are available from Altera, AMD, Apple, ARM, Creative, IBM, Intel, Qualcomm, Vivante and ZiiLABS. OpenCL views a computing system as consisting of a number of compute devices, which might be central processing units or "accelerators" such as graphics processing units, attached to a host processor, and it defines a C-like language for writing programs. Functions executed on an OpenCL device are called "kernels".
A single compute device typically consists of several compute units, which in turn comprise multiple processing elements (PEs). A single kernel execution can run on many of the PEs in parallel. How a compute device is subdivided into compute units and PEs is up to the vendor. In addition to its C-like programming language, OpenCL defines an application programming interface that allows programs running on the host to launch kernels on the compute devices and manage device memory, which is separate from host memory. Programs in the OpenCL language are intended to be compiled at run-time, so that OpenCL-using applications are portable between implementations for various host devices. The OpenCL standard defines host APIs for C and C++; third-party APIs exist for other programming languages and platforms, such as Python, Java and .NET. An implementation of the OpenCL standard consists of a library that implements the API for C and C++, and an OpenCL C compiler for the compute device(s) targeted. In order to open the OpenCL programming model to other languages or to protect the kernel source from inspection, the Standard Portable Intermediate Representation (SPIR) can be used as a target-independent way to ship kernels between a front-end compiler and the OpenCL back-end.
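A minimal host-side sketch of that flow, assuming a single platform with at least one GPU device and omitting all error checking; the kernel source string is deliberately trivial:

#include <CL/cl.h>
#include <stdio.h>

static const char *src =
    "__kernel void fill(__global int *out) {\n"
    "    out[get_global_id(0)] = (int)get_global_id(0);\n"
    "}\n";

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

    /* Kernel source is compiled at run time for whatever device was
       found; this is what makes OpenCL applications portable. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kernel = clCreateKernel(prog, "fill", NULL);

    int out[8];
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(out), NULL, NULL);
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);

    size_t global = 8;  /* launch 8 work-items */
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, sizeof(out), out, 0, NULL, NULL);

    for (int i = 0; i < 8; i++) printf("%d ", out[i]);
    printf("\n");
    return 0;
}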
More recently, the Khronos Group has ratified SYCL, a higher-level programming model for OpenCL, as a single-source domain-specific embedded language based on pure C++11, to improve programming productivity. OpenCL defines a four-level memory hierarchy for the compute device: global memory, shared by all processing elements but with high access latency; read-only (constant) memory, writable by the host CPU but not by the compute devices; local memory, shared by a group of processing elements; and per-element private memory (registers). Not every device needs to implement each level of this hierarchy in hardware. Consistency between the various levels in the hierarchy is relaxed and only enforced by explicit synchronization constructs, notably barriers. Devices may or may not share memory with the host CPU. The host API provides handles on device memory buffers and functions to transfer data back and forth between host and devices. The programming language used to write compute kernels is called OpenCL C; it is based on C99, but adapted to fit the device model in OpenCL. Memory buffers reside in specific levels of the memory hierarchy, and pointers are annotated with the region qualifiers __global, __local, __constant and __private, reflecting this.
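For illustration, a small hypothetical kernel that touches each level of the hierarchy through these qualifiers, with a barrier providing the explicit synchronization mentioned above:

/* Illustrative OpenCL C kernel: each argument lives in a different
   level of the memory hierarchy, marked by its region qualifier. */
__kernel void scale(__global float *data,       /* global: all PEs        */
                    __constant float *coeffs,   /* constant: read-only    */
                    __local float *scratch) {   /* local: one work-group  */
    __private int i = get_global_id(0);         /* private: this work-item */
    scratch[get_local_id(0)] = data[i];
    barrier(CLK_LOCAL_MEM_FENCE);               /* explicit synchronization */
    data[i] = scratch[get_local_id(0)] * coeffs[0];
}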
Instead of a device program having a main function, OpenCL C functions are marked __kernel to signal that they are entry points into the program, to be called from the host program. Function pointers, bit fields and variable-length arrays are omitted, and recursion is forbidden. The C standard library is replaced by a custom set of standard functions, geared toward math programming. OpenCL C is extended to facilitate use of parallelism with vector types and operations, and with functions to work with work-items and work-groups. In particular, besides scalar types such as float and double, which behave similarly to the corresponding types in C, OpenCL provides fixed-length vector types such as float4. Vectorized operations on these types are intended to map onto SIMD instruction sets, e.g. SSE or VMX, when running OpenCL programs on CPUs. Other specialized types include 2-D and 3-D image types. The following is a matrix-vector multiplication algorithm in OpenCL C. The kernel function matvec computes, in each invocation, the dot product of a single row of a matrix $A$ and a vector $x$: $y_i = a_{i,:} \cdot x = \sum_j a_{i,j} x_j$.
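A minimal sketch of such a kernel, with illustrative argument names:

// Each work-item computes the dot product of one row of A with x.
__kernel void matvec(__global const float *A,
                     __global const float *x,
                     uint ncols,
                     __global float *y)
{
    size_t i = get_global_id(0);              // row index = work-item id
    __global const float *row = &A[i * ncols]; // pointer to the i-th row
    float sum = 0.0f;
    for (uint j = 0; j < ncols; j++)          // dot product of row i and x
        sum += row[j] * x[j];
    y[i] = sum;
}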
To extend this into a full matrix-vector multiplication, the OpenCL runtime maps the kernel over the rows of the matrix. On the host side, the clEnqueueNDRangeKernel function does this.
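A host-side fragment sketching that launch (hypothetical handles, assuming the buffers and kernel were created as in the earlier setup sketch; one work-item is created per matrix row):

/* Map the matvec kernel over all nrows rows of the matrix. */
size_t global_size = nrows;                      /* one work-item per row   */
clSetKernelArg(kernel, 0, sizeof(cl_mem), &A_buf);
clSetKernelArg(kernel, 1, sizeof(cl_mem), &x_buf);
clSetKernelArg(kernel, 2, sizeof(cl_uint), &ncols);
clSetKernelArg(kernel, 3, sizeof(cl_mem), &y_buf);
clEnqueueNDRangeKernel(queue, kernel, 1,         /* 1-D index space         */
                       NULL, &global_size, NULL, /* runtime picks local size */
                       0, NULL, NULL);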
CUDA is a parallel computing platform and application programming interface (API) model created by Nvidia. It allows software developers and software engineers to use a CUDA-enabled graphics processing unit (GPU) for general-purpose processing, an approach termed GPGPU. The CUDA platform is a software layer that gives direct access to the GPU's virtual instruction set and parallel computational elements, for the execution of compute kernels. The CUDA platform is designed to work with programming languages such as C, C++ and Fortran. This accessibility makes it easier for specialists in parallel programming to use GPU resources, in contrast to prior APIs like Direct3D and OpenGL, which required advanced skills in graphics programming. CUDA also supports programming frameworks such as OpenACC and OpenCL. When it was first introduced by Nvidia, the name CUDA was an acronym for Compute Unified Device Architecture, but Nvidia subsequently dropped the use of the acronym. The graphics processing unit, as a specialized computer processor, addresses the demands of real-time high-resolution 3D graphics and other compute-intensive tasks.
By 2012, GPUs had evolved into highly parallel multi-core systems allowing efficient manipulation of large blocks of data. This design is more effective than a general-purpose central processing unit for algorithms in situations where the processing of large blocks of data is done in parallel, such as the push-relabel maximum flow algorithm, fast sort algorithms for large lists, the two-dimensional fast wavelet transform and molecular dynamics simulations. The CUDA platform is accessible to software developers through CUDA-accelerated libraries, compiler directives such as OpenACC, and extensions to industry-standard programming languages including C, C++ and Fortran. C/C++ programmers can use "CUDA C/C++", compiled with Nvidia's LLVM-based C/C++ compiler. Fortran programmers can use "CUDA Fortran", compiled with the PGI CUDA Fortran compiler from The Portland Group. In addition to libraries, compiler directives, CUDA C/C++ and CUDA Fortran, the CUDA platform supports other computational interfaces, including the Khronos Group's OpenCL, Microsoft's DirectCompute, OpenGL Compute Shaders and C++ AMP.
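As a minimal, hypothetical CUDA C++ example of this model: a function marked __global__ is a kernel run by one thread per element, and the host launches it with the <<<blocks, threads>>> syntax (error handling omitted for brevity):

#include <cstdio>

__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];                  // one element per thread
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);   // unified memory visible to CPU and GPU
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);  // launch the compute kernel
    cudaDeviceSynchronize();                  // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);              // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}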
Third-party wrappers are available for Python, Fortran, Ruby, Common Lisp, Haskell, R, MATLAB and IDL, and Mathematica supports CUDA natively. In the computer game industry, GPUs are used for graphics rendering and for game physics calculations; CUDA has also been used to accelerate non-graphical applications in computational biology and other fields by an order of magnitude or more. CUDA provides both a low-level API and a higher-level API. The initial CUDA SDK was made public on 15 February 2007, for Microsoft Windows and Linux. Mac OS X support was added in version 2.0, which superseded the beta released February 14, 2008. CUDA works with all Nvidia GPUs from the G8x series onwards, including GeForce and the Tesla line, and is compatible with most standard operating systems. Nvidia states that programs developed for the G8x series will also work without modification on all future Nvidia video cards, due to binary compatibility.

CUDA 8.0 comes with the following libraries:
- CUBLAS: CUDA Basic Linear Algebra Subroutines library
- CUDART: CUDA Runtime library
- CUFFT: CUDA Fast Fourier Transform library
- CURAND: CUDA Random Number Generation library
- CUSOLVER: CUDA-based collection of dense and sparse direct solvers
- CUSPARSE: CUDA Sparse Matrix library
- NPP: NVIDIA Performance Primitives library
- NVGRAPH: NVIDIA Graph Analytics library
- NVML: NVIDIA Management Library
- NVRTC: NVIDIA Runtime Compilation library for CUDA C++

CUDA 8.0 also comes with these other software components:
- nView: NVIDIA nView Desktop Management Software
- NVWMI: NVIDIA Enterprise Management Toolkit
- PhysX: GameWorks PhysX, a multi-platform game physics engine

CUDA 9.0-9.2 comes with these additional components:
- CUTLASS 1.0: custom linear algebra algorithms
- NVCUVID: NVIDIA Video Decoder (deprecated in CUDA 9.2)
CUDA exposes a fast shared memory region that can be shared among the threads of a block. This can be used as a user-managed cache, enabling higher bandwidth than is possible using texture lookups. Other advantages include faster downloads and readbacks to and from the GPU, and full support for integer and bitwise operations, including integer texture lookups. Whether for the host computer or the GPU device, all CUDA source code is now processed according to C++ syntax rules; this was not always the case, as earlier versions of CUDA were based on C syntax rules. As with the more general case of compiling C code with a C++ compiler, it is therefore possible that old C-style CUDA source code will either fail to compile or will not behave as originally intended. Interoperability with rendering languages such as OpenGL is one-way, with OpenGL having access to registered CUDA memory but CUDA not having access to OpenGL memory.
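As an illustration of the user-managed cache mentioned above, the following hypothetical CUDA C++ sketch stages data in __shared__ memory and reduces it there, so each input element is read from global memory only once (assumes 256 threads per block):

#include <cstdio>

__global__ void blockSum(const float *in, float *out, int n) {
    __shared__ float cache[256];          // fast on-chip, per-block storage
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    cache[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                      // wait until the tile is loaded
    // Tree reduction performed entirely in shared memory.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride)
            cache[threadIdx.x] += cache[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0) out[blockIdx.x] = cache[0];  // per-block partial sum
}

int main() {
    const int n = 1024, threads = 256, blocks = n / threads;
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, blocks * sizeof(float));
    for (int i = 0; i < n; i++) in[i] = 1.0f;
    blockSum<<<blocks, threads>>>(in, out, n);
    cudaDeviceSynchronize();
    printf("partial sums: %.0f %.0f %.0f %.0f\n", out[0], out[1], out[2], out[3]);
    cudaFree(in); cudaFree(out);
    return 0;
}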