A supercomputer is a computer with a high level of performance compared to a general-purpose computer. The performance of a supercomputer is measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2017, there have been supercomputers which can perform up to nearly a hundred quadrillion FLOPS. Since November 2017, all of the world's fastest 500 supercomputers have run Linux-based operating systems. Additional research is being conducted in China, the United States, the European Union and Japan to build faster, more powerful and more technologically superior exascale supercomputers. Supercomputers play an important role in the field of computational science and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling and physical simulations. Throughout their history, they have been essential in the field of cryptanalysis. Supercomputers were introduced in the 1960s, and for several decades the fastest were made by Seymour Cray at Control Data Corporation, Cray Research and subsequent companies bearing his name or monogram.
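To give a feel for the unit, a machine's theoretical peak FLOPS is just the product of its processor count, clock rate and floating-point operations completed per cycle. The back-of-envelope Python sketch below uses made-up node, core and clock figures purely for illustration:

```python
# Back-of-envelope peak-FLOPS arithmetic for a hypothetical machine;
# all hardware figures below are invented for illustration only.
nodes = 4_608                # compute nodes in the machine
cores_per_node = 44          # CPU cores per node
clock_hz = 3.1e9             # core clock speed in Hz
flops_per_cycle = 16         # double-precision FLOPs per core per cycle

peak = nodes * cores_per_node * clock_hz * flops_per_cycle
print(f"theoretical peak: {peak / 1e15:.1f} PFLOPS")  # ~10.1 PFLOPS
```

Sustained benchmark scores such as the LINPACK figures reported on the TOP500 list are measured rather than computed this way, and come in well below such theoretical peaks.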
The first such machines were tuned conventional designs that ran faster than their more general-purpose contemporaries. Through the 1960s, they began to add increasing amounts of parallelism, with one to four processors being typical. From the 1970s, vector processors operating on large arrays of data came to dominate. A notable example is the successful Cray-1 of 1976. Vector computers remained the dominant design into the 1990s. From then until today, massively parallel supercomputers with tens of thousands of off-the-shelf processors have been the norm. The US has long been the leader in the supercomputer field, first through Cray's almost uninterrupted dominance of the field, and later through a variety of technology companies. Japan made major strides in the field in the 1980s and 90s, but since then China has become increasingly active in the field. As of November 2018, the fastest supercomputer on the TOP500 supercomputer list is the Summit, in the United States, with a LINPACK benchmark score of 143.5 PFLOPS, followed by Sierra, which trails it by around 48.860 PFLOPS.
The US has five of the top 10 and China has two. In June 2018, all of the supercomputers on the list combined broke the 1 exaFLOPS mark. In 1960 Sperry Rand built the Livermore Atomic Research Computer, today considered among the first supercomputers, for the US Navy Research and Development Center; it still used high-speed drum memory, rather than the newly emerging disk drive technology. Among the first supercomputers was the IBM 7030 Stretch, built by IBM for the Los Alamos National Laboratory, which in 1955 had requested a computer 100 times faster than any existing computer. The IBM 7030 used transistors, magnetic core memory, pipelined instructions and prefetched data through a memory controller, and included pioneering random access disk drives. The IBM 7030 was completed in 1961 and, despite not meeting the challenge of a hundredfold increase in performance, was purchased by the Los Alamos National Laboratory. Customers in England and France also bought the computer, and it became the basis for the IBM 7950 Harvest, a supercomputer built for cryptanalysis.
The third pioneering supercomputer project in the early 1960s was the Atlas at the University of Manchester, built by a team led by Tom Kilburn. He designed the Atlas to have memory space for up to a million words of 48 bits, but because magnetic storage with such a capacity was unaffordable, the actual core memory of the Atlas was only 16,000 words, with a drum providing memory for a further 96,000 words; the Atlas operating system swapped data in the form of pages between the magnetic core and the drum. The Atlas operating system also introduced time-sharing to supercomputing, so that more than one program could be executed on the supercomputer at any one time. Atlas was a joint venture between Ferranti and the University of Manchester and was designed to operate at processing speeds approaching one microsecond per instruction, about one million instructions per second. The CDC 6600, designed by Seymour Cray, was finished in 1964 and marked the transition from germanium to silicon transistors. Silicon transistors could run faster, and the overheating problem was solved by introducing refrigeration into the supercomputer design.
Thus the CDC 6600 became the fastest computer in the world. Given that the 6600 outperformed all the other contemporary computers by about 10 times, it was dubbed a supercomputer and defined the supercomputing market, with one hundred computers sold at $8 million each. Cray left CDC in 1972 to form Cray Research. Four years after leaving CDC, Cray delivered the 80 MHz Cray-1 in 1976, which became one of the most successful supercomputers in history. The Cray-2 was released in 1985. It had eight central processing units and liquid cooling, and the electronics coolant liquid Fluorinert was pumped through the supercomputer architecture; it was the world's second fastest after the M-13 supercomputer in Moscow. The only computer to challenge the Cray-1's performance in the 1970s was the ILLIAC IV; this machine was the first realized example of a true massively parallel computer, in which many processors worked together to solve different parts of a single larger problem. In contrast with the vector systems, which were designed to run a single stream of data as quickly as possible, in this concept the computer instead feeds separate parts of the data to entirely different processors and then recombines the results.
Mainframe computers or mainframes are computers used by large organizations for critical applications. They are larger and have more processing power than some other classes of computers: minicomputers, servers and personal computers. The term referred to the large cabinets called "main frames" that housed the central processing unit and main memory of early computers, and was used to distinguish high-end commercial machines from less powerful units. Most large-scale computer system architectures were established in the 1960s, but continue to evolve. Mainframe computers are used as servers. Modern mainframe design is characterized less by raw computational speed and more by: redundant internal engineering resulting in high reliability and security; extensive input-output facilities with the ability to offload to separate engines; strict backward compatibility with older software; high hardware and computational utilization rates through virtualization to support massive throughput; and hot-swapping of hardware, such as processors and memory.
Their high stability and reliability enable these machines to run uninterrupted for long periods of time, with mean time between failures measured in decades. Mainframes have high availability, one of the primary reasons for their longevity, since they are used in applications where downtime would be costly or catastrophic; the term reliability, availability and serviceability (RAS) is a defining characteristic of mainframe computers. Proper planning and implementation are required to realize these features. In addition, mainframes are more secure than other computer types: the NIST vulnerabilities database, US-CERT, rates traditional mainframes such as IBM Z, Unisys Dorado and Unisys Libra as among the most secure, with vulnerabilities in the low single digits as compared with thousands for Windows, UNIX and Linux. Software upgrades usually require setting up the operating system or portions thereof, and are non-disruptive only when using virtualizing facilities such as IBM z/OS and Parallel Sysplex, or Unisys XPCL, which support workload sharing so that one system can take over another's application while it is being refreshed.
In the late 1950s, mainframes had only a rudimentary interactive interface and used sets of punched cards, paper tape, or magnetic tape to transfer data and programs. They operated in batch mode to support back office functions such as payroll and customer billing, most of which were based on repeated tape-based sorting and merging operations followed by line printing to preprinted continuous stationery. When interactive user terminals were introduced, they were used almost exclusively for applications rather than program development. Typewriter and Teletype devices were common control consoles for system operators through the early 1970s, although ultimately supplanted by keyboard/display devices. By the early 1970s, many mainframes acquired interactive user terminals operating as timesharing computers, supporting hundreds of users along with batch processing. Users gained access through keyboard/typewriter terminals and specialized text terminal CRT displays with integral keyboards, or later from personal computers equipped with terminal emulation software.
By the 1980s, many mainframes supported graphic display terminals and terminal emulation, but not graphical user interfaces. This form of end-user computing became obsolete in the 1990s due to the advent of personal computers provided with GUIs. After 2000, modern mainframes partially or entirely phased out classic "green screen" and color display terminal access for end-users in favour of Web-style user interfaces. The infrastructure requirements were drastically reduced during the mid-1990s, when CMOS mainframe designs replaced the older bipolar technology. IBM claimed that its newer mainframes reduced data center energy costs for power and cooling and reduced physical space requirements compared to server farms. Modern mainframes can run multiple different instances of operating systems at the same time; this technique of virtual machines allows applications to run as if they were on physically distinct computers. In this role, a single mainframe can replace higher-functioning hardware services available to conventional servers.
While mainframes pioneered this capability, virtualization is now available on most families of computer systems, though not always to the same degree or level of sophistication. Mainframes can add or hot swap system capacity without disrupting system function, with specificity and granularity to a level of sophistication not available with most server solutions. Modern mainframes, notably the IBM zSeries, System z9 and System z10 servers, offer two levels of virtualization: logical partitions (LPARs) and virtual machines. Many mainframe customers run two machines: one in their primary data center and one in their backup data center (fully active/active, or on standby) in case there is a catastrophe affecting the first building. Test, development and production workloads for applications and databases can run on a single machine, except for large demands where the capacity of one machine might be limiting. Such a two-mainframe installation can support continuous business service, avoiding both planned and unplanned outages.
In practice many customers use multiple mainframes linked either by Parallel Sysplex and shared DASD, or with shared, geographically dispersed storage provided by EMC or Hitachi.
Creative Technology Ltd. is a global technology company headquartered in Jurong East, Singapore, with additional offices in Silicon Valley, Dublin and Shanghai. The principal activities of the company and its subsidiaries consist of the design and distribution of digitized sound and video boards and related multimedia and personal digital entertainment products; it partners with mainboard manufacturers and laptop brands to embed its Sound Blaster technology in their products. Creative Technology was founded in 1981 by childhood friends and Ngee Ann Polytechnic schoolmates Sim Wong Hoo and Ng Kai Wa. Originally a computer repair shop in Pearl's Centre in Chinatown, the company developed an add-on memory board for the Apple II computer. Creative then spent $500,000 developing the Cubic CT, an IBM-compatible PC adapted for the Chinese language and featuring multimedia capabilities like enhanced color graphics and a built-in audio board capable of producing speech and melodies. With a lack of demand for multilingual computers and few multimedia software applications available, the Cubic was a commercial failure.
Shifting focus from language to music, Creative developed the Creative Music System, a PC add-on card. Sim established Creative Labs, Inc. in the United States' Silicon Valley and convinced software developers to support the sound card, renamed Game Blaster and marketed by RadioShack's Tandy division. The success of this audio interface led to the development of the standalone Sound Blaster sound card, introduced at the 1989 COMDEX show just as the multimedia PC market, fueled by Intel's 386 processor and Windows 3.0, took off. The success of Sound Blaster helped grow Creative's revenue from $5.4 million USD in 1989 to $658 million USD in 1994. In 1993, the year after Creative's 1992 initial public offering, former Ashton-Tate CEO Ed Esber joined Creative Labs as CEO to assemble a management team to support the company's rapid growth. Esber brought in a team of US executives, including Rich Buchanan, Gail Pomerantz and Rich Sorkin; this group played key roles in reversing a brutal market share decline caused by intense competition from Mediavision at the high end and Aztech at the low end.
Sorkin, in particular, strengthened the company's brand position through crisp licensing and an aggressive defense of Creative's intellectual property positions while working to shorten product development cycles. At the same time, Esber and the original founders of the company had differences of opinion on the strategy and positioning of the company. Esber exited in 1995, followed by Buchanan and Pomerantz. Following Esber's departure, Sorkin was promoted to General Manager of Audio and Communications Products and Executive Vice-President of Business Development and Corporate Investments, before leaving Creative in 1996 to run Elon Musk's first startup and internet pioneer Zip2. By 1996, Creative's revenues had peaked at $1.6 billion USD. With pioneering investments in VOIP and media streaming, Creative was well-positioned to take advantage of the internet era, but it ventured into the CD-ROM market and was forced to write off nearly $100 million USD in inventory when that market collapsed due to a flood of cheaper alternatives.
The firm had maintained a strong foothold in the ISA PC audio market until 14 July 1997, when Aureal Semiconductor entered the soundcard market with their competitive PCI AU8820 Vortex 3D sound technology. The firm at the time was developing its own in-house PCI audio cards but was finding little success adapting to the PCI standard. In January 1998, in order to obtain a working PCI audio technology, the firm acquired Ensoniq for US$77 million. On March 5, 1998 the firm sued Aureal with patent infringement claims over a MIDI caching technology held by E-mu Systems. Aureal filed a counterclaim stating that the firm was intentionally interfering with its business prospects, had defamed and commercially disparaged it, engaged in unfair competition with intent to slow down Aureal's sales and acted fraudulently; the suit had come only days after Aureal gained a fair market share with the AU8820 Vortex1. In August 1998 the firm released the Sound Blaster Live!, its first sound card developed for the PCI bus, in order to compete with the upcoming Aureal AU8830 Vortex2 sound chip.
Aureal at this time was distributing fliers comparing their new AU8830 chips to the now-shipping Sound Blaster Live!. The specifications within these fliers comparing the AU8830 to the Sound Blaster Live!'s EMU10K1 chip sparked another flurry of lawsuits against Aureal, this time claiming Aureal had misrepresented the Sound Blaster Live!'s capabilities. In December 1999, after numerous lawsuits, Aureal won a favourable ruling but went bankrupt as a result of legal costs and their investors pulling out; their assets were acquired by Creative through the bankruptcy court in September 2000 for US$32 million. The firm had in effect removed their only major direct competitor in the 3D gaming audio market, excluding their acquisition of Sensaura. In April 1999, the firm launched the NOMAD line of digital audio players, which would be followed by the MuVo and ZEN series of portable media players. In November 2004, the firm announced a $100 million marketing campaign to promote their digital audio products, including the ZEN range of MP3 players.
The firm applied for U.S. Patent 6,928,433 on 5 January 2001 and was awarded the patent on 9 August 2005; the ZEN Patent was awarded to the firm for its invention of a user interface for portable media players. This opened the way for potential legal action against other competing players, and the firm took legal action against Apple in May 2006. In August 2006, Apple agreed to pay the firm US$100 million to settle the dispute.
Rule of thumb
The English phrase rule of thumb refers to a principle with broad application, not intended to be strictly accurate or reliable for every situation. It refers to an easily learned and easily applied procedure or standard, based on practical experience rather than theory; this usage of the phrase can be traced back to the seventeenth century and has been associated with various trades where quantities were measured by comparison to the width or length of a thumb. A modern folk etymology holds that the phrase derives from the maximum width of a stick allowed for wife-beating under English law, a belief that may have originated in a rumored statement by the eighteenth-century English judge Sir Francis Buller; the rumor produced satirical cartoons at Buller's expense, but no such law ever existed. The English jurist Sir William Blackstone wrote in his Commentaries on the Laws of England of an "old law" that once allowed "moderate" beatings by husbands, but did not mention thumbs or any specific implements. While wife beating has been outlawed for centuries in England and the United States, it continued in practice; the exact phrase rule of thumb first became associated with domestic abuse in the 1970s, after which the spurious legal definition was cited as factual in a number of law journals, and the U.
S. Commission on Civil Rights published a report on domestic abuse titled "Under the Rule of Thumb" in 1982. In English, rule of thumb refers to an approximate method for doing something, based on practical experience rather than theory; the exact origin of the phrase is uncertain. Its earliest appearance in print comes from a posthumously published collection of sermons by Scottish preacher James Durham: "Many profest Christians are like to foolish builders, who build by guess, by rule of thumb, not by Square and Rule"; the phrase is found in Sir William Hope's The Compleat Fencing Master, 1692: "What he doth, he doth by rule of Thumb, not by Art". James Kelly's The Complete Collection of Scottish Proverbs, 1721, includes: "No Rule so good as Rule of Thumb, if it hit", meaning a practical approximation; the width of the thumb, or "thumb's breadth", was used as the equivalent of an inch in the cloth trade. The thumb has been used in brewing beer, to gauge the heat of the brewing vat. Ebenezer Cobham Brewer writes that rule of thumb means a "rough measurement".
He says that "Ladies measure yard lengths by their thumb. Indeed, the expression'sixteen nails make a yard' seems to point to the thumb-nail as a standard" and that "Countrymen always measure by their thumb". According to Phrasefinder, "The phrase joins the whole nine yards as one that derives from some form of measurement but, unlikely to be definitively pinned down". A modern folk etymology relates the phrase to domestic violence via an alleged rule under English law that allowed for wife beating provided the implement used was a rod or stick no thicker than a man's thumb. While wife beating has been outlawed in England for centuries, enforcement of the law was inconsistent, wife beating did continue. However, such a rule of thumb was never codified in law. English jurist William Blackstone wrote in the late 1700s in his Commentaries on the Laws of England that by an "old law", a husband had been justified in using "moderate correction" against his wife, but was barred from inflicting serious violence.
According to Blackstone, by the late 1600s this custom was in doubt, and a woman was by then allowed "security of the peace" against an abusive husband. Citing Blackstone, the twentieth-century legal scholar William L. Prosser wrote that there was "probably no truth to the legend" that a husband was allowed to beat his wife "with a stick no thicker than his thumb". The association between the thumb and implements of domestic violence can be traced to the year 1782, when the English judge Sir Francis Buller was ridiculed for purportedly stating that a husband could beat his wife, provided he used a stick no wider than his thumb. There is no record of Buller making such a statement. In the following century, several court rulings in the United States referred to a supposed common-law doctrine that the judges believed had once allowed wife beating with an implement smaller than a thumb. None of these courts endorsed such a rule, though all allowed for some degree of wife beating so long as it did not result in serious injury. An 1824 court ruling in Mississippi stated that a man was entitled to enforce "domestic discipline" by striking his wife with a whip or stick no wider than the judge's thumb.
In a case in North Carolina, the defendant was found to have struck his wife "with a switch about the size of his fingers". The judgement was upheld by the state supreme court, although the judge stated: Nor is it true that a husband has a right to whip his wife, and if he had, it is not seen how the thumb is the standard of size for the instrument which he may use, as some of the old authorities have said. The standard is the effect produced, and not the manner of producing it, or the instrument used.
Microsoft Azure is a cloud computing service created by Microsoft for building, testing and managing applications and services through Microsoft-managed data centers. It provides software as a service, platform as a service and infrastructure as a service and supports many different programming languages and frameworks, including both Microsoft-specific and third-party software and systems. Azure was announced in October 2008 under the codename "Project Red Dog", released on February 1, 2010, as "Windows Azure", and renamed "Microsoft Azure" on March 25, 2014. Microsoft lists over 600 Azure services, of which some are covered below: Virtual machines, infrastructure as a service allowing users to launch general-purpose Microsoft Windows and Linux virtual machines, as well as preconfigured machine images for popular software packages. App services, platform as a service environment letting developers publish and manage websites. Websites, high density hosting of websites that allows developers to build sites using ASP.
NET, PHP, Node.js, or Python and can be deployed using FTP, Mercurial, Team Foundation Server or uploaded through the user portal. This feature was announced in preview form in June 2012 at the Meet Microsoft Azure event. Customers can create websites in PHP, ASP.NET, Node.js, or Python, or select from several open source applications from a gallery to deploy. This comprises one aspect of the platform as a service offerings for the Microsoft Azure Platform; it was renamed to Web Apps in April 2015. WebJobs, applications that can be deployed to an App Service environment to implement background processing that can be invoked on a schedule, on demand, or run continuously; the Blob and Queue services can be used to communicate between WebApps and WebJobs and to provide state. Mobile Engagement collects real-time analytics and provides push notifications to mobile devices. HockeyApp can be used to develop and beta-test mobile apps. Storage Services provides SDK APIs for storing and accessing data on the cloud.
Table Service lets programs store structured text in partitioned collections of entities that are accessed by partition key and primary key; it is a NoSQL non-relational database. Blob Service allows programs to store unstructured text and binary data as blobs that can be accessed by an HTTP(S) path. Blob service also provides security mechanisms to control access to data. Queue Service lets programs communicate asynchronously by message using queues. File Service allows access to data on the cloud using the REST APIs or the SMB protocol. Azure Search provides a subset of OData's structured filters using REST or SDK APIs. Cosmos DB is a NoSQL database service that implements a subset of the SQL SELECT statement on JSON documents. Redis Cache is a managed implementation of Redis. StorSimple manages storage tasks between on-premises devices and cloud storage. SQL Database, formerly known as SQL Azure Database, works to create and extend applications into the cloud using Microsoft SQL Server technology; it integrates with Active Directory and Microsoft System Center and Hadoop.
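As a rough illustration of how the Blob and Queue services described above are reached through the SDK APIs, the following Python sketch uses the azure-storage-blob and azure-storage-queue packages; the connection string, container name and queue name are placeholders invented for the example:

```python
# Minimal sketch of Azure Blob and Queue access via the v12 Python SDKs
# (azure-storage-blob, azure-storage-queue). The connection string and
# resource names are placeholders, not real credentials.
from azure.storage.blob import BlobServiceClient
from azure.storage.queue import QueueClient

CONN = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;"

# Blob Service: unstructured text/binary data addressable by an HTTP path,
# e.g. https://<account>.blob.core.windows.net/reports/2018/summary.txt
blob_service = BlobServiceClient.from_connection_string(CONN)
container = blob_service.create_container("reports")  # assumes it is new
container.upload_blob(name="2018/summary.txt", data=b"quarterly summary")
text = container.download_blob("2018/summary.txt").readall()

# Queue Service: asynchronous communication between programs by message.
queue = QueueClient.from_connection_string(CONN, queue_name="jobs")
queue.create_queue()  # assumes the queue does not exist yet
queue.send_message("process 2018/summary.txt")
for msg in queue.receive_messages():
    print(msg.content)         # handle the work item
    queue.delete_message(msg)  # remove it once processed
```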
SQL Data Warehouse is a data warehousing service designed to handle computational and data intensive queries on datasets exceeding 1 TB. Azure Data Factory is a data integration service that allows creation of data-driven workflows in the cloud for orchestrating and automating data movement and data transformation. Azure Data Lake is a scalable data storage and analytics service for big-data analytics workloads that require developers to run massively parallel queries. Azure HDInsight is a big data service that deploys Hortonworks Hadoop on Microsoft Azure and supports the creation of Hadoop clusters using Linux with Ubuntu. Azure Stream Analytics is a serverless scalable event processing engine that enables users to develop and run real-time analytics on multiple streams of data from sources such as devices, web sites, social media and other applications. The Microsoft Azure Service Bus allows applications running on Azure premises or off-premises devices to communicate with Azure. This helps to build reliable applications in a service-oriented architecture.
The Azure service bus supports four different types of communication mechanisms: Event Hubs, which provide event and telemetry ingress to the cloud at massive scale, with low latency and high reliability; for example, an event hub can be used to track data from cell phones, such as a GPS location coordinate, in real time. Queues, which allow one-directional communication: a sender application would send the message to the service bus queue, and a receiver would read from the queue; though there can be multiple readers for the queue, only one would process a single message. Topics, which provide one-directional communication using a subscriber pattern; a topic is similar to a queue, but each subscriber will receive a copy of the message sent to the topic. Optionally the subscriber can filter out messages based on specific criteria defined by the subscriber. Relays, which provide bi-directional communication; unlike queues and topics, a relay doesn't store in-flight messages in its own memory, but simply passes them on to the destination application.
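To make the queue semantics above concrete, here is an illustrative sketch using the azure-servicebus Python package (v7-style API); the connection string and queue name are placeholders, and a topic would differ only in that every subscription receives its own copy of each message:

```python
# Illustrative one-directional Service Bus queue messaging with the
# azure-servicebus package. Connection string and queue name are
# placeholders invented for this example.
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN = "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=..."

with ServiceBusClient.from_connection_string(CONN) as client:
    # Sender side: put a message (e.g. a GPS fix from a phone) on the queue.
    with client.get_queue_sender("telemetry") as sender:
        sender.send_messages(ServiceBusMessage("gps:1.3521,103.8198"))

    # Receiver side: even with multiple competing receivers, only the one
    # that completes a given message actually consumes it.
    with client.get_queue_receiver("telemetry", max_wait_time=5) as receiver:
        for msg in receiver:
            print(str(msg))
            receiver.complete_message(msg)
```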
Media Services, a PaaS offering that can be used for content protection, streaming, or analytics. Content Delivery Network, a global content delivery network for audio, video, applications, images and other static files; it can be used to cache static assets of websites geographically closer to users to increase performance. The network can be managed by a REST-based HTTP API. Azure has 54 point of presence locations worldwide as of August 2018.
Internet of things
The Internet of things (IoT) is the extension of Internet connectivity into physical devices and everyday objects. Embedded with electronics, Internet connectivity and other forms of hardware, these devices can communicate and interact with others over the Internet, and they can be remotely monitored and controlled. The definition of the Internet of things has evolved due to the convergence of multiple technologies: real-time analytics, machine learning, commodity sensors and embedded systems. Traditional fields of embedded systems, wireless sensor networks, control systems and others all contribute to enabling the Internet of things. In the consumer market, IoT technology is most synonymous with products pertaining to the concept of the "smart home", covering devices and appliances that support one or more common ecosystems and can be controlled via devices associated with that ecosystem, such as smartphones and smart speakers. The IoT concept has faced prominent criticism with regard to privacy and security concerns related to these devices and their pervasive presence.
The concept of a network of smart devices was discussed as early as 1982, with a modified Coke vending machine at Carnegie Mellon University becoming the first Internet-connected appliance, able to report its inventory and whether newly loaded drinks were cold or not. Mark Weiser's 1991 paper on ubiquitous computing, "The Computer of the 21st Century", as well as academic venues such as UbiComp and PerCom, produced the contemporary vision of the IoT. In 1994, Reza Raji described the concept in IEEE Spectrum as "[moving] small packets of data to a large set of nodes, so as to integrate and automate everything from home appliances to entire factories". Between 1993 and 1997, several companies proposed solutions like Microsoft's at Work or Novell's NEST. The field gained momentum when Bill Joy envisioned device-to-device communication as a part of his "Six Webs" framework, presented at the World Economic Forum at Davos in 1999. The term "Internet of things" was coined by Kevin Ashton of Procter & Gamble, later MIT's Auto-ID Center, in 1999, though he prefers the phrase "Internet for things".
At that point, he viewed radio-frequency identification (RFID) as essential to the Internet of things, which would allow computers to manage all individual things. A research article mentioning the Internet of things was submitted to the conference for Nordic Researchers in Norway in June 2002, preceded by an article published in Finnish in January 2002. The implementation described there was developed by Kary Främling and his team at Helsinki University of Technology and more closely matches the modern one, i.e. an information system infrastructure for implementing smart, connected objects. Defining the Internet of things as "simply the point in time when more 'things or objects' were connected to the Internet than people", Cisco Systems estimated that the IoT was "born" between 2008 and 2009, with the things/people ratio growing from 0.08 in 2003 to 1.84 in 2010. The extensive set of applications for IoT devices is divided into consumer, commercial and infrastructure spaces. A growing portion of IoT devices are created for consumer use, including connected vehicles, home automation, wearable technology, connected health and appliances with remote monitoring capabilities.
IoT devices are a part of the larger concept of home automation, which can include lighting, air conditioning and security systems. Long term benefits could include energy savings by automatically ensuring lights and electronics are turned off. A smart home or automated home could be based on a platform or hub that controls smart devices and appliances. For instance, using Apple's HomeKit, manufacturers can have their home products and accessories controlled by an application in iOS devices such as the iPhone and the Apple Watch; this could be a dedicated app or iOS native applications such as Siri. This can be demonstrated in the case of Lenovo's Smart Home Essentials, a line of smart home devices that are controlled through Apple's Home app or Siri without the need for a Wi-Fi bridge. There are also dedicated smart home hubs that are offered as standalone platforms to connect different smart home products; these include the Amazon Echo, Google Home, Apple's HomePod and Samsung's SmartThings Hub. One key application of a smart home is to provide assistance for those with disabilities and elderly individuals.
These home systems use assistive technology to accommodate an owner's specific disabilities. Voice control can assist users with sight and mobility limitations, while alert systems can be connected directly to cochlear implants worn by hearing impaired users; they can also be equipped with additional safety features. These features can include sensors that monitor for medical emergencies such as seizures. Smart home technology applied in this way can provide users with more freedom and a higher quality of life. The term "Enterprise IoT" refers to devices used in business and corporate settings; by 2019, it is estimated that the EIoT will account for nearly 40%, or 9.1 billion, devices. The Internet of Medical Things is an application of the IoT for medical and health related purposes, including data collection and analysis for research and monitoring. This "Smart Healthcare", as it is called, has led to the creation of a digitized healthcare system, connecting available medical resources and healthcare services. IoT devices can be used to enable remote health emergency notification systems.
These health monitoring devices can range from blood pressure and heart rate monitors to advanced devices capable of monitoring specialized implants.
A personal computer is a multi-purpose computer whose size and price make it feasible for individual use. Personal computers are intended to be operated directly by an end user, rather than by a computer expert or technician. Unlike large, costly minicomputers and mainframes, time-sharing by many people at the same time is not used with personal computers. Institutional or corporate computer owners in the 1960s had to write their own programs to do any useful work with the machines. While personal computer users may develop their own applications, usually these systems run commercial software, free-of-charge software or free and open-source software, provided in ready-to-run form. Software for personal computers is typically developed and distributed independently from the hardware or operating system manufacturers. Many personal computer users no longer need to write their own programs to make any use of a personal computer, although end-user programming is still feasible. This contrasts with mobile systems, where software is often only available through a manufacturer-supported channel and end-user program development may be discouraged by lack of support by the manufacturer.
Since the early 1990s, Microsoft operating systems and Intel hardware have dominated much of the personal computer market, first with MS-DOS and then with Microsoft Windows. Alternatives to Microsoft's Windows operating systems occupy a minority share of the industry; these include free and open-source Unix-like operating systems such as Linux. Advanced Micro Devices provides the main alternative to Intel's processors. The advent of personal computers and the concurrent Digital Revolution have affected the lives of people in all countries. "PC" is an initialism for "personal computer". The IBM Personal Computer incorporated the designation in its model name, and it is sometimes useful to distinguish personal computers of the "IBM Personal Computer" family from personal computers made by other manufacturers. For example, "PC" is used in contrast with "Mac", an Apple Macintosh computer. Since none of these Apple products were mainframes or time-sharing systems, they were all "personal computers" and not "PC" computers.
The "brain" may one day come down to our level and help with our income-tax and book-keeping calculations. But this is speculation and there is no sign of it so far. In the history of computing, early experimental machines could be operated by a single attendant. For example, ENIAC which became operational in 1946 could be run by a single, albeit trained, person; this mode pre-dated the batch programming, or time-sharing modes with multiple users connected through terminals to mainframe computers. Computers intended for laboratory, instrumentation, or engineering purposes were built, could be operated by one person in an interactive fashion. Examples include such systems as the Bendix G15 and LGP-30of 1956, the Programma 101 introduced in 1964, the Soviet MIR series of computers developed from 1965 to 1969. By the early 1970s, people in academic or research institutions had the opportunity for single-person use of a computer system in interactive mode for extended durations, although these systems would still have been too expensive to be owned by a single person.
In what was to be called the Mother of All Demos, SRI researcher Douglas Engelbart in 1968 gave a preview of what would become the staples of daily working life in the 21st century: e-mail, word processing, video conferencing and the mouse. The demonstration required technical support staff and a mainframe time-sharing computer that were far too costly for individual business use at the time. The development of the microprocessor, with widespread commercial availability starting in the mid-1970s, made computers cheap enough for small businesses and individuals to own. Early personal computers, generally called microcomputers, were sold in kit form and in limited volumes, and were of interest mostly to hobbyists and technicians. Minimal programming was done with toggle switches to enter instructions, and output was provided by front panel lamps. Practical use required adding peripherals such as keyboards, computer displays, disk drives and printers. Micral N was the earliest commercial, non-kit microcomputer based on a microprocessor, the Intel 8008.
It was built starting in 1972, and a few hundred units were sold. It had been preceded by the Datapoint 2200 in 1970, for which the Intel 8008 had been commissioned, though not accepted for use; the CPU design implemented in the Datapoint 2200 became the basis for the x86 architecture used in the original IBM PC and its descendants. In 1973, the IBM Los Gatos Scientific Center developed a portable computer prototype called SCAMP based on the IBM PALM processor, with a Philips compact cassette drive, small CRT and full-function keyboard. SCAMP emulated an IBM 1130 minicomputer in order to run APL/1130. In 1973, APL was generally available only on mainframe computers, and most desktop sized microcomputers such as the Wang 2200 or HP 9800 offered only BASIC. Because SCAMP was the first to emulate APL/1130 performance on a portable, single user computer, PC Magazine in 1983 designated SCAMP a "revolutionary concept" and "the world's first personal computer". This seminal, single user portable computer now resides in the Smithsonian Institution, Washington, D.
C. Successful demonstrations of the 1973 SCAMP prototype led to the IBM 5100 portable microcomputer, launched in 1975 with the ability to be programmed in both APL and BASIC for engineers, analysts and other business problem-solvers. In the late 1960s such a machine would have been nearly as large as two desks and would have weighed about half a ton.