Pentium is a brand used for a series of x86 architecture-compatible microprocessors produced by Intel since 1993. In their form as of November 2011, Pentium processors are considered entry-level products that Intel rates as "two stars", meaning that they are above the low-end Atom and Celeron series but below the faster Core i3, i5, i7 and i9 and the workstation Xeon series. As of 2017, Pentium processors have little more than their name in common with earlier Pentiums, which were Intel's flagship processors for over a decade until the introduction of the Intel Core line in 2006. They are based either on the architecture used in Atom processors or on that of Core processors. In the case of Atom architectures, Pentiums are the highest-performance implementations of the architecture. Pentium processors with Core architectures prior to 2017 were distinguished from the faster, higher-end i-series processors by lower clock rates and by the disabling of some features, such as hyper-threading and sometimes the L3 cache. The name Pentium is derived from the Greek pente, meaning "five", a reference to the prior numeric naming convention of Intel's 80x86 processors, with the Latin ending -ium.
In 2017, Intel split Pentium into two line-ups: Pentium Silver, aimed at low-power devices and sharing an architecture with Atom and Celeron, and Pentium Gold, aimed at entry-level desktops and using existing architectures such as Kaby Lake or Coffee Lake. During development, Intel identifies processors with codenames, such as Prescott, Coppermine, Klamath, or Deschutes; these often remain in use even after the processors are given official names at launch. The original Pentium-branded CPUs were expected to be named 586 or i586, to follow the naming convention of prior generations. However, because the firm wanted to prevent competitors from branding their processors with similar names, Intel filed a trademark application on the name in the United States, but was denied because a series of numbers was considered to lack trademark distinctiveness. Following Intel's prior series of 8086, 80186, 80286, 80386 and 80486 microprocessors, the firm's first P5-based microprocessor was released as the original Intel Pentium on March 22, 1993.
Marketing firm Lexicon Branding was hired to coin a name for the new processor. The suffix -ium was chosen as it could connote a fundamental ingredient of a computer, like a chemical element, while the prefix pent- could refer to the fifth generation of x86. Due to its success, the Pentium brand would continue through several generations of high-end processors. In 2006, the name disappeared from Intel's technology roadmaps, only to re-emerge in 2007. In 1998, Intel introduced the Celeron brand for low-priced microprocessors. With the 2006 introduction of the Intel Core brand as the company's new flagship line of processors, the Pentium series was to be discontinued. However, due to demand for mid-range dual-core processors, the Pentium brand was repurposed as Intel's mid-range processor series, between the Celeron and Core series, continuing with the Pentium Dual-Core line. In 2009, the "Dual-Core" suffix was dropped, and new x86 microprocessors started carrying the plain Pentium name again.
In 2014, Intel released the Pentium 20th Anniversary Edition to mark the 20th anniversary of the Pentium brand. These processors are unlocked and overclockable. In 2017, Intel split Pentium into two line-ups: Pentium Silver, aimed at low-power devices and sharing an architecture with Atom and Celeron, and Pentium Gold, aimed at entry-level desktops and using existing architectures such as Kaby Lake or Coffee Lake. The original Intel P5 (Pentium) and Pentium MMX processors were the superscalar follow-on to the 80486 processor and were marketed from 1993 to 1999; some versions of these were available as Pentium OverDrive. In parallel with the P5 microarchitecture, Intel developed the P6 microarchitecture and started marketing it as the Pentium Pro for the high-end market in 1995; it introduced out-of-order execution and an integrated second-level cache on a dual-chip processor package. The second P6 generation replaced the original P5 with the Pentium II and rebranded the high-end version as Pentium II Xeon.
It was followed by a third version named the Pentium III and Pentium III Xeon respectively. The Pentium II line added the MMX instructions that were present in the Pentium MMX. Versions of these processors for the laptop market were named Mobile Pentium II and Mobile Pentium III; later mobile versions were named Pentium III-M. Starting with the Pentium II, the Celeron brand was used for low-end versions of most Pentium processors with a reduced feature set, such as a smaller cache or missing power-management features. In 2000, Intel introduced a new microarchitecture named NetBurst, with a much longer pipeline enabling higher clock frequencies than the P6-based processors; these were named Pentium 4, and the high-end versions have since been named Xeon. As with the Pentium III, there are both Mobile Pentium 4 and Pentium 4 M processors for the laptop market, with Pentium 4 M denoting the more power-efficient versions. Enthusiast versions of the Pentium 4 with the highest clock rates were named Pentium 4 Extreme Edition. The Pentium D was the first multi-core Pentium, integrating two Pentium 4 chips in one package; it was also available as the enthusiast Pentium Extreme Edition.
In 2003, Intel introduced a new processor based on the P6 microarchitecture named Pentium M, which was much more power-efficient than the Mobile Pentium 4, Pentium 4 M and Pentium III-M. Dual-core versions of the Pentium M were developed under the code name Yonah and sold under the marketing names Core Duo and Pentium Dual-Core. Unlike Pentium D, it
Miguel Illescas Córdoba is a Spanish chess grandmaster. Illescas was a skilled player as a youngster and became junior champion of Catalonia at the age of 12. Although he trained as a computer scientist, chess remained his real passion, and continued progress brought him an International Master title in 1986, followed by the Grandmaster title in 1988. Illescas became Spain's strongest and most consistent player over many years, registering his country's highest Elo rating in 1993, which made him world number 26 at the time. His 1993 match with Ljubomir Ljubojević ended 4-4, with all eight games drawn. Around this time, he established his own chess school, La Escuela de Ajedrez de Miguel Illescas, or EDAMI for short. The school is flexibly structured and allows students to learn at sessions held in schools around Barcelona, on the internet, or in private lessons. EDAMI also acts as a chess supplier and, not unlike the London Chess Centre, provides a shop, publishes a regular chess magazine and arranges events such as tournaments and simultaneous displays.
As a young man, Illescas' tournament results were noteworthy, and he continued to win events through the latter part of the 1990s. More recently, he finished first equal at Pamplona in 2003, this time sharing victory with Luke McShane and Emil Sutovsky. He has won the Spanish national championship in 1995, 1998, 1999, 2001, 2004, 2005, 2007 and 2010. In team competition, he has represented his country at many Olympiads from 1986 onwards and won an individual bronze medal at Turin in 2006. In 1997, he was appointed to the IBM-led team that prepared the supercomputer Deep Blue in the build-up to a second match with Garry Kasparov, working alongside Joel Benjamin, Nick DeFirmian and John Fedorowicz. The project and the match result were an unreserved success, and this undoubtedly enhanced his reputation as an analyst and team player, and as someone who understood the psyche of the incumbent world champion better than most; he was therefore a logical choice to join Kramnik as a second for his world championship clash with Kasparov in 2000.
Kramnik became champion, and the winning partnership was restored for the 2004 defence against Peter Leko and the 2006 reunification match with Topalov. Illescas sees himself as part coach and part guru in these situations, involving himself not just with the chess analysis but with the thought processes and personality traits of the opponent. In 2004 he was awarded the title of FIDE Senior Trainer. As a sideline, he has developed an interest in Fischer Random chess and laid down a challenge to Bobby Fischer to play a match with him. With the arrival of super-grandmaster Alexei Shirov in 1994 and the subsequent emergence of Vallejo Pons as a world-class grandmaster, Illescas may no longer be the Spanish number one, but he remains a tough and respected competitor.
A personal computer is a multi-purpose computer whose size and price make it feasible for individual use. Personal computers are intended to be operated directly by an end user, rather than by a computer expert or technician. Unlike large, costly minicomputers and mainframes, personal computers are not time-shared by many people at the same time. Institutional or corporate computer owners in the 1960s had to write their own programs to do any useful work with the machines. While personal computer users may develop their own applications, these systems usually run commercial software, free-of-charge software or free and open-source software, provided in ready-to-run form. Software for personal computers is developed and distributed independently from the hardware or operating system manufacturers. Many personal computer users no longer need to write their own programs to make any use of a personal computer, although end-user programming is still feasible. This contrasts with mobile systems, where software is often only available through a manufacturer-supported channel and end-user program development may be discouraged by lack of support from the manufacturer.
Since the early 1990s, Microsoft operating systems and Intel hardware have dominated much of the personal computer market, first with MS-DOS and then with Microsoft Windows. Alternatives to Microsoft's Windows operating systems occupy a minority share of the industry; these include free and open-source Unix-like operating systems such as Linux. Advanced Micro Devices provides the main alternative to Intel's processors. The advent of personal computers and the concurrent Digital Revolution have affected the lives of people in all countries. "PC" is an initialism for "personal computer". The IBM Personal Computer incorporated the designation in its model name, and it is sometimes useful to distinguish personal computers of the "IBM Personal Computer" family from personal computers made by other manufacturers. For example, "PC" is used in contrast with "Mac", an Apple Macintosh computer. Since such Apple products were neither mainframes nor time-sharing systems, they were all "personal computers" but not "PC" computers in this sense.
The "brain" may one day come down to our level and help with our income-tax and book-keeping calculations. But this is speculation and there is no sign of it so far. In the history of computing, early experimental machines could be operated by a single attendant. For example, ENIAC which became operational in 1946 could be run by a single, albeit trained, person; this mode pre-dated the batch programming, or time-sharing modes with multiple users connected through terminals to mainframe computers. Computers intended for laboratory, instrumentation, or engineering purposes were built, could be operated by one person in an interactive fashion. Examples include such systems as the Bendix G15 and LGP-30of 1956, the Programma 101 introduced in 1964, the Soviet MIR series of computers developed from 1965 to 1969. By the early 1970s, people in academic or research institutions had the opportunity for single-person use of a computer system in interactive mode for extended durations, although these systems would still have been too expensive to be owned by a single person.
In what was to be called the Mother of All Demos, SRI researcher Douglas Engelbart in 1968 gave a preview of what would become the staples of daily working life in the 21st century: e-mail, word processing, video conferencing, and the mouse. The demonstration required technical support staff and a mainframe time-sharing computer that were far too costly for individual business use at the time. The development of the microprocessor, with widespread commercial availability starting in the mid-1970s, made computers cheap enough for small businesses and individuals to own. Early personal computers, generally called microcomputers, were often sold in kit form and in limited volumes, and were of interest mostly to hobbyists and technicians. Minimal programming was done with toggle switches to enter instructions, and output was provided by front panel lamps. Practical use required adding peripherals such as keyboards, computer displays, disk drives and printers. Micral N was the earliest commercial, non-kit microcomputer based on a microprocessor, the Intel 8008.
It was built starting in 1972, and a few hundred units were sold. It had been preceded by the Datapoint 2200 in 1970, for which the Intel 8008 had been commissioned, though not accepted for use; the CPU design implemented in the Datapoint 2200 became the basis of the x86 architecture used in the original IBM PC and its descendants. In 1973, the IBM Los Gatos Scientific Center developed a portable computer prototype called SCAMP based on the IBM PALM processor, with a Philips compact cassette drive, a small CRT and a full-function keyboard. SCAMP emulated an IBM 1130 minicomputer in order to run APL/1130. In 1973, APL was available only on mainframe computers, and most desktop-sized microcomputers such as the Wang 2200 or HP 9800 offered only BASIC. Because SCAMP was the first to emulate APL/1130 performance on a portable, single-user computer, PC Magazine in 1983 designated SCAMP a "revolutionary concept" and "the world's first personal computer". This seminal, single-user portable computer now resides in the Smithsonian Institution in Washington, D.C.
Successful demonstrations of the 1973 SCAMP prototype led to the IBM 5100 portable microcomputer, launched in 1975 with the ability to be programmed in both APL and BASIC for engineers, analysts and other business problem-solvers. In the late 1960s such a machine would have been nearly as large as two desks and would have weighed
Very Large Scale Integration
Very-large-scale integration (VLSI) is the process of creating an integrated circuit by combining millions of transistors or devices into a single chip. VLSI began in the 1970s when complex semiconductor and communication technologies were being developed; the microprocessor is a VLSI device. Before the introduction of VLSI technology, most ICs had a limited set of functions they could perform. An electronic circuit might consist of a CPU, ROM, RAM and other glue logic. VLSI lets IC designers add all of these into one chip. The history of the transistor dates to the 1920s, when several inventors attempted devices that were intended to control current in solid-state diodes and convert them into triodes. Success came after World War II, when the use of silicon and germanium crystals as radar detectors led to improvements in fabrication and theory. Scientists who had worked on radar returned to solid-state device development. With the invention of the transistor at Bell Labs in 1947, the field of electronics shifted from vacuum tubes to solid-state devices.
With the small transistor at their hands, electrical engineers of the 1950s saw the possibilities of constructing far more advanced circuits. However, as the complexity of circuits grew, problems arose. One problem was the size of the circuit. A complex circuit like a computer was dependent on speed. If the components were large, the wires interconnecting them had to be long, and the electric signals took time to travel through the circuit, slowing the computer. The invention of the integrated circuit by Jack Kilby and Robert Noyce solved this problem by making all the components and the chip out of the same block of semiconductor material. The circuits could be made smaller, and the manufacturing process could be automated. This led to the idea of integrating all components on a single-crystal silicon wafer, which led to small-scale integration in the early 1960s, medium-scale integration in the late 1960s, and large-scale integration as well as VLSI in the 1970s and 1980s, with tens of thousands of transistors on a single chip. The first semiconductor chips held two transistors each.
Subsequent advances added more transistors and, as a consequence, more individual functions or systems were integrated over time. The first integrated circuits held only a few devices, perhaps as many as ten diodes, transistors and capacitors, making it possible to fabricate one or more logic gates on a single device. Now known retrospectively as small-scale integration, improvements in technique led to devices with hundreds of logic gates, known as medium-scale integration. Further improvements led to large-scale integration, i.e. systems with at least a thousand logic gates. Current technology has moved far past this mark, and today's microprocessors have many millions of gates and billions of individual transistors. At one time, there was an effort to name and calibrate various levels of large-scale integration above VLSI. Terms like ultra-large-scale integration were used, but the huge number of gates and transistors available on common devices has rendered such fine distinctions moot. Terms suggesting greater than VLSI levels of integration are no longer in widespread use.
In 2008, billion-transistor processors became commercially available. This became more commonplace as semiconductor fabrication advanced from the then-current generation of 65 nm processes. Current designs, unlike the earliest devices, use extensive design automation and automated logic synthesis to lay out the transistors, enabling higher levels of complexity in the resulting logic functionality. Certain high-performance logic blocks, like the SRAM cell, are still designed by hand to ensure the highest efficiency. Structured VLSI design is a modular methodology originated by Carver Mead and Lynn Conway for saving microchip area by minimizing the area of the interconnect fabric. It is obtained by repetitive arrangement of rectangular macro blocks which can be interconnected using wiring by abutment. An example is partitioning the layout of an adder into a row of equal bit-slice cells. In complex designs this structuring may be achieved by hierarchical nesting. Structured VLSI design had been popular in the early 1980s, but lost popularity with the advent of placement and routing tools, which waste a great deal of area on routing; the wasted area was tolerated because of the progress of Moore's Law.
When introducing the hardware description language KARL in the mid-1970s, Reiner Hartenstein coined the term "structured VLSI design", echoing Edsger Dijkstra's structured programming approach by procedure nesting to avoid chaotic spaghetti-structured programs. As microprocessors become more complex due to technology scaling, microprocessor designers have encountered several challenges which force them to think beyond the design plane and look ahead to post-silicon: Process variation – As photolithography techniques get closer to the fundamental laws of optics, achieving high accuracy in doping concentrations and etched wires is becoming more difficult and prone to errors due to variation. Designers now must simulate across multiple fabrication process corners before a chip is certified ready for production, or use system-level techniques for dealing with the effects of variation. Stricter design rules – Due to lithography and etch issues with scaling, design rules for layout have become increasingly stringent.
Designers must keep in mind an increasing list of rules when laying out custom circuits. The overhead for custom design is now reaching a tipping point, with many design houses opting to switch to electronic design automation tools to automate their design process. Timing/design closure
Carnegie Mellon University
Carnegie Mellon University is a private research university based in Pittsburgh, Pennsylvania. Founded in 1900 by Andrew Carnegie as the Carnegie Technical Schools, the university became the Carnegie Institute of Technology in 1912 and began granting four-year degrees. In 1967, the Carnegie Institute of Technology merged with the Mellon Institute of Industrial Research to form Carnegie Mellon University. With its main campus located 3 miles from Downtown Pittsburgh, Carnegie Mellon has grown into an international university with over a dozen degree-granting locations on six continents, including campuses in Qatar and Silicon Valley, and more than 20 research partnerships. The university has seven colleges and independent schools, all of which offer interdisciplinary programs: the College of Engineering, the College of Fine Arts, the Dietrich College of Humanities and Social Sciences, the Mellon College of Science, the Tepper School of Business, the H. John Heinz III College of Information Systems and Public Policy, and the School of Computer Science.
Carnegie Mellon counts 13,961 students from 109 countries, over 105,000 living alumni, and over 5,000 faculty and staff. Past and present faculty and alumni include 20 Nobel Prize laureates, 13 Turing Award winners, 23 Members of the American Academy of Arts and Sciences, 22 Fellows of the American Association for the Advancement of Science, 79 Members of the National Academies, 124 Emmy Award winners, 47 Tony Award laureates, and 10 Academy Award winners. The Carnegie Technical Schools were founded in 1900 in Pittsburgh by the Scottish American industrialist and philanthropist Andrew Carnegie, who wrote the time-honored words "My heart is in the work" when he donated the funds to create the institution. Carnegie's vision was to open a vocational training school for the sons and daughters of working-class Pittsburghers. Carnegie was inspired in the design of his school by the Pratt Institute in Brooklyn, New York, founded by industrialist Charles Pratt in 1887. In 1912, the institution changed its name to Carnegie Institute of Technology and began offering four-year degrees.
During this time, CIT consisted of four constituent schools: the School of Fine and Applied Arts, the School of Apprentices and Journeymen, the School of Science and Technology, and the Margaret Morrison Carnegie School for Women. The Mellon Institute of Industrial Research was founded in 1913 by the banker and industrialist brothers Andrew and Richard B. Mellon in honor of their father, Thomas Mellon, the patriarch of the Mellon family. The Institute began as a research organization which performed work for government and industry on a contract basis and was established as a department within the University of Pittsburgh. In 1927, the Mellon Institute incorporated as an independent nonprofit. In 1938, the Mellon Institute's iconic building was completed and it moved to its new, current location on Fifth Avenue. In 1967, with support from Paul Mellon, Carnegie Tech merged with the Mellon Institute of Industrial Research to become Carnegie Mellon University. Carnegie Mellon's coordinate women's college, the Margaret Morrison Carnegie College, closed in 1973 and merged its academic programs with the rest of the university.
The industrial research mission of the Mellon Institute survived the merger as the Carnegie Mellon Research Institute (CMRI) and continued doing work on contract for industry and government. CMRI closed in 2001 and its programs were subsumed by other parts of the university or spun off into autonomous entities. Carnegie Mellon's 140-acre main campus is three miles from downtown Pittsburgh, between Schenley Park and the Squirrel Hill and Oakland neighborhoods. Carnegie Mellon is bordered to the west by the campus of the University of Pittsburgh. Carnegie Mellon also owns 81 buildings in nearby Pittsburgh neighborhoods such as Squirrel Hill. For decades the center of student life on campus was the university's student union, Skibo Hall. Built in the 1950s, Skibo Hall's design was typical of Mid-Century Modern architecture but was poorly equipped to deal with advances in computer and internet connectivity. The original Skibo was razed in the summer of 1994 and replaced by a new, wi-fi enabled student union. Known as the University Center, the building was dedicated in 1996.
In 2014, Carnegie Mellon re-dedicated the University Center as the Cohon University Center in recognition of the eighth president of the university, Jared Cohon. A large grassy area known as "the Cut" forms the backbone of the campus, with a separate grassy area known as "the Mall" running perpendicular to it. The Cut was formed by filling in a ravine with soil from a nearby hill that was leveled to build the College of Fine Arts building. The northwestern part of the campus was acquired from the United States Bureau of Mines in the 1980s. In 2006, Carnegie Mellon Trustee Jill Gansman Kraus donated the 80-foot-tall sculpture Walking to the Sky, placed on the lawn facing Forbes Avenue between the Cohon University Center and Warner Hall; the sculpture was controversial for its placement, for the general lack of input the campus community had in the decision, and for its aesthetic appeal. In April 2015, Carnegie Mellon University, in collaboration with Jones Lang LaSalle, announced plans for a second office building alongside the Robert Mehrabian Collaborative Innovation Center, together with an upscale, full-service hotel and a retail and dining development along Forbes Avenue.
This complex will connect to the Tepper Quadrangle, the Heinz College, the Tata Consultancy Services Building, and the Gates-Hillman Center to create an innovation corridor on the university campus. The eff
A microprocessor is a computer processor that incorporates the functions of a central processing unit on a single integrated circuit, or at most a few integrated circuits. The microprocessor is a multipurpose, clock-driven, register-based digital integrated circuit that accepts binary data as input, processes it according to instructions stored in its memory, and provides results as output. Microprocessors contain both combinational and sequential digital logic. Microprocessors operate on numbers and symbols represented in the binary number system. The integration of a whole CPU onto a single chip or a few integrated circuits greatly reduced the cost of processing power. Integrated circuit processors are produced in large numbers by automated processes, resulting in a low unit price. Single-chip processors increase reliability because there are many fewer electrical connections that could fail. As microprocessor designs improve, the cost of manufacturing a chip generally stays the same, according to Rock's law. Before microprocessors, small computers had been built using racks of circuit boards with many medium- and small-scale integrated circuits.
Microprocessors combined this into one or a few large-scale ICs. Continued increases in microprocessor capacity have since rendered other forms of computers almost completely obsolete, with one or more microprocessors used in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers. The complexity of an integrated circuit is bounded by physical limitations on the number of transistors that can be put onto one chip, the number of package terminations that can connect the processor to other parts of the system, the number of interconnections it is possible to make on the chip, and the heat that the chip can dissipate. Advancing technology makes more powerful chips feasible to manufacture. A minimal hypothetical microprocessor might include only an arithmetic logic unit (ALU) and a control logic section. The ALU performs addition and logic operations such as AND or OR. Each operation of the ALU sets one or more flags in a status register, which indicate the results of the last operation.
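To make this concrete, the following minimal Python sketch models such a hypothetical ALU with a status register; the operation names and the choice of zero and carry flags are illustrative assumptions, not the design of any particular processor.

```python
# Minimal sketch of a hypothetical 8-bit ALU with a status register.
# Operation names (ADD/AND/OR) and flags (Z = zero, C = carry) are
# illustrative assumptions, not any real processor's instruction set.

def alu(op, a, b):
    """Perform one 8-bit ALU operation and return (result, flags)."""
    if op == "ADD":
        raw = a + b
    elif op == "AND":
        raw = a & b
    elif op == "OR":
        raw = a | b
    else:
        raise ValueError(f"unknown operation {op!r}")
    result = raw & 0xFF                    # keep only the low 8 bits
    flags = {
        "Z": result == 0,                  # zero flag: result was zero
        "C": raw > 0xFF,                   # carry flag: ADD overflowed 8 bits
    }
    return result, flags

print(alu("ADD", 0xF0, 0x20))   # (16, {'Z': False, 'C': True})
print(alu("AND", 0x0F, 0xF0))   # (0, {'Z': True, 'C': False})
```

The control logic described next would inspect such flags, for example to decide whether a conditional branch should be taken.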
The control logic retrieves instruction codes from memory and initiates the sequence of operations required for the ALU to carry out the instruction. A single operation code might affect many individual data paths and other elements of the processor. As integrated circuit technology advanced, it was feasible to manufacture more and more complex processors on a single chip. The size of data objects became larger. Additional features were added to the processor architecture. Floating-point arithmetic, for example, was not available on 8-bit microprocessors, but had to be carried out in software. Integration of the floating-point unit, first as a separate integrated circuit and then as part of the same microprocessor chip, sped up floating-point calculations. Physical limitations of integrated circuits made such practices as a bit-slice approach necessary. Instead of processing all of a long word on one integrated circuit, multiple circuits in parallel processed subsets of each data word. While this required extra logic to handle, for example, carry and overflow within each slice, the result was a system that could handle, for example, 32-bit words using integrated circuits with a capacity for only four bits each.
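As a rough illustration of the bit-slice idea just described, the sketch below performs a 32-bit addition using eight 4-bit slices with a ripple carry between them; the slice width and the pure-software modelling are assumptions made for illustration, not a description of any particular bit-slice chip family.

```python
# Sketch of bit-slice arithmetic: a 32-bit addition carried out by eight
# 4-bit "slices", each of which could have been a separate 4-bit ALU chip.
# The carry out of one slice ripples into the next slice.

SLICE_BITS = 4
SLICE_MASK = (1 << SLICE_BITS) - 1          # 0xF

def slice_add(a4, b4, carry_in):
    """One 4-bit slice: add two 4-bit operands plus carry-in, return (sum, carry_out)."""
    total = a4 + b4 + carry_in
    return total & SLICE_MASK, total >> SLICE_BITS

def bit_slice_add32(a, b):
    """Add two 32-bit words using eight 4-bit slices."""
    result, carry = 0, 0
    for i in range(32 // SLICE_BITS):       # least significant slice first
        a4 = (a >> (i * SLICE_BITS)) & SLICE_MASK
        b4 = (b >> (i * SLICE_BITS)) & SLICE_MASK
        s, carry = slice_add(a4, b4, carry)
        result |= s << (i * SLICE_BITS)
    return result & 0xFFFFFFFF, carry       # final carry signals 32-bit overflow

assert bit_slice_add32(0x12345678, 0x0FEDCBA8) == (0x22222220, 0)
print(bit_slice_add32(0xFFFFFFFF, 1))       # (0, 1): wrap-around with overflow carry
```

The extra logic mentioned in the text corresponds here to passing the carry between slices; a hardware design would also have to manage overflow and status flags across the slice boundaries.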
The ability to put large numbers of transistors on one chip makes it feasible to integrate memory on the same die as the processor. This CPU cache has the advantage of faster access than off-chip memory and increases the processing speed of the system for many applications; the benefit is quantified in the sketch after this paragraph. Processor clock frequency has increased more rapidly than external memory speed, so cache memory is necessary if the processor is not to be delayed by slower external memory. A microprocessor is a general-purpose entity. Several specialized processing devices have followed: a digital signal processor is specialized for signal processing; graphics processing units are processors designed for realtime rendering of images; other specialized units exist for video processing and machine vision. Microcontrollers integrate a microprocessor with peripheral devices in embedded systems. Systems on a chip integrate one or more microprocessor or microcontroller cores. Microprocessors can be selected for differing applications based on their word size, a measure of their complexity.
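The cache advantage mentioned above can be illustrated with the standard average memory access time formula, AMAT = hit time + miss rate × miss penalty. The cycle counts below are assumed round numbers chosen for illustration, not measurements from any particular system.

```python
# Back-of-the-envelope model of why on-chip cache matters when the processor
# clock outpaces external memory. All latencies are illustrative assumptions.

def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time in cycles: hit_time + miss_rate * miss_penalty."""
    return hit_time + miss_rate * miss_penalty

OFF_CHIP_LATENCY = 100          # assumed cycles to reach external memory
CACHE_HIT_LATENCY = 1           # assumed cycles for an on-chip cache hit
MISS_RATE = 0.05                # assumed 5% of accesses miss the cache

without_cache = OFF_CHIP_LATENCY                                    # every access goes off-chip
with_cache = amat(CACHE_HIT_LATENCY, MISS_RATE, OFF_CHIP_LATENCY)   # most accesses stay on-chip

print(f"without cache: {without_cache} cycles per access")   # 100
print(f"with cache:    {with_cache} cycles per access")      # 6.0
```

Even with these rough numbers, the cache keeps the processor waiting only a few cycles per access on average instead of the full off-chip latency, which is the point made above.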
Longer word sizes allow each clock cycle of a processor to carry out more computation, but correspond to physically larger integrated circuit dies with higher standby and operating power consumption. 4-, 8- or 12-bit processors are integrated into microcontrollers operating embedded systems. Where a system is expected to handle larger volumes of data or require a more flexible user interface, 16-, 32- or 64-bit processors are used. An 8- or 16-bit processor may be selected over a 32-bit processor for system-on-a-chip or microcontroller applications that require low-power electronics, or are part of a mixed-signal integrated circuit with noise-sensitive on-chip analog electronics such as high-resolution analog-to-digital converters, or both. Running 32-bit arithmetic on an 8-bit chip could end up using more power, as the chip must execute software with multiple instructions. Thousands of items that were traditionally not computer-related inc
Grandmaster is a title awarded to chess players by the world chess organization FIDE. Apart from World Champion, Grandmaster is the highest title. Once achieved, the title is held for life, though exceptionally it may be revoked for cheating. The abbreviation IGM, for International Grandmaster, is sometimes used in older literature. The title of Grandmaster, along with the lesser FIDE titles of International Master and FIDE Master, is open to both men and women; the vast majority of grandmasters are men, but a number of women have earned the GM title, with the first three having been Nona Gaprindashvili in 1978, Maia Chiburdanidze in 1984 and Susan Polgar in 1991. Since about 2000, most of the top 10 women have held the GM title. There is also a Woman Grandmaster title, with lower requirements, awarded only to women. FIDE awards separate Grandmaster titles to composers and solvers of chess problems: International Grandmaster for chess compositions to the former and International Solving Grandmaster to the latter.
The International Correspondence Chess Federation awards the title of International Correspondence Chess Grandmaster. The first known use of the term grandmaster in connection with chess was in an 1838 issue of Bell's Life, in which a correspondent referred to William Lewis as "our past grandmaster". Lewis himself referred to Philidor as a grandmaster, and the term was also applied to a few other players. The term grandmaster was likewise used at the Ostend tournament of 1907; the tournament was divided into two sections: the Championship Tournament and the Masters' Tournament. The Championship section was for players who had won an international tournament. Siegbert Tarrasch won the Championship section, ahead of Carl Schlechter, Dawid Janowski, Frank Marshall, Amos Burn and Mikhail Chigorin; these players were described as grandmasters for the purposes of the tournament. The San Sebastián 1912 tournament won by Akiba Rubinstein was a designated grandmaster event. Rubinstein won with 12½ points out of 19, and Rudolf Spielmann tied for second with 12 points.
By some accounts, in the St. Petersburg 1914 chess tournament, the title "Grandmaster" was formally conferred by Russian Tsar Nicholas II, who had funded the tournament. The Tsar awarded the title to the five finalists: Emanuel Lasker, José Raúl Capablanca, Alexander Alekhine, Siegbert Tarrasch and Frank Marshall. Chess historian Edward Winter has questioned this, stating that the earliest known sources that support this story are an article by Robert Lewis Taylor in the June 15, 1940, issue of The New Yorker and Marshall's autobiography My 50 Years of Chess. Before 1950, the term grandmaster was sometimes informally applied to world-class players. The Fédération Internationale des Échecs (FIDE) was formed in Paris in 1924, but at that time did not award formal titles. In 1927, the Soviet Union's Chess Federation established the title of Grandmaster of the Soviet Union for its own players, since at that time Soviets were not competing outside their own country. This title was abolished in 1931, after having been awarded to Boris Verlinsky, who won the 1929 Soviet Championship.
The title was brought back in 1935 and awarded to Mikhail Botvinnik, who thus became the first "official" Grandmaster of the USSR. Verlinsky did not get his title back. When FIDE reorganized after World War II, it adopted regulations concerning international titles. Titles were awarded by a resolution of the FIDE General Assembly and the Qualification Committee, with no formal written criteria. FIDE first awarded the Grandmaster title in 1950, to 27 players. These fell into two groups. The first comprised the top players of the day: world champion Mikhail Botvinnik and those who had qualified for the inaugural Candidates Tournament in 1950, namely Isaac Boleslavsky, Igor Bondarevsky, David Bronstein, Max Euwe, Reuben Fine, Salo Flohr, Paul Keres, Alexander Kotov, Andor Lilienthal, Miguel Najdorf, Samuel Reshevsky, Vasily Smyslov, Gideon Ståhlberg and László Szabó. The second comprised players still living who, though past their best in 1950, were recognised as having been world class when at their peak: Ossip Bernstein, Oldřich Duras, Ernst Grünfeld, Boris Kostić, Grigory Levenfish, Géza Maróczy, Jacques Mieses, Viacheslav Ragozin, Akiba Rubinstein, Friedrich Sämisch, Savielly Tartakower and Milan Vidmar.
Since FIDE did not award the Grandmaster title posthumously, world-class players who died prior to 1950, including World Champions Steinitz, Lasker and Alekhine, never received the title. Title awards under the original regulations were subject to political concerns. Efim Bogoljubov, who had emigrated from the Soviet Union to Germany, was not entered in the first class of Grandmasters, though he had played two matches for the World Championship with Alekhine. He received the title by a vote of thirteen to eight with five abstentions; Yugoslavia supported his application. In 1953, FIDE abolished the old regulations, although a provision was maintained that allowed older masters who had been overlooked to be awarded titles. The new regulations awarded the title of International Grandmaster of the FIDE to players meeting any of the following criteria: the world champion; masters who have the absolute right to play in the World Championship Candidates Tournament, or any player who replaces an absent contestant and earns at least a 50 percent score;
or the winner of an international tournament meeting specified standards, or any player placing second in two such tournaments within a span of four years. The tournament must be at least eleven rounds with seven or more players, 80 p