Computer science is the study of processes that interact with data and that can be represented as data in the form of programs. It enables the use of algorithms to manipulate and communicate digital information. A computer scientist studies the theory of computation and the practice of designing software systems. Its fields can be divided into theoretical and practical disciplines: computational complexity theory is highly abstract, while computer graphics emphasizes real-world applications. Programming language theory considers approaches to the description of computational processes, while computer programming itself involves the use of programming languages and complex systems. Human–computer interaction considers the challenges in making computers useful and accessible. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division.
Algorithms for performing computations have existed since antiquity, before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator called the Stepped Reckoner; he may be considered the first computer scientist and information theorist because, among other reasons, he documented the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry when he released his simplified arithmometer, the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his Difference Engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine. He started developing this machine in 1834, and "in less than two years, he had sketched out many of the salient features of the modern computer".
"A crucial step was the adoption of a punched card system derived from the Jacquard loom" making it infinitely programmable. In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, considered to be the first computer program. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, making all kinds of punched card equipment and was in the calculator business to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit; when the machine was finished, some hailed it as "Babbage's dream come true". During the 1940s, as new and more powerful computing machines were developed, the term computer came to refer to the machines rather than their human predecessors.
As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. In 1945, IBM founded the Watson Scientific Computing Laboratory at Columbia University in New York City; the renovated fraternity house on Manhattan's West Side was IBM's first laboratory devoted to pure science. The lab is the forerunner of IBM's Research Division, which today operates research facilities around the world; the close relationship between IBM and the university was instrumental in the emergence of a new scientific discipline, with Columbia offering one of the first academic-credit courses in computer science in 1946. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s; the world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science degree program in the United States was formed at Purdue University in 1962.
Since practical computers became available, many applications of computing have become distinct areas of study in their own right. Although many believed it was impossible that computers themselves could constitute a scientific field of study, in the late fifties the idea gradually became accepted among the greater academic population. It is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and the IBM 709 computers, which were used during the exploration period of such devices. "Still, working with the IBM was frustrating … if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again". During the late 1950s, the computer science discipline was still very much in its developmental stages, and such issues were commonplace. Time has seen significant improvements in the effectiveness of computing technology. Modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base.
Computers were quite costly, and some degree of human aid was needed for efficient use, in part from professional computer operators. As computer adoption became more widespread and affordable, less human assistance was needed for common usage. Despite its short history as a formal academic discipline, computer science has made a number of fundamental contributions to science and society.
Peer review is the evaluation of work by one or more people with similar competences to the producers of the work. It functions as a form of self-regulation by qualified members of a profession within the relevant field. Peer review methods are used to maintain quality standards, improve performance, and provide credibility. In academia, scholarly peer review is used to determine an academic paper's suitability for publication. Peer review can be categorized by the type of activity and by the field or profession in which the activity occurs, e.g. medical peer review. Professional peer review focuses on the performance of professionals, with a view to improving quality, upholding standards, or providing certification. In academia, peer review is also used to inform decisions related to faculty tenure. Henry Oldenburg was a German-born British philosopher seen as the 'father' of modern scientific peer review. A prototype professional peer-review process was recommended in the Ethics of the Physician written by Ishāq ibn ʻAlī al-Ruhāwī.
He stated that a visiting physician had to make duplicate notes of a patient's condition on every visit. When the patient was cured or had died, the notes of the physician were examined by a local medical council of other physicians, who would decide whether the treatment had met the required standards of medical care. Professional peer review is common in the field of health care, where it is called clinical peer review. Further, since peer review activity is segmented by clinical discipline, there is physician peer review, nursing peer review, dentistry peer review, etc. Many other professional fields have some level of peer review process: accounting, engineering and forest fire management. Peer review is used in education to achieve certain learning objectives as a tool to reach higher order processes in the affective and cognitive domains as defined by Bloom's taxonomy; this may take a variety of forms, including mimicking the scholarly peer review processes used in science and medicine.
Scholarly peer review is the process of subjecting an author's scholarly work, research, or ideas to the scrutiny of others who are experts in the same field, before a paper describing this work is published in a journal, conference proceedings or as a book. The peer review helps the publisher decide whether the work should be accepted, considered acceptable with revisions, or rejected. Peer review requires a community of experts in a given field who are qualified and able to perform reasonably impartial review. Impartial review of work in less narrowly defined or interdisciplinary fields may be difficult to accomplish, and the significance of an idea may never be appreciated among its contemporaries. Peer review is considered necessary to academic quality and is used in most major scholarly journals, but it by no means prevents publication of invalid research. Traditionally, peer reviewers have been anonymous, but there is a significant amount of open peer review, where the comments are visible to readers and the identities of the peer reviewers are disclosed as well.
The European Union has been using peer review in the "Open Method of Co-ordination" of policies in the field of active labour market policy since 1999. In 2004, a program of peer reviews started in social inclusion. Each program sponsors about eight peer review meetings each year, in which a "host country" lays a given policy or initiative open to examination by half a dozen other countries and the relevant European-level NGOs. These meetings take place over two days and include visits to local sites where the policy can be seen in operation. The meeting is preceded by the compilation of an expert report on which participating "peer countries" submit comments. The results are published on the web. The United Nations Economic Commission for Europe, through UNECE Environmental Performance Reviews, uses peer review, referred to as "peer learning", to evaluate progress made by its member countries in improving their environmental policies. The State of California is the only U.S. state to mandate scientific peer review.
In 1997, the Governor of California signed into law Senate Bill 1320 (Chapter 295, Statutes of 1997), which mandates that, before any CalEPA Board, Department, or Office adopts a final version of a rule-making, the scientific findings and assumptions on which the proposed rule is based must be submitted for independent external scientific peer review. This requirement is incorporated into the California Health and Safety Code, Section 57004. Medical peer review may be distinguished into four classifications, the first being clinical peer review. Additionally, "medical peer review" has been used by the American Medical Association to refer not only to the process of improving quality and safety in health care organizations, but also to the process of rating clinical behavior or compliance with professional society membership standards. Thus, the terminology has poor standardization and specificity as a database search term. To an outsider, the anonymous, pre-publication peer review process is opaque. Certain journals are accused of not carrying out stringent peer review in order to expand their customer base, particularly journals in which authors pay a fee before publication.
Creative Commons is an American non-profit organization devoted to expanding the range of creative works available for others to build upon and to share. The organization has released several copyright licenses, known as Creative Commons licenses, free of charge to the public. These licenses allow creators to communicate which rights they reserve and which rights they waive for the benefit of recipients or other creators. An easy-to-understand one-page explanation of rights, with associated visual symbols, explains the specifics of each Creative Commons license. Creative Commons licenses do not replace copyright but are based upon it. They replace individual negotiations for specific rights between copyright owner and licensee, which are necessary under an "all rights reserved" copyright management, with a "some rights reserved" management employing standardized licenses for re-use cases where no commercial compensation is sought by the copyright owner. The result is an agile, low-overhead and low-cost copyright-management regime, benefiting both copyright owners and licensees.
The organization was founded in 2001 by Lawrence Lessig, Hal Abelson, and Eric Eldred with the support of the Center for the Public Domain. The first article in a general-interest publication about Creative Commons, written by Hal Plotkin, was published in February 2002; the first set of copyright licenses was released in December 2002. The founding management team that developed the licenses and built the Creative Commons infrastructure as we know it today included Molly Shaffer Van Houweling, Glenn Otis Brown, Neeru Paharia, and Ben Adida. In 2002 the Open Content Project, a 1998 precursor project by David A. Wiley, announced Creative Commons as its successor project, and Wiley joined as CC director. Aaron Swartz also played a role in the early stages of Creative Commons. As of May 2018 there were an estimated 1.4 billion works licensed under the various Creative Commons licenses; Wikipedia uses one of these licenses, and as of May 2018 Flickr alone hosted over 415 million Creative Commons licensed photos. Creative Commons is governed by a board of directors.
Their licenses have been embraced by many as a way for creators to take control of how they choose to share their copyrighted works. Creative Commons has been described as being at the forefront of the copyleft movement, which seeks to support the building of a richer public domain by providing an alternative to the automatic "all rights reserved" copyright, an approach that has been dubbed "some rights reserved". David Berry and Giles Moss have credited Creative Commons with generating interest in the issue of intellectual property and contributing to the re-thinking of the role of the "commons" in the "information age". Beyond that, Creative Commons has provided "institutional and legal support for individuals and groups wishing to experiment and communicate with culture more freely." Creative Commons attempts to counter what Lawrence Lessig, founder of Creative Commons, considers to be a dominant and restrictive permission culture. Lessig describes this as "a culture in which creators get to create only with the permission of the powerful, or of creators from the past."
Lessig maintains that modern culture is dominated by traditional content distributors seeking to maintain and strengthen their monopolies on cultural products such as popular music and popular cinema, and that Creative Commons can provide alternatives to these restrictions. Until April 2018, Creative Commons had over 100 affiliates working in over 75 jurisdictions to support and promote CC activities around the world. In 2018 this affiliate network was restructured into a network organisation; the network no longer relies on affiliate organisations but on individual membership organised in chapters. Creative Commons Japan is the affiliate network of Creative Commons in Japan. In 2003, the International University GLOCOM held a preparatory meeting for CC Japan. In March 2004, CC Japan was launched by that university, becoming the second CC affiliate in the world. In March 2006, CC Japan became operational, and in the same month the CC founder Lawrence Lessig visited Japan as one of the main hosts of the opening ceremony.
Between May and June of the same year, several international events were held in Japan, including iSummit 06 and the first through third CCJP rounds. In February 2007, the ICC x ClipLife 15-second CM contest opened. In June, iSummit 07 was held, followed a month later by the fourth CCJP round. On 25 July 2007, Tokyo approved Nobuhiro Nakayama to become the NGO chairman of CCJP. In 2008, Taipei ACIA joined CCJP, and the main theme music chosen by CCJP was announced. In 2009, INTO INFINITY was shown in Sapporo, accompanied by an iPhone Audio Visual Mixer for INTO INFINITY. In 2012, the 10th-anniversary ceremony was held in Japan, and in 2015 a renewed version of CCJP was unveiled, along with Creative Commons Japan Zero. Creative Commons Korea is the affiliate network of Creative Commons in South Korea. In March 2005, CC Korea was initiated by Jongsoo Yoon, a Presiding Judge of Incheon District Court, as a project of the Korea Association for Infomedia Law. The major Korean portal sites, including Daum and Naver, have been participating in the use of Creative Commons licences.
In January 2009, the Creative Commons Korea Association was founded as a non-profit incorporated association. Since then, CC Korea has been promoting a liberal and open culture of creation as well as leading the diffusion of Creative Commons in the country.
An academic or scholarly journal is a periodical publication in which scholarship relating to a particular academic discipline is published. Academic journals serve as permanent and transparent forums for the presentation and discussion of research; they are typically peer-reviewed or refereed. Content takes the form of articles presenting original research, review articles, and book reviews. The purpose of an academic journal, according to Henry Oldenburg, is to give researchers a venue to "impart their knowledge to one another, and contribute what they can to the Grand design of improving natural knowledge, and perfecting all Philosophical Arts, and Sciences." The term academic journal applies to scholarly publications in all fields; scientific journals and journals of the quantitative social sciences vary in form and function from journals of the humanities and qualitative social sciences. The first academic journal was the Journal des sçavans, followed soon after by the Philosophical Transactions of the Royal Society and the Mémoires de l'Académie des Sciences.
The first peer-reviewed journal was Medical Essays and Observations. The idea of a published journal with the purpose of "[letting] people know what is happening in the Republic of Letters" was first conceived by Eudes de Mazerai in 1663. A publication titled Journal littéraire général was supposed to be published to fulfill that goal, but never was. Humanist scholar Denis de Sallo and printer Jean Cusson took up Mazerai's idea and obtained a royal privilege from King Louis XIV on 8 August 1664 to establish the Journal des sçavans; the journal's first issue was published on 5 January 1665. It was aimed at people of letters and had four main objectives: to review newly published major European books, publish the obituaries of famous people, report on discoveries in arts and science, and report on the proceedings and censures of both secular and ecclesiastical courts, as well as those of universities both in France and outside it. Soon after, the Royal Society established the Philosophical Transactions of the Royal Society in March 1665, and the Académie des Sciences established the Mémoires de l'Académie des Sciences in 1666, which focused more on scientific communication.
By the end of the 18th century, nearly 500 such periodicals had been published, the vast majority coming from Germany and England. Several of those publications, in particular the German journals, tended to be short-lived. A. J. Meadows has estimated the proliferation of journals to have reached 10,000 in 1950 and 71,000 in 1987. However, Michael Mabe warns that such estimates vary depending on the definition of what counts as a scholarly publication, but that the growth rate has been "remarkably consistent over time", with an average rate of 3.46% per year from 1800 to 2003. In 1733, Medical Essays and Observations was established by the Medical Society of Edinburgh as the first peer-reviewed journal. Peer review was introduced as an attempt to increase the pertinence of submissions. Other important events in the history of academic journals include the establishment of Nature and Science, the establishment of Postmodern Culture in 1990 as the first online-only journal, the foundation of arXiv in 1991 for the dissemination of preprints to be discussed prior to publication in a journal, and the establishment of PLOS One in 2006 as the first megajournal.
There are two kinds of article or paper submissions in academia: solicited, where an individual has been invited to submit work either through direct contact or through a general submissions call, and unsolicited, where an individual submits a work for potential publication without directly being asked to do so. Upon receipt of a submitted article, editors at the journal determine whether to reject the submission outright or begin the process of peer review. In the latter case, the submission becomes subject to review by outside scholars of the editor's choosing, who typically remain anonymous. The number of these peer reviewers varies according to each journal's editorial practice; typically no fewer than two, though sometimes three or more, experts in the subject matter of the article produce reports on the content and other factors, which inform the editors' publication decisions. Though these reports are generally confidential, some journals and publishers practice public peer review. The editors either choose to reject the article, ask for a revision and resubmission, or accept the article for publication.
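The editorial workflow described above can be sketched as a small decision function. This is a toy illustration only: the function name, the report labels, and the two-reviewer threshold are hypothetical stand-ins, since actual journal policies vary widely.

```python
def editorial_decision(desk_reject: bool, reviewer_reports: list) -> str:
    """Return one of 'reject', 'revise and resubmit', or 'accept'.

    desk_reject:      editors rejected the submission outright, without review
    reviewer_reports: hypothetical labels ('positive', 'mixed', 'negative')
                      produced by the anonymous outside reviewers
    """
    if desk_reject:
        return "reject"
    if len(reviewer_reports) < 2:
        # Journals typically consult no fewer than two reviewers.
        raise ValueError("at least two reviewer reports are expected")
    if all(report == "positive" for report in reviewer_reports):
        return "accept"
    if any(report == "negative" for report in reviewer_reports):
        return "reject"
    return "revise and resubmit"

print(editorial_decision(False, ["positive", "mixed"]))  # revise and resubmit
```

The three possible return values mirror the three editorial outcomes named in the text: rejection, revision and resubmission, or acceptance.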
Accepted articles are subjected to further editing by journal editorial staff before they appear in print. The peer review process can take from several weeks to several months. Review articles, also called "reviews of progress", are checks on the research published in journals. Some journals are devoted to review articles, some contain a few in each issue, and others do not publish review articles. Such reviews often cover the research from the preceding year, some for longer or shorter terms. Some journals are enumerative, while others are evaluative. Some journals are published in series, each covering a complete subject field in a year, or covering specific fields through several years.
Theoretical computer science
Theoretical computer science is a subset of general computer science and mathematics that focuses on the more mathematical topics of computing and includes the theory of computation. It is difficult to circumscribe the theoretical areas precisely; the ACM's Special Interest Group on Algorithms and Computation Theory provides the following description: TCS covers a wide variety of topics including algorithms, data structures, computational complexity and distributed computation, probabilistic computation, quantum computation, automata theory, information theory, program semantics and verification, machine learning, computational biology, computational economics, computational geometry, and computational number theory and algebra. Work in this field is distinguished by its emphasis on mathematical technique and rigor. While logical inference and mathematical proof had existed previously, in 1931 Kurt Gödel proved with his incompleteness theorem that there are fundamental limitations on what statements can be proved or disproved.
These developments have led to the modern study of logic and computability, and indeed the field of theoretical computer science as a whole. Information theory was added to the field with Claude Shannon's 1948 mathematical theory of communication. In the same decade, Donald Hebb introduced a mathematical model of learning in the brain. With mounting biological data supporting this hypothesis with some modification, the fields of neural networks and parallel distributed processing were established. In 1971, Stephen Cook and, working independently, Leonid Levin proved that there exist practically relevant problems that are NP-complete, a landmark result in computational complexity theory. With the development of quantum mechanics at the beginning of the 20th century came the concept that mathematical operations could be performed on an entire particle wavefunction; in other words, one could compute functions on multiple states simultaneously. This led to the concept of a quantum computer in the latter half of the 20th century, which took off in the 1990s when Peter Shor showed that such methods could be used to factor large numbers in polynomial time, which, if implemented, would render most modern public key cryptography systems insecure.
Modern theoretical computer science research is based on these basic developments, but includes many other mathematical and interdisciplinary problems that have been posed. An algorithm is a step-by-step procedure for calculations. Algorithms are used for calculation, data processing, and automated reasoning. An algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function. Starting from an initial state and initial input, the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. A data structure is a particular way of organizing data in a computer so that it can be used efficiently. Different kinds of data structures are suited to different kinds of applications, and some are specialized to specific tasks. For example, databases use B-tree indexes for small percentages of data retrieval, and compilers and databases use dynamic hash tables as look-up tables.
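The definition of an algorithm above, a finite list of well-defined instructions that proceeds through successive states from an initial input to a final output, is illustrated by Euclid's algorithm for the greatest common divisor, one of the oldest algorithms known:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: each loop iteration is one well-defined
    state transition on the pair (a, b)."""
    while b != 0:
        a, b = b, a % b  # replace (a, b) with (b, a mod b)
    return a             # final state: b == 0, and a holds the GCD

print(gcd(48, 18))  # 6
```

Starting from the initial state (48, 18), the states are (18, 12), (12, 6), (6, 0), after which the algorithm terminates with output 6. This algorithm is deterministic; a randomized algorithm would additionally draw random input at some step.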
Data structures provide a means to manage large amounts of data efficiently for uses such as large databases and internet indexing services. Efficient data structures are key to designing efficient algorithms; some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design. Storing and retrieving can be carried out on data stored in both main memory and secondary memory. Computational complexity theory is a branch of the theory of computation that focuses on classifying computational problems according to their inherent difficulty and on relating those classes to each other. A computational problem is understood to be a task that is in principle amenable to being solved by a computer, which is equivalent to stating that the problem may be solved by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used.
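The claim that efficient data structures are key to efficient algorithms can be seen in a small sketch: a membership test on a Python list scans every element, while the same test on a set (backed by a dynamic hash table, like the look-up tables mentioned above) inspects a hash bucket in constant expected time. The element count and query value below are arbitrary choices for the demonstration.

```python
import timeit

items = list(range(100_000))
as_list = items        # linear scan on lookup
as_set = set(items)    # hash-table lookup

# Search for the worst-case element (the last one) many times.
slow = timeit.timeit(lambda: 99_999 in as_list, number=100)
fast = timeit.timeit(lambda: 99_999 in as_set, number=100)
print(f"list scan: {slow:.4f}s, hash lookup: {fast:.6f}s")
```

The same organizing principle, trading memory layout for lookup speed, underlies the B-tree indexes and hash tables used by databases and compilers.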
The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication, the number of gates in a circuit, and the number of processors. One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. Distributed computing studies distributed systems. A distributed system is a software system in which components located on networked computers communicate and coordinate their actions by passing messages; the components interact with each other in order to achieve a common goal. Three significant characteristics of distributed systems are: concurrency of components, lack of a global clock, and independent failure of components. Examples of distributed systems vary from SOA-based systems to massively multiplayer online games to peer-to-peer applications; blockchain networks are a more recent example.
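The message-passing model described above can be sketched in miniature: two components run concurrently, share no state directly, and coordinate only by exchanging messages over queues, which stand in here for the network. The component names and message contents are illustrative.

```python
import threading
import queue

inbox_a, inbox_b = queue.Queue(), queue.Queue()
results = []

def component_a():
    inbox_b.put("ping")            # send a message to B
    results.append(inbox_a.get())  # block until B's reply arrives

def component_b():
    msg = inbox_b.get()            # block until a message arrives
    inbox_a.put(msg + " pong")     # reply; no global clock is assumed

threads = [threading.Thread(target=component_a),
           threading.Thread(target=component_b)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # ['ping pong']
```

Note the two defining traits visible even at this scale: the components execute concurrently (two threads), and neither can assume anything about the other's timing, only about the messages it receives.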
A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs.
Outline of academic disciplines
An academic discipline or field of study is a branch of knowledge, taught and researched as part of higher education. A scholar's discipline is defined by the university faculties and learned societies to which she or he belongs and the academic journals in which she or he publishes research. Disciplines vary between well-established ones that exist in all universities and have well-defined rosters of journals and conferences, and nascent ones supported by only a few universities and publications. A discipline may have branches, which are called sub-disciplines. There is no consensus on how some academic disciplines should be classified, for example whether anthropology and linguistics are disciplines of the social sciences or of the humanities. The following outline is provided as a topical guide to academic disciplines. Biblical studies Religious studies Biblical Hebrew, Biblical Greek, Aramaic Buddhist theology Christian theology Anglican theology Baptist theology Catholic theology Eastern Orthodox theology Protestant theology Hindu theology Jewish theology Muslim theology Biological anthropology Linguistic anthropology Cultural anthropology Social anthropology Archaeology Accounting Business management Finance Marketing Operations management Edaphology Environmental chemistry Environmental science Gemology Geochemistry Geodesy Physical geography Atmospheric science / Meteorology Biogeography / Phytogeography Climatology / Paleoclimatology / Palaeogeography Coastal geography / Oceanography Edaphology / Pedology or Soil science Geobiology Geology Geostatistics Glaciology Hydrology / Limnology / Hydrogeology Landscape ecology Quaternary science Geophysics Paleontology Paleobiology Paleoecology Astrobiology Astronomy Observational astronomy Gamma ray astronomy Infrared astronomy Microwave astronomy Optical astronomy Radio astronomy UV astronomy X-ray astronomy Astrophysics Gravitational astronomy Black holes Interstellar medium Numerical simulations Astrophysical plasma
Galaxy formation and evolution High-energy astrophysics Hydrodynamics Magnetohydrodynamics Star formation Physical cosmology Stellar astrophysics Helioseismology Stellar evolution Stellar nucleosynthesis Planetary science Pure mathematics Applied mathematics Astrostatistics Biostatistics Academia Academic genealogy Curriculum Multidisciplinary approach Interdisciplinarity Transdisciplinarity Professions Classification of Instructional Programs Joint Academic Coding System List of fields of doctoral studies in the United States List of academic fields
Web of Science
Web of Science is an online subscription-based scientific citation indexing service originally produced by the Institute for Scientific Information and now maintained by Clarivate Analytics, that provides a comprehensive citation search. It gives access to multiple databases that reference cross-disciplinary research, which allows for in-depth exploration of specialized sub-fields within an academic or scientific discipline. A citation index is built on the fact that citations in science serve as linkages between similar research items and lead to matching or related scientific literature, such as journal articles and conference proceedings. In addition, literature which shows the greatest impact in a particular field, or in more than one discipline, can be located through a citation index. For example, a paper's influence can be determined by linking to all the papers that have cited it. In this way, current trends and emerging fields of research can be assessed. Eugene Garfield, the "father of citation indexing of academic literature", who launched the Science Citation Index, which in turn led to the Web of Science, wrote: Citations are the formal, explicit linkages between papers that have particular points in common.
A citation index is built around these linkages. It identifies the sources of the citations. Anyone conducting a literature search can find from one to dozens of additional papers on a subject just by knowing one that has been cited, and every paper found provides a list of new citations with which to continue the search. The simplicity of citation indexing is one of its main strengths. Web of Science is described as a unifying research tool which enables the user to acquire and disseminate database information in a timely manner; this is accomplished through the creation of a common vocabulary, called an ontology, for varied search terms and varied data. Moreover, search terms generate related information across categories. Acceptable content for Web of Science is determined by an evaluation and selection process based on the following criteria: impact, timeliness, peer review, and geographic representation. Web of Science employs various analysis capabilities. First, citation indexing is employed, enhanced by the capability to search for results across disciplines.
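The linkage idea described above can be sketched as a small data structure: a forward index (what a paper cites) and a backward index (who cites a paper). This is a minimal illustration, not Web of Science's actual implementation; the paper labels and citation pairs are invented, and "related" papers are found here via bibliographic coupling (papers that share a reference with a known paper).

```python
from collections import defaultdict

# Hypothetical citation records: (citing paper, cited paper) pairs.
# All paper labels are invented for illustration.
citations = [
    ("A", "C"), ("A", "D"),
    ("B", "C"), ("B", "D"), ("B", "E"),
    ("F", "E"),
]

# Forward index: which papers does each paper cite?
cites = defaultdict(set)
# Backward index: which papers cite each paper?
cited_by = defaultdict(set)

for citing, cited in citations:
    cites[citing].add(cited)
    cited_by[cited].add(citing)

def influence(paper):
    """A paper's influence, crudely measured as its citation count."""
    return len(cited_by[paper])

def related(paper):
    """Papers sharing at least one reference with `paper`
    (bibliographic coupling): starting from one known paper,
    follow its references backward to find matching literature."""
    shared = set()
    for ref in cites[paper]:
        shared |= cited_by[ref]
    shared.discard(paper)
    return shared

print(influence("C"))        # 2: cited by A and B
print(sorted(related("A")))  # ['B']: B also cites C and D
```

Because every paper found this way carries its own reference list, the search can be continued iteratively, which is the strength of citation indexing the text describes.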
The influence, impact, and methodology of an idea can be followed from its first instance, notice, or referral to the present day. This capability points to a deficiency of the keyword-only method of searching. Second, subtle trends and patterns relevant to the literature or research of interest become apparent. Broad trends indicate significant topics of the day, as well as the history relevant both to the work at hand and to particular areas of study. Third, trends can be represented graphically. Expanding the coverage of Web of Science, in November 2009 Thomson Reuters introduced Century of Social Sciences. This service contains files which trace social science research back to the beginning of the 20th century, and Web of Science now has indexing coverage from the year 1900 to the present. As of 3 September 2014, the multidisciplinary coverage of the Web of Science encompassed over 50,000 scholarly books, 12,000 journals, and 160,000 conference proceedings. The selection is made on the basis of impact evaluations and comprises open-access journals spanning multiple academic disciplines.
The coverage includes the sciences, social sciences, and humanities, and cuts across disciplines. However, Web of Science does not index all journals. There is a positive correlation between Impact Factor and CiteScore. However, analysis by Elsevier identified 216 journals from 70 publishers that were in the top 10 percent of the most-cited journals in their subject category based on CiteScore yet had no Impact Factor; it thus appears that the Impact Factor does not provide comprehensive and unbiased coverage of high-quality journals. Similar results can be observed by comparing Impact Factor with SCImago Journal Rank. Furthermore, as of 3 September 2014 the total file count of the Web of Science was 90 million records, including over a billion cited references; the service indexes around 65 million items per year on average and is described as the largest accessible citation database. Titles of foreign-language publications are translated into English and so cannot be found by searches in the original language.
The Web of Science Core Collection consists of six online databases. The Science Citation Index Expanded covers more than 8,500 notable journals encompassing 150 disciplines, from the year 1900 to the present day. The Social Sciences Citation Index covers more than 3,000 journals in social science disciplines, also from 1900 to the present day. The Arts & Humanities Citation Index covers more than 1,700 arts and humanities journals starting from 1975; in addition, 250 major scientific and social sciences journals are covered. The Emerging Sources Citation Index covers over 5,000 journals in the sciences, social sciences, and humanities. The Book Citation Index covers more than 60,000 editorially selected books starting from 2005. The Conference Proceedings Citation Index covers more than 160,000 conference titles in the sciences from 1990 to the present day. Since 2008, the Web of Science has hosted a number of regional citation indices; the Chinese Science Citation Database, produced in partnership with the Chinese Academy of Sciences, was the first one in a language other than English.
It was followed in 2013 by the SciELO Citation Index, covering Brazil, Portugal, the Cari