Analytical chemistry studies and uses instruments and methods to separate, identify and quantify matter. In practice, separation, identification or quantification may constitute the entire analysis or be combined with another method. Separation isolates analytes. Qualitative analysis identifies analytes, while quantitative analysis determines the numerical amount or concentration. Analytical chemistry consists of classical, wet chemical methods and modern, instrumental methods. Classical qualitative methods use separations such as precipitation and distillation. Identification may be based on differences in color, melting point, boiling point, radioactivity or reactivity. Classical quantitative analysis uses mass or volume changes to quantify amount. Instrumental methods may be used to separate samples using chromatography, electrophoresis or field flow fractionation. Qualitative and quantitative analysis can be performed with the same instrument and may use light interaction, heat interaction, electric fields or magnetic fields; often the same instrument can both separate and quantify an analyte.
Analytical chemistry is focused on improvements in experimental design and the creation of new measurement tools. It has broad applications to forensics, medicine and engineering. Analytical chemistry has been important since the early days of chemistry, providing methods for determining which elements and chemicals are present in the object in question. In the nineteenth century, significant contributions to analytical chemistry included the development of systematic elemental analysis by Justus von Liebig and systematized organic analysis based on the specific reactions of functional groups. The first instrumental analysis was flame emissive spectrometry, developed by Robert Bunsen and Gustav Kirchhoff, who discovered rubidium and caesium in 1860. Most of the major developments in analytical chemistry took place after 1900, a period during which instrumental analysis became progressively dominant in the field. In particular, many of the basic spectroscopic and spectrometric techniques were discovered in the early 20th century and refined in the late 20th century.
The separation sciences followed a similar timeline of development and likewise became transformed into high-performance instruments. In the 1970s many of these techniques began to be used together as hybrid techniques to achieve a complete characterization of samples. Starting in the 1970s and continuing into the present day, analytical chemistry has progressively become more inclusive of biological questions, whereas it had previously been focused on inorganic or small organic molecules. Lasers have been used in chemistry as probes and to initiate and influence a wide variety of reactions. The late 20th century saw an expansion of the application of analytical chemistry from somewhat academic chemical questions to forensic, environmental and medical questions, such as in histology. Modern analytical chemistry is dominated by instrumental analysis. Many analytical chemists focus on a single type of instrument. Academics tend to focus either on new applications and discoveries or on new methods of analysis; the discovery of a chemical present in blood that increases the risk of cancer is an example of a discovery an analytical chemist might be involved in.
An effort to develop a new method might involve the use of a tunable laser to increase the specificity and sensitivity of a spectrometric method. Many methods, once developed, are kept purposely static so that data can be compared over long periods of time; this is true in industrial quality assurance and environmental applications. Analytical chemistry plays an important role in the pharmaceutical industry where, aside from QA, it is used in the discovery of new drug candidates and in clinical applications where understanding the interactions between the drug and the patient is critical. Although modern analytical chemistry is dominated by sophisticated instrumentation, the roots of analytical chemistry and some of the principles used in modern instruments come from traditional techniques, many of which are still used today; these techniques tend to form the backbone of most undergraduate analytical chemistry educational labs. A qualitative analysis determines the presence or absence of a particular compound, but not the mass or concentration.
By definition, qualitative analyses do not measure quantity. There are numerous qualitative chemical tests, for example, the acid test for gold and the Kastle-Meyer test for the presence of blood. Inorganic qualitative analysis refers to a systematic scheme to confirm the presence of certain aqueous ions or elements by performing a series of reactions that eliminate ranges of possibilities and then confirm suspected ions with a confirming test. Sometimes small carbon-containing ions are included in such schemes. With modern instrumentation these tests are rarely used but can be useful for educational purposes and in field work or other situations where access to state-of-the-art instruments is not available or expedient. Quantitative analysis is the measurement of the quantities of particular chemical constituents present in a substance. Gravimetric analysis involves determining the amount of material present by weighing the sample before and/or after some transformation. A common example used in undergraduate education is the determination of the amount of water in a hydrate by heating the sample to remove the water such that the difference in weight is due to the loss of water.
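To make the gravimetric idea concrete, here is a minimal Python sketch of the hydrate calculation described above. The masses, the choice of copper(II) sulfate pentahydrate, and the variable names are illustrative assumptions, not values taken from the text.

```python
# Hypothetical masses for a copper(II) sulfate hydrate sample (illustrative values only)
mass_hydrate = 2.500    # g, sample before heating
mass_anhydrous = 1.598  # g, sample after heating to constant weight

mass_water = mass_hydrate - mass_anhydrous        # weight loss attributed to water
percent_water = 100 * mass_water / mass_hydrate

moles_water = mass_water / 18.015                 # molar mass of H2O, g/mol
moles_cuso4 = mass_anhydrous / 159.61             # molar mass of CuSO4, g/mol

print(round(percent_water, 1))              # ~36.1 % water by mass
print(round(moles_water / moles_cuso4))     # ~5, consistent with CuSO4·5H2O
```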
Titration involves the addition of a reactant to a solution being analyzed until some equivalence point is reached. The amount of material in the solution being analyzed can then be determined from the volume of titrant required to reach that point.
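As a rough illustration of how an equivalence point translates into a concentration, the sketch below works through a hypothetical acid-base titration; the reagents, volumes and 1:1 stoichiometry are assumptions made for the example, not values from the text.

```python
# Hypothetical acid-base titration: NaOH titrant of known concentration added
# to an HCl sample until the equivalence point (illustrative values only)
titrant_molarity = 0.100   # mol/L NaOH
titrant_volume_ml = 25.40  # mL delivered at the equivalence point
sample_volume_ml = 20.00   # mL of HCl solution analyzed

moles_naoh = titrant_molarity * titrant_volume_ml / 1000
moles_hcl = moles_naoh  # 1:1 stoichiometry for HCl + NaOH -> NaCl + H2O
sample_molarity = moles_hcl / (sample_volume_ml / 1000)

print(round(sample_molarity, 4))  # 0.127 mol/L
```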
International Standard Serial Number
An International Standard Serial Number (ISSN) is an eight-digit serial number used to uniquely identify a serial publication, such as a magazine. The ISSN is helpful in distinguishing between serials with the same title. ISSNs are used in ordering, interlibrary loans and other practices in connection with serial literature. The ISSN system was first drafted as an International Organization for Standardization international standard in 1971 and published as ISO 3297 in 1975. ISO subcommittee TC 46/SC 9 is responsible for maintaining the standard. When a serial with the same content is published in more than one media type, a different ISSN is assigned to each media type. For example, many serials are published both in print and in electronic media; the ISSN system refers to these as the print ISSN and the electronic ISSN, respectively. Conversely, as defined in ISO 3297:2007, every serial in the ISSN system is also assigned a linking ISSN (ISSN-L), typically the same as the ISSN assigned to the serial in its first published medium, which links together all ISSNs assigned to the serial in every medium.
The format of the ISSN is an eight-digit code, divided by a hyphen into two four-digit numbers. As an integer number, it can be represented by the first seven digits; the last code digit, which may be 0-9 or an X, is a check digit. Formally, the general form of the ISSN code can be expressed as NNNN-NNNC, where N is a digit character and C is a digit character or an upper case X. The ISSN of the journal Hearing Research, for example, is 0378-5955, where the final 5 is the check digit, that is, C = 5. To calculate the check digit, the following algorithm may be used: calculate the sum of the first seven digits of the ISSN, each multiplied by its position in the number, counting from the right (that is, by 8, 7, 6, 5, 4, 3, 2, respectively): 0 ⋅ 8 + 3 ⋅ 7 + 7 ⋅ 6 + 8 ⋅ 5 + 5 ⋅ 4 + 9 ⋅ 3 + 5 ⋅ 2 = 0 + 21 + 42 + 40 + 20 + 27 + 10 = 160. The modulus 11 of this sum is then calculated; if the remainder is 0, the check digit is 0, otherwise the remainder is subtracted from 11 to give the check digit (here 160 mod 11 = 6, and 11 − 6 = 5). For calculations, an upper case X in the check digit position indicates a check digit of 10. To confirm the check digit, calculate the sum of all eight digits of the ISSN, each multiplied by its position in the number, counting from the right.
The modulus 11 of the sum must be 0. There is an online ISSN checker. ISSN codes are assigned by a network of ISSN National Centres located at national libraries and coordinated by the ISSN International Centre based in Paris. The International Centre is an intergovernmental organization created in 1974 through an agreement between UNESCO and the French government. The International Centre maintains a database of all ISSNs assigned worldwide, the ISDS Register, otherwise known as the ISSN Register. At the end of 2016, the ISSN Register contained records for 1,943,572 items. ISSN and ISBN codes are similar in concept. An ISBN might be assigned for particular issues of a serial, in addition to the ISSN code for the serial as a whole. An ISSN, unlike the ISBN code, is an anonymous identifier associated with a serial title, containing no information as to the publisher or its location. For this reason a new ISSN is assigned to a serial each time it undergoes a major title change. Since the ISSN applies to an entire serial, a new identifier, the Serial Item and Contribution Identifier, was built on top of it to allow references to specific volumes, articles, or other identifiable components.
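The check-digit arithmetic described above is easy to script. The following Python sketch implements the weighted-sum calculation and the modulus-11 confirmation for the Hearing Research example; the function names are ours, not part of any standard library.

```python
def issn_check_digit(first_seven: str) -> str:
    """Compute the ISSN check digit from the first seven digits."""
    # Weights 8, 7, 6, 5, 4, 3, 2, applied left to right
    total = sum(int(d) * w for d, w in zip(first_seven, range(8, 1, -1)))
    remainder = total % 11
    if remainder == 0:
        return "0"
    check = 11 - remainder
    return "X" if check == 10 else str(check)

def is_valid_issn(issn: str) -> bool:
    """Confirm a full ISSN such as '0378-5955' (an X counts as 10)."""
    digits = issn.replace("-", "").upper()
    if len(digits) != 8:
        return False
    values = [10 if c == "X" else int(c) for c in digits]
    # Weights 8 down to 1; the weighted sum must be divisible by 11
    total = sum(v * w for v, w in zip(values, range(8, 0, -1)))
    return total % 11 == 0

print(issn_check_digit("0378595"))  # -> 5
print(is_valid_issn("0378-5955"))   # -> True
```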
Separate ISSNs are needed for serials in different media. Thus, the print and electronic media versions of a serial need separate ISSNs, and a CD-ROM version and a web version of a serial require different ISSNs since two different media are involved. However, the same ISSN can be used for different file formats of the same online serial. This "media-oriented identification" of serials made sense in the 1970s. In the 1990s and onward, with personal computers, better screens and the Web, it makes sense to consider only content, independent of media. This "content-oriented identification" of serials was a repressed demand for a decade, but no ISSN update or initiative occurred. A natural extension of the ISSN, the unique identification of the articles in the serials, was the main demanded application. An alternative model for serials' contents arrived with the indecs Content Model and its application, the digital object identifier, an ISSN-independent initiative consolidated in the 2000s. Only in 2007 was the ISSN-L defined, in the revised standard ISO 3297:2007.
In statistics, inter-rater reliability is the degree of agreement among raters. It is a score of how much homogeneity, or consensus, there is in the ratings given by various judges. In contrast, intra-rater reliability is a score of the consistency in ratings given by the same person across multiple instances. Inter-rater and intra-rater reliability are aspects of test validity. Assessments of them are useful in refining the tools given to human judges, for example by determining if a particular scale is appropriate for measuring a particular variable. If various raters do not agree, either the scale is defective or the raters need to be re-trained. There are a number of statistics that can be used to determine inter-rater reliability. Different statistics are appropriate for different types of measurement; some options are: joint probability of agreement, Cohen's kappa, Scott's pi and the related Fleiss' kappa, inter-rater correlation, concordance correlation coefficient, intra-class correlation, and Krippendorff's alpha. For any task in which multiple raters are useful, raters are expected to disagree about the observed target.
By contrast, situations involving unambiguous measurement, such as simple counting tasks, do not require more than one person performing the measurement. Measurements involving ambiguity in the characteristics of interest in the rating target are improved with multiple trained raters; such measurement tasks involve subjective judgment of quality. Variation across raters in the measurement procedures and variability in interpretation of measurement results are two examples of sources of error variance in rating measurements. Clearly stated guidelines for rendering ratings are necessary for reliability in ambiguous or challenging measurement scenarios. Without scoring guidelines, ratings are affected by experimenter's bias, that is, a tendency of rating values to drift towards what is expected by the rater. During processes involving repeated measurements, correction of rater drift can be addressed through periodic retraining to ensure that raters understand the guidelines and measurement goals. There are several operational definitions of "inter-rater reliability", reflecting different viewpoints about what constitutes reliable agreement between raters.
There are three operational definitions of agreement: (1) reliable raters agree with the "official" rating of a performance; (2) reliable raters agree with each other about the exact ratings to be awarded; and (3) reliable raters agree about which performance is better and which is worse. These combine with two operational definitions of rater behavior. The joint probability of agreement is the simplest and least robust measure. It is estimated as the percentage of the time the raters agree in a nominal or categorical rating system, but it does not take into account the fact that agreement may happen based on chance alone. There is some question of whether agreement should be corrected for chance; when the number of categories being used is small, the likelihood for two raters to agree by pure chance increases dramatically. This is because both raters must confine themselves to the limited number of options available, which impacts the overall agreement rate, not their propensity for "intrinsic" agreement. Therefore, the joint probability of agreement can remain high even in the absence of any "intrinsic" agreement among raters.
A useful inter-rater reliability coefficient is expected to be close to 0 when there is no "intrinsic" agreement, and to increase as the "intrinsic" agreement rate improves. Most chance-corrected agreement coefficients achieve the first objective. However, the second objective is not achieved by many known chance-corrected measures. Cohen's kappa, which works for two raters, and Fleiss' kappa, an adaptation that works for any fixed number of raters, improve upon the joint probability in that they take into account the amount of agreement that could be expected to occur through chance. However, they suffer from the same problem as the joint probability in that they treat the data as nominal and assume the ratings have no natural ordering; if the data do have an order, some of the information in the measurements is not taken advantage of. Pearson's r, Kendall's τ, or Spearman's ρ can be used to measure pairwise correlation among raters using a scale that is ordered. Pearson assumes the rating scale is continuous; Kendall and Spearman statistics assume only that it is ordinal. If more than two raters are observed, an average level of agreement for the group can be calculated as the mean of the r, τ, or ρ values from each possible pair of raters.
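As a concrete illustration of the difference between raw agreement and chance-corrected agreement, here is a small Python sketch computing the joint probability of agreement and Cohen's kappa for two raters; the ratings and function names are made up for the example.

```python
from collections import Counter

def joint_probability(r1, r2):
    """Fraction of items on which the two raters give the identical rating."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters over nominal categories."""
    n = len(r1)
    p_o = joint_probability(r1, r2)  # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    # Chance agreement: probability both raters independently pick the same category
    p_e = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))
    return (p_o - p_e) / (1 - p_e)

rater1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater2 = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no"]
print(joint_probability(rater1, rater2))  # 0.75
print(cohens_kappa(rater1, rater2))       # 0.5, after correcting for chance
```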
Another way of performing reliability testing is to use the intra-class correlation coefficient (ICC). There are several types of ICC; one is defined as "the proportion of variance of an observation due to between-subject variability in the true scores". The range of the ICC may be between 0.0 and 1.0. The ICC will be high when there is little variation between the scores given to each item by the raters, e.g. if all raters give the same or similar scores to each of the items. The
Douglas Altman FMedSci was an English statistician best known for his work on improving the reliability and reporting of medical research and for highly cited papers on statistical methodology. He was professor of statistics in medicine at the University of Oxford, director of the Centre for Statistics in Medicine and the Cancer Research UK Medical Statistics Group, and co-founder of the international EQUATOR Network for health research reliability. Doug Altman graduated in 1970 with an Honours degree in Statistics from Bath University of Technology, now the University of Bath. His first job was in the Department of Community Medicine, St Thomas's Hospital Medical School, London. He then spent 11 years working for the Medical Research Council's Clinical Research Centre, where he worked entirely as a statistical consultant in a wide variety of medical areas. In 1988 Doug Altman became head of the newly formed Medical Statistics Laboratory at the Imperial Cancer Research Fund, and in 1995 he became founding director of the Centre for Statistics in Medicine in Oxford.
In 1998 he was made Professor of Statistics in Medicine by the University of Oxford. Altman was chief statistical advisor to the British Medical Journal, where he was a member of the editorial "hanging committee", and co-convenor of the Statistical Methods Group of the Cochrane Collaboration. Altman was regarded as a leading authority on the execution and reporting of health research and played a leading role in establishing better standards. He was one of the co-founders of the international EQUATOR health research reliability network and a member of the CONSORT Group from 1999, a group dedicated to offering a standardised way for researchers to report trials. He was also one of the original authors of the IDEAL framework for improving surgical research. Altman's publications on statistical education, many co-authored with his long-standing collaborator Martin Bland, are well known among the medical profession, being noted for their practical relevance and clarity. His textbook Practical Statistics for Medical Research, published in 1991, has sold 50,000 copies in hardback.
Altman was the author of over 450 papers in statistical methodology, with 11 being cited over 1,000 times. Among them is one Lancet paper that has been cited over 23,000 times and is ranked 29th in the Nature/Web of Science Top 100 most-cited research papers of all time. Altman was awarded the Bradford Hill Medal by the Royal Statistical Society for his contributions to medical statistics in 1997, and a DSc from the University of London in the same year. In 2015 Altman was awarded a lifetime achievement award by the BMJ, where he was credited by the editor, Dr Fiona Godlee, with having "done more than anyone else to encourage researchers to report what they did, warts and all, rather than letting the best be the enemy of the good or pretending that research is perfect". Altman was editor in chief of Trials, and a Fellow of the Academy of Medical Sciences and the Royal Statistical Society. Altman, Douglas G. Practical Statistics for Medical Research. Monographs on Statistics and Applied Probability. Chapman & Hall. ISBN 978-1-58488-039-4.
Practical Statistics for Medical Research. Douglas G. Altman. ISBN 0-412-27630-5. Systematic Reviews in Healthcare: Meta-Analysis in Context. Editors: Douglas G. Altman, Iain Chalmers, Gerd Antes, Michael Bradburn, Mike Clarke, Matthias Egger, George Davey Smith. ISBN 0-7279-1488-X. Statistics With Confidence: Confidence Intervals and Statistical Guidelines. Editors: Douglas G. Altman, David Machin, T. N. Bryant, Martin J. Gardner. ISBN 0-7279-0222-9. Systematic Reviews. Editors: Douglas G. Altman, Iain Chalmers. ISBN 0-7279-0904-5. Statistics in Practice: Articles Published in the British Medical Journal. Editors: Sheila M. Gore, Douglas G. Altman. ISBN 0-7279-0085-4. A list of over 396 articles by Doug Altman is available through PubMed. Moher D, Schulz KF and Altman DG for the CONSORT Group. Revised recommendations for improving the quality of reports of parallel group randomized trials. Lancet 14, 1191-4. Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement.
Lancet i, 307-310. BMJ Statistical Notes: a series of short articles on the use of statistics by Doug Altman and his longtime collaborator Martin Bland. Altman DG, Bland JM. Measurement in medicine: the analysis of method comparison studies. The Statistician 32, 307-317. Bland JM, Altman DG. Measuring agreement in method comparison studies. Statistical Methods in Medical Research 8, 135-160. Bland JM, Altman DG. Comparing methods of measurement: why plotting difference against standard method is misleading. Lancet 346, 1085-1087.
OCLC Online Computer Library Center, Incorporated, d/b/a OCLC, is an American nonprofit cooperative organization "dedicated to the public purposes of furthering access to the world's information and reducing information costs". It was founded in 1967 as the Ohio College Library Center. OCLC and its member libraries cooperatively produce and maintain WorldCat, the largest online public access catalog in the world. OCLC is funded by the fees that libraries pay for its services. OCLC also maintains the Dewey Decimal Classification system. OCLC began in 1967, as the Ohio College Library Center, through a collaboration of university presidents, vice presidents and library directors who wanted to create a cooperative computerized network for libraries in the state of Ohio. The group first met on July 5, 1967 on the campus of the Ohio State University to sign the articles of incorporation for the nonprofit organization, and hired Frederick G. Kilgour, a former Yale University medical school librarian, to design the shared cataloging system.
Kilgour wished to merge the latest information storage and retrieval system of the time, the computer, with the oldest, the library. The plan was to merge the catalogs of Ohio libraries electronically through a computer network and database in order to streamline operations, control costs and increase efficiency in library management, bringing libraries together to cooperatively keep track of the world's information in order to best serve researchers and scholars. The first library to do online cataloging through OCLC was the Alden Library at Ohio University, on August 26, 1971; this was the first online cataloging by any library worldwide. Membership in OCLC is based on use of services and contribution of data. Between 1967 and 1977, OCLC membership was limited to institutions in Ohio, but in 1978 a new governance structure was established that allowed institutions from other states to join. In 2002, the governance structure was again modified to accommodate participation from outside the United States.
As OCLC expanded services in the United States outside Ohio, it relied on establishing strategic partnerships with "networks", organizations that provided training and marketing services. By 2008, there were 15 independent United States regional service providers. OCLC networks played a key role in OCLC governance, with networks electing delegates to serve on the OCLC Members Council. During 2008, OCLC commissioned two studies to look at distribution channels. In early 2009, OCLC negotiated new contracts with the former networks and opened a centralized support center. OCLC provides bibliographic and full-text information to anyone. OCLC and its member libraries cooperatively produce and maintain WorldCat (the OCLC Online Union Catalog), the largest online public access catalog in the world. WorldCat has holding records from public and private libraries worldwide. The Open WorldCat program, launched in late 2003, exposed a subset of WorldCat records to Web users via popular Internet search and bookselling sites.
In October 2005, the OCLC technical staff began a wiki project, WikiD, allowing readers to add commentary and structured-field information associated with any WorldCat record; WikiD was later phased out. The Online Computer Library Center acquired the trademark and copyrights associated with the Dewey Decimal Classification System when it bought Forest Press in 1988. A browser for books with their Dewey Decimal Classifications was available until July 2013. Until August 2009, when it was sold to Backstage Library Works, OCLC owned a preservation microfilm and digitization operation called the OCLC Preservation Service Center, with its principal office in Bethlehem, Pennsylvania. The reference management service QuestionPoint provides libraries with tools to communicate with users; this around-the-clock reference service is provided by a cooperative of participating global libraries. Starting in 1971, OCLC produced catalog cards for members alongside its shared online catalog. OCLC also commercially sells software, such as CONTENTdm for managing digital collections.
It offers the bibliographic discovery system WorldCat Discovery, which allows library patrons to use a single search interface to access an institution's catalog, database subscriptions and more. OCLC has been conducting research for the library community for more than 30 years. In accordance with its mission, OCLC makes its research outcomes known through various publications; these publications, including journal articles, reports and presentations, are available through the organization's website. OCLC Publications – research articles from various journals including Code4Lib Journal, OCLC Research, Reference & User Services Quarterly, College & Research Libraries News, Art Libraries Journal and National Education Association Newsletter; the most recent publications are displayed first, and all archived resources, starting in 1970, are available. Membership Reports – a number of significant reports on topics ranging from virtual reference in libraries to perceptions about library funding. Newsletters – current and archived newsletters for the library and archive community.
Presentations – Presentations from both guest speakers and OCLC research from conferences and other events. The presentations are organized into five categories: Conference presentations, Dewey presentations, Distinguished Seminar Series, Guest presentations, Research staff
MedCalc is a statistical software package designed for the biomedical sciences. It can import files in several formats. MedCalc includes basic parametric and non-parametric statistical procedures and graphs, such as descriptive statistics, ANOVA, Mann–Whitney test, Wilcoxon test, χ2 test, linear as well as non-linear regression, and logistic regression. Survival analysis includes Kaplan–Meier survival analysis. Procedures for method evaluation and method comparison include ROC curve analysis, Bland–Altman plot, as well as Deming and Passing–Bablok regression; the software also includes meta-analysis and sample size calculations. The first DOS version of MedCalc was released in April 1993 and the first version for Windows was available in November 1996. On 7 March 2007, version 9.3 obtained the Certified for Windows Vista logo. Version 15.2 introduced a user interface in English, French, Italian, Korean, Portuguese and Spanish. Stephan C, Wesseling S, Schink T, Jung K. "Comparison of eight computer programs for receiver-operating characteristic analysis."
Clinical Chemistry 2003, doi:10.1373/49.3.433. Lukic IK. "MedCalc Version 18.104.22.168. Software Review." Croatian Medical Journal 2003. Garber C. "MedCalc Software for Statistics in Medicine. Software review." Clinical Chemistry, 1998. Petrovecki M. "MedCalc for Windows. Software Review." Croatian Medical Journal, 1997.
In statistics, an outlier is an observation point that is distant from other observations. An outlier may be due to variability in the measurement or it may indicate experimental error. An outlier can cause serious problems in statistical analyses. Outliers can occur by chance in any distribution, but they often indicate either measurement error or that the population has a heavy-tailed distribution. In the former case one wishes to discard them or use statistics that are robust to outliers, while in the latter case they indicate that the distribution has high skewness and that one should be cautious in using tools or intuitions that assume a normal distribution. A frequent cause of outliers is a mixture of two distributions, which may be two distinct sub-populations, or may indicate "correct trial" versus "measurement error". In most larger samplings of data, some data points will be further away from the sample mean than what is deemed reasonable; this can be due to incidental systematic error or flaws in the theory that generated an assumed family of probability distributions, or it may be that some observations are far from the center of the data.
Outlier points can therefore indicate faulty data, erroneous procedures, or areas where a certain theory might not be valid. However, in large samples, a small number of outliers is to be expected. Outliers, being the most extreme observations, may include the sample maximum or sample minimum, or both, depending on whether they are extremely high or low. However, the sample maximum and minimum are not always outliers because they may not be unusually far from other observations. Naive interpretation of statistics derived from data sets that include outliers may be misleading. For example, if one is calculating the average temperature of 10 objects in a room, and nine of them are between 20 and 25 degrees Celsius but an oven is at 175 °C, the median of the data will be between 20 and 25 °C but the mean temperature will be between 35.5 and 40 °C. In this case, the median better reflects the temperature of a randomly sampled object than the mean; as illustrated in this case, outliers may indicate data points that belong to a different population than the rest of the sample set.
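The oven example can be checked directly. Here is a minimal Python sketch with one plausible set of temperatures; the exact values are assumptions, chosen only to match the ranges in the text.

```python
import statistics

# Nine room-temperature objects (20-25 degrees C) plus one oven at 175 degrees C
temps = [20, 21, 22, 22, 23, 23, 24, 24, 25, 175]

print(statistics.median(temps))  # 23.0 - close to a typical object
print(statistics.mean(temps))    # 37.9 - pulled far upward by the single outlier
```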
Estimators capable of coping with outliers are said to be robust: the median is a robust statistic of central tendency, while the mean is not; the mean is, however, generally a more precise estimator. In the case of normally distributed data, the three sigma rule means that roughly 1 in 22 observations will differ by twice the standard deviation or more from the mean, and 1 in 370 will deviate by three times the standard deviation. In a sample of 1000 observations, the presence of up to five observations deviating from the mean by more than three times the standard deviation is within the range of what can be expected, being less than twice the expected number and hence within 1 standard deviation of the expected number (see Poisson distribution), and does not indicate an anomaly. If the sample size is only 100, however, just three such outliers are already reason for concern, being more than 11 times the expected number. In general, if the nature of the population distribution is known a priori, it is possible to test whether the number of outliers deviates significantly from what can be expected: for a given cutoff of a given distribution, the number of outliers will follow a binomial distribution with parameter p, which can generally be well approximated by the Poisson distribution with λ = pn.
Thus if one takes a normal distribution with a cutoff of 3 standard deviations from the mean, p is approximately 0.3%, and for 1000 trials one can approximate the number of samples whose deviation exceeds 3 sigmas by a Poisson distribution with λ = 3. Outliers can have many anomalous causes. A physical apparatus for taking measurements may have suffered a transient malfunction. There may have been an error in data transcription. Outliers arise due to changes in system behaviour, fraudulent behaviour, human error, instrument error or through natural deviations in populations. A sample may have been contaminated with elements from outside the population being examined. Alternatively, an outlier could be the result of a flaw in the assumed theory, calling for further investigation by the researcher. Additionally, the pathological appearance of outliers of a certain form appears in a variety of datasets, indicating that the causative mechanism for the data might differ at the extreme end. There is no rigid mathematical definition of what constitutes an outlier.
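The expected count of 3-sigma observations, and how unsurprising a handful of them is, can be worked out as in the following sketch. It uses the text's rounded p of about 0.3% and a hand-rolled Poisson probability mass function; the function and variable names are ours.

```python
import math

def poisson_pmf(k, lam):
    """P(K = k) for a Poisson distribution with mean lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

p = 0.003      # probability a normal observation lies more than 3 sigma from the mean (~0.3%)
n = 1000       # sample size
lam = p * n    # expected number of 3-sigma observations

# Probability of seeing at most five such observations in 1000 draws
prob_up_to_five = sum(poisson_pmf(k, lam) for k in range(6))

print(lam)              # 3.0
print(prob_up_to_five)  # ~0.92, so five 3-sigma points in 1000 is unremarkable
```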
There are various methods of outlier detection. Some are graphical, such as normal probability plots; others are model-based; box plots are a hybrid. Model-based methods which are used for identification assume that the data are from a normal distribution and identify observations which are deemed "unlikely" based on mean and standard deviation: examples include Chauvenet's criterion, Grubbs's test for outliers, Dixon's Q test and ASTM E178, Standard Practice for Dealing With Outlying Observations. Mahalanobis distance and leverage are often used to detect outliers in the development of linear regression models, while subspace and correlation based techniques are used for high-dimensional numerical data. Peirce's criterion proposes "to determine in a series of m observations the limit of error, beyond which all observations involving so great an error may be rejected, provided there are as many as n such observations."
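As a toy version of the normal-distribution-based tests listed above, the sketch below flags points far from the sample mean in units of the standard deviation. Real tests such as Grubbs's or Dixon's Q use sharper, sample-size-aware cutoffs, so this is only an illustration; the function name and data are assumptions.

```python
import statistics

def sigma_outliers(data, k=2.0):
    """Flag points more than k sample standard deviations from the sample mean.

    A crude screen only: a large outlier inflates both the mean and the standard
    deviation, so a strict k = 3 cutoff can fail to flag it (one reason the
    formal tests above exist).
    """
    mean = statistics.mean(data)
    sd = statistics.stdev(data)
    return [x for x in data if abs(x - mean) > k * sd]

temps = [20, 21, 22, 22, 23, 23, 24, 24, 25, 175]
print(sigma_outliers(temps, k=2.0))  # [175]
```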