The London Inter-bank Offered Rate is an interest-rate average calculated from estimates submitted by the leading banks in London. Each bank estimates what it would be charged were it to borrow from other banks; the resulting rate is abbreviated to Libor or LIBOR or, more officially, ICE LIBOR. It was known as BBA Libor before responsibility for its administration was transferred to Intercontinental Exchange. It is the primary benchmark, along with the Euribor, for short-term interest rates around the world. Libor rates are calculated for five currencies and seven borrowing periods ranging from overnight to one year and are published each business day by Thomson Reuters. Many financial institutions, mortgage lenders and credit card agencies set their own rates relative to it. At least $350 trillion in derivatives and other financial products are tied to Libor. In June 2012, multiple criminal settlements by Barclays Bank revealed significant fraud and collusion by member banks connected to the rate submissions, leading to the Libor scandal.
The British Bankers' Association said on 25 September 2012 that it would transfer oversight of Libor to UK regulators, as proposed by the independent review led by Financial Services Authority managing director Martin Wheatley. Wheatley's review recommended that banks submitting rates to Libor base them on actual inter-bank deposit market transactions and keep records of those transactions, that individual banks' Libor submissions be published after three months, and that criminal sanctions be introduced for manipulation of benchmark interest rates. Financial institution customers may experience higher and more volatile borrowing and hedging costs after implementation of the recommended reforms. The UK government agreed to accept all of the Wheatley Review's recommendations and to press for legislation implementing them. Significant reforms, in line with the Wheatley Review, came into effect in 2013, and a new administrator took over in early 2014; the British government regulates Libor through criminal and regulatory laws passed by Parliament.
In particular, the Financial Services Act 2012 brings Libor under UK regulatory oversight and creates a criminal offence for knowingly or deliberately making false or misleading statements relating to benchmark-setting. The London Interbank Offered Rate came into widespread use in the 1970s as a reference interest rate for transactions in offshore Eurodollar markets. In 1984, it became apparent that an increasing number of banks were trading in a variety of new market instruments, notably interest rate swaps, foreign currency options and forward rate agreements. While recognizing that such instruments brought more business and greater depth to the London inter-bank market, bankers worried that future growth could be inhibited unless a measure of uniformity was introduced. In October 1984, the British Bankers' Association—working with other parties, such as the Bank of England—established various working parties, which culminated in the production of the BBA standard for interest rate swaps, or "BBAIRS" terms.
Part of this standard included the fixing of BBA interest-settlement rates, the predecessor of BBA Libor. From 2 September 1985, the BBAIRS terms became standard market practice. BBA Libor fixings did not commence before 1 January 1986, though some rates were fixed for a trial period commencing in December 1984. The BBA's member banks are international in scope, with more than sixty nations represented among its 223 members and 37 associated professional firms as of 2008. Seventeen banks, for example, contribute to the fixing of US dollar Libor; the panel comprises the following member banks: Bank of America, Bank of Tokyo-Mitsubishi UFJ, Barclays Bank, Citibank NA, Credit Agricole CIB, Credit Suisse, Deutsche Bank, HSBC, JP Morgan Chase, Lloyds Banking Group, Rabobank, Royal Bank of Canada, Société Générale, Sumitomo Mitsui Banking Corporation Europe Ltd, Norinchukin Bank, Royal Bank of Scotland and UBS AG. Libor is used as a reference rate for many financial instruments in both financial markets and commercial fields.
There are three major classes of instruments that use Libor as their reference rate: standard interbank products, commercial field products, and hybrid products. Standard interbank products include forward rate agreements, interest rate futures (e.g. Eurodollar futures), interest rate swaps, swaptions, overnight indexed swaps (e.g. the Libor–OIS spread), and interest rate options such as caps and floors. Commercial field products include floating rate notes, floating rate certificates of deposit, syndicated loans, variable rate mortgages, and term loans. Hybrid products include range accrual notes, step-up callable notes, target redemption notes, hybrid perpetual notes, collateralized mortgage obligations, and collateralized debt obligations. In the United States in 2008, around sixty percent of prime adjustable-rate mortgages and nearly all subprime mortgages were indexed to the US dollar Libor. In 2012, around 45 percent of prime adjustable-rate mortgages and more than 80 percent of subprime mortgages were indexed to the Libor.
American municipalities borrowed around 75 percent of their money through financial products that were linked to the Libor. In the UK, the three-month British pound Libor is used for some mortgages, especially for those with adverse credit history. The Swiss franc Libor is used by the Swiss National Bank as its reference rate for monetary policy. The usual reference rate for euro-denominated interest rate products, however, is the Euribor, compiled by the European Banking Federation from a larger bank panel. A euro Libor does exist, but for continuity
The median is the value separating the higher half from the lower half of a data sample. For a data set, it may be thought of as the "middle" value. For example, in the data set 1, 3, 3, 6, 7, 8, 9, the median is 6, the fourth largest and also the fourth smallest number in the sample. For a continuous probability distribution, the median is the value such that a number is equally likely to fall above or below it. The median is a commonly used measure of the properties of a data set in statistics and probability theory. The basic advantage of the median in describing data compared to the mean is that it is not skewed so much by a small proportion of extremely large or small values, so it may give a better idea of a "typical" value. For example, in understanding statistics like household income or assets, which vary widely, a mean may be skewed by a small number of extremely high or low values. Median income, for example, may be a better way to suggest what a "typical" income is. Because of this, the median is of central importance in robust statistics, as it is the most resistant statistic, having a breakdown point of 50%: so long as no more than half the data are contaminated, the median will not give an arbitrarily large or small result.
The median of a finite list of numbers can be found by arranging all the numbers from smallest to greatest. If there is an odd number of observations, the middle one is picked. For example, consider the list of numbers 1, 3, 3, 6, 7, 8, 9. This list contains seven numbers; the median is the fourth of them, 6. If there is an even number of observations, there is no single middle value. For example, in the data set 1, 2, 3, 4, 5, 6, 8, 9, the median is the mean of the middle two numbers: (4 + 5) / 2 = 4.5. The formula used to find the index of the middle number of a data set of n numerically ordered numbers is (n + 1) / 2; this gives either a whole number or the halfway point between the two middle values. For example, with 14 values the formula gives an index of 7.5, and the median is taken by averaging the seventh and eighth values. So for an even number of observations the median can be represented by the formula median = (a⌈#x÷2⌉ + a⌈#x÷2+1⌉) / 2, where #x is the number of observations and the subscripts index the ordered values. One can also find the median using a stem-and-leaf plot. There is no accepted standard notation for the median, but some authors represent the median of a variable x as x͂, as μ1/2, or sometimes as M.
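The procedure just described can be sketched in Python; this is an illustrative helper (the name `median_of` is ours, not from the text), mirroring the odd/even rule above:

```python
def median_of(values):
    """Return the median of a non-empty list of numbers.

    Sorts the data, then picks the middle element (odd count)
    or averages the two middle elements (even count).
    """
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        # odd count: the single middle value
        return ordered[mid]
    # even count: mean of the two middle values
    return (ordered[mid - 1] + ordered[mid]) / 2

print(median_of([1, 3, 3, 6, 7, 8, 9]))     # prints 6
print(median_of([1, 2, 3, 4, 5, 6, 8, 9]))  # prints 4.5
```

The two calls reproduce the seven-number and eight-number examples from the text.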
In any of these cases, the use of these or other symbols for the median needs to be explicitly defined when they are introduced. The median is used primarily for skewed distributions, which it summarizes differently from the arithmetic mean. Consider the multiset 1, 2, 2, 2, 3, 14: the median is 2 in this case, and it might be seen as a better indication of central tendency than the arithmetic mean of 4. The median is a popular summary statistic used in descriptive statistics, since it is simple to understand and easy to calculate, while giving a measure that is more robust in the presence of outlier values than is the mean. The often-cited empirical relationship between the relative locations of the mean and the median for skewed distributions is, however, not generally true. There are, however, various relationships for the absolute difference between them. With an even number of observations, no value need be exactly at the value of the median. Nonetheless, the value of the median is uniquely determined with the usual definition. A related concept, in which the outcome is forced to correspond to a member of the sample, is the medoid.
In a population, at most half have values strictly less than the median and at most half have values strictly greater than it. If each group contains less than half the population, then some of the population is exactly equal to the median. For example, if a < b < c, the median of the list a, b, c is b, and if a < b < c < d, the median of the list a, b, c, d is the mean of b and c. Indeed, as it is based on the middle data in a group, it is not necessary to know the value of extreme results in order to calculate a median. For example, in a psychology test investigating the time needed to solve a problem, if a small number of people failed to solve the problem at all in the given time, a median can still be calculated. The median can be used as a measure of location when a distribution is skewed, when end-values are not known, or when one requires reduced importance to be attached to outliers, e.g. because they may be measurement errors. A median is only defined on ordered one-dimensional data and is independent of any distance metric. A geometric median, on the other hand, is defined in any number of dimensions.
The median is one of a number of ways
In statistics, an outlier is an observation point that is distant from other observations. An outlier may be due to variability in the measurement or it may indicate experimental error. An outlier can cause serious problems in statistical analyses. Outliers can occur by chance in any distribution, but they often indicate either measurement error or that the population has a heavy-tailed distribution. In the former case one wishes to discard them or use statistics that are robust to outliers, while in the latter case they indicate that the distribution has high skewness and that one should be cautious in using tools or intuitions that assume a normal distribution. A frequent cause of outliers is a mixture of two distributions, which may be two distinct sub-populations, or may indicate 'correct trial' versus 'measurement error'. In most larger samplings of data, some data points will be further away from the sample mean than what is deemed reasonable; this can be due to incidental systematic error or flaws in the theory that generated an assumed family of probability distributions, or it may be that some observations are simply far from the center of the data.
Outlier points can therefore indicate faulty data, erroneous procedures, or areas where a certain theory might not be valid. However, in large samples, a small number of outliers is to be expected. Outliers, being the most extreme observations, may include the sample maximum or sample minimum, or both, depending on whether they are extremely high or low. However, the sample maximum and minimum are not always outliers because they may not be unusually far from other observations. Naive interpretation of statistics derived from data sets that include outliers may be misleading. For example, if one is calculating the average temperature of 10 objects in a room, and nine of them are between 20 and 25 degrees Celsius but an oven is at 175 °C, the median of the data will be between 20 and 25 °C but the mean temperature will be between 35.5 and 40 °C. In this case, the median better reflects the temperature of a randomly sampled object than the mean; as illustrated in this case, outliers may indicate data points that belong to a different population than the rest of the sample set.
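The oven example can be checked numerically; the nine room-temperature values below are illustrative readings chosen from the stated 20–25 °C range:

```python
from statistics import mean, median

# nine objects between 20 and 25 °C, plus one oven at 175 °C
readings = [23, 21, 25, 22, 24, 20, 23, 22, 24, 175]

print(mean(readings))    # 37.9 -- pulled far above any typical object
print(median(readings))  # 23.0 -- stays inside the 20-25 °C range
```

The mean lands in the 35.5–40 °C band predicted by the text, while the median remains a typical room-temperature value.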
Estimators capable of coping with outliers are said to be robust: the median is a robust statistic of central tendency, while the mean is not. However, the mean is generally a more precise estimator. In the case of normally distributed data, the three sigma rule means that roughly 1 in 22 observations will differ by twice the standard deviation or more from the mean, and 1 in 370 will deviate by three times the standard deviation. In a sample of 1000 observations, the presence of up to five observations deviating from the mean by more than three times the standard deviation is within the range of what can be expected, being less than twice the expected number and hence within 1 standard deviation of the expected number (see Poisson distribution), and need not indicate an anomaly. If the sample size is only 100, however, just three such outliers are already reason for concern, being more than 11 times the expected number. In general, if the nature of the population distribution is known a priori, it is possible to test whether the number of outliers deviates from what can be expected: for a given cutoff of a given distribution, the number of outliers will follow a binomial distribution with parameter p, which can be well-approximated by the Poisson distribution with λ = pn.
Thus if one takes a normal distribution with a cutoff of 3 standard deviations from the mean, p is about 0.3%, and thus for 1000 trials one can approximate the number of samples whose deviation exceeds 3 sigmas by a Poisson distribution with λ = 3. Outliers can have many anomalous causes. A physical apparatus for taking measurements may have suffered a transient malfunction. There may have been an error in data transcription. Outliers arise due to changes in system behaviour, fraudulent behaviour, human error, instrument error or through natural deviations in populations. A sample may have been contaminated with elements from outside the population being examined. Alternatively, an outlier could be the result of a flaw in the assumed theory, calling for further investigation by the researcher. Additionally, the pathological appearance of outliers of a certain form appears in a variety of datasets, indicating that the causative mechanism for the data might differ at the extreme end. There is no rigid mathematical definition of what constitutes an outlier.
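The 1-in-370 and λ = pn figures can be reproduced with the standard normal tail and the Poisson formula (a sketch; note the text rounds the two-sided 3-sigma tail of about 0.27% up to 0.3%, and the corresponding λ = 2.7 up to 3):

```python
import math

def two_sided_tail(k):
    """P(|Z| > k) for a standard normal variable Z."""
    return math.erfc(k / math.sqrt(2))

def poisson_pmf(lam, k):
    """Poisson probability of exactly k events when the mean is lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

p = two_sided_tail(3)      # about 0.0027, i.e. roughly 1 in 370
for n in (1000, 100):
    lam = p * n            # expected number of 3-sigma observations
    # probability of observing five or more such observations
    p_ge_5 = 1 - sum(poisson_pmf(lam, k) for k in range(5))
    print(n, round(lam, 2), round(p_ge_5, 4))
```

At n = 1000 the expectation is about 2.7, so five extreme points are unremarkable; at n = 100 the expectation is only about 0.27, so even three such points would be far above it.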
There are various methods of outlier detection. Some are graphical, such as normal probability plots; others are model-based; box plots are a hybrid. Model-based methods which are used for identification assume that the data are from a normal distribution and identify observations which are deemed "unlikely" based on mean and standard deviation: Chauvenet's criterion, Grubbs's test for outliers, Dixon's Q test, and the ASTM E178 Standard Practice for Dealing With Outlying Observations. Mahalanobis distance and leverage are used to detect outliers in the development of linear regression models, and subspace- and correlation-based techniques are used for high-dimensional numerical data. It has been proposed to determine in a series of m observations the limit of error, beyond which all observations involving so great an error may be rejected, provided there are
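As a minimal illustration of this mean-and-standard-deviation family of methods, here is a plain z-score cutoff (deliberately simpler than the named tests such as Chauvenet's or Grubbs's; the data are invented):

```python
from statistics import mean, stdev

def flag_outliers(data, cutoff=3.0):
    """Return the values lying more than `cutoff` sample standard
    deviations from the sample mean. A naive rule: the suspect points
    themselves inflate the mean and stdev, which is what the dedicated
    tests above are designed to correct for.
    """
    m, s = mean(data), stdev(data)
    return [x for x in data if abs(x - m) > cutoff * s]

sample = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7,
          10.3, 10.1, 9.9, 10.0, 10.2, 25.0]
print(flag_outliers(sample))  # [25.0]
```

With a dozen tightly clustered readings, the single extreme value exceeds the 3-sigma cutoff despite its own influence on the estimates.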
Statistics is a branch of mathematics dealing with data collection, analysis and presentation. In applying statistics to, for example, a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse topics such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments (see the glossary of probability and statistics). When census data cannot be collected, statisticians collect data by developing specific experiment designs and survey samples. Representative sampling assures that inferences and conclusions can reasonably extend from the sample to the population as a whole. An experimental study involves taking measurements of the system under study, manipulating the system, and taking additional measurements using the same procedure to determine if the manipulation has modified the values of the measurements.
In contrast, an observational study does not involve experimental manipulation. Two main statistical methods are used in data analysis: descriptive statistics, which summarize data from a sample using indexes such as the mean or standard deviation, and inferential statistics, which draw conclusions from data that are subject to random variation. Descriptive statistics are most often concerned with two sets of properties of a distribution: central tendency seeks to characterize the distribution's central or typical value, while dispersion characterizes the extent to which members of the distribution depart from its center and from each other. Inferences in mathematical statistics are made under the framework of probability theory, which deals with the analysis of random phenomena. A standard statistical procedure involves testing the relationship between two statistical data sets, or between a data set and synthetic data drawn from an idealized model. A hypothesis is proposed for the statistical relationship between the two data sets, and this is compared as an alternative to an idealized null hypothesis of no relationship between the two data sets.
Rejecting or disproving the null hypothesis is done using statistical tests that quantify the sense in which the null can be proven false, given the data that are used in the test. Working from a null hypothesis, two basic forms of error are recognized: Type I errors (rejecting a true null hypothesis) and Type II errors (failing to reject a false null hypothesis). Multiple problems have come to be associated with this framework, ranging from obtaining a sufficient sample size to specifying an adequate null hypothesis. Measurement processes that generate statistical data are also subject to error. Many of these errors are classified as random or systematic, but other types of errors can be important as well. The presence of missing data or censoring may result in biased estimates, and specific techniques have been developed to address these problems. Statistics can be said to have begun in ancient civilization, going back at least to the 5th century BC, but it was not until the 18th century that it started to draw more heavily from calculus and probability theory. In more recent years statistics has relied more on statistical software to produce analyses such as descriptive statistics and statistical tests.
Some definitions are as follows: the Merriam-Webster dictionary defines statistics as "a branch of mathematics dealing with the collection, analysis and presentation of masses of numerical data," while the statistician Arthur Lyon Bowley defines statistics as "Numerical statements of facts in any department of inquiry placed in relation to each other." Statistics is thus a mathematical body of science that pertains to the collection, analysis, interpretation or explanation, and presentation of data. Some consider statistics to be a distinct mathematical science rather than a branch of mathematics. While many scientific investigations make use of data, statistics is concerned with the use of data in the context of uncertainty and with decision making in the face of uncertainty. Mathematical statistics is the application of mathematics to statistics. Mathematical techniques used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure-theoretic probability theory.
In applying statistics to a problem, it is common practice to start with a population or process to be studied. Populations can be diverse topics such as "all people living in a country" or "every atom composing a crystal". Ideally, statisticians compile data about the entire population; this may be organized by governmental statistical institutes. Descriptive statistics can be used to summarize the population data. Numerical descriptors include the mean and standard deviation for continuous data types, while frequency and percentage are more useful for describing categorical data. When a census is not feasible, a chosen subset of the population, called a sample, is studied. Once a sample that is representative of the population is determined, data are collected for the sample members in an observational or experimental setting. Again, descriptive statistics can be used to summarize the sample data. However, the drawing of the sample has been subject to an element of randomness, hence the established numerical descriptors from the sample are also subject to uncertainty.
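The numerical descriptors mentioned above can be sketched with Python's standard library (the `heights` and `colours` samples are invented for illustration):

```python
from statistics import mean, stdev
from collections import Counter

# continuous variable: summarized by mean and standard deviation
heights = [162.0, 175.5, 168.2, 181.0, 170.3, 166.8]
print(round(mean(heights), 1), round(stdev(heights), 1))

# categorical variable: summarized by frequency and percentage
colours = ["red", "blue", "red", "green", "blue", "red"]
for category, freq in Counter(colours).items():
    print(category, freq, f"{100 * freq / len(colours):.1f}%")
```

Which descriptor fits depends on the data type, as the text notes: mean and standard deviation presume a numeric scale, while counts and percentages need only categories.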
To still draw meaningful conclusions about the entire population, in
Everything2, or E2 for short, is a collaborative Web-based community consisting of a database of interlinked user-submitted written material. E2 has no formal policy on subject matter. Writing on E2 covers a wide range of topics and genres, including encyclopedic articles, diary entries, poetry and fiction. The predecessor of E2 was a similar database called Everything, started around March 1998 by Nathan Oostendorp; it was closely aligned with and promoted by the technology-related news website Slashdot, with which it shared some administrators. The Everything2 software offered vastly more features, and the Everything1 data was twice incorporated into E2: once on November 13, 1999, and again in January 2000. The Everything2 server used to be colocated with the Slashdot servers. However, some time after OSDN acquired Slashdot and moved the Slashdot servers, this hosting was terminated on short notice; this resulted in Everything2 being offline from November 6 to December 9, 2003. Everything2 was then hosted by the University of Michigan for a time.
As the Everything2 site put it on October 2, 2006: "Now, we have an arrangement with the University of Michigan, located in Ann Arbor. We exist thanks to their generosity; they gave us some servers and act as our ISP, free of charge. All they ask in exchange is that we not display advertisements." The Everything2 servers were moved to the nearby Michigan State University in February 2007. E2 was owned by the Blockstackers Intergalactic company, but it does not make a profit and is viewed by its long-term users as a collaborative work-in-progress; until mid-2007 it accepted donations of money and, on occasion, of computer hardware, but it no longer does so. Some of its administrators are affiliated with Blockstackers, some are not. The site is not a democracy; the degree to which users influence decisions depends on the nature of the decisions and the administrators making them. On January 23, 2012, it was announced that the site had been sold to long-time user and coder Jay Bonci, under the name Everything2 Media LLC.
Writeups in E1 were limited to 512 bytes in size. This, plus the predominantly "geek" membership back then and the lack of chat facilities, meant the early work was of poor quality and was filled with self-referential humor. As E2 has expanded, stricter quality standards have developed, much of the old material has been removed, and the membership has become broader in interest, although smaller in number. Many noders prefer to write encyclopedic articles similar to those on Wikipedia; some write fiction or poetry, some discuss issues, and some write daily journals, called "daylogs." Unlike Wikipedia, E2 does not have an enforced neutral point of view. An informal survey of noder political beliefs indicates that the user base tends to lean left politically. There are conservative voices as well, and while debate nodes are merely tolerated, well-formed points of view from any part of the political or cultural spectrum are welcomed. According to E2's "Site Trajectory", traffic has dropped from 9976 new write-ups created in the month of August 2000 to 93 new write-ups in February 2017.
Some of the management regard Everything2 as a publication. Although Everything2 does not seek to become an encyclopedia, a substantial amount of factual content has been submitted to it. Policy states that "Everything2 is not a bulletin board." Writeups which exist as replies to other writeups, which add only a minor point to them, or which otherwise do not stand well alone are discouraged, not least because the deletion of the original writeup orphans any replies. This policy helps to moderate flame wars on controversial topics. Everything2 is not a wiki, and there is no direct way for users other than content editors to make corrections or amendments to another author's article; avenues for correction involve discussing the writeup with its author. Like other online communities, E2 has a social hierarchy and code of behavior to which it is sometimes difficult for a newcomer to adjust. Moreover, some people complain that new users are held to a different standard from established contributors, and that their writeups are singled out for deletion regardless of content.
Another complaint is that all too often site administrators remove articles that they do not agree with or in which they do not see explicit value, thus biasing the content of the database. Others dismiss such complaints as unjustified. There is no consistent, written site policy on acceptable behavior, although the usual intolerance for trolling or hatemongering remains, as is the case with most web-based communities. Bans have occurred for antisocial and/or insulting behaviour, albeit rarely and only after a more personal approach to changing the offender's behavior has failed. Though these decisions are broadly accepted, some current and ex-members of the site believe that this amounts to mismanagement, and point to the accumulation of disgruntled ex-users as evidence of a problem. Occasionally a noder will request that their E2 account be locked, preventing them from logging in; the causes for this are varied as the causes f