1.
Decision tree learning
–
Decision tree learning uses a decision tree as a predictive model which maps observations about an item to conclusions about the item's target value. It is one of the predictive modelling approaches used in statistics, data mining and machine learning. Decision trees where the target variable can take continuous values are called regression trees. In decision analysis, a tree can be used to visually and explicitly represent decisions; in data mining, a decision tree describes data, and this page deals with decision trees in data mining. Decision tree learning is a method commonly used in data mining. The goal is to create a model that predicts the value of a target based on several input variables. An example is shown in the diagram at right: each interior node corresponds to one of the input variables, with edges to children for each of the possible values of that input variable. Each leaf represents a value of the target given the values of the input variables represented by the path from the root to the leaf.

A decision tree is a simple representation for classifying examples. For this section, assume all of the input features have finite discrete domains. Each element of the domain of the classification is called a class. A decision tree, or a classification tree, is a tree in which each internal node is labeled with an input feature, and each leaf of the tree is labeled with a class or a probability distribution over the classes. A tree can be learned by splitting the source set into subsets based on an attribute value test. This process is repeated on each derived subset in a manner called recursive partitioning; see the examples illustrated in the figure for spaces that have and have not been partitioned using recursive partitioning. The recursion is completed when the subset at a node has all the same value of the target variable, or when splitting no longer adds value to the predictions. This process of induction of decision trees is an example of a greedy algorithm. 
In data mining, decision trees can also be described as the combination of mathematical and computational techniques to aid the description, categorization and generalization of a given set of data. Data comes in records of the form (x, Y) = (x1, x2, ..., xk, Y). The dependent variable, Y, is the target variable that we are trying to understand. The vector x is composed of the input variables x1, x2, ..., xk that are used for that task.
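The recursive-partitioning procedure described above can be sketched in a few lines of Python. This is a minimal illustration, not a production learner: it greedily splits on the discrete feature with the highest information gain and stops at pure subsets. The toy data, feature names, and the `build_tree`/`predict` helpers are invented for the example.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def build_tree(rows, labels, features):
    """Recursively partition (rows, labels) on the feature whose split
    yields the largest information gain; stop when the subset is pure
    or no features remain (greedy induction)."""
    if len(set(labels)) == 1 or not features:
        return Counter(labels).most_common(1)[0][0]  # leaf: majority class
    def gain(f):
        groups = {}
        for row, y in zip(rows, labels):
            groups.setdefault(row[f], []).append(y)
        remainder = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
        return entropy(labels) - remainder
    best = max(features, key=gain)
    node = {"feature": best, "children": {}}
    for value in {row[best] for row in rows}:
        sub = [(r, y) for r, y in zip(rows, labels) if r[best] == value]
        srows, slabels = zip(*sub)
        rest = [f for f in features if f != best]
        node["children"][value] = build_tree(list(srows), list(slabels), rest)
    return node

def predict(node, row):
    """Follow edges matching the row's feature values down to a leaf."""
    while isinstance(node, dict):
        node = node["children"][row[node["feature"]]]
    return node

# Toy data with finite discrete feature domains, as assumed above.
rows = [{"outlook": "sunny", "windy": False}, {"outlook": "sunny", "windy": True},
        {"outlook": "rain", "windy": False}, {"outlook": "rain", "windy": True}]
labels = ["yes", "no", "yes", "no"]
tree = build_tree(rows, labels, ["outlook", "windy"])
print(predict(tree, {"outlook": "sunny", "windy": False}))  # "yes"
```

On this toy data the split on "windy" separates the classes perfectly, so the greedy criterion picks it at the root and each child is a pure leaf.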
2.
Economics
–
Economics is a social science concerned chiefly with the description and analysis of the production, distribution, and consumption of goods and services, according to the Merriam-Webster Dictionary. Economics focuses on the behaviour and interactions of economic agents and how economies work. Consistent with this focus, textbooks often distinguish between microeconomics and macroeconomics. Microeconomics examines the behaviour of basic elements in the economy, including individual agents and markets, and their interactions. Individual agents may include, for example, households, firms, buyers and sellers. Macroeconomics analyzes the entire economy and issues affecting it, including unemployment of resources, inflation, economic growth, and the public policies that address these issues. Economic analysis can be applied throughout society, as in business, finance, health care and government. Economic analyses may also be applied to such diverse subjects as crime, education, the family, law, politics, religion, social institutions, war, science, and the environment. At the turn of the 21st century, the expanding domain of economics in the social sciences has been described as economic imperialism. The ultimate goal of economics is to improve the living conditions of people in their everyday life.

There are a variety of definitions of economics. Some of the differences may reflect evolving views of the subject or different views among economists. Adam Smith defined what was then called political economy in part as an inquiry into how "to supply the state or commonwealth with a revenue for the publick services". Say, distinguishing the subject from its public-policy uses, defines it as the science of production, distribution, and consumption of wealth. On the satirical side, Thomas Carlyle coined "the dismal science" as an epithet for classical economics. Alfred Marshall described economics as a study of mankind in the ordinary business of life: it enquires how he gets his income and how he uses it. Thus, it is on the one side the study of wealth and, on the other and more important side, a part of the study of man. 
He affirmed that previous economists have usually centred their studies on the analysis of wealth: how wealth is created, distributed, and consumed. But he said that economics can be used to study other things, such as war, that are outside its usual focus. This is because war has winning it as its goal, generates both costs and benefits, and uses up resources to attain that goal. If the war is not winnable, or if the expected costs outweigh the benefits, rational decision-makers may choose not to fight. Some subsequent comments criticized the definition as overly broad in failing to limit its subject matter to analysis of markets. There are other criticisms as well, such as scarcity not accounting for the macroeconomics of high unemployment. The same source reviews a range of definitions included in principles of economics textbooks. Among economists more generally, it argues that a particular definition presented may reflect the direction toward which the author believes economics is evolving. Microeconomics examines how entities, forming a market structure, interact within a market to create a market system
3.
Italians
–
Italians are a nation and ethnic group native to Italy who share a common culture and ancestry and speak the Italian language as a native tongue. The majority of Italian nationals are speakers of Standard Italian. Italians have greatly influenced and contributed to the arts and music, science, technology, cuisine, sports, fashion, jurisprudence and banking. Italian people are generally known for their localism and their attention to clothing and family values. The term Italian is at least 3,000 years old and has a history that goes back to pre-Roman Italy. According to one of the common explanations, the term Italia, from Latin Italia, was borrowed through Greek from the Oscan Víteliú. The bull was a symbol of the southern Italic tribes and was often depicted goring the Roman wolf as a defiant symbol of free Italy during the Social War. The Greek historian Dionysius of Halicarnassus states this account together with the legend that Italy was named after Italus, mentioned also by Aristotle and Thucydides. The Etruscan civilization reached its peak about the 7th century BC, but by 509 BC, when the Romans overthrew their Etruscan monarchs, its control in Italy was on the wane. By 350 BC, after a series of wars with Greeks and Etruscans, the Latins, with Rome as their capital, gained the ascendancy, and by 272 BC they managed to unite the entire Italian peninsula. This period of unification was followed by one of conquest in the Mediterranean: in the course of the century-long struggle against Carthage, the Romans conquered Sicily, Sardinia and Corsica. Finally, in 146 BC, at the conclusion of the Third Punic War, Carthage was completely destroyed and its inhabitants enslaved. Octavian, the final victor of the civil wars that followed, was accorded the title of Augustus by the Senate and thereby became the first Roman emperor. After two centuries of rule, in the 3rd century AD, Rome was threatened by internal discord and menaced by Germanic and Asian invaders. 
Emperor Diocletian's administrative division of the empire into two parts in 285 provided only temporary relief; it became permanent in 395. In 313, Emperor Constantine accepted Christianity, and churches thereafter rose throughout the empire; however, he moved his capital from Rome to Constantinople. The last Western emperor, Romulus Augustulus, was deposed in 476 by Odoacer, a Germanic foederati general in Italy, and his defeat marked the end of the western part of the Roman Empire. During most of the period from the fall of Rome until the Kingdom of Italy was established in 1861, the peninsula was politically fragmented. Odoacer ruled well for 13 years after gaining control of Italy in 476. Then he was attacked and defeated by Theodoric, the king of another Germanic tribe, the Ostrogoths. Theodoric and Odoacer ruled jointly until 493, when Theodoric murdered Odoacer. Theodoric continued to rule Italy with an army of Ostrogoths and a government that was mostly Italian. After the death of Theodoric in 526, the kingdom began to grow weak
4.
Statistics
–
Statistics is a branch of mathematics dealing with the collection, analysis, interpretation, presentation, and organization of data. In applying statistics to, e.g., a scientific, industrial, or social problem, it is conventional to begin with a statistical population or process to be studied. Populations can be diverse topics such as all people living in a country or every atom composing a crystal. Statistics deals with all aspects of data, including the planning of data collection in terms of the design of surveys and experiments. The statistician Sir Arthur Lyon Bowley defined statistics as "numerical statements of facts in any department of inquiry placed in relation to each other". When census data cannot be collected, statisticians collect data by developing specific experiment designs; representative sampling assures that inferences and conclusions can safely extend from the sample to the population as a whole. In contrast, an observational study does not involve experimental manipulation. Inferences in mathematical statistics are made under the framework of probability theory, which deals with the analysis of random phenomena. A standard statistical procedure involves the test of the relationship between two data sets, or between a data set and synthetic data drawn from an idealized model. A hypothesis is proposed for the relationship between the two data sets, and this is compared as an alternative to an idealized null hypothesis of no relationship between the two data sets. Rejecting or disproving the null hypothesis is done using statistical tests that quantify the sense in which the null can be proven false. Working from a null hypothesis, two basic forms of error are recognized: Type I errors (rejecting a true null hypothesis) and Type II errors (failing to reject a false null hypothesis). Multiple problems have come to be associated with this framework, ranging from obtaining a sufficient sample size to specifying an adequate null hypothesis. Measurement processes that generate statistical data are also subject to error. 
Many of these errors are classified as random (noise) or systematic (bias); the presence of missing data or censoring may result in biased estimates, and specific techniques have been developed to address these problems. Statistics continues to be an area of active research, for example on the problem of how to analyze big data. Statistics is a body of science that pertains to the collection, analysis, interpretation or explanation, and presentation of data. Some consider statistics to be a distinct mathematical science rather than a branch of mathematics. While many scientific investigations make use of data, statistics is concerned with the use of data in the context of uncertainty. Mathematical techniques used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure-theoretic probability theory. In applying statistics to a problem, it is common practice to start with a population or process to be studied. Populations can be diverse topics such as all people living in a country or every atom composing a crystal. Ideally, statisticians compile data about the entire population; this may be organized by governmental statistical institutes
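The null-hypothesis framework and Type I error described above can be illustrated by simulation. The sketch below assumes a simple two-sided z-test with known variance; the `z_test_rejects` helper and all the numbers are made up for illustration. Drawing many samples for which the null hypothesis is actually true, the fraction of (false) rejections should land near the chosen significance level.

```python
import random
import statistics

def z_test_rejects(sample, mu0, sigma):
    """Two-sided z-test of H0: true mean == mu0, with known sigma.
    Returns True if H0 is rejected at the 5% significance level."""
    n = len(sample)
    z = (statistics.fmean(sample) - mu0) / (sigma / n ** 0.5)
    return abs(z) > 1.96  # critical value for alpha = 0.05

random.seed(0)
# Draw many samples where H0 is true (the mean really is 0): the
# fraction of rejections estimates the Type I error rate, which
# should be close to the nominal alpha of 0.05.
trials = 2000
false_rejections = sum(
    z_test_rejects([random.gauss(0, 1) for _ in range(30)], mu0=0, sigma=1)
    for _ in range(trials)
)
print(false_rejections / trials)  # roughly 0.05
```

A Type II error would be the mirror case: sampling from a population whose mean is not mu0 and counting how often the test fails to reject.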
5.
Sociology
–
Sociology is the study of social behaviour or society, including its origins, development, organisation, networks, and institutions. It is a social science that uses various methods of empirical investigation and critical analysis to develop a body of knowledge about social order, disorder, and change. Many sociologists aim to conduct research that may be applied directly to social policy and welfare. Subject matter ranges from the micro level of individual agency and interaction to the macro level of systems. The traditional focuses of sociology include social stratification, social class, social mobility, religion, secularization, law and sexuality, and the range of social scientific methods has also expanded. Social researchers draw upon a variety of qualitative and quantitative techniques. The linguistic and cultural turns of the mid-twentieth century led to increasingly interpretative, hermeneutic, and philosophic approaches towards the analysis of society. There is often a great deal of crossover between social research, market research, and other statistical fields. Sociology is distinguished from various general social studies courses, which bear little relation to sociological theory or to social-science research methodology. The US National Science Foundation classifies sociology as a STEM field. Sociological reasoning pre-dates the foundation of the discipline: social analysis has origins in the common stock of Western knowledge and philosophy. The origin of the survey, i.e. the collection of information from a sample of individuals, can be traced back at least to the Domesday Book in 1086, and there is evidence of early sociology in medieval Arab writings. The word sociology is derived from both Latin and Greek origins: the Latin word socius, companion, and the suffix -logy, the study of, from Greek -λογία, from λόγος, lógos, word, knowledge. It was first coined in 1780 by the French essayist Emmanuel-Joseph Sieyès in an unpublished manuscript; sociology was later defined independently by the French philosopher of science Auguste Comte in 1838. 
Comte used this term to describe a new way of looking at society. Comte had earlier used the term social physics, but that term had subsequently been appropriated by others, most notably the Belgian statistician Adolphe Quetelet. Comte endeavoured to unify history, psychology and economics through the scientific understanding of the social realm. He believed a positivist stage would mark the final era, after conjectural theological and metaphysical phases. Comte gave a powerful impetus to the development of sociology, an impetus which bore fruit in the later decades of the nineteenth century. To say this is not to claim that French sociologists such as Durkheim were devoted disciples of the high priest of positivism; to be sure, beginnings can be traced back well beyond Montesquieu. For example, Marx rejected Comtean positivism but, in attempting to develop a science of society, nevertheless came to be recognized as a founder of sociology as the word gained wider meaning. For Isaiah Berlin, Marx may be regarded as the father of modern sociology
6.
Italian language
–
By most measures, Italian, together with Sardinian, is the closest of the Romance languages to Latin. Italian is an official language in Italy, Switzerland, San Marino and Vatican City. It is spoken by minorities in places such as France, Montenegro, Bosnia and Herzegovina, Crimea and Tunisia, and by large expatriate communities in the Americas. Many speakers are native bilinguals of both standardized Italian and other regional languages. Italian is the fourth most studied language in the world. It is a major European language, being one of the official languages of the Organisation for Security and Cooperation in Europe. It is the third most widely spoken first language in the European Union with 65 million native speakers; including Italian speakers in non-EU European countries and on other continents, the total number of speakers is around 85 million. Italian is the working language of the Holy See, serving as the lingua franca in the Roman Catholic hierarchy as well as the official language of the Sovereign Military Order of Malta. Italian is known as the language of music because of its use in musical terminology, and its influence is also widespread in the arts and in the luxury goods market. Italian has been reported as the fourth or fifth most frequently taught foreign language in the world. Italian was adopted by the state after the Unification of Italy, having previously been a literary language based on Tuscan as spoken mostly by the upper class of Florentine society. Its development was also influenced by other Italian languages and, to some minor extent, by the Germanic languages of the post-Roman invaders. Its vowels are the second-closest to Latin after Sardinian, and unlike most other Romance languages, Italian retains Latin's contrast between short and long consonants. As in most Romance languages, stress is distinctive. However, Italian as a language used in Italy and some surrounding regions has a longer history. 
What would come to be thought of as Italian was first formalized in the early 14th century through the works of the Tuscan writer Dante Alighieri, written in his native Florentine. Dante is still credited with standardizing the Italian language, and thus the dialect of Florence became the basis for what would become the official language of Italy. Italian was also one of the recognised languages in the Austro-Hungarian Empire. Italy has always had a distinctive dialect for each city, because the cities, until recently, were thought of as city-states. Those dialects now have considerable variety. As Tuscan-derived Italian came to be used throughout Italy, features of local speech were naturally adopted, producing various versions of Regional Italian. Even in the case of Northern Italian languages, however, scholars are careful not to overstate the effects of outsiders on the natural indigenous developments of the languages
7.
Income
–
Income is the consumption and savings opportunity gained by an entity within a specified timeframe, which is generally expressed in monetary terms. For households and individuals, income is the sum of all the wages, salaries, profits, interest payments, rents and other earnings received in a given period of time. In the field of economics, the term may refer to the accumulation of both monetary and non-monetary consumption ability, with the former being used as a proxy for total income. Income per capita has been increasing steadily in almost every country. Many factors contribute to people having a higher income, such as education, globalisation, and favorable political circumstances such as economic freedom and peace. Increases in income also tend to lead to people choosing to work fewer hours. Developed countries have higher incomes, while developing countries tend to have lower incomes. Income may derive from labor services as well as from ownership of land and capital. In consumer theory, income is another name for the budget constraint; the basic equation for this is Y = P_x · x + P_y · y. This equation implies two things. First, buying one unit of good x implies buying P_x / P_y fewer units of good y; that is, P_x / P_y is the relative price of a unit of x in terms of the number of units of y given up. Second, if the price of x falls for a fixed Y, the usual hypothesis is that the quantity demanded of x will increase at the lower price: the law of demand. The generalization to more than two goods consists of modelling y as a composite good; the theoretical generalization to more than one period is a multi-period wealth and income constraint. For example, the same person can gain more productive skills or acquire more productive income-earning assets to earn a higher income. In the multi-period case, something might also happen to the economy beyond the control of the individual to reduce the flow of income. 
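The two implications of the budget-constraint equation can be checked numerically. The prices and income below are arbitrary illustrative values, not data from the text.

```python
# Hypothetical prices and income for the budget constraint Y = Px*x + Py*y.
Px, Py, Y = 2.0, 4.0, 40.0

def max_affordable_y(x):
    """Given a quantity x of the first good, the most y one can afford."""
    return (Y - Px * x) / Py

# Buying one more unit of x forgoes Px/Py units of y: the relative price.
drop = max_affordable_y(0) - max_affordable_y(1)
print(drop)     # 0.5
print(Px / Py)  # 0.5, i.e. the trade-off equals the price ratio
```

Lowering Px while holding Y fixed rotates the constraint outward, which is the setting in which the law of demand predicts a larger quantity of x.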
Changes in measured income and its relation to consumption over time might be modeled accordingly. Full income refers to the accumulation of both the monetary and the non-monetary consumption ability of any given entity, such as a person or a household. According to what the economist Nicholas Barr describes as the classical definition of income, income may be defined as the "sum of the value of rights exercised in consumption and the change in the value of the store of property rights". Since the consumption potential of non-monetary goods, such as leisure, cannot be measured, monetary income may be used as a proxy for full income. As such, however, it is criticized for being unreliable, i.e. failing to accurately reflect the affluence of any given agent: it omits the utility a person may derive from non-monetary income and, on a macroeconomic level, fails to accurately chart social welfare. According to Barr, in practice money income as a proportion of total income varies widely and unsystematically
8.
Social inequality
–
Social inequality is the differential access to social goods in a society brought about by power, religion, kinship, prestige, race, ethnicity, gender, age, and class. Social rights include access to the labor market, the source of income, health care, and freedom of speech, education, political representation, and participation. Social inequality linked to economic inequality, usually described on the basis of the unequal distribution of income or wealth, is a frequently studied type of social inequality. However, social and natural resources other than purely economic resources are also unevenly distributed in most societies. Many societies worldwide claim to be meritocracies – that is, that their societies exclusively distribute resources on the basis of merit. A modern representation of the sort of "meritocracy" that Michael Young, who coined the term satirically, feared may be seen in the series 3%. In many cases, social inequality is linked to racial inequality, ethnic inequality, and gender inequality, as well as other social statuses, and these forms can be related to corruption. The most common metric for comparing social inequality across nations is the Gini coefficient, which measures the concentration of wealth or income, with 0 indicating complete equality and 1 indicating maximum inequality. Two nations may have identical Gini coefficients but dramatically different economic output and/or quality of life, so the Gini coefficient must be contextualized for meaningful comparisons to be made. Social inequality is found in almost every society. In simple societies, those that have few social roles and statuses occupied by their members, social inequality may be very low. Anthropologists identify such highly egalitarian cultures as kinship-oriented, cultures which appear to value social harmony more than wealth or status. These cultures are contrasted with materially oriented cultures in which status and wealth are prized and competition and conflict are common. 
Kinship-oriented cultures may actively work to prevent social hierarchies from developing because they believe such hierarchies could lead to conflict. In today's world, most of our population lives in complex rather than simple societies. As social complexity increases, inequality tends to increase along with the gap between the poorest and the wealthiest members of society. Social inequality can be classified into egalitarian societies, ranked societies, and stratified societies. Egalitarian societies are communities advocating for social equality through equal opportunities and rights, hence no discrimination. People with special skills were not viewed as superior compared to the rest; the leaders do not have power, they only have influence. The norms and beliefs an egalitarian society holds are for sharing equally. Ranked societies are mostly agricultural communities hierarchically grouped from the chief, who is viewed as having a high status in the society. In such a society, people are clustered by status and prestige and not by access to power: the chief is the most influential person, followed by his family and relatives, and those further related to him are less ranked. A stratified society is one horizontally ranked into the upper class, middle class, and lower class. The classification is by wealth, power, and prestige; the upper class are mostly the leaders and are the most influential in the society. It is possible for a person in such a society to move from one stratum to the other; social status is also heritable from one generation to the next
9.
Income inequality metrics
–
While different theories may try to explain how income inequality comes about, income inequality metrics simply provide a system of measurement used to determine the dispersion of incomes. The concept of inequality is distinct from those of poverty and fairness. Income distribution has always been a central concern of economic theory and economic policy; it is often related to wealth distribution, although separate factors influence wealth inequality. Modern economists have also addressed this issue, but have been more concerned with the distribution of income across individuals and households. Important theoretical and policy concerns include the relationship between income inequality and economic growth. The article Economic inequality discusses the social and policy aspects of income distribution questions. All of the metrics described below are applicable to evaluating the distributional inequality of various kinds of resources; here the focus is on income as a resource. As there are various forms of income, the investigated kind of income has to be clearly described. One form of income is the total amount of goods and services that a person receives, so there is not necessarily money or cash involved. If a subsistence farmer in Uganda grows his own grain, it will count as income; services like public health and education are also counted in. Often expenditure or consumption is used to measure income. The World Bank uses the so-called living standard measurement surveys to measure income. These consist of questionnaires with more than 200 questions; surveys have been completed in most developing countries. Applied to the analysis of income inequality within countries, income often stands for the income per individual or per household. Here, income inequality measures can also be used to compare the income distributions before and after taxation in order to measure the effects of progressive tax rates. 
In the discrete case, an economic inequality index may be represented by a function I(x), where x = (x1, ..., xn) lists the incomes of the n agents in the economy. The metric depends only on the incomes themselves, not on who holds them; this property distinguishes the concept of inequality from that of fairness, where who owns a particular level of income, and how it has been acquired, is of central importance. An inequality metric is a statement simply about how income is distributed. Scale independence: if every person's income in an economy is doubled, or multiplied by any positive constant, then the overall metric of inequality should not change. Of course the same thing applies to poorer economies: the income inequality metric should be independent of the aggregate level of income. This may be stated as I(αx) = I(x), where α is a positive real number. Population independence: similarly, the income inequality metric should not depend on whether an economy has a large or small population; an economy with only a few people should not be automatically judged by the metric as being more equal than a large economy with lots of people. This means that the metric should be independent of the level of population. This is generally written I(x ∪ x) = I(x), where x ∪ x is the union (concatenation) of x with itself
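Both properties can be verified numerically for a concrete metric. The sketch below uses the Gini coefficient, computed here via the mean absolute difference between all pairs of incomes; the income vector is made up for illustration.

```python
def gini(incomes):
    """Gini coefficient via the mean absolute difference between all
    pairs of incomes (0 = perfect equality, near 1 = extreme inequality)."""
    n = len(incomes)
    mean = sum(incomes) / n
    mad = sum(abs(a - b) for a in incomes for b in incomes) / (n * n)
    return mad / (2 * mean)

x = [10, 20, 30, 40]

# Scale independence: I(αx) = I(x) for any positive constant α.
assert abs(gini(x) - gini([5 * v for v in x])) < 1e-12

# Population independence: I(x ∪ x) = I(x).
assert abs(gini(x) - gini(x + x)) < 1e-12

print(round(gini(x), 4))  # 0.25
```

Both assertions hold because scaling all incomes scales the mean absolute difference and the mean by the same factor, and duplicating the population leaves every pairwise-difference proportion unchanged.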
10.
Slovenia
–
Slovenia, officially the Republic of Slovenia, is a nation state in southern Central Europe, located at the crossroads of main European cultural and trade routes. It is bordered by Italy to the west, Austria to the north, Hungary to the northeast, Croatia to the south and southeast, and the Adriatic Sea to the southwest. It covers 20,273 square kilometers and has a population of 2.06 million. It is a parliamentary republic and a member of the United Nations and the European Union. The capital and largest city is Ljubljana. Additionally, the Dinaric Alps and the Pannonian Plain meet on the territory of Slovenia. The country, marked by a significant biological diversity, is one of the most water-rich in Europe, with a dense river network and a rich aquifer system. Over half of the territory is covered by forest. The human settlement of Slovenia is dispersed and uneven. Slovenia has historically been the crossroads of South Slavic, Germanic and Romance languages and cultures. Although the population is not homogeneous, the majority is Slovene, and Slovene is the official language throughout the country. Slovenia is a largely secularized country, but its culture and identity have been influenced by Catholicism as well as Lutheranism. The economy of Slovenia is small, open, and export-oriented, and has been strongly influenced by international conditions. It has been hurt by the Eurozone crisis, which started in the late 2000s. The main economic field is services, followed by industry and construction. Historically, the current territory of Slovenia was part of many different state formations, including the Roman Empire and the Holy Roman Empire, followed by the Habsburg Monarchy. In October 1918, the Slovenes exercised self-determination for the first time by co-founding the State of Slovenes, Croats and Serbs; in December 1918, they merged with the Kingdom of Serbia into the Kingdom of Serbs, Croats and Slovenes. 
During World War II, Slovenia was occupied and annexed by Germany, Italy, and Hungary, with a tiny area transferred to the Independent State of Croatia. In June 1991, after the introduction of multi-party representative democracy, Slovenia split from Yugoslavia and became an independent country. Present-day Slovenia has been inhabited since prehistoric times, and there is evidence of human habitation from around 250,000 years ago. A pierced cave bear bone found in the Divje Babe cave, dating from 43100 ±700 BP, may be the oldest musical instrument discovered in the world. In the 1920s and 1930s, artifacts belonging to the Cro-Magnon, such as pierced bones, bone points, and a needle, were found by the archaeologist Srečko Brodar in Potok Cave. The Ljubljana Marshes Wheel, the oldest wooden wheel yet discovered, shows that wooden wheels appeared almost simultaneously in Mesopotamia and Europe. In the transition period between the Bronze Age and the Iron Age, the Urnfield culture flourished. Archaeological remains dating from the Hallstatt period have been found, particularly in southeastern Slovenia, among them a number of situlas in Novo Mesto. In the Iron Age, present-day Slovenia was inhabited by Illyrian and Celtic tribes until the 1st century BC
11.
Chile
–
Chile, officially the Republic of Chile, is a South American country occupying a long, narrow strip of land between the Andes to the east and the Pacific Ocean to the west. It borders Peru to the north, Bolivia to the northeast, Argentina to the east, and the Drake Passage in the far south. Chilean territory includes the Pacific islands of Juan Fernández, Salas y Gómez, Desventuradas, and Easter Island in Oceania. Chile also claims about 1,250,000 square kilometres of Antarctica. The arid Atacama Desert in northern Chile contains great mineral wealth, principally copper. Southern Chile is rich in forests and grazing lands, and features a string of volcanoes and lakes; the southern coast is a labyrinth of fjords, inlets, canals, twisting peninsulas, and islands. Spain conquered and colonized Chile in the mid-16th century, replacing Inca rule in northern and central Chile. After declaring its independence from Spain in 1818, Chile emerged in the 1830s as a relatively stable authoritarian republic. In the 1960s and 1970s the country experienced severe left-right political polarization and turmoil. The military regime headed by Augusto Pinochet, which seized power in 1973, ended in 1990 after it lost a referendum in 1988, and was succeeded by a center-left coalition which ruled through four presidencies until 2010. Chile is today one of South America's most stable and prosperous nations. It leads Latin American nations in rankings of human development, competitiveness, income per capita, globalization, state of peace, economic freedom, and low perception of corruption. It also ranks high regionally in sustainability of the state. Chile is a founding member of the United Nations, the Union of South American Nations and the Community of Latin American and Caribbean States. There are various theories about the origin of the word Chile. One theory points to the similarity of the valley of the Aconcagua with that of the Casma Valley in Peru, where there was a town and valley named Chili. 
Another origin attributed to chilli is the onomatopoeic cheele-cheele, the Mapuche imitation of the warble of a bird locally known as trile. The Spanish conquistadors heard about this name from the Incas. Ultimately, Almagro is credited with the universalization of the name Chile, after naming the Mapocho valley as such. The older spelling Chili was in use in English until at least 1900 before switching over to Chile. Stone tool evidence indicates humans sporadically frequented the Monte Verde valley area as long as 18,500 years ago. About 10,000 years ago, migrating Native Americans settled in the fertile valleys. Settlement sites from very early human habitation include Monte Verde, Cueva del Milodón and the Pali-Aike Crater's lava tube. The Incas briefly extended their empire into what is now northern Chile, but the Mapuche resisted them: they fought against the Sapa Inca Tupac Yupanqui and his army, and the result of the bloody three-day confrontation known as the Battle of the Maule was that the Inca conquest of the territories of Chile ended at the Maule river. The next Europeans to reach Chile were Diego de Almagro and his band of Spanish conquistadors; the Spanish encountered various cultures that supported themselves principally through slash-and-burn agriculture and hunting. The conquest of Chile began in earnest in 1540 and was carried out by Pedro de Valdivia, one of Francisco Pizarro's lieutenants, who founded the city of Santiago on 12 February 1541. Although the Spanish did not find the gold and silver they sought, they recognized the agricultural potential of Chile's central valley
12.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to its exact scope. Mathematicians seek out patterns and use them to formulate new conjectures, and resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, and measurement; practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said that the universe cannot be read until we have learned the language in which it is written: "It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences", and Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences.
Applied mathematics has led to entirely new mathematical disciplines, such as statistics. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind; there is no clear line separating pure and applied mathematics. The history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns; in Babylonian mathematics, elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a study of mathematics in its own right with Greek mathematics. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today, and the overwhelming majority of published mathematical works contain new mathematical theorems and their proofs. The word máthēma is derived from μανθάνω, while the modern Greek equivalent is μαθαίνω; in Greece, the word for mathematics came to have the narrower and more technical meaning "mathematical study" even in Classical times.
13.
Lorenz curve
–
In economics, the Lorenz curve is a graphical representation of the distribution of income or of wealth. It was developed by Max O. Lorenz in 1905 for representing inequality of the wealth distribution. The curve is a graph showing the proportion of overall income or wealth held by the bottom x% of the people, although this is not rigorously true for a finite population. It is often used to represent income distribution, where it shows, for the bottom x% of households, what share of the total income they have; the percentage of households is plotted on the x-axis, the percentage of income on the y-axis. It can also be used to show distribution of assets, and in such use many economists consider it to be a measure of social inequality. It is also useful in modeling, e.g. in consumer finance. Points on the Lorenz curve represent statements like "the bottom 20% of all households have 10% of the total income". A perfectly equal income distribution would be one in which every person has the same income; in this case, the bottom N% of society would always have N% of the income, which can be depicted by the straight line y = x, called the line of perfect equality. By contrast, a perfectly unequal distribution would be one in which one person has all the income. In that case, the curve would be at y = 0% for all x < 100%; this curve is called the line of perfect inequality. The Gini coefficient is the ratio of the area between the line of perfect equality and the observed Lorenz curve to the area between the line of perfect equality and the line of perfect inequality. The higher the coefficient, the more unequal the distribution is; in the diagram on the right, this is given by the ratio A/(A+B), where A and B are the areas of regions as marked in the diagram. The Lorenz curve L may then be plotted as a function parametric in x, L(x) vs. F(x); in other contexts, the quantity computed here is known as the length-biased distribution, and it also has an important role in renewal theory.
However, the formula can still apply by generalizing the definition of x(F) as an infimum. A Lorenz curve always starts at (0, 0) and ends at (1, 1). The Lorenz curve is not defined if the mean of the probability distribution is zero or infinite. The Lorenz curve for a probability distribution is a continuous function; however, Lorenz curves representing discontinuous functions can be constructed as the limit of Lorenz curves of probability distributions, the line of perfect inequality being an example. The information in a Lorenz curve may be summarized by the Gini coefficient. The Lorenz curve cannot rise above the line of perfect equality, and if the variable being measured cannot take negative values, it cannot sink below the line of perfect inequality. Note, however, that a Lorenz curve for net worth would start out by going negative, because some people have a negative net worth due to debt. The Lorenz curve is invariant under positive scaling: if X is a random variable, for any positive number c the random variable cX has the same Lorenz curve as X.
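As an illustrative sketch (not from the original article; the function names and toy income data are ours), the Lorenz curve and Gini coefficient for a finite sample can be computed directly from the definitions above:

```python
def lorenz_points(values):
    """Cumulative population share vs. cumulative income share, sorted ascending."""
    xs = sorted(values)
    total = float(sum(xs))
    pts, cum = [(0.0, 0.0)], 0.0
    for i, v in enumerate(xs, 1):
        cum += v
        pts.append((i / len(xs), cum / total))
    return pts

def gini(values):
    # Gini = A / (A + B) = 1 - 2 * (area under the Lorenz curve), since A + B = 1/2.
    pts = lorenz_points(values)
    area = sum((x2 - x1) * (y1 + y2) / 2.0
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
    return 1.0 - 2.0 * area

print(gini([10, 10, 10, 10]))  # 0.0: every household equal, the line of perfect equality
print(gini([0, 0, 0, 100]))    # 0.75: one of four households holds all income
```

With four households, the most unequal sample gives 0.75 rather than 1, reflecting the caveat above that the curve's interpretation is not rigorously exact for a finite population.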
14.
Ratio
–
In mathematics, a ratio is a relationship between two numbers indicating how many times the first number contains the second. For example, if a bowl of fruit contains eight oranges and six lemons, then the ratio of oranges to lemons is eight to six; thus, a ratio can be a fraction as opposed to a whole number. Also, in this example the ratio of lemons to oranges is 6:8. The numbers compared in a ratio can be any quantities of a comparable kind, such as objects, persons, or lengths. A ratio is written "a to b" or a:b; when the two quantities have the same units, as is often the case, their ratio is a dimensionless number. A rate is a quotient of variables having different units, but in many applications the word ratio is often used for this more general notion as well. The numbers A and B are sometimes called terms, with A being the antecedent and B the consequent. The proportion expressing the equality of the ratios A:B and C:D is written A:B = C:D or A:B::C:D. This latter form, when spoken or written in the English language, is expressed as "A is to B as C is to D". A, B, C and D are called the terms of the proportion; A and D are called the extremes, and B and C are called the means. The equality of three or more proportions is called a continued proportion. Ratios are sometimes used with three or more terms: the ratio of the dimensions of a "two by four" that is ten inches long is 2:4:10, and a good concrete mix is sometimes quoted as 1:2:4 for the ratio of cement to sand to gravel. It is impossible to trace the origin of the concept of ratio, because the ideas from which it developed would have been familiar to preliterate cultures. For example, the idea of one village being twice as large as another is so basic that it would have been understood in prehistoric society. However, it is possible to trace the origin of the word ratio to the Ancient Greek λόγος. Early translators rendered this into Latin as ratio; a more modern interpretation of Euclid's meaning is more akin to computation or reckoning.
Medieval writers used the word proportio to indicate ratio and proportionalitas for the equality of ratios. Euclid collected the results appearing in the Elements from earlier sources. The Pythagoreans developed a theory of ratio and proportion as applied to numbers; the discovery of a theory of ratios that does not assume commensurability is probably due to Eudoxus of Cnidus. The exposition of the theory of proportions that appears in Book VII of the Elements reflects the earlier theory of ratios of commensurables. The existence of multiple theories seems unnecessarily complex to modern sensibility, since ratios are, to a large extent, identified with quotients. This is a comparatively recent development, however, as can be seen from the fact that modern geometry textbooks still use distinct terminology and notation for ratios.
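To make the arithmetic concrete, here is a small sketch (the code and names are ours, not from the text) that reduces a ratio's integer terms by their greatest common divisor, as when 6:8 is reduced to 3:4:

```python
from functools import reduce
from math import gcd

def simplify_ratio(*terms):
    """Divide all integer terms of a ratio by their greatest common divisor."""
    g = reduce(gcd, terms)
    return tuple(t // g for t in terms)

print(simplify_ratio(6, 8))      # (3, 4): the lemons-to-oranges ratio in lowest terms
print(simplify_ratio(2, 4, 10))  # (1, 2, 5): a ratio with three terms reduces the same way
```

The same reduction applies to continued ratios such as the 2:4:10 "two by four" example above.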
15.
Area
–
Area is the quantity that expresses the extent of a two-dimensional figure or shape, or planar lamina, in the plane. Surface area is its analog on the surface of a three-dimensional object. Area is the two-dimensional analog of the length of a curve or the volume of a solid. The area of a shape can be measured by comparing the shape to squares of a fixed size. In the International System of Units, the standard unit of area is the square metre, which is the area of a square whose sides are one metre long; a shape with an area of three square metres would have the same area as three such squares. In mathematics, the unit square is defined to have area one. There are several well-known formulas for the areas of simple shapes such as triangles and rectangles. Using these formulas, the area of any polygon can be found by dividing the polygon into triangles; for shapes with curved boundary, calculus is usually required to compute the area. Indeed, the problem of determining the area of plane figures was a major motivation for the historical development of calculus. For a solid shape such as a sphere, cone, or cylinder, the area of its boundary surface is called the surface area. Formulas for the areas of simple shapes were computed by the ancient Greeks. Area plays an important role in modern mathematics: in addition to its obvious importance in geometry and calculus, area is related to the definition of determinants in linear algebra, and is a basic property of surfaces in differential geometry. In analysis, the area of a subset of the plane is defined using Lebesgue measure; in general, area in higher mathematics is seen as a special case of volume for two-dimensional regions. Area can be defined through the use of axioms, defining it as a function from a collection of certain plane figures to the set of real numbers, and it can be proved that such a function exists.
An approach to defining what is meant by "area" is through axioms: area can be defined as a function a from a collection M of special kinds of plane figures to the set of real numbers which satisfies the following properties. For all S in M, a(S) ≥ 0. If S and T are in M, then so are S ∪ T and S ∩ T. If S and T are in M with S ⊆ T, then T − S is in M and a(T − S) = a(T) − a(S). If a set S is in M and S is congruent to T, then T is also in M and a(S) = a(T). Every rectangle R is in M; if the rectangle has length h and breadth k, then a(R) = hk. Let Q be a set enclosed between two step regions S and T, where a step region is formed from a finite union of adjacent rectangles resting on a common base.
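The triangulation remark above can be illustrated with the shoelace formula (a standard result, though the code and sample polygons are our own sketch): summing signed cross-products around a simple polygon's boundary yields its area.

```python
def polygon_area(vertices):
    """Shoelace formula: half the absolute sum of signed cross-products over the edges."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

print(polygon_area([(0, 0), (4, 0), (4, 3), (0, 3)]))  # 12.0: a 4 x 3 rectangle
print(polygon_area([(0, 0), (4, 0), (0, 3)]))          # 6.0: half of it, a triangle
```

The rectangle result agrees with the axiom a(R) = hk above, and the triangle is exactly half, consistent with dividing a polygon into triangles.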
16.
Absolute difference
–
The absolute difference of two real numbers x and y is given by |x − y|, the absolute value of their difference. It describes the distance on the real line between the points corresponding to x and y. It is a special case of the Lp distance for all 1 ≤ p ≤ ∞ and is the standard metric used for both the set of rational numbers Q and their completion, the set of real numbers R. As with any metric, the metric properties hold: |x − y| ≥ 0; |x − y| = 0 if and only if x = y; |x − y| = |y − x|; and |x − z| ≤ |x − y| + |y − z|, where in the case of the absolute difference equality holds if and only if x ≤ y ≤ z or x ≥ y ≥ z. By contrast, simple subtraction is not non-negative or commutative, but it does obey the second and fourth properties above, since x − y = 0 if and only if x = y, and x − z = (x − y) + (y − z). The absolute difference is used to define other quantities, including the relative difference and the L1 norm used in taxicab geometry. Comparisons of absolute differences can be made by comparing squares, since |x − y|² = (x − y)² and squaring is monotonic on the nonnegative reals.
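A short sketch (our illustration, not part of the original) checking the four metric axioms of the absolute difference on a handful of sample points:

```python
def abs_diff(x, y):
    """The absolute difference |x - y|, the standard metric on the real line."""
    return abs(x - y)

pts = [-3.5, 0.0, 2.0, 7.25]
for x in pts:
    for y in pts:
        assert abs_diff(x, y) >= 0                    # non-negativity
        assert (abs_diff(x, y) == 0) == (x == y)      # zero iff the points coincide
        assert abs_diff(x, y) == abs_diff(y, x)       # symmetry
        for z in pts:
            # triangle inequality
            assert abs_diff(x, z) <= abs_diff(x, y) + abs_diff(y, z)
print("all metric axioms hold on the sample")
```

By contrast, plain subtraction fails the first and third checks (e.g. 2 − 7.25 is negative), matching the remark above.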
17.
Consistent estimator
–
In practice one constructs an estimator as a function of an available sample of size n, and then imagines being able to keep collecting data and expanding the sample ad infinitum. In this way one obtains a sequence of estimates indexed by n. If this sequence of estimates can be mathematically shown to converge in probability to the true value θ0, it is called a consistent estimator; otherwise the estimator is said to be inconsistent. Consistency as defined here is referred to as weak consistency; when we replace convergence in probability with almost sure convergence, the estimator is said to be strongly consistent. Consistency is related to bias; see bias versus consistency below. Loosely speaking, an estimator Tn of parameter θ is said to be consistent if it converges in probability to the true value of the parameter. A more rigorous definition takes into account the fact that θ is actually unknown. Suppose {pθ : θ ∈ Θ} is a family of distributions, and Xθ = (X1, X2, …) is an infinite sample from the distribution pθ. Let {Tn(Xθ)} be a sequence of estimators for some parameter g(θ); usually Tn will be based on the first n observations of the sample. Then this sequence is said to be consistent if plim n→∞ Tn(Xθ) = g(θ). This definition uses g(θ) instead of simply θ, because often one is interested in estimating a certain function or a sub-vector of the underlying parameter. In the next example we estimate the mean μ of a normal model. To estimate μ based on the first n observations, one can use the sample mean Tn = (X1 + … + Xn)/n. This defines a sequence of estimators, indexed by the sample size n. From the properties of the normal distribution, we know the sampling distribution of this statistic: Tn is itself normally distributed, with mean μ and variance σ²/n. Equivalently, (Tn − μ)/(σ/√n) has a standard normal distribution, so for any ε > 0, Pr(|Tn − μ| ≥ ε) = 2(1 − Φ(√n ε/σ)) → 0 as n tends to infinity. Therefore, the sequence Tn of sample means is consistent for the population mean μ. The notion of asymptotic consistency is very close, almost synonymous, to the notion of convergence in probability.
As such, any theorem, lemma, or property which establishes convergence in probability may be used to prove consistency. Bias is related to consistency as follows: a sequence of estimators is consistent if and only if it converges to a value and the bias converges to zero. Consistent estimators are convergent and asymptotically unbiased, though individual estimators in the sequence may be biased; conversely, if the sequence does not converge to a value, then it is not consistent, regardless of whether the estimators in the sequence are biased or not. An estimator can be unbiased but not consistent: for example, for an i.i.d. sample (x1, …, xn) one can use Tn = x1, the first observation, as an estimator of the mean E[X]. Note that here the sampling distribution of Tn is the same as the underlying distribution, so E[Tn] = E[X] and the estimator is unbiased, yet it does not converge to any value as n grows.
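As a hedged illustration (the seed, sample sizes, and parameter values below are our choices, not the article's), simulating the sample mean of a normal population shows the estimate tightening around the true mean as n grows, which is what consistency predicts:

```python
import random

random.seed(0)
mu, sigma = 3.0, 1.0  # assumed true parameters of the sampled normal distribution

def sample_mean(n):
    """T_n: the sample mean of the first n observations."""
    return sum(random.gauss(mu, sigma) for _ in range(n)) / n

errors = {n: abs(sample_mean(n) - mu) for n in (10, 1000, 100000)}
for n, err in errors.items():
    print(n, round(err, 4))  # |T_n - mu| shrinks (in probability) as n grows
```

A single run is only suggestive: consistency is a statement about convergence in probability, not about any one sample path.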
18.
Discrete probability distribution
–
For instance, if the random variable X is used to denote the outcome of a coin toss, then the probability distribution of X would take the value 0.5 for X = heads, and 0.5 for X = tails. In more technical terms, the probability distribution is a description of a random phenomenon in terms of the probabilities of events. Examples of random phenomena can include the results of an experiment or survey. A probability distribution is defined in terms of an underlying sample space, which is the set of all possible outcomes of the random phenomenon being observed. The sample space may be the set of real numbers or a higher-dimensional vector space, or it may be a list of non-numerical values, for example the two outcomes of a coin toss. Probability distributions are divided into two classes. A discrete probability distribution can be encoded by a discrete list of the probabilities of the outcomes; on the other hand, a continuous probability distribution is typically described by probability density functions. The normal distribution represents a commonly encountered continuous probability distribution. More complex experiments, such as those involving stochastic processes defined in continuous time, may demand the use of more general probability measures. A probability distribution whose sample space is the set of real numbers is called univariate, while a distribution whose sample space is a vector space is called multivariate. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution; the multivariate normal distribution is a commonly encountered multivariate distribution. To define probability distributions for the simplest cases, one needs to distinguish between discrete and continuous random variables: for a continuous random variable, the probability of any single exact value is zero; for example, the probability that an object weighs exactly 500 g is zero. Continuous probability distributions can be described in several ways; the cumulative distribution function is an antiderivative of the probability density function, provided that the latter function exists.
As probability theory is used in diverse applications, terminology is not uniform. The following terms are all used for probability distribution functions. "Probability distribution" can refer to a table that displays the probabilities of outcomes in a sample; it could be called a frequency distribution table in which all occurrences of outcomes sum to 1. "Distribution function" then refers to a functional form of such a frequency distribution table, and "probability distribution function" to a functional form of a probability distribution table.
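For instance (a sketch of ours, using a fair die rather than an example from the text), a discrete distribution can be encoded as an outcome-to-probability table whose entries sum to 1, from which quantities such as the mean follow directly:

```python
from fractions import Fraction

# Probability mass function of a fair six-sided die: each face has probability 1/6.
pmf = {k: Fraction(1, 6) for k in range(1, 7)}

total = sum(pmf.values())              # all probabilities must sum to one
mean = sum(k * p for k, p in pmf.items())
print(total)  # 1
print(mean)   # 7/2
```

Using exact fractions sidesteps floating-point rounding, so the normalization check `total == 1` is exact.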
19.
Probability density function
–
In a more precise sense, the PDF is used to specify the probability of the random variable falling within a particular range of values, as opposed to taking on any one value; this probability is given by the integral of the variable's PDF over that range. The probability density function is nonnegative everywhere, and its integral over the entire space is equal to one. The terms "probability distribution function" and "probability function" have also sometimes been used to denote the probability density function; however, this use is not standard among probabilists and statisticians. Further confusion of terminology exists because "density function" has also been used for what is here called the probability mass function, though in general the PMF is used in the context of discrete random variables. Suppose a species of bacteria typically lives 4 to 6 hours. What is the probability that a bacterium lives exactly 5 hours? A lot of bacteria live for approximately 5 hours, but there is no chance that any given bacterium dies at exactly 5.0000000000… hours. Instead we might ask: what is the probability that the bacterium dies between 5 hours and 5.01 hours? Let's say the answer is 0.02. Next: what is the probability that the bacterium dies between 5 hours and 5.001 hours? The answer is probably around 0.002, since this interval is one tenth as long as the previous one; the probability that the bacterium dies between 5 hours and 5.0001 hours is probably about 0.0002, and so on. In these three examples, the ratio (probability of dying during an interval) / (duration of the interval) is approximately constant, and equal to 2 per hour; for example, there is 0.02 probability of dying in the 0.01-hour interval between 5 and 5.01 hours, and (0.02 probability / 0.01 hours) = 2 hour⁻¹. This quantity 2 hour⁻¹ is called the probability density for dying at around 5 hours. Therefore, in response to the question "What is the probability that the bacterium dies at 5 hours?", a literally correct but unhelpful answer is 0, but a better answer can be written as (2 hour⁻¹) dt: the probability that the bacterium dies within an infinitesimal window of time around 5 hours, where dt is the duration of this window.
For example, the probability that it lives longer than 5 hours but shorter than 5 hours plus 1 nanosecond is (2 hour⁻¹) × (1 nanosecond) ≈ 6 × 10⁻¹³. There is a probability density function f with f(5 hours) = 2 hour⁻¹; the integral of f over any window of time is the probability that the bacterium dies in that window. A probability density function is most commonly associated with absolutely continuous univariate distributions. A random variable X has density fX, where fX is a non-negative Lebesgue-integrable function, if Pr(a ≤ X ≤ b) = ∫ab fX(x) dx. That is, fX is any function whose integral over an interval gives the probability that X falls within that interval. In the continuous univariate case above, the reference measure is the Lebesgue measure.
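The bacteria example can be made concrete with a hypothetical density that has f(5 hours) = 2 hour⁻¹, matching the text; the triangular shape on [4.5, 5.5] below is purely our assumption, chosen so the density integrates to one:

```python
def f(t):
    """Hypothetical triangular lifetime density on [4.5, 5.5] hours, peaking at f(5) = 2."""
    if 4.5 <= t <= 5.0:
        return 4.0 * (t - 4.5)
    if 5.0 < t <= 5.5:
        return 4.0 * (5.5 - t)
    return 0.0

def prob(a, b, n=10000):
    """P(a <= X <= b) as a midpoint-rule integral of the density over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

print(round(prob(4.5, 5.5), 6))   # 1.0: total probability over the support
print(round(prob(5.0, 5.01), 4))  # 0.0198, roughly f(5) * 0.01 = 0.02
```

Over the short interval [5, 5.01] the probability is close to density times interval length, which is exactly the approximation the passage describes.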
20.
Cumulative distribution function
–
In the case of a continuous distribution, it gives the area under the probability density function from minus infinity to x. Cumulative distribution functions are also used to specify the distribution of multivariate random variables. The probability that X lies in the semi-closed interval (a, b] is F(b) − F(a). In the definition above, the "less than or equal to" sign, ≤, is a convention, not a universally used one, but it is important for discrete distributions. The proper use of tables of the binomial and Poisson distributions depends upon this convention; moreover, important formulas like Paul Lévy's inversion formula for the characteristic function also rely on the "less than or equal" formulation. If treating several random variables X, Y, etc., the corresponding letters are used as subscripts, while, if treating only one, the subscript is usually omitted. It is conventional to use a capital F for a cumulative distribution function, in contrast to the lower-case f used for probability density functions. This applies when discussing general distributions; some specific distributions have their own conventional notation. The CDF of a continuous random variable X can be expressed as the integral of its probability density function fX as follows: FX(x) = ∫−∞x fX(t) dt. In the case of a random variable X whose distribution has a discrete component at a value b, P(X = b) = FX(b) − limx→b− FX(x). If FX is continuous at b, this equals zero and there is no discrete component at b. Every cumulative distribution function F is non-decreasing and right-continuous, which makes it a càdlàg function. Furthermore, limx→−∞ F(x) = 0 and limx→+∞ F(x) = 1. The function f equal to the derivative of F almost everywhere is called the probability density function of the distribution of X. As an example, suppose X is uniformly distributed on the unit interval [0, 1]. Then the CDF of X is given by F(x) = 0 for x < 0; F(x) = x for 0 ≤ x < 1; F(x) = 1 for x ≥ 1.
Suppose instead that X takes only the discrete values 0 and 1, with equal probability. Then the CDF of X is given by F(x) = 0 for x < 0; F(x) = 1/2 for 0 ≤ x < 1; F(x) = 1 for x ≥ 1. Sometimes it is useful to study the opposite question and ask how often the random variable is above a particular level. This is called the complementary cumulative distribution function (ccdf), or simply the tail distribution or exceedance, defined as F̄(x) = P(X > x) = 1 − F(x). This has applications in statistical hypothesis testing, for example, because the one-sided p-value is the probability of observing a test statistic at least as extreme as the one observed; thus, provided that the test statistic T has a continuous distribution, the one-sided p-value is given by the ccdf. In survival analysis, F̄(x) is called the survival function and denoted S(x), while the term reliability function is common in engineering. Properties: for a non-negative continuous random variable having an expectation, Markov's inequality states that F̄(x) ≤ E(X)/x. As x → ∞, F̄(x) → 0, and in fact F̄(x) = o(1/x) provided that E(X) is finite. This form of illustration emphasises the median and dispersion of the distribution or of the empirical results. If the CDF F is strictly increasing and continuous, then F⁻¹(p), for p ∈ [0, 1], is the unique real number x such that F(x) = p.
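Both worked examples above, the uniform CDF and the two-point (coin-style) CDF, are easy to sketch in code (our illustration, not the article's); note how the discrete CDF jumps while the continuous one does not:

```python
def cdf_uniform(x):
    """CDF of X ~ Uniform(0, 1): 0 below the interval, x inside it, 1 above."""
    return 0.0 if x < 0 else (x if x < 1 else 1.0)

def cdf_two_point(x):
    """CDF of X taking values 0 and 1 with probability 1/2 each; jumps at 0 and 1."""
    return 0.0 if x < 0 else (0.5 if x < 1 else 1.0)

# Interval probabilities follow from P(a < X <= b) = F(b) - F(a).
print(cdf_uniform(0.75) - cdf_uniform(0.25))                        # 0.5
print(cdf_two_point(-0.1), cdf_two_point(0.5), cdf_two_point(2.0))  # 0.0 0.5 1.0
```

Both functions are non-decreasing, right-continuous, tend to 0 at −∞ and 1 at +∞, matching the càdlàg properties listed above.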
21.
Integral
–
In mathematics, an integral assigns numbers to functions in a way that can describe displacement, area, volume, and other concepts that arise by combining infinitesimal data. Integration is one of the two main operations of calculus, with its inverse, differentiation, being the other. The definite integral of a function over an interval is informally the signed area between its graph and the x-axis: the area above the x-axis adds to the total and that below the x-axis subtracts from the total. Roughly speaking, the operation of integration is the reverse of differentiation; for this reason, the term integral may also refer to the related notion of the antiderivative. In this case it is called an indefinite integral and is written ∫ f(x) dx. The integrals discussed in this article are those termed definite integrals. A rigorous mathematical definition of the integral was given by Bernhard Riemann; it is based on a limiting procedure which approximates the area of a curvilinear region by breaking the region into thin vertical slabs. A line integral is defined for functions of two or three variables, with the interval of integration replaced by a curve connecting two points on the plane or in space; in a surface integral, the curve is replaced by a piece of a surface in three-dimensional space. The ancient Greek method of exhaustion was further developed and employed by Archimedes in the 3rd century BC and used to calculate areas for parabolas and an approximation to the area of a circle. A similar method was developed in China around the 3rd century AD by Liu Hui, and this method was used in the 5th century by the Chinese father-and-son mathematicians Zu Chongzhi and Zu Geng. The next significant advances in integral calculus did not begin to appear until the 17th century. Further steps were made in the early 17th century by Barrow and Torricelli, who provided the first hints of a connection between integration and differentiation; Barrow provided the first proof of the fundamental theorem of calculus. Wallis generalized Cavalieri's method, computing integrals of x to a general power, including negative powers.
The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Newton and Leibniz. The theorem demonstrates a connection between integration and differentiation, and this connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Newton and Leibniz developed.
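Riemann's thin-vertical-slab construction can be sketched numerically (our example; the test function x² is an arbitrary choice): equal-width slabs approximate the signed area, and the approximation converges as the slabs get thinner.

```python
def riemann_sum(f, a, b, n=100000):
    """Left-endpoint Riemann sum of f over [a, b] using n thin vertical slabs."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

# The exact value of the integral of x^2 over [0, 1] is 1/3.
approx = riemann_sum(lambda x: x * x, 0.0, 1.0)
print(round(approx, 4))  # 0.3333
```

The fundamental theorem of calculus gives the same answer instantly via the antiderivative x³/3, which is precisely the "comparative ease" the passage describes.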
22.
Integration by parts
–
It is frequently used to transform the antiderivative of a product of functions into an antiderivative for which a solution can be more easily found. The rule can be derived in one line simply by integrating the product rule of differentiation; more general formulations of integration by parts exist for the Riemann–Stieltjes and Lebesgue–Stieltjes integrals, and the discrete analogue for sequences is called summation by parts. The theorem can be derived as follows. Suppose u(x) and v(x) are two differentiable functions. The product rule states (d/dx)(u(x)v(x)) = v(x)(du/dx) + u(x)(dv/dx); integrating both sides and rearranging gives ∫ u dv = uv − ∫ v du. It is not actually necessary for u and v to be continuously differentiable: integration by parts works if u is absolutely continuous and the function designated v′ is Lebesgue integrable. One can also come up with examples in which u and v are not continuously differentiable. This visualisation also explains why integration by parts may help find the integral of an inverse function f⁻¹ when the integral of the function f is known: indeed, the functions x(y) and y(x) are inverses, and the integral ∫ x dy may be calculated as above from knowing the integral ∫ y dx. The following form is useful in illustrating the best strategy to take. As a simple example, consider ∫ (ln x)/x² dx. Since the derivative of ln x is 1/x, one makes ln x part u; since the antiderivative of 1/x² is −1/x, one makes dx/x² part dv. The formula now yields ∫ (ln x)/x² dx = −(ln x)/x − ∫ (−1/x)(1/x) dx. The antiderivative of −1/x² can be found with the power rule and is 1/x, so the result is −(ln x)/x − 1/x + C. Alternatively, one may choose u and v such that the product v·(du/dx) simplifies due to cancellation. For example, suppose one wishes to integrate ∫ sec²(x)·ln|sin x| dx: taking u = ln|sin x| and v = tan x, the product v·(du/dx) = tan x · cot x simplifies to 1, so that part of the antiderivative is simply x. Finding a simplifying combination frequently involves experimentation. Some other special techniques are demonstrated in the examples below. Exponentials and trigonometric functions: an example commonly used to examine the workings of integration by parts is I = ∫ eˣ cos x dx.
Here, integration by parts is performed twice. First, ∫ eˣ cos x dx = eˣ cos x + ∫ eˣ sin x dx; then, ∫ eˣ sin x dx = eˣ sin x − ∫ eˣ cos x dx. Putting these together, ∫ eˣ cos x dx = eˣ cos x + eˣ sin x − ∫ eˣ cos x dx. The same integral shows up on both sides of this equation, so it can be solved for algebraically, giving ∫ eˣ cos x dx = eˣ(sin x + cos x)/2 + C.
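Since the same integral appears on both sides of the equation above, solving algebraically gives the antiderivative eˣ(sin x + cos x)/2, and that closed form can be checked numerically (a sketch of ours, using a simple midpoint rule):

```python
import math

def antiderivative(x):
    """e^x (sin x + cos x) / 2, obtained by applying integration by parts twice."""
    return math.exp(x) * (math.sin(x) + math.cos(x)) / 2.0

def midpoint_integral(f, a, b, n=100000):
    """Midpoint-rule numerical integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

numeric = midpoint_integral(lambda x: math.exp(x) * math.cos(x), 0.0, 1.0)
exact = antiderivative(1.0) - antiderivative(0.0)
print(abs(numeric - exact) < 1e-8)  # True: the two agree to high precision
```

Differentiating eˣ(sin x + cos x)/2 by the product rule returns eˣ cos x, confirming the result symbolically as well.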
23.
Quantile function
–
It is also called the percent-point function or inverse cumulative distribution function. The quantile function specifies, for a given probability p, the value at or below which a random draw from the distribution (as judged by its c.d.f.) would fall p percent of the time. In terms of the distribution function F, the quantile function Q returns the value x such that FX(x) = Pr(X ≤ x) = p. Another way to express the quantile function is Q(p) = inf{x ∈ R : p ≤ F(x)} for a probability 0 < p < 1; here we capture the fact that the quantile function returns the minimum value of x from amongst all those values whose c.d.f. value equals or exceeds p. When the c.d.f. is not continuous and strictly increasing, we need to use this more complicated formula. For example, the cumulative distribution function of Exponential(λ) is F(x; λ) = 1 − e^(−λx) for x ≥ 0, and 0 for x < 0. The quantile function for Exponential(λ) is derived by finding the value of Q for which 1 − e^(−λQ) = p: Q(p; λ) = −ln(1 − p)/λ, for 0 ≤ p < 1. The quartiles are therefore: first quartile ln(4/3)/λ, median ln(2)/λ, third quartile ln(4)/λ. Quantile functions are used in both statistical applications and Monte Carlo methods. The quantile function Q of a probability distribution is the inverse of its cumulative distribution function F. The derivative of the quantile function, namely the quantile density function, is yet another way of prescribing a probability distribution; it is the reciprocal of the pdf composed with the quantile function. For statistical applications, users need to know key percentage points of a given distribution; before the popularization of computers, it was not uncommon for books to have appendices with statistical tables sampling the quantile function. Statistical applications of quantile functions are discussed extensively by Gilchrist. Monte Carlo simulations employ quantile functions to produce non-uniform random or pseudorandom numbers for use in diverse types of simulation calculations: a sample from a given distribution may be obtained in principle by applying its quantile function to a sample from a uniform distribution. When the cdf itself has a closed-form expression, one can always use a numerical root-finding algorithm, such as the bisection method, to invert the cdf.
Other algorithms to evaluate quantile functions are given in the Numerical Recipes series of books, and algorithms for common distributions are built into many statistical software packages. Quantile functions may also be characterized as solutions of non-linear ordinary differential equations; the ordinary differential equations for the cases of the normal, Student's t, beta and gamma distributions have been given and solved. The normal distribution is perhaps the most important case. Unfortunately, its quantile function has no closed-form representation using basic algebraic functions; as a result, approximate representations are usually used. Thorough composite rational and polynomial approximations have been given by Wichura, and non-composite rational approximations have been developed by Shaw. A non-linear ordinary differential equation for the normal quantile, w(p), is d²w/dp² = w (dw/dp)², with the centre conditions w(1/2) = 0 and w′(1/2) = √(2π). This equation may be solved by several methods, including the classical power series approach.
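The exponential quantile function above, and its use for inverse-transform Monte Carlo sampling, can be sketched as follows (our code; λ = 2 and the seed are arbitrary choices):

```python
import math
import random

def exp_quantile(p, lam):
    """Q(p) = -ln(1 - p) / lam, inverting the exponential CDF F(x) = 1 - exp(-lam * x)."""
    return -math.log(1.0 - p) / lam

lam = 2.0
print(round(exp_quantile(0.25, lam), 4))  # first quartile, ln(4/3) / lam
print(round(exp_quantile(0.5, lam), 4))   # median, ln(2) / lam

# Inverse-transform sampling: uniform draws pushed through Q become exponential draws.
random.seed(1)
draws = [exp_quantile(random.random(), lam) for _ in range(100000)]
print(round(sum(draws) / len(draws), 3))  # close to the exponential mean 1 / lam = 0.5
```

This is exactly the principle stated above: applying the quantile function to a uniform sample yields a sample from the target distribution.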
24.
Lognormal distribution
–
In probability theory, a log-normal distribution is a continuous probability distribution of a random variable whose logarithm is normally distributed. Thus, if the random variable X is log-normally distributed, then Y = ln(X) has a normal distribution; likewise, if Y has a normal distribution, then X = exp(Y) has a log-normal distribution. A random variable which is log-normally distributed takes only positive real values. The distribution is occasionally referred to as the Galton distribution or Galton's distribution, after Francis Galton; the log-normal distribution has also been associated with other names, such as McAlister and Gibrat. A log-normal process is the realization of the multiplicative product of many independent positive random variables; this is justified by considering the central limit theorem in the log domain. The log-normal distribution is the maximum entropy probability distribution for a random variate X for which the mean and variance of ln(X) are specified. This relationship is true regardless of the base of the logarithmic or exponential function: if log_a(X) is normally distributed, then so is log_b(X); likewise, if e^X is log-normally distributed, then so is a^X, where a is a positive number ≠ 1. On a logarithmic scale, μ and σ can be called the location parameter and the scale parameter. In contrast, the mean, standard deviation, and variance of the non-logarithmized sample values are respectively denoted m, s.d. and v in this article. The two sets of parameters can be related as μ = ln(m/√(1 + v/m²)) and σ² = ln(1 + v/m²). A random positive variable x is log-normally distributed if the logarithm of x is normally distributed; the density is f(x) = 1/(xσ√(2π)) · exp(−(ln x − μ)²/(2σ²)). A change of variables must conserve differential probability. All moments of the log-normal distribution exist, and E[Xⁿ] = e^(nμ + n²σ²/2); this can be derived by letting z = (ln x − (μ + nσ²))/σ within the integral. However, the expected value E[e^(tX)] is not defined for any positive value of the argument t, as the defining integral diverges.
In consequence the moment generating function is not defined, the last is related to the fact that the lognormal distribution is not uniquely determined by its moments. In consequence, the function of the log-normal distribution cannot be represented as an infinite convergent series. In particular, its Taylor formal series diverges, ∑ n =0 ∞ n n, a relatively simple approximating formula is available in closed form and given by φ ≈ exp 1 + W where W is the Lambert W function. This approximation is derived via a method but it stays sharp all over the domain of convergence of φ. The geometric mean of the distribution is G M = e μ. By analogy with the statistics, one can define a geometric variance, G V a r = e σ2
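The conversion between the log-scale parameters (μ, σ) and the mean m and variance v of the non-logarithmized values can be sketched as follows (a small Python illustration, not from the original article; the function names are ours):

```python
import math

def lognormal_mean_var(mu, sigma):
    """Mean and variance of X when ln(X) ~ N(mu, sigma^2)."""
    m = math.exp(mu + sigma**2 / 2)                              # E[X]
    v = (math.exp(sigma**2) - 1) * math.exp(2 * mu + sigma**2)   # Var[X]
    return m, v

def lognormal_params(m, v):
    """Recover (mu, sigma) from the mean m and variance v of the raw values."""
    mu = math.log(m**2 / math.sqrt(v + m**2))
    sigma = math.sqrt(math.log(1 + v / m**2))
    return mu, sigma
```

The two functions are inverses of each other, which is a quick consistency check on the formulas μ = ln(m²/√(v + m²)) and σ² = ln(1 + v/m²).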
25.
Error function
–
In mathematics, the error function is a special function of sigmoid shape that occurs in probability, statistics, and the partial differential equations describing diffusion. It is defined as erf(x) = (1/√π) ∫_{−x}^{x} e^{−t²} dt = (2/√π) ∫₀^{x} e^{−t²} dt. The error function is used in measurement theory, and its use in other branches of mathematics is typically unrelated to the characterization of measurement errors. In statistics, it is common to have a variable Y and its estimate Ŷ; the error is then defined as ε = Ŷ − Y. This makes the error a normally distributed random variable with mean 0 and some variance σ². This is true for any random variable with distribution N(0, σ²), but the application to error variables is how the error function got its name. The previous paragraph can be generalized to any variance: given a variable ε ∼ N(0, σ²), this is used in statistics to predict the behavior of any sample with respect to the population mean. This usage is similar to the Q-function, which in fact can be written in terms of the error function. Another form of erfc for non-negative x is known as Craig's formula: erfc(x) = (2/π) ∫₀^{π/2} exp(−x²/sin²θ) dθ. The imaginary error function, denoted erfi, is defined as erfi(x) = −i erf(ix) = (2/√π) ∫₀^{x} e^{t²} dt = (2/√π) e^{x²} D(x), where D is the Dawson function. Despite the name "imaginary error function", erfi(x) is real when x is real. The error function is related to the cumulative distribution Φ, the integral of the standard normal distribution, by Φ(x) = 1/2 + (1/2) erf(x/√2) = (1/2) erfc(−x/√2). The property erf(−x) = −erf(x) means that the error function is an odd function; this results directly from the fact that the integrand e^{−t²} is an even function. For any complex number z, erf(z̄) = erf(z)̄, where z̄ is the complex conjugate of z. The functions f = exp(−z²) and f = erf(z) are shown in the complex z-plane in figures 2 and 3; the level Im(f) = 0 is shown with a thick green line, negative integer values of Im(f) with red lines, and positive integer values of Im(f) with blue lines. 
Intermediate levels of Im(f) = constant are shown with thin green lines; intermediate levels of Re(f) = constant are shown with thin red lines for negative values and thin blue lines for positive values. The error function at +∞ is exactly 1. On the real axis, erf(z) approaches unity as z → +∞ and −1 as z → −∞; on the imaginary axis, it tends to ±i∞. The error function is an entire function: it has no singularities and its Taylor expansion always converges, erf(z) = (2/√π) ∑_{n=0}^∞ (−1)ⁿ z^{2n+1} / (n!(2n+1)), which holds for every complex number z.
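The defining integral and the relation to the normal CDF can be verified directly; here is a brief Python sketch (not part of the original article) that evaluates the integral (2/√π) ∫₀^x e^{−t²} dt by a midpoint rule and compares it with the library implementation:

```python
import math

def erf_quad(x, n=2000):
    """erf(x) = (2/sqrt(pi)) * integral_0^x exp(-t^2) dt, via the midpoint rule."""
    h = x / n
    s = sum(math.exp(-((i + 0.5) * h) ** 2) for i in range(n))
    return 2.0 / math.sqrt(math.pi) * h * s

def Phi(x):
    """Standard normal CDF expressed through erf: Phi(x) = 1/2 + erf(x/sqrt(2))/2."""
    return 0.5 + 0.5 * math.erf(x / math.sqrt(2.0))
```

The odd symmetry erf(−x) = −erf(x) also falls out of the quadrature, since the integrand is even.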
26.
Dirac delta function
–
It was introduced by the theoretical physicist Paul Dirac. From a purely mathematical viewpoint, the Dirac delta is not strictly a function, because any extended-real function that is equal to zero everywhere except at a single point must have total integral zero; the delta function only makes sense as a mathematical object when it appears inside an integral. From this perspective the Dirac delta can usually be manipulated as though it were a function; the formal rules obeyed by this function are part of the operational calculus, a standard toolkit of physics and engineering. Formally, the delta function must be defined as the distribution that corresponds to a probability measure supported at the origin. In many applications, the Dirac delta is regarded as a kind of limit of a sequence of functions having a tall spike at the origin; the approximating functions of the sequence are thus called approximate or nascent delta functions. In the context of signal processing the delta function is often referred to as the unit impulse symbol. Its discrete analog is the Kronecker delta function, which is defined on a discrete domain. The graph of the function is usually thought of as following the whole x-axis and the positive y-axis. Despite its name, the delta function is not truly a function. For example, the objects f(x) = δ(x) and g(x) = 0 are equal everywhere except at x = 0, yet have integrals that are different. According to Lebesgue integration theory, if f and g are functions such that f = g almost everywhere, then f is integrable if and only if g is integrable and their integrals are equal; a rigorous treatment of the Dirac delta therefore requires measure theory or the theory of distributions. The Dirac delta is used to model a tall narrow spike function (an impulse); for example, to calculate the dynamics of a baseball being hit by a bat, one can approximate the force of the bat hitting the baseball by a delta function. Later, Augustin Cauchy expressed the Fourier integral theorem using exponentials, f(x) = (1/2π) ∫_{−∞}^{∞} e^{ipx} (∫_{−∞}^{∞} e^{−ipα} f(α) dα) dp. Cauchy pointed out that in some circumstances the order of integration in this result was significant. 
A rigorous interpretation of the exponential form, and of the various limitations upon the function f necessary for its application, extended over several centuries. The problem with a classical interpretation is that the functions must decrease sufficiently rapidly to zero in order to ensure the existence of the Fourier integral; for example, the Fourier transform of such simple functions as polynomials does not exist in the classical sense. The extension of the classical Fourier transformation to distributions considerably enlarged the class of functions that could be transformed, leading to the formal development of the Dirac delta function. An infinitesimal formula for a tall, narrow unit-impulse delta function explicitly appears in an 1827 text of Augustin-Louis Cauchy. Siméon Denis Poisson considered the issue in connection with the study of wave propagation, as did Gustav Kirchhoff somewhat later.
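The "nascent delta function" idea above can be demonstrated numerically: integrating a narrow Gaussian spike against a smooth test function f recovers approximately f(0). A small Python sketch (our own illustration, not from the article; the function names are invented):

```python
import math

def nascent_delta(x, eps):
    """A normalized Gaussian spike of width eps; as eps -> 0 it acts like the Dirac delta."""
    return math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def smear(f, eps, n=20001, half_width=1.0):
    """Midpoint-rule approximation of integral over [-hw, hw] of delta_eps(x) * f(x) dx."""
    h = 2 * half_width / n
    return sum(nascent_delta(-half_width + (i + 0.5) * h, eps)
               * f(-half_width + (i + 0.5) * h) for i in range(n)) * h

spike_cos = smear(math.cos, 0.01)        # should be close to cos(0) = 1
spike_odd = smear(lambda x: x, 0.01)     # should be close to 0 by symmetry
```

As eps shrinks, the smeared value converges to f(0), which is exactly the sifting property that makes the delta meaningful inside an integral.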
27.
Uniform distribution (continuous)
–
The support is defined by the two parameters, a and b, which are its minimum and maximum values. The distribution is often abbreviated U(a, b); it is the maximum entropy probability distribution for a random variate X under no constraint other than that it is contained in the distribution's support. The values of the density at the two boundaries are usually unimportant, because they do not alter the value of any integral. Sometimes they are chosen to be zero, and sometimes chosen to be 1/(b − a); the latter is appropriate in the context of estimation by the method of maximum likelihood. Taking the boundary values so that the density agrees with its two-sided limit except on a set of points with zero measure also makes it consistent with the sign function, which has no such ambiguity. For a random variable following this distribution, the expected value is m₁ = (a + b)/2; for n ≥ 2, the nth cumulant of the uniform distribution on the interval [0, 1] is Bₙ/n, where Bₙ is the nth Bernoulli number. The mean of the distribution is E[X] = (a + b)/2 and the variance is V[X] = (b − a)²/12. Let X₁, ..., Xₙ be an i.i.d. sample from U(0, 1), and let X₍ₖ₎ be the kth order statistic from this sample; then the probability distribution of X₍ₖ₎ is a Beta distribution with parameters k and n − k + 1. The expected value is E[X₍ₖ₎] = k/(n + 1); this fact is useful when making Q–Q plots. The variance is V[X₍ₖ₎] = k(n − k + 1)/((n + 1)²(n + 2)). The probability that a uniform variable falls within any subinterval of fixed length is independent of the subinterval's location: if X ~ U(a, b) and [x, x + d] is a subinterval of [a, b] with fixed d > 0, then P = ∫ₓ^{x+d} dy/(b − a) = d/(b − a), which is independent of x. This fact motivates the distribution's name. The uniform distribution can be generalized to more complicated sets than intervals. Restricting a = 0 and b = 1, the resulting distribution U(0, 1) is called the standard uniform distribution. One interesting property of the standard uniform distribution is that if u₁ has a standard uniform distribution, then so does 1 − u₁; this property can be used for generating antithetic variates, among other things. If X has a standard uniform distribution, then by the inverse transform sampling method, Y = −λ⁻¹ ln(X) has an exponential distribution with parameter λ. If X has a standard uniform distribution, then Y = Xⁿ has a beta distribution with parameters 1/n and 1; as such, X itself is a special case of the beta distribution, with parameters (1, 1). The Irwin–Hall distribution is the sum of n i.i.d. U(0, 1) variables: the sum of two independent, equally distributed uniform variables yields a symmetric triangular distribution, while the sum of two uniform random variables that are not identically distributed also has a triangular distribution, although not a symmetric one.
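The inverse transform property mentioned above (a standard uniform variable mapped through −λ⁻¹ ln(·) becomes exponential) is easy to check by simulation. A brief Python sketch, not from the original article:

```python
import math
import random

random.seed(0)
lam = 2.0
# Inverse transform sampling: if U ~ U(0,1), then -ln(1 - U)/lam ~ Exp(lam).
# (1 - U is used so the argument of the log is never exactly zero.)
samples = [-math.log(1.0 - random.random()) / lam for _ in range(200_000)]
sample_mean = sum(samples) / len(samples)   # should be close to 1/lam = 0.5
```

With 200,000 draws the sample mean lands within a fraction of a percent of the exponential mean 1/λ.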
28.
Exponential distribution
–
It is a particular case of the gamma distribution. It is the continuous analogue of the geometric distribution, and it has the key property of being memoryless. In addition to being used for the analysis of Poisson processes, it is found in various other contexts. The probability density function of an exponential distribution is f(x; λ) = λe^{−λx} for x ≥ 0 and 0 for x < 0. Alternatively, this can be defined using the right-continuous Heaviside step function H (with H(0) = 1) as f(x; λ) = λe^{−λx} H(x). Here λ > 0 is the rate parameter of the distribution, and the distribution is supported on the interval [0, ∞). If a random variable X has this distribution, we write X ~ Exp(λ). The exponential distribution exhibits infinite divisibility. The cumulative distribution function is given by F(x; λ) = 1 − e^{−λx} for x ≥ 0 and 0 for x < 0. The distribution may alternatively be parametrized by β = 1/λ > 0, which is at once the mean, standard deviation, and scale parameter of the distribution; that is to say, the expected duration of survival of the system is β units of time. The parametrization involving the rate parameter arises in the context of events arriving at a rate λ. The alternative specification is sometimes more convenient than the one given above, and some authors will use it as a standard definition. This alternative specification is not used here; unfortunately the switch gives rise to a notational ambiguity. As an example of this switch, one reference uses λ for β. The mean or expected value of an exponentially distributed random variable X with rate parameter λ is given by E[X] = 1/λ = β (see above). In this sense, if you receive phone calls at an average rate of 2 per hour, you can expect to wait half an hour for every call. The variance of X is given by Var[X] = 1/λ² = β², and the moments of X, for n = 1, 2, ..., are given by E[Xⁿ] = n!/λⁿ. The median of X is given by m = ln(2)/λ < E[X], where ln refers to the natural logarithm. Thus the absolute difference between the mean and median is |E[X] − m| = (1 − ln 2)/λ < 1/λ = standard deviation. An exponentially distributed random variable T obeys the memorylessness relation Pr(T > s + t | T > s) = Pr(T > t) for all s, t ≥ 0. 
The exponential distribution and the geometric distribution are the only memoryless probability distributions. The exponential distribution is consequently also the only continuous probability distribution that has a constant failure rate. The quantile function for Exp(λ) is F⁻¹(p; λ) = −ln(1 − p)/λ for 0 ≤ p < 1. The quartiles are therefore: first quartile ln(4/3)/λ, median ln(2)/λ, third quartile ln(4)/λ; as a consequence, the interquartile range is ln(3)/λ.
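The quantile function and quartiles above can be written down directly; a short Python sketch (not from the original article; the parameter value is arbitrary):

```python
import math

lam = 1.5  # rate parameter, chosen arbitrarily for the demonstration

def exp_quantile(p, lam):
    """Quantile function of Exp(lam): F^-1(p) = -ln(1 - p)/lam for 0 <= p < 1."""
    return -math.log(1.0 - p) / lam

median = exp_quantile(0.5, lam)                             # equals ln(2)/lam
iqr = exp_quantile(0.75, lam) - exp_quantile(0.25, lam)     # equals ln(3)/lam
```

Note that Q3 − Q1 = (ln 4 − ln(4/3))/λ = ln(3)/λ, matching the interquartile range quoted above.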
30.
Pareto distribution
–
The Pareto Type I distribution is characterized by a scale parameter xm and a shape parameter α, which is known as the tail index. When this distribution is used to model the distribution of wealth, the parameter α is called the Pareto index. From the definition, the cumulative distribution function of a Pareto random variable with parameters α and xm is F_X(x) = 1 − (xm/x)^α for x ≥ xm and 0 for x < xm. It follows that the probability density function is f_X(x) = α xm^α / x^{α+1} for x ≥ xm and 0 for x < xm. When plotted on linear axes, the distribution assumes the familiar J-shaped curve which approaches each of the orthogonal axes asymptotically; all segments of the curve are self-similar. When plotted in a log–log plot, the distribution is represented by a straight line. The expected value of a random variable following a Pareto distribution is E[X] = ∞ for α ≤ 1 and α xm/(α − 1) for α > 1. The variance is Var[X] = (xm/(α − 1))² · α/(α − 2) for α > 2, and does not exist for α ≤ 2. Suppose X₁, X₂, ... are i.i.d. positive random variables and that, for all n, the random variables min(X₁, ..., Xₙ) and (X₁, ..., Xₙ)/min(X₁, ..., Xₙ) are independent; then the common distribution is a Pareto distribution. The geometric mean is G = xm exp(1/α), and the harmonic mean is H = xm (1 + 1/α). There is a hierarchy of Pareto distributions known as Pareto Types I, II, III and IV; Pareto Type IV contains Pareto Types I–III as special cases. The Feller–Pareto distribution generalizes Pareto Type IV. The Pareto distribution hierarchy is summarized in the next table comparing the survival functions. When μ = 0, the Pareto distribution Type II is also known as the Lomax distribution. In this section, the symbol xm, used before to indicate the minimum value of x, is replaced by σ; the shape parameter α is the tail index, μ is location, and σ is scale. Some special cases of Pareto Type IV coincide with Types I–III. The finiteness of the mean, and the existence and finiteness of the variance, depend on the tail index α; in particular, fractional δ-moments are finite for some δ > 0, as shown in the table below. If U₁ ∼ Γ(δ₁, 1) and U₂ ∼ Γ(δ₂, 1) are independent Gamma variables, a Feller–Pareto variable can be constructed as W = μ + σ (U₁/U₂)^γ with σ > 0 and γ > 0, and we write W ~ FP(μ, σ, γ, δ₁, δ₂). Special cases of the Feller–Pareto distribution recover the Pareto Types I–IV. Vilfredo Pareto originally used this distribution to describe the allocation of wealth, and he also used it to describe the distribution of income. This idea is expressed more simply as the Pareto principle or the "80-20 rule", which says that 20% of the population controls 80% of the wealth. This distribution is not limited to describing wealth or income; in hydrology the Pareto distribution is applied to extreme events such as annual maximum one-day rainfalls and river discharges.
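Because the Type I CDF inverts in closed form, Pareto samples are easy to generate and the mean formula α xm/(α − 1) is easy to check. A small Python sketch (our own illustration, not from the article):

```python
import random

random.seed(1)
alpha, xm = 3.0, 2.0
# Inverse-CDF sampling: if U ~ U(0,1], then xm * U**(-1/alpha) follows Pareto(alpha, xm).
samples = [xm * (1.0 - random.random()) ** (-1.0 / alpha) for _ in range(200_000)]
sample_mean = sum(samples) / len(samples)
exact_mean = alpha * xm / (alpha - 1.0)   # finite here because alpha > 1
```

With α = 3 the mean is finite (exact value 3.0); taking α ≤ 1 instead would make the empirical mean drift upward without settling, reflecting E[X] = ∞.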
31.
Chi-squared distribution
–
In probability theory and statistics, the chi-squared distribution with k degrees of freedom is the distribution of a sum of the squares of k independent standard normal random variables. When it is being distinguished from the more general noncentral chi-squared distribution, it is called the central chi-squared distribution. Many statistical tests use this distribution, such as Friedman's analysis of variance by ranks. If Z₁, ..., Zₖ are independent standard normal variables, then the sum of their squares is distributed according to the chi-squared distribution with k degrees of freedom; this is usually denoted Q ∼ χ²(k) or Q ∼ χ²ₖ. The chi-squared distribution has one parameter, k, a positive integer that specifies the number of degrees of freedom. The chi-squared distribution is used primarily in hypothesis testing, unlike more widely known distributions such as the normal distribution and the exponential distribution. It arises in the following tests, among others. The primary reason that the chi-squared distribution is used extensively in hypothesis testing is its relationship to the normal distribution. Many hypothesis tests use a test statistic, such as the t statistic in a t-test; for these hypothesis tests, as the sample size n increases, the sampling distribution of the test statistic approaches the normal distribution. Testing hypotheses using the normal distribution is well understood and relatively easy. The simplest chi-squared distribution is the square of a standard normal distribution, so wherever a normal distribution could be used for a hypothesis test, a chi-squared distribution could be used. Specifically, suppose that Z is a standard normal random variable, with mean 0 and variance 1. A sample drawn at random from Z is a sample from the distribution shown in the graph of the standard normal distribution. Define a new random variable Q = Z²; to generate a random sample from Q, take a sample from Z and square the value. The distribution of the squared values is given by the random variable Q = Z². 
The distribution of the random variable Q is an example of a chi-squared distribution: Q ∼ χ²₁. The subscript 1 indicates that this particular chi-squared distribution is constructed from only one standard normal distribution; a chi-squared distribution constructed by squaring a single standard normal variable is said to have 1 degree of freedom. Just as extreme values of the normal distribution have low probability, so do extreme values of the chi-squared distribution. An additional reason that the chi-squared distribution is widely used is that it arises as the limiting distribution in likelihood ratio tests.
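The defining construction (sum the squares of k standard normals) translates directly into code, and the known moments E[Q] = k and Var[Q] = 2k give a quick sanity check. A Python sketch, not from the original article:

```python
import random
import statistics

random.seed(2)
k = 4
# Q = Z1^2 + ... + Zk^2 follows a chi-squared distribution with k degrees of
# freedom, so its mean is k and its variance is 2k.
qs = [sum(random.gauss(0.0, 1.0) ** 2 for _ in range(k)) for _ in range(100_000)]
mean_q = statistics.fmean(qs)
var_q = statistics.pvariance(qs)
```

For k = 4, the simulated mean and variance should land near 4 and 8 respectively.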
32.
Gamma distribution
–
In probability theory and statistics, the gamma distribution is a two-parameter family of continuous probability distributions. The common exponential distribution and chi-squared distribution are special cases of the gamma distribution. There are three different parametrizations in common use: with a shape parameter k and a scale parameter θ; with a shape parameter α = k and a rate parameter β = 1/θ; and with a shape parameter k and a mean parameter μ = k/β. In each of these three forms, both parameters are positive real numbers. The gamma distribution is the maximum entropy probability distribution for a random variable X for which E[X] = kθ = α/β is fixed and greater than zero, and E[ln X] = ψ(k) + ln θ = ψ(α) − ln β is fixed (ψ is the digamma function). The parameterization with k and θ appears to be more common in econometrics and certain other applied fields. For instance, in life testing, the waiting time until death is a random variable that is frequently modeled with a gamma distribution. If k is an integer, then the distribution represents an Erlang distribution, i.e. the sum of k independent exponentially distributed random variables. The gamma distribution can also be parameterized in terms of a shape parameter α = k and a rate parameter β = 1/θ; both parametrizations are common because either can be more convenient depending on the situation. The cumulative distribution function is the regularized gamma function, F(x; k, θ) = ∫₀ˣ f(u; k, θ) du = γ(k, x/θ)/Γ(k), where γ is the lower incomplete gamma function and Γ is the gamma function evaluated at k. If α is an integer, the cumulative distribution function has the series expansion F(x; α, β) = e^{−βx} ∑_{i=α}^{∞} (βx)ⁱ/i!. Equivalently, in the shape–scale parametrization with positive integer k, F(x; k, θ) = e^{−x/θ} ∑_{i=k}^{∞} (x/θ)ⁱ/i!. The skewness is equal to 2/√k; it depends only on the shape parameter. 
Unlike the mode and the mean, which have readily calculable formulas based on the parameters, the median has no closed-form equation; the median for this distribution is defined as the value ν such that (1/(Γ(k)θᵏ)) ∫₀^ν x^{k−1} e^{−x/θ} dx = 1/2. A formula for approximating the median for any gamma distribution, when the mean is known, has been derived based on the fact that the ratio μ/ν is approximately a linear function of k when k ≥ 1. The approximation formula is ν ≈ μ(3k − 0.8)/(3k + 0.2). K. P. Choi studied asymptotic approximations of the median; later, it was shown that λ is a convex function of m.
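The median approximation ν ≈ μ(3k − 0.8)/(3k + 0.2) can be checked against an empirical median from simulated gamma variates. A Python sketch (our own illustration; the parameter values are arbitrary):

```python
import random
import statistics

random.seed(3)
k, theta = 2.5, 1.0
mean = k * theta
# Approximate-median formula quoted above: nu ~ mean * (3k - 0.8) / (3k + 0.2).
nu_approx = mean * (3 * k - 0.8) / (3 * k + 0.2)
# Empirical median from a large sample of Gamma(k, theta) variates.
samples = [random.gammavariate(k, theta) for _ in range(200_000)]
nu_empirical = statistics.median(samples)
```

For k = 2.5 the approximation gives about 2.175, and the empirical median agrees to within sampling noise.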
33.
Weibull distribution
–
In probability theory and statistics, the Weibull distribution /ˈveɪbʊl/ is a continuous probability distribution. Its complementary cumulative distribution function is a stretched exponential function. The Weibull distribution is related to a number of other probability distributions; in particular, it interpolates between the exponential distribution and the Rayleigh distribution. If the quantity X is a time-to-failure, the Weibull distribution gives a distribution for which the failure rate is proportional to a power of time. The shape parameter, k, is that power plus one, so this parameter can be interpreted directly: a value of k < 1 indicates that the failure rate decreases over time. This happens if there is significant infant mortality, or defective items failing early, with the failure rate decreasing over time as the defective items are weeded out of the population. A value of k = 1 indicates that the failure rate is constant over time; this might suggest random external events are causing mortality or failure, and the Weibull distribution reduces to an exponential distribution. A value of k > 1 indicates that the failure rate increases with time; this happens if there is an aging process, or parts that are more likely to fail as time goes on. In the context of the diffusion of innovations, this means positive word of mouth. For k > 1 the hazard function is first concave, then convex, with an inflexion point. In the field of materials science, the shape parameter k of a distribution of strengths is known as the Weibull modulus. In the context of diffusion of innovations, the Weibull distribution is a pure imitation/rejection model. In medical statistics a different parameterization is used: the shape parameter k is the same as above, and the scale parameter is b = λ^{−k}; for x ≥ 0 the hazard function is then h(x; k, b) = bkx^{k−1}. A third parameterization is sometimes used, in which the shape parameter k is the same as above. The form of the density function of the Weibull distribution changes drastically with the value of k. For 0 < k < 1, the density function tends to ∞ as x approaches zero from above and is strictly decreasing. For k = 1, the density function tends to 1/λ as x approaches zero from above and is strictly decreasing. 
For k > 1, the density function tends to zero as x approaches zero from above, increases until its mode, and decreases after it. For k = 2 the density has a finite positive slope at x = 0. As k goes to infinity, the Weibull distribution converges to a Dirac delta distribution centered at x = λ. Moreover, the skewness and coefficient of variation depend only on the shape parameter. The cumulative distribution function for the Weibull distribution is F(x; k, λ) = 1 − e^{−(x/λ)^k} for x ≥ 0, and 0 for x < 0. The quantile function for the Weibull distribution is Q(p; k, λ) = λ(−ln(1 − p))^{1/k} for 0 ≤ p < 1. The failure rate h is given by h(x; k, λ) = (k/λ)(x/λ)^{k−1}. The moment generating function of the logarithm of a Weibull distributed random variable is given by E[e^{t ln X}] = λᵗ Γ(1 + t/k), where Γ is the gamma function.
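The CDF and quantile function above are exact inverses of each other, which can be checked directly. A small Python sketch (not from the original article; the parameter values are arbitrary):

```python
import math

lam, k = 2.0, 1.5  # scale lambda and shape k, chosen arbitrarily

def weibull_cdf(x):
    """F(x; k, lam) = 1 - exp(-(x/lam)^k) for x >= 0."""
    return 1.0 - math.exp(-((x / lam) ** k))

def weibull_quantile(p):
    """Q(p; k, lam) = lam * (-ln(1 - p))^(1/k) for 0 <= p < 1."""
    return lam * (-math.log(1.0 - p)) ** (1.0 / k)
```

Note that Q(1 − e⁻¹) = λ regardless of k, so λ is always the 63.2nd-percentile of the distribution.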
34.
Beta distribution
–
The beta distribution has been applied to model the behavior of random variables limited to intervals of finite length in a wide variety of disciplines. In Bayesian inference, the beta distribution is the conjugate prior probability distribution for the Bernoulli, binomial, negative binomial and geometric distributions. The beta distribution is a suitable model for the random behavior of percentages and proportions. The beta function, B, is a normalization constant to ensure that the total probability integrates to 1. In the above equations x is an observed value, one that actually occurred, of the random process X. Several authors, including N. L. Johnson and S. Kotz, use other symbols for the shape parameters. The probability density function satisfies the differential equation f′(x) = f(x) ((α − 1)/x − (β − 1)/(1 − x)). The cumulative distribution function is F(x; α, β) = B(x; α, β)/B(α, β) = I_x(α, β), where B(x; α, β) is the incomplete beta function and I_x(α, β) is the regularized incomplete beta function. The mode of a beta distributed random variable X with α, β > 1 is the most likely value of the distribution, (α − 1)/(α + β − 2); when both parameters are less than one, this is the anti-mode, the lowest point of the probability density curve. Letting α = β, the expression for the mode simplifies to 1/2, showing that for α = β > 1 the mode is at the center of the distribution; the distribution is symmetric in those cases. See the Shapes section in this article for a full list of mode cases; for several of these cases, the maximum value of the density function occurs at one or both ends. In some cases the value of the density function at an end is finite; for example, in the case of α = 2, β = 1, the density function becomes a right-triangle distribution which is finite at both ends. In several other cases there is a singularity at one end; for example, in the case α = β = 1/2, the beta distribution simplifies to the arcsine distribution. There is debate among mathematicians about some of these cases and whether the ends can be called modes or not. There is no general closed-form expression for the median of the beta distribution for arbitrary values of α and β. 
Closed-form expressions for particular values of the parameters α and β follow. For symmetric cases α = β, median = 1/2. For α = 1 and β > 0, median = 1 − 2^{−1/β}; for α > 0 and β = 1, median = 2^{−1/α}. For α = 3 and β = 2, median = 0.6142724318676105..., the real solution in [0, 1] to the quartic equation 1 − 8x³ + 6x⁴ = 0; for α = 2 and β = 3, median = 0.38572756813238945... A reasonable approximation of the median is given by median ≈ (α − 1/3)/(α + β − 2/3); when α, β ≥ 1, the error in this approximation is less than 4%.
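The approximation above can be checked against the closed-form values just listed. A short Python sketch (our own illustration, not from the article):

```python
def beta_median_approx(alpha, beta):
    """Approximate median of Beta(alpha, beta) for alpha, beta >= 1;
    the error of this formula is known to be under 4% in that range."""
    return (alpha - 1.0 / 3.0) / (alpha + beta - 2.0 / 3.0)

# Exact median for alpha = 3, beta = 2, quoted in the text above.
exact_32 = 0.6142724318676105
approx_32 = beta_median_approx(3.0, 2.0)
```

For the symmetric case α = β the formula returns exactly 1/2, matching the exact median, and for (α, β) = (3, 2) it gives about 0.6154, within 0.2% of the exact value.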
35.
Interpolation
–
In the mathematical field of numerical analysis, interpolation is a method of constructing new data points within the range of a discrete set of known data points. Given a set of data points representing the values of a function for a limited number of values of the independent variable, it is often required to interpolate, i.e. estimate, the value of that function for an intermediate value of the independent variable. A different problem which is closely related to interpolation is the approximation of a complicated function by a simple function. Suppose the formula for some given function is known, but too complex to evaluate efficiently; a few known data points from the original function can be used to create an interpolation based on a simpler function. In a different usage, when x is considered as ranging over a topological space and the functions form Banach spaces, the problem is called interpolation of operators; the classical results about interpolation of operators are the Riesz–Thorin theorem and the Marcinkiewicz theorem, and there are also many other subsequent results. For example, suppose we have a table like the one below, which gives some values of an unknown function f. Interpolation provides a means of estimating the function at intermediate points. There are many different interpolation methods, some of which are described below. Some of the concerns to take into account when choosing an appropriate algorithm are: how accurate is the method, how expensive is it, how smooth is the interpolant, and how many data points are needed. The simplest interpolation method is to locate the nearest data value and assign the same value (nearest-neighbor interpolation). One of the simplest methods beyond that is linear interpolation. Consider the above example of estimating f(2.5); since 2.5 is midway between 2 and 3, it is reasonable to take f(2.5) midway between f(2) = 0.9093 and f(3) = 0.1411, which yields 0.5252. Linear interpolation is quick and easy, but it is not very precise; another disadvantage is that the interpolant is not differentiable at the data points xk. The following error estimate shows that linear interpolation is not very precise. Denote the function which we want to interpolate by g, and suppose that x lies between xa and xb; then the linear interpolation error is |f(x) − g(x)| ≤ C(xb − xa)², where C = (1/8) max_{r ∈ [xa, xb]} |g″(r)|. 
In words, the error is proportional to the square of the distance between the data points. The error in some other methods, including polynomial interpolation and spline interpolation, is proportional to higher powers of the distance between the data points; these methods also produce smoother interpolants. Polynomial interpolation is a generalization of linear interpolation: note that the linear interpolant is a linear function, and we now replace this interpolant with a polynomial of higher degree. Consider again the problem given above. The following sixth-degree polynomial goes through all seven points; substituting x = 2.5, we find that f(2.5) = 0.5965. Generally, if we have n data points, there is exactly one polynomial of degree at most n − 1 going through all the data points.
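The f(2.5) example above is easy to reproduce with a piecewise-linear interpolator. A minimal Python sketch (our own; the table holds the sine values used in the text):

```python
def linear_interp(points, x):
    """Piecewise-linear interpolation through sorted (x, y) pairs."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)          # position within the segment
            return (1 - t) * y0 + t * y1      # weighted average of endpoints
    raise ValueError("x outside the data range")

# A few values of the "unknown" function f from the text (here, sin x).
table = [(1, 0.8415), (2, 0.9093), (3, 0.1411), (4, -0.7568)]
estimate = linear_interp(table, 2.5)   # midway between f(2) and f(3)
```

Since 2.5 is exactly midway, the result is the average of 0.9093 and 0.1411, i.e. 0.5252, as stated above; the true value sin(2.5) ≈ 0.5985 shows the size of the linear-interpolation error.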
36.
Trapezoidal rule
–
In mathematics, and more specifically in numerical analysis, the trapezoidal rule is a technique for approximating the definite integral ∫ₐᵇ f(x) dx. The trapezoidal rule works by approximating the region under the graph of the function f as a trapezoid and calculating its area; it follows that ∫ₐᵇ f(x) dx ≈ (b − a)(f(a) + f(b))/2. A 2016 paper reports that the rule was in use in Babylon before 50 BC for integrating the velocity of Jupiter along the ecliptic. The trapezoidal rule is one of a family of formulas for numerical integration called Newton–Cotes formulas; Simpson's rule is usually more accurate, but for various classes of rougher functions the trapezoidal rule has faster convergence in general than Simpson's rule. Moreover, the trapezoidal rule tends to become extremely accurate when periodic functions are integrated over their periods. For a domain discretized into N equally spaced panels, or N + 1 grid points a = x₁ < x₂ < ... < x_{N+1} = b with spacing h = (b − a)/N, the approximation to the integral becomes ∫ₐᵇ f(x) dx ≈ (h/2) ∑_{k=1}^{N} (f(xₖ) + f(x_{k+1})) = ((b − a)/(2N)) (f(x₁) + 2f(x₂) + ... + 2f(x_N) + f(x_{N+1})). When the grid spacing is non-uniform, one can use the formula ∫ₐᵇ f(x) dx ≈ (1/2) ∑_{k=1}^{N} (x_{k+1} − xₖ)(f(xₖ) + f(x_{k+1})). The trapezoidal rule overestimates the integral of a concave-up function; this can be seen from the geometric picture: the trapezoids include all of the area under the curve and extend over it. Similarly, a concave-down function yields an underestimate because area under the curve is unaccounted for. If the interval of the integral being approximated includes an inflection point, the error is harder to identify. Further terms in this error estimate are given by the Euler–Maclaurin summation formula. It has been argued that the speed of convergence of the trapezoidal rule reflects, and can be used as a definition of, classes of smoothness of the functions. The trapezoidal rule often converges very quickly for periodic functions; in the error formula above, f′(a) = f′(b), and only the higher-order term remains. More detailed analysis can be found in the literature. For various classes of functions that are not twice-differentiable, the trapezoidal rule has sharper error bounds than Simpson's rule.
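The composite formula above can be sketched in a few lines of Python (not from the original article). The second call also illustrates the remark about periodic functions: integrating cos²x over one full period is accurate to machine precision with only 16 panels.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule on n equal panels:
    (h/2) * (f(x1) + 2 f(x2) + ... + 2 f(xn) + f(x_{n+1})), h = (b - a)/n."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

approx = trapezoid(math.sin, 0.0, math.pi, 1000)                    # exact value: 2
periodic = trapezoid(lambda x: math.cos(x) ** 2, 0.0, 2 * math.pi, 16)  # exact: pi
```

For the smooth non-periodic integrand sin x, 1000 panels give about 6 correct digits (the error scales as h²); the periodic case converges far faster, as the text notes.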
37.
Numerical integration
–
This article focuses on calculation of definite integrals. The term numerical quadrature is more or less a synonym for numerical integration, Some authors refer to numerical integration over more than one dimension as cubature, others take quadrature to include higher-dimensional integration. The basic problem in numerical integration is to compute an approximate solution to a definite integral ∫ a b f d x to a degree of accuracy. If f is a smooth function integrated over a number of dimensions. The term numerical integration first appears in 1915 in the publication A Course in Interpolation, Quadrature is a historical mathematical term that means calculating area. Quadrature problems have served as one of the sources of mathematical analysis. Mathematicians of Ancient Greece, according to the Pythagorean doctrine, understood calculation of area as the process of constructing geometrically a square having the same area and that is why the process was named quadrature. For example, a quadrature of the circle, Lune of Hippocrates and this construction must be performed only by means of compass and straightedge. The ancient Babylonians used the trapezoidal rule to integrate the motion of Jupiter along the ecliptic, for a quadrature of a rectangle with the sides a and b it is necessary to construct a square with the side x = a b. For this purpose it is possible to use the fact, if we draw the circle with the sum of a and b as the diameter. The similar geometrical construction solves a problem of a quadrature for a parallelogram, problems of quadrature for curvilinear figures are much more difficult. The quadrature of the circle with compass and straightedge had been proved in the 19th century to be impossible, nevertheless, for some figures a quadrature can be performed. The quadratures of a surface and a parabola segment done by Archimedes became the highest achievement of the antique analysis. 
The area of the surface of a sphere is equal to quadruple the area of a great circle of this sphere. The area of a segment of a parabola cut from it by a straight line is 4/3 the area of the triangle inscribed in this segment. For the proof of these results Archimedes used the Method of exhaustion of Eudoxus. In medieval Europe, quadrature meant calculation of area by any method. More often the Method of indivisibles was used; it was less rigorous, but simpler and more powerful. John Wallis algebrised this method: he wrote in his Arithmetica Infinitorum series that we now call definite integrals, and he calculated their values. Isaac Barrow and James Gregory made further progress: quadratures for some algebraic curves and spirals. Christiaan Huygens successfully performed a quadrature of some solids of revolution.
38.
Simpson's rule
–
In numerical analysis, Simpson's rule is a method for numerical integration, the numerical approximation of definite integrals. Specifically, it is the approximation ∫_a^b f(x) dx ≈ ((b − a)/6) (f(a) + 4f((a + b)/2) + f(b)). For unequally spaced points, see Cartwright. Simpson's rule also corresponds to the three-point Newton–Cotes quadrature rule. The method is credited to the mathematician Thomas Simpson of Leicestershire; Kepler used similar formulas over 100 years prior, and for this reason the method is sometimes called Kepler's rule, or Keplersche Fassregel in German. Simpson's rule can be derived in various ways. One derivation replaces the integrand f by the quadratic polynomial P which takes the same values as f at the end points a and b and at the midpoint m = (a + b)/2. One can use Lagrange polynomial interpolation to find an expression for this polynomial; an easy integration by substitution shows that ∫_a^b P(x) dx = ((b − a)/6) (f(a) + 4f(m) + f(b)). This calculation can be carried out more easily if one first observes that there is no loss of generality in assuming that a = −1 and b = 1. Another derivation constructs Simpson's rule from two simpler approximations: the midpoint rule M = (b − a) f(m) and the trapezoidal rule T = (b − a) (f(a) + f(b))/2. The errors in these approximations are −(1/24)(b − a)³ f″(m) + O((b − a)⁵) and (1/12)(b − a)³ f″(m) + O((b − a)⁵), respectively; the two O((b − a)⁵) terms are not equal (see Big O notation for more details). It follows from the formulas for the errors of the midpoint and trapezoidal rules that the leading error terms cancel in the weighted average (2M + T)/3. This weighted average is exactly Simpson's rule. Using another approximation, it is possible to take a suitable weighted average and eliminate another error term. The third derivation starts from the ansatz (1/(b − a)) ∫_a^b f(x) dx ≈ α f(a) + β f((a + b)/2) + γ f(b); the coefficients α, β and γ can be fixed by requiring that this approximation be exact for all quadratic polynomials. The error in approximating an integral by Simpson's rule is (1/90) ((b − a)/2)⁵ |f⁽⁴⁾(ξ)| for some ξ between a and b; the error is asymptotically proportional to (b − a)⁵.
However, the above derivations suggest an error proportional to (b − a)⁴; Simpson's rule gains an extra order because the points at which the integrand is evaluated are distributed symmetrically in the interval. If the interval of integration [a, b] is in some sense small, then Simpson's rule will provide an adequate approximation to the exact integral. By small, what we mean is that the function being integrated is relatively smooth over the interval. For such a function, a smooth quadratic interpolant like the one used in Simpson's rule will give good results. However, it is often the case that the function we are trying to integrate is not smooth over the interval; in that case, one typically splits the interval into a number of small subintervals and applies Simpson's rule on each, giving the composite Simpson's rule.
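The single-interval rule and its construction as the weighted average (2M + T)/3 of the midpoint and trapezoidal rules can be sketched as follows (function names are illustrative, not from any library):

```python
def simpson(f, a, b):
    """Simpson's rule on a single interval [a, b]."""
    m = 0.5 * (a + b)
    return (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b))

def simpson_via_average(f, a, b):
    """The same value obtained as the weighted average (2M + T) / 3."""
    m = 0.5 * (a + b)
    M = (b - a) * f(m)                      # midpoint rule
    T = (b - a) * (f(a) + f(b)) / 2.0       # trapezoidal rule
    return (2.0 * M + T) / 3.0
```

Because the leading error terms cancel, the rule is exact for all polynomials up to degree three, which is one way to see the extra order of accuracy mentioned above.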
39.
Resampling (statistics)
–
It may also be used for constructing hypothesis tests. In this context, the bootstrap is used to sequentially replace empirical weighted probability measures by empirical measures; the bootstrap allows one to replace the samples with low weights by copies of the samples with high weights. Jackknifing, which is similar to bootstrapping, is used in statistical inference to estimate the bias and standard error of a statistic. Historically this method preceded the invention of the bootstrap, with Quenouille inventing it in 1949 and Tukey extending it in 1958. The method was foreshadowed by Mahalanobis, who in 1946 suggested repeated estimates of the statistic of interest with half the sample chosen at random; he coined the name interpenetrating samples for this method. Quenouille invented the method with the intention of reducing the bias of the sample estimate. The basic idea behind the jackknife variance estimator lies in systematically recomputing the statistic estimate, leaving out one or more observations at a time from the sample set. From this new set of replicates of the statistic, an estimate for the bias and an estimate for the variance of the statistic can be calculated. Instead of using the jackknife to estimate the variance, it may instead be applied to the log of the variance; this transformation may result in better estimates, particularly when the distribution of the variance itself may be non-normal. For many statistical parameters the jackknife estimate of variance tends asymptotically to the true value almost surely; in technical terms one says that the estimate is consistent. It is not consistent for the sample median: in the case of a unimodal variate the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi-square distribution with two degrees of freedom. The jackknife, like the original bootstrap, is dependent on the independence of the data; extensions of the jackknife to allow for dependence in the data have been proposed.
Another extension is the method used in association with Poisson sampling. Both methods, the bootstrap and the jackknife, estimate the variability of a statistic from the variability of that statistic between subsamples, rather than from parametric assumptions. For the more general jackknife, the delete-m observations jackknife, the bootstrap can be seen as a random approximation of it. Both yield similar numerical results, which is why each can be seen as an approximation to the other. The jackknife gives the same result each time it is run on the same data, whereas the bootstrap, being based on random resampling, does not; because of this, the jackknife is popular when the estimates need to be verified several times before publishing. On the other hand, when this reproducibility is not crucial and it is of interest not to have a single number but just an idea of its distribution, the bootstrap is preferred. Whether to use the bootstrap or the jackknife may depend more on operational aspects than on statistical concerns of a survey. The jackknife, originally used for bias reduction, is more of a specialized method and only estimates the variance of the point estimator. This can be enough for basic statistical inference. The bootstrap, on the other hand, first estimates the whole distribution and then computes the variance from that.
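The delete-1 jackknife variance estimator described above can be sketched in a few lines (the function names are ours, purely illustrative):

```python
def jackknife_variance(data, stat):
    """Delete-1 jackknife estimate of the variance of `stat`.

    Recomputes the statistic n times, each time leaving out one
    observation, then scales the spread of these replicates by (n-1)/n.
    """
    n = len(data)
    replicates = [stat(data[:i] + data[i + 1:]) for i in range(n)]
    mean_rep = sum(replicates) / n
    return (n - 1) / n * sum((r - mean_rep) ** 2 for r in replicates)

def mean(xs):
    """Sample mean, used here as the statistic of interest."""
    return sum(xs) / len(xs)
```

A useful sanity check: for the sample mean, the jackknife variance agrees exactly with the classical estimate s²/n of the variance of the mean.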
40.
Normal distribution
–
In probability theory, the normal distribution is a very common continuous probability distribution. Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known. The normal distribution is useful because of the central limit theorem. Physical quantities that are expected to be the sum of many independent processes often have distributions that are nearly normal. Moreover, many results and methods can be derived analytically in explicit form when the relevant variables are normally distributed. The normal distribution is sometimes informally called the bell curve; however, many other distributions are bell-shaped. The probability density of the normal distribution is f(x) = (1/√(2πσ²)) e^(−(x − μ)²/(2σ²)), where μ is the mean or expectation of the distribution, σ is the standard deviation, and σ² is the variance. A random variable with a Gaussian distribution is said to be normally distributed and is called a normal deviate. The simplest case of a normal distribution is known as the standard normal distribution, with μ = 0 and σ = 1, whose density is φ(x) = (1/√(2π)) e^(−x²/2). The factor 1/2 in the exponent ensures that the distribution has unit variance; this function is symmetric around x = 0, where it attains its maximum value 1/√(2π) and has inflection points at x = +1 and x = −1. Authors may differ also on which normal distribution should be called the standard one. For general σ, the probability density must be scaled by 1/σ so that the integral is still 1. If Z is a standard normal deviate, then X = Zσ + μ will have a normal distribution with expected value μ and standard deviation σ. Conversely, if X is a normal deviate with parameters μ and σ², then Z = (X − μ)/σ will have a standard normal distribution. Every normal distribution is the exponential of a quadratic function f(x) = e^(ax² + bx + c), where a is negative. In this form, the mean value is μ = −b/(2a) and the variance is σ² = −1/(2a). For the standard normal distribution, a is −1/2, b is zero, and c is −ln(2π)/2. The standard Gaussian density is denoted with the Greek letter ϕ.
The alternative form of the Greek phi letter, φ, is also used quite often. The normal distribution is often denoted by N(μ, σ²); thus when a random variable X is distributed normally with mean μ and variance σ², one writes X ∼ N(μ, σ²). Some authors advocate using the precision τ as the parameter defining the width of the distribution, instead of the standard deviation σ or the variance σ²; the precision is normally defined as the reciprocal of the variance, τ = 1/σ².
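The density formula and the standardization X = Zσ + μ described above can be sketched directly (the function name `normal_pdf` is our own):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the normal distribution N(mu, sigma^2)."""
    z = (x - mu) / sigma                 # standardized deviate
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# The standard normal attains its maximum 1/sqrt(2*pi) at x = 0, and a
# general density is the scaled standard one: phi((x - mu)/sigma) / sigma.
peak = normal_pdf(0.0)
```

The second comment restates the 1/σ scaling from the text: evaluating the standard density at the standardized deviate and dividing by σ keeps the total integral equal to 1.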
41.
Shlomo Yitzhaki (economics)
–
Shlomo Yitzhaki is the Sam M. Cohodas Professor Emeritus of Agricultural Economics at the Hebrew University of Jerusalem. In 2002–2012 he served as the statistician of the Israeli Central Bureau of Statistics. Yitzhaki earned his Ph.D. in economics from the Hebrew University in 1976. He spent a year as a visiting scholar at Harvard University, and then returned to Jerusalem as a lecturer in 1977. In 1981–1982 he worked as a research economist at the National Bureau of Economic Research; in 1982 he returned to academia as a senior lecturer at Hebrew University, where he has remained ever since. He joined the faculty as a full professor in 1990, and in 2008 he was granted emeritus status. Yitzhaki first consulted as an economist at the World Bank in 1986, and was appointed director of the Central Bureau of Statistics in 2002. He represents Israel at the International Statistical Institute. He has also consulted with the governments of many developing nations and is considered a world-class expert on the design of tax systems. In 2008 he chaired the Yitzhaki Committee examining the rise of poverty in Israel. The following partial list of publications is largely taken from the Hebrew University Faculty Directory.
42.
Angus Deaton
–
Sir Angus Stewart Deaton, FBA, is a British-American economist. In 2015 he was awarded the Nobel Memorial Prize in Economic Sciences for his analysis of consumption, poverty, and welfare. Deaton was born in Edinburgh, Scotland, and educated at Hawick High School and, as a foundation scholar, at Fettes College. He earned his B.A., M.A., and Ph.D. degrees from the University of Cambridge. In 1976 Deaton took up a post at the University of Bristol as Professor of Econometrics. During this period, he completed a significant portion of his most influential work: in 1978, he became the first ever recipient of the Frisch Medal, an award given by the Econometric Society every two years to an applied paper published within the past five years in Econometrica. In 1980, his paper on how demand for consumption goods depends on prices and on consumers' incomes was published in the American Economic Review; it has since been hailed as one of the twenty most influential articles published in that journal in its first hundred years. In 1983, he left the University of Bristol for Princeton University. He holds both British and American citizenship. In October 2015 it was announced that Deaton had won that year's Nobel Memorial Prize in Economic Sciences. The BBC reported that Deaton was delighted and that he described himself as someone who's concerned with the poor of the world and how people behave. By linking detailed individual choices and aggregate outcomes, his research has helped transform the fields of microeconomics, macroeconomics, and development economics. New York University economist William Easterly said, "What was impressive about this Nobel is how many different fields Angus has contributed to." Deaton's first work to become widely known was the Almost Ideal Demand System; as a consumer demand model it provides a first-order approximation to any demand system which satisfies the axioms of order. In 1978 Deaton became the first recipient of the Frisch Medal. Deaton is a Fellow of the Econometric Society, a Fellow of the British Academy, and a Fellow of the American Academy of Arts and Sciences.
In April 2014, he was elected to the American Philosophical Society; the following year, in April 2015, Deaton was also elected a member of the National Academy of Sciences. He holds honorary degrees from the University of Rome Tor Vergata, University College London, and the University of St Andrews. In 2007, he was elected president of the American Economic Association. Deaton is also the author of Letters from America, a popular feature in the Royal Economic Society Newsletter. He was knighted in the 2016 Birthday Honours for services to research in economics. Deaton has two children, born in 1970 and 1971. He is married to Anne Case, the Alexander Stewart 1886 Professor of Economics and Public Affairs at Princeton University's Woodrow Wilson School; the couple's recreational activities include opera and trout fishing. His books include The Analysis of Household Surveys: A Microeconometric Approach to Development Policy (Baltimore: Johns Hopkins University Press for the World Bank) and The Great Escape: Health, Wealth, and the Origins of Inequality.
43.
List of countries by income equality
–
This is a list of countries or dependencies by income inequality metrics, including Gini coefficients. The Gini coefficient is a number between 0 and 1, where 0 corresponds with perfect equality (everyone has the same income) and 1 corresponds with perfect inequality (one person has all the income). Income distribution can vary greatly from wealth distribution in a country. Income from black market activity is not included and is the subject of current economic research.
44.
Welfare states
–
The welfare state is a concept of government in which the state plays a key role in the protection and promotion of the social and economic well-being of its citizens. It is based on the principles of equality of opportunity, equitable distribution of wealth, and public responsibility for those unable to avail themselves of the minimal provisions for a good life. The general term may cover a variety of forms of economic and social organization. The sociologist T. H. Marshall described the modern welfare state as a distinctive combination of democracy, welfare, and capitalism. Esping-Andersen classified the most developed welfare state systems into three categories: Social Democratic, Conservative, and Liberal. The welfare state involves a transfer of funds from the state to the services provided, as well as directly to individuals. It is funded through redistributionist taxation and is often referred to as a type of mixed economy. Such taxation usually includes a larger income tax for people with higher incomes. Proponents argue that this helps reduce the income gap between the rich and poor. The German term Sozialstaat has been used since 1870 to describe state support programs devised by German Sozialpolitiker. The literal English equivalent "social state" didn't catch on in Anglophone countries. However, during the Second World War, Anglican Archbishop William Temple, author of the book Christianity and the Social Order, popularized the concept using the phrase "welfare state". Bishop Temple's use of the phrase has been connected to Benjamin Disraeli's 1845 novel Sybil, or the Two Nations, which speaks of power's "only duty": to secure the social welfare of the people. In Germany, the term Wohlfahrtsstaat, a direct translation of the English "welfare state", is used to describe Sweden's social insurance arrangements. The Italian term stato sociale reproduces the original German term. Spanish and many other languages employ an analogous term: estado del bienestar, literally "state of well-being". In Brazil, the concept is referred to as previdência social; in French, "welfare state" is translated as l'État-providence.
Modern welfare programs are distinguished from earlier forms of poverty relief by their universal, comprehensive character. The institution of social insurance in Germany under Bismarck was an influential template. Some schemes were based largely in the development of autonomous, mutualist provision of benefits; others were founded on state provision. In an influential essay, Citizenship and Social Class, British sociologist T. H. Marshall argued that full citizenship requires social rights alongside civil and political ones. Examples of such states are Germany, all of the Nordic countries, the Netherlands, France, Uruguay and New Zealand; since that time, the term welfare state applies only to states where social rights are accompanied by civil and political rights. Changed attitudes in reaction to the worldwide Great Depression, which brought unemployment and misery to millions, were instrumental in the move towards the welfare state in many countries. During the Great Depression, the welfare state was seen as a middle way between the extremes of communism on the left and unregulated laissez-faire capitalism on the right.
45.
Social welfare
–
For conceptual models of social well-being, see Social welfare function. Welfare is the provision of a minimal level of well-being and social support for citizens without the current means to support basic needs. The welfare state expands on this concept to include services such as universal healthcare. In the Roman Empire, the first emperor Augustus provided the Cura Annonae or grain dole for citizens who could not afford to buy food every month; social welfare was enlarged by the Emperor Trajan, whose program brought acclaim from many, including Pliny the Younger. The Song dynasty government supported multiple programs which could be classified as social welfare, including the establishment of retirement homes, public clinics, and paupers' graveyards. According to economist Robert Henry Nelson, the medieval Roman Catholic Church operated a far-reaching welfare system for the poor. Early welfare programs in Europe included the English Poor Law of 1601, which gave parishes the responsibility for providing welfare payments to the poor. This system was substantially modified by the 19th-century Poor Law Amendment Act. It was predominantly in the late 19th and early 20th centuries that an organized system of state welfare provision was introduced in many countries. Otto von Bismarck, Chancellor of Germany, introduced one of the first welfare systems for the working classes. In Great Britain the Liberal government of Henry Campbell-Bannerman and David Lloyd George introduced the National Insurance system in 1911, a system later expanded by Clement Attlee. The United States inherited England's poor house laws and has had a form of welfare since before it won its independence. Modern welfare states include Germany, France, and the Netherlands, as well as the Nordic countries, such as Iceland, Sweden, Norway and Denmark. Esping-Andersen classified the most developed welfare state systems into three categories: Social Democratic, Conservative, and Liberal.
In the Islamic world, Zakat, one of the Five Pillars of Islam, has been collected by the government since the time of the Rashidun caliph Umar in the 7th century. The taxes were used to provide income for the needy, including the poor, elderly, orphans, and widows. According to the Islamic jurist Al-Ghazali, the government was also expected to store up food supplies in every region in case a disaster or famine occurred. Welfare can take a variety of forms, such as monetary payments, subsidies and vouchers. A person's eligibility for welfare may also be constrained by means testing or other conditions. Welfare is provided by governments or their agencies, by private organizations, or a combination of both. Funding for welfare usually comes from general government revenue, but when dealing with charities or NGOs, donations may be used. Some countries run conditional cash transfer welfare programs where payment is conditional on the behaviour of the recipients. In Australia, the 1890s economic depression and the rise of the trade unions and the Labor parties during this period led to a movement for welfare reform. In 1900, the states of New South Wales and Victoria enacted legislation introducing non-contributory pensions for those aged 65 and over. A national invalid disability pension was started in 1910, and a national maternity allowance was introduced in 1912.
46.
Gini coefficient
–
The Gini coefficient is a measure of statistical dispersion intended to represent the income or wealth distribution of a nation's residents, and is the most commonly used measure of inequality. It was developed by the Italian statistician and sociologist Corrado Gini. The Gini coefficient measures the inequality among values of a frequency distribution. A Gini coefficient of zero expresses perfect equality, where all values are the same; a Gini coefficient of 1 expresses maximal inequality among values. However, a value greater than one may occur if some persons represent a negative contribution to the total. For larger groups, values close to or above 1 are very unlikely in practice; the exception to this is in the redistribution of wealth resulting in a minimum income for all people. The Gini coefficient was proposed by Gini as a measure of inequality of income or wealth. The global income Gini coefficient in 2005 has been estimated to be between 0.61 and 0.68 by various sources. There are some issues in interpreting a Gini coefficient: the same value may result from many different distribution curves. The demographic structure should also be taken into account: countries with an aging population, or with a baby boom, experience an increasing pre-tax Gini coefficient even if real income distribution for working adults remains constant. Scholars have devised over a dozen variants of the Gini coefficient. On the Lorenz-curve diagram, the line at 45 degrees represents perfect equality of incomes. The Gini coefficient can then be thought of as the ratio of the area that lies between the line of equality and the Lorenz curve (call it A) over the total area under the line of equality (A + B). It is also equal to 2A and to 1 − 2B due to the fact that A + B = 0.5. If all people have non-negative income or wealth, the Gini coefficient can theoretically range from 0 to 1; in practice, both extreme values are not quite reached.
If negative values are possible (such as the negative wealth of people with debts), then the Gini coefficient could theoretically be more than 1. Normally the mean is assumed positive, which rules out a Gini coefficient less than zero. An alternative approach is to define the Gini coefficient as half of the relative mean absolute difference, which is mathematically equivalent to the Lorenz-curve definition. The effects of income policy due to redistribution can be seen in such linear relationships. An informative simplified case distinguishes just two levels of income, low and high: if the high income group is u % of the population and earns a fraction f % of all income, then the Gini coefficient is f − u. A more graded actual distribution with these same values u and f will always have a higher Gini coefficient than f − u. The proverbial case where the richest 20% have 80% of all income would lead to an income Gini coefficient of at least 60%.
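The mean-absolute-difference definition above gives a direct, if O(n²), computation. A minimal sketch (the function name is ours):

```python
def gini(incomes):
    """Gini coefficient as half the relative mean absolute difference."""
    n = len(incomes)
    mean_income = sum(incomes) / n
    # Mean absolute difference over all ordered pairs of individuals.
    mad = sum(abs(a - b) for a in incomes for b in incomes) / (n * n)
    return mad / (2.0 * mean_income)

# Two-level check from the text: 20% of people earning 80% of all income
# gives G = f - u = 0.8 - 0.2 = 0.6, e.g. the population [1, 1, 1, 1, 16].
g = gini([1, 1, 1, 1, 16])
```

Running the two-level example reproduces the f − u formula exactly, since within each group all incomes are equal.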
47.
BRIC
–
In economics, BRIC is a grouping acronym that refers to the countries of Brazil, Russia, India and China, which are all deemed to be at a similar stage of newly advanced economic development. It is typically rendered as "the BRICs" or "the BRIC countries" or "the BRIC economies", or alternatively as the "Big Four". A related acronym, BRICS, adds South Africa. There are arguments that Indonesia should be included in the grouping, effectively turning it into BRIIC or BRIICS. South Africa began efforts to join the BRIC grouping, and on December 24, 2010 South Africa was invited to join BRICS. The stated aim of BRIC is the establishment of an equitable, democratic and multipolar world order. Jim O'Neill told the summit that South Africa, with a population of under 50 million people, was just too small as an economy to join the BRIC ranks. But the future of BRIC as a group is questionable: in 2012, a book entitled Breakout Nations noted that it is hard to sustain rapid growth for more than a decade, and by 2015 only India could still lure global investors, with China struggling with its slowdown. The economic potential of Brazil, Russia, India and China is such that they could become among the four most dominant economies by the year 2050. The thesis was proposed by Jim O'Neill, global economist at Goldman Sachs. These countries encompass over 25% of the world's land coverage and 40% of the world's population and hold a combined GDP of $20 trillion. On almost every scale, they would be the largest entity on the global stage, and these four countries are among the biggest and fastest-growing emerging markets. The BRIC thesis recognizes that Brazil, Russia, India and China have changed their political systems to embrace global capitalism.
Of the four countries, Brazil remains the only polity that has the capacity to continue all elements, meaning manufacturing, services, and resource supplying. Cooperation is thus hypothesized to be a logical next step among the BRICs because Brazil and Russia together form the logical commodity suppliers. In 2016, an economist from Australia predicted that in 2050, based on Gross Domestic Product per capita spending, China will be first, followed by India; Indonesia, which nowadays does not belong to the BRIC countries, will jump from 9th position to 4th position, and Brazil will be in fifth position. This is because the global economic center is shifting from the Atlantic to the Asia-Pacific region. The Goldman Sachs global economics team released a follow-up report to its initial BRIC study in 2004. The report states that in BRIC nations, the number of people with an annual income over a threshold of $3,000 will double within three years and reach 800 million people within a decade. This predicts a massive rise in the size of the middle class in these nations. By 2025, it is calculated that the number of people in BRIC nations earning over $15,000 may reach over 200 million; this indicates that a huge pickup in demand will not be restricted to basic goods but will impact higher-priced goods as well. According to the report, first China and then a decade later India will begin to dominate the world economy. The report also highlights India's inefficient energy consumption and mentions the dramatic under-representation of these economies in the global capital markets.
48.
Globalization
–
Globalization or globalisation is the action or procedure of international integration arising from the interchange of world views, products, ideas, and other aspects of culture. Advances in transportation and in telecommunications infrastructure have been major factors in globalization, generating further interdependence of economic and cultural activities. Large-scale globalization began in the 1820s; in the late 19th century and early 20th century, the connectivity of the world's economies and cultures grew very quickly. The term globalization is recent, only establishing its current meaning in the 1970s. Further, environmental challenges such as global warming, cross-boundary water and air pollution, and overfishing of the ocean are linked with globalization. Globalizing processes affect and are affected by business and work organization, economics, and socio-cultural resources. Academic literature commonly subdivides globalization into three major areas: economic globalization, cultural globalization, and political globalization. The term globalization is derived from the word globalize, which refers to the emergence of an international network of economic systems. One of the earliest known usages of the term as a noun was in a 1930 publication entitled Towards New Education. A related term, corporate giants, was coined by Charles Taze Russell in 1897 to refer to the largely national trusts and other large enterprises of the time. By the 1960s, both terms began to be used as synonyms by economists and other social scientists. Economist Theodore Levitt is widely credited with coining the term in an article entitled "Globalization of Markets"; however, the term globalization was in use well before this and had been used by other scholars as early as 1981. Levitt can be credited with popularizing the term and bringing it into the mainstream business audience in the later half of the 1980s.
Due to the complexity of the concept, individual research projects and articles often focus on a single aspect of globalization. Sociologists Martin Albrow and Elizabeth King define globalization as all those processes by which the people of the world are incorporated into a single world society. Globalization can be located on a continuum with the local, national and regional; without reference to such expansive spatial connections, there can be no clear or coherent formulation of the term. A satisfactory definition of globalization must capture each of these elements: extensity, intensity, velocity and impact. It pertains to the increasing ease with which somebody on one side of the world can interact, to mutual benefit, with somebody on the other side of the world. The ideological dimension, according to Steger, is filled with a range of norms, claims and beliefs about the phenomenon itself. He and his co-authors have also argued that four different forms of globalization can be distinguished that complement and cut across the solely empirical dimensions. According to James, the oldest dominant form of globalization is embodied globalization, the movement of people. A second form is agency-extended globalization, the circulation of agents of different institutions, organizations, and polities, including imperial agents. Object-extended globalization, a third form, is the movement of commodities and other objects of exchange. He calls the transmission of ideas, images, knowledge, and information across world-space disembodied globalization. He asserted that the pace of globalization was quickening and that its impact on business organization and practice would continue to grow. Economist Takis Fotopoulos defined economic globalization as the opening and deregulation of commodity, capital and labour markets. He used political globalization to refer to the emergence of a transnational elite and a phasing out of the nation-state.
49.
Progressive tax
–
A progressive tax is a tax in which the tax rate increases as the taxable amount increases. The term progressive refers to the way the tax rate progresses from low to high; it can be applied to individual taxes or to a tax system as a whole, over a year, a multi-year period, or a lifetime. Progressive taxes are imposed in an attempt to reduce the tax incidence of people with a lower ability to pay, as such taxes shift the incidence increasingly to those with a higher ability to pay. The opposite of a progressive tax is a regressive tax, where the relative tax rate or burden decreases as an individual's ability to pay increases. The term is frequently applied in reference to personal income taxes. It can also apply to adjustments of the tax base by using tax exemptions or tax credits. Progressive taxation has also been positively associated with happiness, the subjective well-being of nations, and citizen satisfaction with public goods, such as education and transportation. In the early days of the Roman Republic, public taxes consisted of assessments on owned wealth and property. The tax rate under normal circumstances was 1% of property value, and could sometimes climb as high as 3% in situations such as war. These taxes were levied against land, homes and other real estate, slaves, animals, and personal items. By 167 BC, Rome no longer needed to levy a tax against its citizens in the Italian peninsula, due to the riches acquired from conquered provinces. The first modern income tax was introduced in Britain by Prime Minister William Pitt the Younger in his budget of December 1798, to pay for weapons and equipment for the French Revolutionary War. Pitt's new graduated income tax began at a levy of 2 old pence in the pound on incomes over £60. Pitt hoped that the new income tax would raise £10 million, but actual receipts for 1799 totalled just over £6 million. Pitt's income tax was levied from 1799 to 1802, when it was abolished by Henry Addington during the Peace of Amiens. Addington had taken over as prime minister in 1801, after Pitt's resignation over Catholic Emancipation.
The income tax was reintroduced by Addington in 1803 when hostilities recommenced, and the United Kingdom income tax was reintroduced again by Sir Robert Peel in the Income Tax Act 1842. Peel, as a Conservative, had opposed income tax in the 1841 general election, but a growing budget deficit required a new source of funds. The new income tax, based on Addington's model, was imposed on incomes above £150. Although this measure was initially intended to be temporary, it soon became a fixture of the British taxation system. A committee was formed in 1851 under Joseph Hume to investigate the matter, but failed to reach a clear recommendation. Despite the vociferous objection, William Gladstone, Chancellor of the Exchequer from 1852, kept the progressive income tax, and extended it to cover the costs of the Crimean War. By the 1860s, the tax had become a grudgingly accepted element of the English fiscal system. In the United States, the first progressive income tax was established by the Revenue Act of 1862, which was signed into law by President Abraham Lincoln and repealed the flat tax that had been brought in under the Revenue Act of 1861.
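A marginal-rate schedule of the kind described here can be sketched as follows; the function name, bracket thresholds and rates below are purely illustrative, not any historical or current tax schedule:

```python
def progressive_tax(income, brackets):
    """Tax under marginal brackets.

    `brackets` is a list of (threshold, rate) pairs in ascending order;
    each rate applies only to the slice of income between its threshold
    and the next one, so the average rate rises with income.
    """
    tax = 0.0
    for i, (lo, rate) in enumerate(brackets):
        hi = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income > lo:
            tax += (min(income, hi) - lo) * rate
    return tax

# Hypothetical schedule: 0% up to 10,000; 10% to 40,000; 20% above.
schedule = [(0, 0.0), (10_000, 0.10), (40_000, 0.20)]
```

With this hypothetical schedule, an income of 50,000 pays 30,000 × 10% + 10,000 × 20% = 5,000, an average rate of 10% even though the top marginal rate is 20%, which is the sense in which the rate "progresses from low to high".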
50.
Amartya Sen
–
Amartya Kumar Sen is an Indian economist and philosopher who since 1972 has taught and worked in the United Kingdom and the United States. He is currently the Thomas W. Lamont University Professor at Harvard University. He was awarded the Nobel Memorial Prize in Economic Sciences in 1998 and India's Bharat Ratna in 1999 for his work in welfare economics. Sen was born into a Bengali Baidya family in Manikganj, to Ashutosh Sen. Sen's family was from Wari and Manikganj, Dhaka, both in present-day Bangladesh. Sen's mother, Amita Sen, was the daughter of Kshiti Mohan Sen, who served as the Vice-Chancellor of Visva-Bharati University for some years. Sen began his education at St Gregory's School in Dhaka in 1940. From fall 1941, Sen studied at Patha Bhavana, Santiniketan. The school had many progressive features; any focus on examinations or competitive testing was deeply frowned upon. In addition, the school stressed cultural diversity and embraced influences from the rest of the world. In 1951, he went to Presidency College, Kolkata, where he earned a B.A. in Economics with First Class, with a minor in Mathematics. While at Presidency, Sen was diagnosed with oral cancer and given a 15% chance of living five years. With radiation treatment, he survived, and in 1953 he moved to Trinity College, Cambridge, where he was elected President of the Cambridge Majlis. While still officially a PhD student at Cambridge, Sen served as Professor and head of the newly created Economics Department of Jadavpur University in Calcutta, starting the new department, from 1956 to 1958. Meanwhile, Sen was elected to a Prize Fellowship at Trinity College. His interest in philosophy, however, dates back to his college days at Presidency, where he read books on philosophy and debated philosophical themes.
In Cambridge, there were major debates between supporters of Keynesian economics on the one hand and the neo-classical economists skeptical of Keynes on the other. Quentin Skinner notes that Sen was a member of the secret society Cambridge Apostles during his time at Cambridge. Sen's work on Choice of Technique complemented that of Maurice Dobb; in other words, workers were expected to demand no improvement in their standard of living despite having become more productive. Sen's papers in the late 1960s and early 1970s helped develop the theory of social choice, which first came to prominence in the work of the American economist Kenneth Arrow. Sen also argued that the Bengal famine was caused by an economic boom that raised food prices. Sen's interest in famine stemmed from personal experience: as a nine-year-old boy, he witnessed the Bengal famine of 1943, in which three million people perished. This staggering loss of life was unnecessary, Sen later concluded. In Poverty and Famines, Sen revealed that in many cases of famine, food supplies were not significantly reduced. In Bengal, for example, food production, while down on the year, was higher than in previous non-famine years.
51.
Anthony Shorrocks
–
Anthony F. Shorrocks is a British development economist. Between January 2001 and April 2009 he was Director of UNU-WIDER; prior to that he was Professor at the London School of Economics, and before that he worked at the University of Essex. He has also had visiting appointments in the US, Canada and Italy. He has many publications in leading journals on income and wealth distribution, inequality and poverty. His first degree was a B.Sc. in Mathematics from the University of Sussex; this was followed by a Master's in Economics from Brown University. He took his Ph.D. in Economics at the London School of Economics in 1973. In 1978, he introduced a measure based on income Gini coefficients to estimate income mobility. This measure, generalized by Maasoumi and Zandvakili, is now referred to as the Shorrocks index. He has been elected a Fellow of the Econometric Society.
52.
Shorrocks index
–
The Shorrocks index is a measure of income mobility introduced in 1978 by the British development economist Anthony F. Shorrocks, based on income Gini coefficients. The measure was later generalized by Maasoumi and Zandvakili.
53.
Ratio analysis
–
In mathematics, a ratio is a relationship between two numbers indicating how many times the first number contains the second. For example, if a bowl of fruit contains eight oranges and six lemons, then the ratio of oranges to lemons is eight to six. Thus, a ratio can be a fraction as opposed to a whole number. Also, in the example the ratio of lemons to oranges is 6:8. The numbers compared in a ratio can be any quantities of a kind, such as objects, persons or lengths. A ratio is written "a to b" or a:b. When the two quantities have the same units, as is often the case, their ratio is a dimensionless number. A rate is a quotient of variables having different units, but in many applications the word ratio is often used for this more general notion as well. The numbers A and B are sometimes called terms, with A being the antecedent and B the consequent. The proportion expressing the equality of the ratios A:B and C:D is written A:B = C:D or A:B::C:D. This latter form, when spoken or written in the English language, is expressed as "A is to B as C is to D". A, B, C and D are called the terms of the proportion. A and D are called the extremes, and B and C are called the means. The equality of three or more proportions is called a continued proportion. Ratios are sometimes used with three or more terms. The ratio of the dimensions of a two by four that is ten inches long is 2:4:10, and a good concrete mix is sometimes quoted as 1:2:4 for the ratio of cement to sand to gravel. It is impossible to trace the origin of the concept of ratio, because the ideas from which it developed would have been familiar to preliterate cultures. For example, the idea of one village being twice as large as another is so basic that it would have been understood in prehistoric society. However, it is possible to trace the origin of the word ratio to the Ancient Greek λόγος. Early translators rendered this into Latin as ratio; a more modern interpretation of Euclid's meaning is more akin to computation or reckoning.
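The relationships above, in particular that in the proportion A:B = C:D the product of the extremes equals the product of the means, can be sketched with a few lines of Python using the fruit-bowl example:

```python
from math import gcd

def simplify(a, b):
    """Reduce the ratio a:b to lowest terms."""
    g = gcd(a, b)
    return a // g, b // g

def is_proportion(a, b, c, d):
    """A:B = C:D exactly when the product of the extremes (A*D)
    equals the product of the means (B*C)."""
    return a * d == b * c

# The ratio of lemons to oranges from the example above, in lowest terms:
print(simplify(6, 8))             # (3, 4), i.e. 3:4

# 6:8 and 3:4 form a valid proportion:
print(is_proportion(6, 8, 3, 4))  # True
```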
Medieval writers used the word proportio to indicate ratio and proportionalitas for the equality of ratios. Euclid collected the results appearing in the Elements from earlier sources. The Pythagoreans developed a theory of ratio and proportion as applied to numbers. The discovery of a theory of ratios that does not assume commensurability is probably due to Eudoxus of Cnidus. The exposition of the theory of proportions that appears in Book VII of The Elements reflects the earlier theory of ratios of commensurables. The existence of multiple theories seems unnecessarily complex to modern sensibility, since ratios are, to a large extent, identified with quotients. This is a recent development, however, as can be seen from the fact that modern geometry textbooks still use distinct terminology and notation for ratios.
54.
Gross domestic product
–
Gross Domestic Product (GDP) is a monetary measure of the market value of all final goods and services produced in a period of time. Nominal GDP estimates are commonly used to determine the economic performance of a whole country or region. The OECD defines GDP as "an aggregate measure of production equal to the sum of the gross values added of all resident institutional units engaged in production." An IMF publication states that GDP measures "the monetary value of final goods and services, that is, those that are bought by the final user, produced in a country in a given period of time." Total GDP can also be broken down into the contribution of each industry or sector of the economy. The ratio of GDP to the total population of the region is the per capita GDP. William Petty came up with a basic concept of GDP to defend landlords against unfair taxation during warfare between the Dutch and the English between 1652 and 1674. Charles Davenant developed the method further in 1695. The modern concept of GDP was first developed by Simon Kuznets for a US Congress report in 1934. In this report, Kuznets warned against its use as a measure of welfare. After the Bretton Woods conference in 1944, GDP became the main tool for measuring a country's economy. The switch from GNP to GDP in the US was in 1991. The history of the concept of GDP should be distinguished from the history of changes in ways of estimating it. The value added by firms is relatively easy to calculate from their accounts, but the value added by the public sector and by financial industries is harder to estimate. GDP can be determined in three ways, all of which should, in principle, give the same result: the production approach, the income approach, and the expenditure approach. The most direct of the three is the production approach, which sums the outputs of every class of enterprise to arrive at the total. The income approach works on the principle that the incomes of the productive factors must be equal to the value of their product. The production approach mirrors the OECD definition given above: deduct intermediate consumption from gross value of output to obtain the gross value added.
Gross value added = gross value of output - value of intermediate consumption. Value of output = value of the total sales of goods and services plus value of changes in inventories. The sum of the gross value added in the various economic activities is known as GDP at factor cost. GDP at factor cost plus indirect taxes less subsidies on products = GDP at producer price. For measuring output of domestic product, economic activities are classified into various sectors. Subtracting each sector's intermediate consumption from its gross output gives that sector's gross value added; summing these across sectors gives GDP at factor cost. Adding indirect taxes minus subsidies to GDP at factor cost gives the GDP at producer prices.
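The production-approach identities above can be traced numerically. The sector names and figures below are invented purely for illustration:

```python
# Hypothetical gross output and intermediate consumption by sector.
sectors = {
    "agriculture":   {"gross_output": 120.0, "intermediate": 40.0},
    "manufacturing": {"gross_output": 500.0, "intermediate": 300.0},
    "services":      {"gross_output": 400.0, "intermediate": 150.0},
}

# Gross value added = gross output - intermediate consumption, per sector.
gva = {name: s["gross_output"] - s["intermediate"] for name, s in sectors.items()}

# Summing value added over all sectors gives GDP at factor cost.
gdp_factor_cost = sum(gva.values())

# Adding indirect taxes less subsidies gives GDP at producer prices.
indirect_taxes, subsidies = 60.0, 10.0
gdp_producer_prices = gdp_factor_cost + indirect_taxes - subsidies

print(gdp_factor_cost)      # 530.0
print(gdp_producer_prices)  # 580.0
```

In principle, the income and expenditure approaches applied to the same (hypothetical) economy would arrive at the same 530.0 at factor cost.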
55.
Extended family
–
An extended family is a family that extends beyond the nuclear family, consisting of parents, aunts, uncles and cousins, all living nearby or in the same household. An example is a married couple that lives with either the husband's or the wife's parents. The family changes from an immediate household to an extended household. In some circumstances, the extended family comes to live either with or in place of a member of the immediate family. These families include, in one household, near relatives in addition to an immediate family. An example would be an elderly parent who moves in with his or her children due to old age. However, the term may also refer to a family unit in which several generations live together within a single household. In some cultures, the term is used synonymously with consanguineous family. In these cases, the child who cares for the parents usually receives the house in addition to his or her own share of land and moveable property. In an extended family, parents and their children's families may often live under a single roof; this type of joint family often includes multiple generations. From culture to culture, the term may have different meanings. For instance, in India, where family life is patriarchal, the sons' families often stay in the same house. In the joint family set-up, the workload is shared among the members. The roles of women are often restricted to that of housewife, which usually involves cooking, cleaning and organizing for the entire family. The patriarch of the family lays down the rules and arbitrates disputes. Other senior members of the household babysit infants when their mother is working. They are also responsible for teaching the children their mother tongue and manners. Grandparents often take these roles because they have the most experience with parenting and maintaining a household. The second most popular arrangement is a grandparent moving in with a child's family, usually for care-giving reasons.
She noted that 2.5 million grandparents say they are responsible for the needs of the grandchild living with them. The house often has a common reception area and a common kitchen, while each family has its own bedroom. The members of the household also look after each other when a member is ill. Particularly in working-class communities, grown children tend to establish their own households within the same general area as their parents, aunts and uncles. These extended family members tend to gather often for family events and to feel responsible for helping and supporting one another.
56.
Nuclear family
–
¡Uno! is the ninth studio album by American punk rock band Green Day, released on September 21, 2012, by Reprise Records. It is the first of three albums in the ¡Uno! ¡Dos! ¡Tré! trilogy, a series of studio albums released from September 2012 to December 2012. Green Day recorded the album from February to June 2012 at Jingletown Studios in Oakland, California. This is the first album to feature longtime touring guitarist Jason White as an official member, making the band a quartet. The artwork and track list of the album were revealed in a video uploaded to YouTube. The first single from the album, titled Oh Love, was released on July 16, 2012. The second single, Kill the DJ, was released on European iTunes Stores on August 14, 2012. The third single, Let Yourself Go, was released on the US iTunes Store on September 5, 2012, and a music video for Stay the Night was released on Rolling Stone and the band's YouTube channel on September 24, 2012. The song Rusty James is based on the character Rusty-James from the novel Rumble Fish. ¡Uno! received generally positive reviews from music critics. It debuted at number two on the US Billboard 200 with first-week sales of 139,000 copies. The album also reached the top 10 of charts in several other countries. In February 2012, Billie Joe Armstrong announced that the band was in the studio. In the statement, he said, "We are at the most prolific and creative time in our lives. This is the best music we've ever written, and the songs just keep coming. Instead of making one album, we are making a three-album trilogy. Every song has the power and energy that represents Green Day on all emotional levels. We are going epic as fuck." The band started work by rehearsing every other day and writing songs. They recorded the album at Jingletown Studios in Oakland, California. The band recorded 37 songs and initially thought of making a double album.
Armstrong suggested making a trilogy of albums like Van Halen's Van Halen I and Van Halen II. He stated in an interview, "The songs just kept coming, kept coming. I'd go, 'Maybe a double album', and one day, I sprung it on the others: instead of Van Halen I, II and III, what if it's Green Day I, II and III and we all have our faces on each cover." In an interview with Rolling Stone, Armstrong stated that the theme of ¡Uno! would be different from that of 21st Century Breakdown and American Idiot, and would not be a third rock opera. He also added that the music on the record would be punchier, and that a few songs on the album would sound like garage rock and dance music.
57.
Egalitarianism
–
Egalitarianism, or equalitarianism, is a trend of thought that favors equality for all people. Egalitarian doctrines maintain that all humans are equal in fundamental worth or social status; some sources define egalitarianism as the point of view that equality reflects the natural state of humanity. Common forms of egalitarianism include political and philosophical egalitarianism. The 14th Amendment to the United States Constitution, like the rest of the Constitution, in its operative language uses the term person, stating, for example, that no State shall "deprive any person of life, liberty, or property, without due process of law". An example of this form is the Tunisian Constitution of 2014, which provides that "men and women shall be equal in their rights and duties". The motto Liberté, égalité, fraternité was used during the French Revolution and is still used as an official motto of the French government. The 1789 French Declaration of the Rights of Man and of the Citizen is also framed on this basis, in the rights of man. This was satirized during the period by Olympe de Gouges with her Declaration of the Rights of Woman and the Female Citizen. The Declaration of Independence of the United States is an example of an assertion of the equality of men. John Locke is sometimes considered the founder of this form. Many state constitutions in the US also use "rights of man" rather than "rights of person". See, e.g., the Kentucky State Constitution. At a cultural level, egalitarian theories have developed in sophistication and acceptance during the past two hundred years. Several egalitarian ideas enjoy wide support among intellectuals and in the general populations of many countries. Whether any of these ideas have been significantly implemented in practice, however, remains controversial. A position of opposition to egalitarianism is antiegalitarianism. Although the economist Karl Marx is sometimes mistaken to be an egalitarian, Marx eschewed normative theorizing on moral principles altogether.
Marx did, however, have a theory of the evolution of moral principles in relation to economic systems. The American economist John Roemer has put forth a new perspective of equality; Roemer concludes that egalitarians must reject socialism as it is classically defined in order for equality to be realized. The Sikh faith was founded upon egalitarian principles, going beyond most faiths in providing equality regardless of race, caste or gender. Within the wide range of Christianity, there are dissenting views on this from opposing groups, some of which are Complementarians and Patriarchalists. There are also those who may say that, whilst the Bible encourages equality, it also encourages law and order; these ideas are considered by some to be contrary to the ideals of egalitarianism. Various Christian groups have attempted to hold to this view and develop Christian-oriented communities. In Acts, chapter 4, members of the early Christian community sell their possessions, give the proceeds to a common fund overseen by the disciples, then take according to their need.
58.
Informal sector
–
The informal sector, informal economy, or grey economy is the part of an economy that is neither taxed nor monitored by any form of government. Unlike the formal economy, activities of the informal economy are not included in the gross national product. The informal sector can be described as a grey market in labour. Other concepts which can be characterized as the informal sector include the black market and agorism. Associated idioms include "under the table", "off the books" and "working for cash". Although the informal sector makes up a significant portion of the economies in developing countries, it is often stigmatized as troublesome and unmanageable. However, the informal sector provides critical economic opportunities for the poor and has been expanding rapidly since the 1960s. As such, integrating the informal economy into the formal sector is an important policy challenge. The term was originally used to describe a type of employment that was viewed as falling outside of the modern industrial sector. An alternative definition uses job security as the measure of formality, defining participants in the informal economy as those who do not have employment security, work security and social security. Both of these definitions imply a lack of choice or agency in involvement with the informal economy. Informal activity may manifest as unreported employment, hidden from the state for tax, social security or labour law purposes. The term is also useful in describing and accounting for forms of shelter or living arrangements that are similarly unlawful, unregulated, or not afforded protection of the state. "Informal economy" is increasingly replacing "informal sector" as the descriptor for this activity. Informality, both in housing and in livelihood generation, has often been seen as a social ill, and described in terms of what participants lack. Workers who participate in the informal economy are typically classified as employed.
The type of work that makes up the informal economy is diverse, particularly in terms of capital invested and technology used. The spectrum ranges from self-employment or unpaid labor to street vendors and shoe shiners. On the higher end of the spectrum are upper-tier informal activities such as small-scale service or manufacturing businesses. The upper-tier informal activities have higher set-up costs, which might include complicated licensing regulations. However, most workers in the informal sector, even those who are self-employed or wage workers, do not have access to secure work, benefits or welfare protection. These features differ from businesses and employees in the formal sector, which have regular hours of operation and other structured benefits.
59.
Subsistence farming
–
Subsistence agriculture is self-sufficiency farming in which the farmers focus on growing enough food to feed themselves and their families. The output is mostly for local requirements, with little or no surplus trade. The typical subsistence farm has a range of crops and animals needed by the family to feed and clothe themselves during the year. Planting decisions are made principally with an eye toward what the family will need during the coming year. Tony Waters writes, "Subsistence peasants are people who grow what they eat, build their own houses, and live without regularly making purchases in the marketplace." Subsistence agriculture also emerged independently in Mexico, where it was based on maize cultivation. Subsistence agriculture was the dominant mode of production in the world until recently. Subsistence horticulture may have developed independently in South East Asia and Papua New Guinea. Subsistence farming continues today in parts of rural Africa and parts of Asia. Many of the items consumed, as well as occasional services from physicians, veterinarians and blacksmiths, were acquired through trade. In Central and Eastern Europe, subsistence and semi-subsistence agriculture reappeared within the economy after about 1990. In shifting cultivation, a patch of forest land is cleared by a combination of felling and burning, and crops are grown. After two to three years the fertility of the soil begins to decline and the land is abandoned. While the land is left fallow, the forest regrows in the cleared area and soil fertility and biomass are restored. After a decade or more, the farmer may return to the first piece of land. Shifting cultivation is called dredd in India, ladang in Indonesia and milpa in Central America and Mexico. However, such farmers often recognize the value of compost, and they also may irrigate part of such fields if they are near a source of water.
In some areas of tropical Africa, at least, such smaller fields may be ones in which crops are grown on raised beds. Thus farmers practicing slash-and-burn agriculture are often much more sophisticated agriculturalists than the term "slash-and-burn subsistence farmers" suggests. In nomadic herding, people migrate along with their animals from one place to another in search of fodder for their animals. Generally they rear cattle, sheep, goats, camels and/or yaks for milk, skin, meat and wool. This way of life is common in parts of central and western Asia, India, and east and south-west Africa. Examples are the nomadic Bhotiyas and Gujjars of the Himalayas. In intensive subsistence agriculture, the farmer cultivates a small plot of land using simple tools and more labour. A climate with a large number of days of sunshine, together with fertile soils, permits the growing of more than one crop annually on the same plot. Farmers use their land holdings to produce enough for their local consumption. This pattern results in more food being produced per acre than other subsistence patterns.
60.
Barter
–
Barter is a system of exchange where goods or services are directly exchanged for other goods or services without using a medium of exchange, such as money. It is distinguishable from gift economies in many ways; one of them is that the exchange is immediate. It is usually bilateral, but may be multilateral, and, in most developed countries, usually exists only parallel to monetary systems, to a very limited extent. Barter, as a replacement for money as the method of exchange, is used in times of monetary crisis. Examples include the Owenite socialists, the Cincinnati Time Store, and more recently Ithaca HOURS and the LETS system. Adam Smith, the father of economics, sought to demonstrate that markets pre-existed the state. He argued that money was not the creation of governments. Markets emerged, in his view, out of the division of labour, by which individuals began to specialize in specific crafts and hence had to depend on others for subsistence goods. These goods were first exchanged by barter. Specialization depended on trade but was hindered by the "double coincidence of wants" which barter requires, i.e., for the exchange to occur, each participant must want what the other has. To complete this hypothetical history, craftsmen would stockpile one particular good, be it salt or metal, and this is the origin of money according to Smith. Money, as a universally desired medium of exchange, allows each half of the transaction to be separated. Barter is characterized in Adam Smith's The Wealth of Nations by a disparaging vocabulary: higgling, haggling, swapping, dickering. It has also been characterized as negative reciprocity, or selfish profiteering. Anthropologists have argued, in contrast, that when something resembling barter does occur in stateless societies it is almost always between strangers. Barter occurred between strangers, not fellow villagers, and hence cannot be used to explain the origin of money without the state.
Since most people engaged in trade knew each other, exchange was fostered through the extension of credit. Everyday exchange relations in such societies are characterized by generalized reciprocity, or a non-calculative familial "communism" where each takes according to their needs, and gives as they have. Barter is an option for those who cannot afford to store their small supply of wealth in money. The limitations of barter are often explained in terms of its inefficiencies in facilitating exchange in comparison to money. It is said that barter is inefficient for several reasons. There needs to be a coincidence of wants: for barter to occur between two parties, both parties need to have what the other wants. There is also difficulty in storing wealth: if a society relies exclusively on perishable goods, storing wealth for the future may be impractical; however, some barter economies rely on durable goods like pigs or cattle for this purpose. Other anthropologists have questioned whether barter is typically between total strangers, in a form of barter known as silent trade. Silent trade, also called silent barter, dumb barter, or depot trade, is a method by which traders who cannot speak each other's language can trade without talking.
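The "double coincidence of wants" described above can be made concrete: a direct swap is possible only when each party offers exactly what the other wants. The goods and traders here are invented for illustration:

```python
def double_coincidence(offer_a, want_a, offer_b, want_b):
    """A direct barter trade requires each party to offer what the other wants."""
    return offer_a == want_b and offer_b == want_a

# A salt-maker who wants metal meets a smith who wants salt: the swap works.
print(double_coincidence("salt", "metal", "metal", "salt"))  # True

# But if the smith wants grain instead, no direct swap is possible,
# which is where a universally desired medium of exchange (money) helps:
# the salt-maker can sell salt for money and buy metal from anyone.
print(double_coincidence("salt", "metal", "metal", "grain"))  # False
```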
61.
Theil Index
–
The Theil index is a statistic primarily used to measure economic inequality and other economic phenomena, though it has also been used to measure racial segregation. The basic Theil index T_T is the same as redundancy in information theory, which is the maximum possible entropy of the data minus the observed entropy. It is a special case of the generalized entropy index. It can be viewed as a measure of redundancy, lack of diversity, isolation, segregation, inequality and non-randomness. It was proposed by econometrician Henri Theil at the Erasmus University Rotterdam. For a population of N agents each with characteristic x, the situation may be represented by the list x_i, where x_i is the characteristic of agent i. For example, if the characteristic is income, then x_i is the income of agent i. If one person has all the income, then T_T gives the result ln N, which is maximum inequality. Dividing T_T by ln N normalizes the index to range from 0 to 1. The Theil index measures the entropic "distance" the population is away from the egalitarian state of everyone having the same income. The numerical result is in terms of negative entropy, so that a higher number indicates more order that is further away from complete equality. Formulating the index to represent negative entropy instead of entropy allows it to be a measure of inequality rather than equality. The Theil index is derived from Shannon's measure of information entropy S, where entropy is a measure of randomness in a given set of information. In physics, k is Boltzmann's constant; in information theory, when information is given in binary digits, k = 1 and the log base is 2. In physics and also in computation of the Theil index, the natural logarithm is chosen as the logarithmic base. When p_i is chosen to be the income per person x_i, it needs to be normalized by dividing by the total population income.
This is substituted into S_Theil to give S_max = ln N, so the Theil index gives a value in terms of an entropy that measures how far S_Theil is away from the ideal S_max. The index is a negative entropy in the sense that it gets smaller as the disorder gets larger. When x is in units of population/species, S_Theil is a measure of biodiversity and is called the Shannon index. If the Theil index is used with x = population/species, it is a measure of inequality of population among a set of species. The Theil index measures what is called redundancy in information theory. It is the left-over "space" that was not utilized to convey information. The Theil index is a measure of the redundancy of income in some individuals; redundancy in some individuals implies scarcity in others. One of the advantages of the Theil index is that it is a weighted average of inequality within subgroups.
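The basic Theil index described above can be written as T_T = (1/N) * sum_i (x_i/mu) * ln(x_i/mu), where mu is the mean income. A minimal sketch, using the conventions from the text (natural logarithm, 0 * ln 0 = 0), shows the two extreme cases: perfect equality gives 0, and one person holding all income gives ln N.

```python
from math import log

def theil_index(incomes):
    """Theil T index: (1/N) * sum((x_i/mu) * ln(x_i/mu)),
    with the convention 0 * ln(0) = 0."""
    n = len(incomes)
    mu = sum(incomes) / n
    total = 0.0
    for x in incomes:
        if x > 0:
            r = x / mu
            total += r * log(r)
    return total / n

print(theil_index([10, 10, 10, 10]))        # 0.0 (perfect equality)
print(theil_index([40, 0, 0, 0]))           # ln(4), about 1.386 (maximum inequality)
print(theil_index([40, 0, 0, 0]) / log(4))  # normalizing by ln N gives 1.0
```

Dividing by ln N, as the text notes, maps the index onto the range 0 to 1 regardless of population size.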
62.
Information entropy
–
In information theory, systems are modeled by a transmitter, channel, and receiver. The transmitter produces messages that are sent through the channel; the channel modifies the message in some way. The receiver attempts to infer which message was sent. In this context, entropy is the expected value of the information contained in each message. Messages can be modeled by any flow of information. In a more technical sense, there are reasons to define information as the negative of the logarithm of the probability distribution of possible events or messages. The amount of information of every event forms a random variable whose expected value, or average, is the Shannon entropy. Units of entropy are the shannon, nat, or hartley, depending on the base of the logarithm used to define it, though the shannon is commonly referred to as a bit. The logarithm of the probability distribution is useful as a measure of entropy because it is additive for independent sources. For instance, the entropy of a coin toss is 1 shannon, whereas for m tosses it is m shannons. Generally, you need log2(n) bits to represent a variable that can take one of n values if n is a power of 2. If these values are equally probable, the entropy is equal to this number of bits. Equality between number of bits and shannons holds only while all outcomes are equally probable. If one of the events is more probable than others, observation of that event is less informative. Conversely, rarer events provide more information when observed. Since observation of less probable events occurs more rarely, the net effect is that the entropy received from non-uniformly distributed data is less than log2(n). Entropy is zero when one outcome is certain. Shannon entropy quantifies all these considerations exactly when a probability distribution of the source is known. The meaning of the events observed does not matter in the definition of entropy. Generally, entropy refers to disorder or uncertainty. Shannon entropy was introduced by Claude E.
Shannon in his 1948 paper A Mathematical Theory of Communication. Shannon entropy provides an absolute limit on the best possible average length of lossless encoding or compression of an information source. Entropy is a measure of the unpredictability of the state, or equivalently, of its average information content. To get an intuitive understanding of these terms, consider the example of a political poll. Usually, such polls happen because the outcome of the poll is not already known. Now consider the case that the same poll is performed a second time shortly after the first: since the outcome of the first poll is already known, the outcome of the second can be predicted well, so its result carries little new information. Now consider the example of a coin toss. Assuming the probability of heads is the same as the probability of tails, the entropy of the coin toss is as high as it could be. Such a coin toss has one shannon of entropy, since there are two possible outcomes that occur with equal probability, and learning the actual outcome contains one shannon of information. Contrarily, a toss with a coin that has two heads and no tails has zero entropy, since the coin will always come up heads.
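The coin-toss claims above (one shannon for a fair toss, additivity over independent tosses, less for a biased coin, zero for a certain outcome) can be checked with a short sketch; the helper name is my own:

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits (shannons): H = -sum(p * log2(p)),
    skipping zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy_bits([0.5, 0.5]))   # 1.0  (one fair coin toss)
print(entropy_bits([0.25] * 4))   # 2.0  (two independent fair tosses: additive)
print(entropy_bits([0.9, 0.1]))   # below 1: a biased coin is more predictable
assert entropy_bits([1.0]) == 0   # a certain outcome carries no information
```

The four-outcome uniform case also illustrates the log2(n) rule: four equiprobable values need exactly 2 bits.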
63.
Random distribution
–
For instance, if the random variable X is used to denote the outcome of a coin toss, then the probability distribution of X would take the value 0.5 for X = heads and 0.5 for X = tails. In more technical terms, the probability distribution is a description of a random phenomenon in terms of the probabilities of events. Examples of random phenomena can include the results of an experiment or survey. A probability distribution is defined in terms of an underlying sample space, which is the set of all possible outcomes of the random phenomenon being observed. The sample space may be the set of real numbers or a higher-dimensional vector space, or it may be a list of non-numerical values, for example. Probability distributions are divided into two classes. A discrete probability distribution can be encoded by a discrete list of the probabilities of the outcomes; a continuous probability distribution, on the other hand, is typically described by probability density functions. The normal distribution is a commonly encountered continuous probability distribution. More complex experiments, such as those involving stochastic processes defined in continuous time, may demand the use of more general probability measures. A probability distribution whose sample space is the set of real numbers is called univariate. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution. The multivariate normal distribution is a commonly encountered multivariate distribution. To define probability distributions for the simplest cases, one needs to distinguish between discrete and continuous random variables. For a continuous random variable, the probability of any single exact value is zero; for example, the probability that an object weighs exactly 500 g is zero. Continuous probability distributions can be described in several ways; in particular, the cumulative distribution function is the antiderivative of the probability density function, provided that the latter function exists.
As probability theory is used in diverse applications, terminology is not uniform. The following terms are used for probability distribution functions:
Distribution, probability distribution – a table that displays the probabilities of outcomes in a sample; could be called a frequency distribution table, where all occurrences of outcomes sum to 1.
Distribution function – a form of frequency distribution table.
Probability distribution function – a form of probability distribution table.
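The discrete/continuous distinction and the CDF-as-antiderivative relationship can be sketched numerically; here the exponential distribution stands in for a generic continuous case, and the function names and step count are my own choices:

```python
import math

# Discrete: a fair die's probability mass function sums to 1.
die_pmf = {face: 1 / 6 for face in range(1, 7)}
assert abs(sum(die_pmf.values()) - 1) < 1e-12

# Continuous: the CDF is the antiderivative (integral) of the pdf.
# Exponential distribution with rate lam: pdf(x) = lam * exp(-lam * x).
def exp_pdf(x, lam=1.0):
    return lam * math.exp(-lam * x)

def exp_cdf_numeric(x, lam=1.0, steps=100_000):
    """Integrate the pdf from 0 to x with the trapezoid rule."""
    h = x / steps
    total = 0.5 * (exp_pdf(0, lam) + exp_pdf(x, lam))
    total += sum(exp_pdf(i * h, lam) for i in range(1, steps))
    return total * h

# The numeric integral matches the closed-form CDF 1 - exp(-lam * x):
print(exp_cdf_numeric(2.0))   # ≈ 0.8646...
print(1 - math.exp(-2.0))     # ≈ 0.8646...
# For a continuous variable, any single exact value has probability 0:
# the "area" of the degenerate interval [x, x] under the pdf is zero.
```

This is also why P(weight = exactly 500 g) is zero for a continuous weight distribution: a point has no width, hence no area under the density.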
64.
Receiver operating characteristic
–
In statistics, a receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the performance of a binary classifier system as its discrimination threshold is varied. The curve is created by plotting the true positive rate against the false positive rate at various threshold settings. The true-positive rate is also known as sensitivity, recall, or probability of detection in machine learning. The false-positive rate is known as the fall-out or probability of false alarm. The ROC curve is thus the sensitivity as a function of fall-out. ROC analysis provides tools to select possibly optimal models and to discard suboptimal ones independently from the cost context or the class distribution. ROC analysis is related in a direct and natural way to cost/benefit analysis of diagnostic decision making, and has since been used in medicine, radiology, biometrics, and other fields. The ROC is also known as a relative operating characteristic curve. A classification model is a mapping of instances to certain classes/groups. The classifier or diagnosis result can be a continuous value (a score), in which case the boundary between classes must be determined by a threshold value; or it can be a discrete class label, indicating one of the classes. Consider a two-class prediction problem in which the outcomes are labeled either as positive (p) or negative (n); there are four possible outcomes from a binary classifier. If the outcome from a prediction is p and the actual value is also p, then it is called a true positive; if the actual value is n, it is called a false positive. Conversely, a true negative has occurred when both the prediction outcome and the actual value are n, and a false negative when the prediction outcome is n while the actual value is p. For an example in a real-world problem, consider a diagnostic test that seeks to determine whether a person has a certain disease. A false positive in this case occurs when the person tests positive but does not actually have the disease. A false negative, on the other hand, occurs when the person tests negative, suggesting they are healthy, when they actually do have the disease.
Let us define an experiment with P positive instances and N negative instances of some condition. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix, from which several evaluation metrics can be derived. To draw a ROC curve, only the true positive rate and the false positive rate are needed.
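The threshold sweep that produces the curve can be sketched on toy data as follows (function and variable names are my own):

```python
def roc_points(scores, labels, thresholds):
    """For each threshold t, classify score >= t as positive and compute
    (false positive rate, true positive rate) from the four
    confusion-matrix counts."""
    pos = sum(labels)            # P: actual positives
    neg = len(labels) - pos      # N: actual negatives
    points = []
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))   # (FPR, TPR)
    return points

# Toy classifier scores: higher means "more likely positive".
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3]
labels = [1,   1,   0,   1,   0,    1,   0,   0  ]
for fpr, tpr in roc_points(scores, labels, [0.35, 0.65, 0.95]):
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")
```

Lowering the threshold moves the point up and to the right: more true positives are captured, at the cost of more false alarms; plotting all such points traces the ROC curve.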
65.
Biodiversity
–
Biodiversity, a contraction of biological diversity, generally refers to the variety and variability of life on Earth. One of the most widely used definitions defines it in terms of the variability within species, between species, and between ecosystems; it is a measure of the variety of organisms present in different ecosystems. This can refer to genetic variation, ecosystem variation, or species variation within an area, biome, or planet. Terrestrial biodiversity tends to be greater near the equator, which seems to be the result of the warm climate and high primary productivity. Biodiversity is not distributed evenly on Earth and is richest in the tropics; these tropical forest ecosystems cover less than 10 percent of Earth's surface and contain about 90 percent of the world's species. Marine biodiversity tends to be highest along coasts in the Western Pacific. There are latitudinal gradients in species diversity. Biodiversity generally tends to cluster in hotspots and has been increasing through time. The number and variety of plants, animals and other organisms that exist is known as biodiversity. It is a component of nature, and it ensures the survival of the human species by providing food, fuel, shelter, and medicines. The richness of biodiversity depends on the conditions and area of the region. All species of plants taken together are known as flora, and about 300,000 species of plants are known to date; all species of animals taken together are known as fauna, which includes birds, mammals, fish, reptiles, insects, crustaceans, molluscs, etc. Rapid environmental changes typically cause mass extinctions; more than 99 percent of all species, amounting to over five billion species, that ever lived on Earth are estimated to be extinct. Estimates of the number of Earth's current species range from 10 million to 14 million, of which about 1.2 million have been documented and over 86 percent have not yet been described.
More recently, in May 2016, scientists reported that 1 trillion species are estimated to be on Earth currently, with only one-thousandth of one percent described. The total amount of related DNA base pairs on Earth is estimated at 5.0 × 10^37 and weighs 50 billion tonnes. In comparison, the total mass of the biosphere has been estimated to be as much as 4 TtC (trillion tonnes of carbon). In July 2016, scientists reported identifying a set of 355 genes from the Last Universal Common Ancestor of all organisms living on Earth. The age of the Earth is about 4.54 billion years. There are microbial mat fossils found in 3.48 billion-year-old sandstone discovered in Western Australia. Other early physical evidence of a biogenic substance is graphite in 3.7 billion-year-old meta-sedimentary rocks discovered in Western Greenland. More recently, in 2015, remains of biotic life were found in 4.1 billion-year-old rocks in Western Australia. According to one of the researchers, "If life arose relatively quickly on Earth, then it could be common in the universe."
66.
Credit rating
–
A credit rating is an evaluation of the credit risk of a prospective debtor, predicting their ability to pay back the debt, and an implicit forecast of the likelihood of the debtor defaulting. Credit reporting – in distinction to a credit rating – is an evaluation of an individual's creditworthiness. A sovereign credit rating is the credit rating of a sovereign entity, such as a national government. The country risk rankings table shows the ten least-risky countries for investment as of January 2013. Ratings are further broken down into components including political risk and economic risk. Euromoney's bi-annual country risk index monitors the political and economic stability of 185 sovereign countries; results focus foremost on economics, specifically sovereign default risk or payment default risk for exporters. A. M. Best defines country risk as the risk that country-specific factors could affect an insurer's ability to meet its financial obligations. A rating expresses the likelihood that the rated party will go into default within a given time horizon of 1 year or more. In the past, institutional investors preferred to consider long-term ratings; nowadays, short-term ratings are commonly used as well. Credit ratings can address a corporation's financial instruments, i.e. debt securities such as bonds. Ratings are assigned by credit rating agencies, the largest of which are Standard & Poor's, Moody's and Fitch Ratings. They use letter designations such as A, B, C; higher grades are intended to represent a lower probability of default. However, some studies have estimated the risk and reward of bonds by rating; over a longer period, it was stated, the order is by and large preserved. Another study, in the Journal of Finance, calculated the additional interest rate, or spread, that corporate bonds pay over that of riskless US Treasury bonds, according to the bonds' rating.
Looking at rated bonds for 1973–89, the authors found that a AAA-rated bond paid 43 basis points over a US Treasury bond; a CCC-rated junk bond, on the other hand, paid over 7% more than a Treasury bond on average over that period. Different rating agencies may use variations of a combination of lower-case and upper-case letters. The Standard & Poor's rating scale uses upper-case letters with pluses and minuses; the Moody's rating system uses numbers and lower-case letters as well as upper-case letters. Moody's, S&P and Fitch Ratings control approximately 95% of the ratings business, but they are not the only rating agencies. DBRS's long-term ratings scale is similar to Standard & Poor's and Fitch Ratings, with the words "high" and "low" replacing the + and −. It goes as follows, from excellent to poor: AAA, AA (high), AA, AA (low), A (high), A, A (low), BBB (high), BBB, BBB (low), BB (high), BB, BB (low), B (high), B, B (low), CCC (high), CCC, CCC (low), CC (high), CC, CC (low), C (high), C, C (low) and D.
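As a quick check on the arithmetic, a basis point is one hundredth of a percentage point, so the spreads above translate as follows (the Treasury yield used here is a hypothetical figure for illustration only):

```python
def spread_in_percent(basis_points):
    """1 basis point = 0.01 percentage points."""
    return basis_points / 100

treasury_yield = 5.00  # hypothetical riskless Treasury yield, in percent
aaa_spread_bp = 43     # the AAA spread reported in the study
ccc_spread_bp = 700    # "over 7%" means more than 700 basis points

print(treasury_yield + spread_in_percent(aaa_spread_bp))  # 5.43
print(treasury_yield + spread_in_percent(ccc_spread_bp))  # 12.0
```

The gap between the two illustrates how sharply required yields rise as ratings fall toward the junk grades.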
67.
Credit risk
–
A credit risk is the risk of default on a debt that may arise from a borrower failing to make required payments. In the first resort, the risk is that of the lender and includes lost principal and interest, disruption to cash flows, and increased collection costs. The loss may be complete or partial. In an efficient market, higher levels of credit risk will be associated with higher borrowing costs; because of this, measures of borrowing costs such as yield spreads can be used to infer credit risk levels based on assessments by market participants. Losses can arise in a number of circumstances, for example: a consumer may fail to make a payment due on a loan, credit card, or line of credit; a company is unable to repay asset-secured fixed or floating charge debt; a business or consumer does not pay a trade invoice when due; a business does not pay an employee's earned wages when due; a business or government bond issuer does not make a payment on a coupon or principal payment when due; an insolvent insurance company does not pay a policy obligation; an insolvent bank won't return funds to a depositor; a government grants bankruptcy protection to an insolvent consumer or business. The lender can also take out insurance against the risk or on-sell the debt to another company. In general, the higher the risk, the higher the interest rate the debtor will be asked to pay on the debt. Credit risk mainly arises when borrowers are unable or unwilling to pay. Concentration risk is the risk associated with any single exposure or group of exposures with the potential to produce losses large enough to threaten a bank's core operations; it may arise in the form of single-name concentration or industry concentration. Significant resources and sophisticated programs are used to analyze and manage risk; some companies run a credit risk department whose job is to assess the financial health of their customers and extend credit accordingly.
They may use in-house programs to advise on avoiding, reducing and transferring risk. They also use third-party provided intelligence; companies like Standard & Poor's, Moody's, Fitch Ratings, DBRS, Dun and Bradstreet, Bureau van Dijk and Rapid Ratings International provide such information for a fee. Most lenders employ their own models to rank potential and existing customers according to risk. With products such as unsecured personal loans or mortgages, lenders charge a higher price for higher-risk customers and vice versa. With revolving products such as credit cards and overdrafts, risk is controlled through the setting of credit limits. Some products also require collateral, usually an asset that is pledged to secure the repayment of the loan. Credit scoring models also form part of the framework used by banks or lending institutions to grant credit to clients; once this information has been reviewed by credit officers and credit committees, a lending decision is made. Sovereign credit risk is the risk of a government being unwilling or unable to meet its loan obligations; many countries have faced sovereign risk in the late-2000s global recession.
68.
Economic inequality
–
Economic inequality is the difference found in various measures of economic well-being among individuals in a group, among groups in a population, or among countries. Economic inequality is sometimes called income inequality, wealth inequality, or the wealth gap. Economists generally focus on economic disparity in three metrics: wealth, income, and consumption. The issue of economic inequality is relevant to notions of equity, equality of outcome, and equality of opportunity. Economic inequality varies between societies, historical periods, and economic structures and systems. The term can refer to the cross-sectional distribution of income or wealth at any particular period, or to changes of income and wealth over longer periods of time. There are various numerical indices for measuring economic inequality; a widely used index is the Gini coefficient, but there are also many other methods. Some studies say economic inequality is a problem; for example, too much inequality can be destructive. However, too much income equality is also destructive, since it decreases the incentive for productivity. The first set of income distribution statistics for the United States, covering the period from 1913 to 1948, was published in 1952 by Simon Kuznets as Shares of Upper Income Groups in Income and Savings. It relied on US federal income tax returns and Kuznets's own estimates of US national income. The three metrics of dispersion can diverge for the same person over time: a skilled professional may have low wealth and low income as a student, then low wealth and high earnings at the beginning of a career. People's preferences determine whether they consume earnings immediately or defer consumption to the future. The distinction is important at the level of the economy: there are economies with high income inequality and relatively low wealth inequality, and economies with low income inequality and high wealth inequality.
There are different ways to measure income inequality and wealth inequality, and different choices lead to different results. Measures include:
Individual earnings inequality among all workers – includes the self-employed; e.g. annual wages, including wages from part-time work or work during only part of the year.
Individual earnings inequality among the entire working-age population – includes those who are inactive, e.g. students, the unemployed, early pensioners.
Household earnings inequality – includes the earnings of all household members.
Household market income inequality – includes incomes from capital and savings.
Household disposable income inequality – includes public cash transfers received and direct taxes paid.
Household adjusted disposable income inequality – includes publicly provided services.
There are many challenges in comparing data between economies, or in a single economy in different years. Examples of challenges include: data can be based on joint taxation of couples or on individual taxation; the tax authorities generally only collect information on income that is potentially taxable; and the precise definition of income varies from country to country.
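The Gini coefficient mentioned above can be computed from the mean absolute difference between all pairs of incomes; a minimal sketch (function name is my own):

```python
def gini(incomes):
    """Gini coefficient via pairwise absolute differences:
    G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean).
    0 means perfect equality; (n - 1) / n is the maximum for n people."""
    n = len(incomes)
    mean = sum(incomes) / n
    mad = sum(abs(a - b) for a in incomes for b in incomes)
    return mad / (2 * n * n * mean)

print(gini([10, 10, 10, 10]))  # 0.0 — perfect equality
print(gini([0, 0, 0, 100]))    # 0.75 — one household has everything
```

The same code applied to wealth rather than income would generally give a different value, which is the point made above: the choice of measure changes the result.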
69.
Great Gatsby curve
–
The Great Gatsby curve is a chart plotting the relationship between inequality and intergenerational social immobility in several countries around the world. The curve was introduced in a 2012 speech by chairman of the Council of Economic Advisers Alan Krueger; the name was coined by former Council of Economic Advisers staff economist Judd Cramer, for which he was given a bottle of wine as a reward. The curve plots intergenerational income elasticity – i.e. the extent to which children's incomes depend on their parents' incomes – against inequality. The name of the curve refers, somewhat ironically, to Jay Gatsby, the character in F. Scott Fitzgerald's novel The Great Gatsby: Jay shows a degree of upward mobility, rising to great wealth from modest beginnings as a bootlegger. Journalist Robert Lenzner and lawyer Nripendra Chakravarthy call it "a very frightening curve that requires policy attention", given the rise in inequality the U.S. has seen in the last 25 years. However, some argue that the apparent connection may arise as an artifact of heterogeneous variance in ability across nations: it was shown that the manner in which the intergenerational income elasticity is defined is, by design, associated with inequality. Economist Greg Mankiw argued that the curve is an artifact of diversity. In his words: "Germans are richer on average than Greeks, and that difference in income tends to persist from generation to generation. When people look at the Great Gatsby curve, they omit this fact", but it is not obvious that the political divisions that divide people are the right ones for economic analysis: we combine the persistently rich Connecticut with the persistently poor Mississippi. A blog post by M. S. at The Economist replied to Mankiw's counter-argument as follows: "The argument over the Great Gatsby curve is an argument about whether America's economy is fair. Amazingly, he seems unaware that this is the case he's just made." Economist Paul Krugman has also countered Mankiw's arguments in his column. Carter Price of the WCEG suggests "the line to serfdom" as an alternative name, on the grounds that it may better convey the meaning of the correlation.
See also: Economy of the United States; Socio-economic mobility in the United States; The Great Divergence; List of countries by income equality; Gini coefficient.
70.
Kuznets curve
–
In economics, a Kuznets curve graphs the hypothesis that as an economy develops, market forces first increase and then decrease economic inequality. The hypothesis was first advanced by economist Simon Kuznets in the 1950s and 60s. The Kuznets curve implies that as a nation undergoes industrialization – and especially the mechanization of agriculture – the center of the nation's economy will shift to the cities, as internal migration by farmers looking for better-paying jobs in urban hubs causes a significant rural–urban inequality gap. Kuznets believed that inequality would follow an inverted "U" shape, rising and then falling again with the increase of income per capita. Since 1991 the environmental Kuznets curve has become a standard feature in the technical literature of environmental policy. A related measure, the Kuznets ratio, compares the income going to the highest-earning 20% of households with that going to the lowest-earning 20%; comparing 20% to 20%, perfect equality is expressed as 1. Kuznets had two similar explanations for this historical phenomenon: workers migrated from agriculture to industry, and rural workers moved to urban jobs. In both explanations, inequality will decrease after 50% of the work force switches over to the higher-paying sector. Critics of the Kuznets curve theory argue that its U-shape comes not from progression in the development of individual countries, but rather from historical differences between countries. For instance, many of the middle-income countries used in Kuznets's data set were in Latin America, a region with historically high levels of inequality; when controlling for this variable, the U-shape of the curve tends to disappear. Regarding the empirical evidence, based on panels of countries or time-series approaches, the results are mixed. A neo-Malthusian model incorporating Kuznets's work yields a model of the relationships over time rather than just a curve. The East Asian miracle (EAM) has been used to criticize the validity of the Kuznets curve theory. The rapid economic growth of eight East Asian economies – Japan, South Korea, Hong Kong, Taiwan, Singapore, Indonesia, Thailand, and Malaysia – was accompanied by quick and powerful growth in manufacturing and export.
Yet simultaneously, life expectancy was found to increase, and the share of the population living in absolute poverty decreased. This development process was contrary to the Kuznets curve theory: these factors increased the average citizen's ability to consume and invest within the economy, further contributing to economic growth. Stiglitz highlights that the high rates of growth provided the resources to promote equality; the EAM thus defies the Kuznets curve, which insists growth produces inequality. Palma observes that this level of inequality is also similar to that of half of the first-tier NICs, the Mediterranean EU and the Anglophone OECD; as a result, about 80% of the world's population now live in countries with a Gini around 40. Palma goes on to note that, among countries, those in Latin America are the outliers. Instead of a Kuznets curve, he breaks the income distribution into deciles, each containing 10% of the population, and analyzes inequality in terms of their income shares.
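The 20%-to-20% comparison described above (the Kuznets ratio) can be sketched directly from a list of incomes; the function name and toy data are my own:

```python
def kuznets_ratio(incomes):
    """Income share of the richest 20% divided by that of the
    poorest 20%; equals 1 under perfect equality and grows as
    the distribution becomes more unequal."""
    xs = sorted(incomes)
    k = len(xs) // 5           # size of each 20% group
    return sum(xs[-k:]) / sum(xs[:k])

print(kuznets_ratio([10, 10, 10, 10, 10]))  # 1.0 — perfect equality
print(kuznets_ratio([2, 4, 6, 8, 30]))      # 15.0 — top quintile earns 15x the bottom
```

Palma's decile-based approach works the same way, but splits the sorted incomes into ten groups of 10% rather than five groups of 20% before comparing their shares.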