1.
Financial economics
–
Financial economics is the branch of economics characterized by a concentration on monetary activities, in which money of one type or another is likely to appear on both sides of a trade. Its concern is thus the interrelation of financial variables, such as prices, interest rates and shares. It has two main areas of focus, asset pricing and corporate finance, the first being the perspective of providers of capital and the second that of users of capital. The subject is concerned with the allocation and deployment of economic resources, and it is built on the foundations of microeconomics and decision theory. Financial econometrics is the branch of economics that uses econometric techniques to parameterise these relationships. Mathematical finance is related in that it derives and extends the mathematical or numerical models suggested by financial economics; note, though, that the emphasis there is mathematical consistency, as opposed to compatibility with economic theory. Financial economics is usually taught at the postgraduate level; see Master of Financial Economics. Recently, specialist undergraduate degrees have been offered in the discipline. This article provides an overview and survey of the field; for derivations and more technical discussion, see the specific articles linked. As above, the discipline essentially explores how rational investors would apply decision theory to the problem of investment. The subject is thus built on the foundations of microeconomics and decision theory, and derives several key results for the application of decision making under uncertainty to the financial markets. Underlying all of financial economics are the concepts of present value and expectation. Its history is correspondingly early: Richard Witt discusses compound interest already in 1613, in his book Arithmeticall Questions, further developed by Johan de Witt; these ideas originate with Blaise Pascal and Pierre de Fermat.
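The present-value and compound-interest ideas introduced here reduce to two short formulas, sketched below; the 5% rate and the cash-flow amounts are illustrative assumptions, not figures from the text:

```python
# Compound interest and present value: an amount c invested at per-period
# rate r grows to c * (1 + r)**n after n periods, so a cash flow c received
# n periods from now is worth c / (1 + r)**n today.

def future_value(c, r, n):
    """Value after n periods of c compounded at rate r per period."""
    return c * (1 + r) ** n

def present_value(c, r, n):
    """Today's value of a cash flow c received n periods from now."""
    return c / (1 + r) ** n

# Illustrative numbers: 100 invested for 10 years at 5%.
fv = future_value(100, 0.05, 10)   # about 162.89
pv = present_value(fv, 0.05, 10)   # discounting undoes compounding: about 100
```

Discounting is simply the inverse of compounding, which is why `pv` recovers the original 100.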
This decision method, however, fails to consider risk aversion. Choice under uncertainty may then be characterized as the maximization of expected utility. The impetus for these ideas arises from various inconsistencies observed under the expected value framework; the development here is originally due to Daniel Bernoulli, and was later formalized by John von Neumann and Oskar Morgenstern. The concepts of arbitrage-free, or rational, pricing and of equilibrium are then coupled with the above to derive classical financial economics. Rational pricing is the assumption that asset prices will reflect the arbitrage-free price of the asset, as any deviation from this price will be arbitraged away. This assumption is useful in pricing fixed income securities, particularly bonds. Intuitively, this may be seen by considering that where an arbitrage opportunity does exist, prices can be expected to change, and are therefore not in equilibrium. An arbitrage equilibrium is thus a precondition for a general economic equilibrium, and the formal derivation will proceed by arbitrage arguments. All pricing models are then essentially variants of this, given specific assumptions and/or conditions. This approach is consistent with the above, but with the expectation based on the market as opposed to individual preferences. In general, this premium may be derived by the CAPM, as will be seen below. With the above relationship established, the further specialized Arrow–Debreu model may be derived. This important result suggests that, under certain conditions, there must be a set of prices such that aggregate supplies will equal aggregate demands for every commodity in the economy
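Bernoulli's resolution of the expected-value inconsistencies can be made concrete with a small sketch; the logarithmic utility function and the specific gamble below are illustrative assumptions:

```python
import math

# A risk-averse agent with Bernoulli's utility u(w) = ln(w) compares a fair
# gamble (50% chance of wealth 50, 50% chance of wealth 150) with receiving
# the gamble's expected value, 100, for certain.

def expected_utility(lottery, u):
    """Probability-weighted utility over (probability, wealth) pairs."""
    return sum(p * u(w) for p, w in lottery)

gamble = [(0.5, 50), (0.5, 150)]
eu_gamble = expected_utility(gamble, math.log)   # about 4.461
u_certain = math.log(100)                        # about 4.605

# Concavity of ln makes the certain amount preferred even though both
# alternatives have the same expected value: this is risk aversion, which
# the plain expected-value criterion cannot express.
```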

2.
100-year flood
–
A one-hundred-year flood is a flood event that has a 1% probability of occurring in any given year. The 100-year flood is also referred to as the 1% flood. For river systems, the 100-year flood is generally expressed as a flowrate. Based on the expected 100-year flood flow rate, the water level can be mapped as an area of inundation, and the resulting floodplain map is referred to as the 100-year floodplain. Estimates of the 100-year flood flowrate and other streamflow statistics for any stream in the United States are available. Maps of the riverine or coastal 100-year floodplain may figure importantly in building permits and environmental regulations. A common misunderstanding is that a 100-year flood is likely to occur only once in a 100-year period. In fact, there is approximately a 63.4% chance of one or more 100-year floods occurring in any 100-year period; on the Danube River at Passau, Germany, the actual intervals between 100-year floods during 1501 to 2013 ranged from 37 to 192 years. The probability of exceedance Pe is also described as the natural, inherent, or hydrologic risk of failure. However, the expected value of the number of 100-year floods occurring in any 100-year period is 1. Ten-year floods have a 10% chance of occurring in any given year; 500-year floods have a 0.2% chance of occurring in any given year. The percent chance of an X-year flood occurring in a year can be calculated by dividing 100 by X. A similar analysis is applied to coastal flooding or rainfall data. The recurrence interval of a storm is rarely identical to that of a riverine flood, because of rainfall timing. The field of extreme value theory was created to model rare events such as 100-year floods for the purposes of civil engineering. This theory is most commonly applied to the maximum or minimum observed stream flows of a given river; in desert areas where there are only ephemeral washes, this method is applied to the maximum observed rainfall over a given period of time. 
The extreme value analysis only considers the most extreme event observed in a given year. So, between the spring runoff and a heavy summer rain storm, whichever resulted in more runoff would be considered the extreme event. A number of assumptions must be made to complete the analysis that determines the 100-year flood. First, the events observed in each year must be independent from year to year; in other words, the river flow rate from 1984 cannot be significantly correlated with the observed flow rate in 1985
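Under the year-to-year independence assumption just stated, the 63.4% figure and the expected count of one flood per century follow directly:

```python
# With independent years, each carrying exceedance probability 1/T, the
# chance of at least one T-year flood in n years is 1 - (1 - 1/T)**n.

def prob_at_least_one(T, n):
    return 1 - (1 - 1.0 / T) ** n

p = prob_at_least_one(100, 100)      # about 0.634, not 1.0
expected_floods = 100 * (1.0 / 100)  # expected count in 100 years: 1.0
```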

3.
Actuary
–
An actuary is a business professional who deals with the measurement and management of risk and uncertainty. The name of the corresponding field is actuarial science. These risks can affect both sides of the balance sheet, and require asset management, liability management, and valuation skills. Actuaries provide assessments of financial security systems, with a focus on their complexity and their mathematics. Actuaries of the 21st century require analytical skills, business knowledge, and an understanding of human behavior and information systems to design and manage programs that control risk. The actual steps needed to become an actuary are usually country-specific; however, almost all processes share a rigorous schooling or examination structure. The profession has consistently been ranked as one of the most desirable: in various studies, being an actuary has been ranked number one or two several times since 2010. Actuaries use skills primarily in mathematics, particularly calculus-based probability and mathematical statistics, but also economics, computer science, finance, and business. Actuaries assemble and analyze data to estimate the probability and likely cost of the occurrence of an event such as death, sickness, injury, or disability. Most traditional actuarial disciplines fall into two main categories, life and non-life. Life actuaries, which include health and pension actuaries, primarily deal with mortality risk and morbidity risk; products prominent in their work include life insurance, annuities, pensions, short and long term disability insurance, health insurance, health savings accounts, and long-term care insurance. Non-life actuaries, also known as property and casualty or general insurance actuaries, deal with risks to people and property other than mortality and morbidity. Actuaries are also called upon for their expertise in enterprise risk management; this can involve dynamic financial analysis, stress testing, and the formulation of corporate policy. Actuaries are also involved in other areas of the financial services industry. 
On both the life and casualty sides, the function of actuaries is to calculate premiums. On the casualty side, this often involves quantifying the probability of a loss event, called the frequency, and the size of that event, called the severity. The amount of time before the event occurs is also important. On the life side, the analysis often involves quantifying how much a potential sum of money or a liability will be worth at different points in the future. Forecasting interest yields and currency movements also plays a role in determining future costs. Actuaries do not always attempt to predict aggregate future events; often, their work may relate to determining the cost of financial liabilities that have already occurred, called retrospective reinsurance
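A minimal sketch of the frequency-severity and discounting calculations described above; all rates and amounts are illustrative assumptions, not actuarial standards:

```python
# Casualty side: the pure premium is expected loss = frequency x severity.
# Life side: a known future liability is discounted to a present value.

def pure_premium(frequency, severity):
    """Expected annual loss cost per policy."""
    return frequency * severity

def present_value(amount, rate, years):
    """Value today of a liability payable `years` from now at `rate`."""
    return amount / (1 + rate) ** years

annual_cost = pure_premium(0.05, 8000)     # 5% claim chance, 8000 average claim
reserve = present_value(100000, 0.04, 20)  # 100000 due in 20 years at 4%
```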

4.
Auto insurance risk selection
–
Auto insurance risk selection is the process by which vehicle insurers determine whether or not to insure an individual and what insurance premium to charge. Depending on the jurisdiction, the premium can be either mandated by the government or determined by the insurance company in accordance with a framework of regulations set by the government. Often, the insurer will have more freedom to set the price on physical damage coverages than on mandatory liability coverages. When the premium is not mandated by the government, it is derived from the calculations of an actuary based on statistical data. The premium can vary depending on factors that are believed to affect the expected cost of future claims. Those factors can include the car characteristics, the coverage selected, and the profile of the driver. Such data results in a classification of the applicant to a broad actuarial class for which insurance rates are assigned based upon the experience of the insurer. Many factors are deemed relevant to such classification in a particular class or risk level, such as age, sex, marital status, and location of residence. The current system of insurance creates groupings of vehicles and drivers based on the following types of classifications. Vehicle: age, manufacturer, model, and value. Driver: age, sex, marital status, driving record (violations and at-fault accidents), and place of residence. Coverage: types of losses covered (liability, uninsured or underinsured motorist, comprehensive, and collision), liability limits, and deductibles. A change to any of this information might result in a different premium being charged, if the change resulted in a different actuarial class or risk level for that variable. For instance, a change in age from 38 to 39 may not result in a different premium if both ages fall within the same actuarial class. 
Current insurance rating systems also provide discounts and surcharges for some types of use of the vehicle and for equipment on the vehicle; a common surcharge, for example, applies to business use. Conventional rating systems are based on past realized losses and the past record of other drivers with similar characteristics. More recently, electronic systems have been introduced whereby the actual driving performance of a driver is monitored and communicated directly to the insurance company. The insurance company then assigns the driver to a class based on the monitored driving behavior. An individual, therefore, can be put into different risk classes from month to month depending upon how they drive
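The class-based rating with discounts and surcharges described above can be sketched as a multiplicative model. The base rate, factor names, and relativity values below are hypothetical, not an actual filed rating plan:

```python
# A simple multiplicative rating sketch: a base rate adjusted by relativities
# for driver class and territory, then by surcharges and discounts.

BASE_RATE = 500.0  # hypothetical annual base premium

RELATIVITIES = {
    "driver_class": {"adult": 1.00, "young": 1.60, "senior": 1.10},
    "territory": {"urban": 1.25, "rural": 0.90},
}

def premium(driver_class, territory, surcharges=(), discounts=()):
    rate = BASE_RATE
    rate *= RELATIVITIES["driver_class"][driver_class]
    rate *= RELATIVITIES["territory"][territory]
    for s in surcharges:   # e.g. a business-use surcharge of 10%
        rate *= (1 + s)
    for d in discounts:    # e.g. an equipment discount of 5%
        rate *= (1 - d)
    return round(rate, 2)

p = premium("young", "urban", surcharges=[0.10], discounts=[0.05])
```

Changing any one factor moves the applicant to a different cell of the table and hence to a different premium, which mirrors the class-and-risk-level behavior described in the text.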

5.
Average high cost multiple
–
In unemployment insurance in the United States, the average high-cost multiple (AHCM) is a commonly used actuarial measure of Unemployment Trust Fund adequacy. Technically, the AHCM is defined as the reserve ratio divided by the average cost rate of the three high-cost years in the recent history. In this definition, the cost rate for any duration of time is defined as benefit cost divided by total wages paid in covered employment for the same duration, usually expressed as a percentage. Intuitively, the AHCM provides an estimate of the length of time the current reserve in the trust fund can pay out benefits at a historically high payout rate; for example, if a state's AHCM is 1.0, the state is expected to be able to pay out twelve months of benefits when a similar recession hits. If the AHCM is 0.5, then the state is expected to be able to pay out six months of benefits. As an example, suppose that as of December 31, 2009, a state has a balance of $500 million in its UI trust fund, and the total wages of its covered employment are $40 billion. The reserve ratio for this state on this day is $500 million / $40,000 million = 1.25%. Historically, the state experienced its three highest-cost years in 1991, 2002, and 2009, when the cost rates were 1.50%, 1.80%, and 3.00%, respectively. The average high-cost rate for this state is therefore 2.10%; thus, the average high-cost multiple is 1.25 / 2.10 = 0.595. The following chart shows the US average high-cost multiple from 1957 to 2009.
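The worked example can be reproduced directly from the definitions, using the figures given above:

```python
# AHCM = reserve ratio / average cost rate of the three highest-cost years.

def reserve_ratio(trust_fund_balance, total_covered_wages):
    """Both amounts in the same units ($ millions here); result in percent."""
    return 100.0 * trust_fund_balance / total_covered_wages

def ahcm(reserve_ratio_pct, high_cost_rates):
    """Average high-cost multiple from the three high-cost-year rates."""
    return reserve_ratio_pct / (sum(high_cost_rates) / len(high_cost_rates))

rr = reserve_ratio(500, 40000)    # $500M balance, $40B wages -> 1.25%
m = ahcm(rr, [1.50, 1.80, 3.00])  # average high-cost rate 2.10% -> about 0.595
```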

6.
Extreme value theory
–
Extreme value theory or extreme value analysis (EVA) is a branch of statistics dealing with the extreme deviations from the median of probability distributions. It seeks to assess, from a given ordered sample of a random variable, the probability of events that are more extreme than any previously observed. Extreme value analysis is used in many disciplines, such as structural engineering, finance, earth sciences, and traffic prediction. For example, EVA might be used in the field of hydrology to estimate the probability of an unusually large flooding event; similarly, for the design of a breakwater, a coastal engineer would seek to estimate the 50-year wave and design the structure accordingly. Two approaches exist for practical extreme value analysis. The first method relies on deriving block maxima series as a preliminary step; in many situations it is customary and convenient to extract the annual maxima. The second method relies on extracting, from a continuous record, the peak values reached for any period during which values exceed a certain threshold. This method is referred to as the Peak Over Threshold (POT) method. For AMS data, the analysis may partly rely on the results of the Fisher–Tippett–Gnedenko theorem; however, in practice, various procedures are applied to select between a wider range of distributions. The theorem here relates to the limiting distributions for the minimum or the maximum of a very large collection of independent random variables from the same distribution. For POT data, the analysis may involve fitting two distributions, one for the number of events in a period considered and a second for the size of the exceedances. A common assumption for the first is the Poisson distribution, with the generalized Pareto distribution being used for the exceedances; alternatively, a tail-fitting can be based on the Pickands–Balkema–de Haan theorem. Novak reserves the term "POT method" for the case where the threshold is non-random. Applications include pipeline failures due to pitting corrosion and anomalous IT network traffic, to prevent attackers from reaching important data. 
The field of extreme value theory was pioneered by Leonard Tippett. Tippett was employed by the British Cotton Industry Research Association, where he worked to make cotton thread stronger; in his studies, he realized that the strength of a thread was controlled by the strength of its weakest fibres. With the help of R. A. Fisher, Tippett obtained three asymptotic limits describing the distributions of extremes. Emil Julius Gumbel codified this theory in his 1958 book Statistics of Extremes, including the Gumbel distributions that bear his name. A summary of important publications relating to extreme value theory can be found in the article List of publications in statistics. Let X1, …, Xn be a sequence of independent and identically distributed variables with cumulative distribution function F. In theory, the exact distribution of the maximum Mn = max(X1, …, Xn) can be derived: Pr(Mn ≤ z) = Pr(X1 ≤ z) Pr(X2 ≤ z) ⋯ Pr(Xn ≤ z) = (F(z))^n. The associated indicator function In = I(Mn > z) is a Bernoulli process with a success probability p(z) = 1 − (F(z))^n that depends on the magnitude z of the extreme event
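The exact formula Pr(Mn ≤ z) = (F(z))^n can be checked against its Gumbel limit for a concrete choice of F; a sketch assuming unit-exponential variables (my choice of distribution, made because its CDF has a closed form):

```python
import math

# For iid X1..Xn with CDF F, Pr(max(X1..Xn) <= z) = F(z)**n. With the
# unit-exponential CDF F(z) = 1 - exp(-z), the maximum centered by ln(n)
# converges to the Gumbel law: Pr(max - ln(n) <= x) -> exp(-exp(-x)).

def max_cdf(z, n):
    """Exact CDF of the maximum of n iid unit-exponential variables."""
    return (1 - math.exp(-z)) ** n

n = 10_000
x = 1.0
exact = max_cdf(x + math.log(n), n)   # exact probability for the maximum
gumbel = math.exp(-math.exp(-x))      # Gumbel (Fisher-Tippett) limit
# for n this large the two agree to roughly four decimal places
```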

7.
Copula (probability theory)
–
In probability theory and statistics, a copula is a multivariate probability distribution for which the marginal probability distribution of each variable is uniform. Copulas are used to describe the dependence between random variables; their name comes from the Latin for link or tie, similar but unrelated to grammatical copulas in linguistics. Copulas have been used widely in quantitative finance to model and minimize tail risk. Copulas are popular in high-dimensional statistical applications as they allow one to easily model and estimate the distribution of random vectors by estimating marginals and copulae separately. There are many parametric copula families available, which usually have parameters that control the strength of dependence; some popular parametric copula models are outlined below. Consider a random vector (X1, …, Xd) and suppose its marginals are continuous, i.e. the marginal CDFs Fi(x) = P(Xi ≤ x) are continuous functions. By applying the probability integral transform to each component, the random vector (U1, …, Ud) = (F1(X1), …, Fd(Xd)) has uniformly distributed marginals. The copula of (X1, …, Xd) is defined as the joint cumulative distribution function of (U1, …, Ud). The importance of the above is that the reverse of these steps can be used to generate pseudo-random samples from general classes of multivariate probability distributions. That is, given a procedure to generate a sample (U1, …, Ud) from the copula distribution, the required sample can be constructed as (F1^−1(U1), …, Fd^−1(Ud)); the inverses Fi^−1 are unproblematic as the Fi were assumed to be continuous. Sklar's theorem, named after Abe Sklar, provides the theoretical foundation for the application of copulas. Sklar's theorem states that every multivariate cumulative distribution function H(x1, …, xd) = P(X1 ≤ x1, …, Xd ≤ xd) of a random vector can be expressed in terms of its marginals Fi(xi) = P(Xi ≤ xi) and a copula C as H(x1, …, xd) = C(F1(x1), …, Fd(xd)). In case the distribution has a density h, and this is available, it holds further that h(x1, …, xd) = c(F1(x1), …, Fd(xd)) · f1(x1) ⋯ fd(xd), where c is the density of the copula. 
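The probability integral transform step is easy to demonstrate; a sketch assuming an exponential marginal (the distribution, sample size, and seed are arbitrary choices):

```python
import math
import random

# Probability integral transform: if X has continuous CDF F, then U = F(X)
# is uniform on [0, 1]. Demonstrated with X ~ Exponential(1), whose CDF is
# F(x) = 1 - exp(-x).

random.seed(0)
xs = [random.expovariate(1.0) for _ in range(10_000)]
us = [1 - math.exp(-x) for x in xs]   # push each sample through its CDF

assert all(0.0 <= u <= 1.0 for u in us)
mean_u = sum(us) / len(us)            # close to 0.5, as for a uniform
```

Running the steps in reverse, i.e. drawing (U1, …, Ud) from a copula and applying the inverse marginal CDFs, is exactly the sampling recipe described in the text.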
The theorem also states that, given H, the copula is unique on Ran(F1) × ⋯ × Ran(Fd), the product of the ranges of the marginal CDFs; this implies that the copula is unique if the marginals Fi are continuous. The converse is also true: given a copula C: [0, 1]^d → [0, 1] and marginals Fi, the function C(F1(x1), …, Fd(xd)) defines a d-dimensional cumulative distribution function with marginals Fi. The Fréchet–Hoeffding theorem states that for any copula C: [0, 1]^d → [0, 1] and any (u1, …, ud) ∈ [0, 1]^d the following bounds hold: W(u1, …, ud) ≤ C(u1, …, ud) ≤ M(u1, …, ud). The function W is called the lower Fréchet–Hoeffding bound and is defined as W(u1, …, ud) = max(1 − d + u1 + ⋯ + ud, 0). The function M is called the upper Fréchet–Hoeffding bound and is defined as M(u1, …, ud) = min(u1, …, ud). The upper bound is sharp: M is always a copula, and it corresponds to comonotone random variables. In two dimensions, i.e. the bivariate case, the Fréchet–Hoeffding theorem states max(u + v − 1, 0) ≤ C(u, v) ≤ min(u, v). Several families of copulae have been described. The Gaussian copula is a distribution over the unit cube [0, 1]^d. It is constructed from a multivariate normal distribution over R^d with correlation matrix R by using the probability integral transform. While there is no simple analytical formula for the copula function C_R^Gauss, it can be upper or lower bounded
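The bivariate Fréchet–Hoeffding bounds can be verified numerically; a sketch that checks them for the independence copula Π(u, v) = uv, my choice of test copula:

```python
# Bivariate Frechet-Hoeffding bounds: W(u, v) <= C(u, v) <= M(u, v) for
# every copula C, with W(u, v) = max(u + v - 1, 0), M(u, v) = min(u, v).

def W(u, v):      # lower bound; the countermonotonicity copula in 2D
    return max(u + v - 1.0, 0.0)

def M(u, v):      # upper bound; the comonotonicity copula
    return min(u, v)

def indep(u, v):  # independence copula, used here as the test case
    return u * v

# A dyadic grid keeps the float arithmetic exact, so the comparisons are
# free of rounding artifacts.
grid = [i / 16 for i in range(17)]
ok = all(W(u, v) <= indep(u, v) <= M(u, v) for u in grid for v in grid)
```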

8.
Demography
–
Demography is the statistical study of populations, especially of human beings. As a very general science, it can analyse any kind of dynamic living population. Demography encompasses the study of the size, structure, and distribution of these populations, and spatial or temporal changes in them in response to birth, migration, ageing, and death. Based on demographic research, the earth's population up to the years 2050 and 2100 can be estimated by demographers. Demographics are quantifiable characteristics of a given population; demographic analysis can cover whole societies or groups defined by criteria such as education, nationality, religion, and ethnicity. Educational institutions usually treat demography as a field of sociology, though there are a number of independent demography departments. Demographic thought can be traced back to antiquity, and was present in many civilizations and cultures, like Ancient Greece, Ancient Rome, India and China. In ancient Greece, it can be found in the writings of Herodotus, Thucydides, Hippocrates, Epicurus, Protagoras, Polus, Plato and Aristotle. In Rome, writers and philosophers like Cicero, Seneca, Pliny the Elder, Marcus Aurelius, Epictetus, and Cato addressed the subject. In the Middle Ages, Christian thinkers devoted much time to refuting the Classical ideas on demography; important contributors to the field were William of Conches, Bartholomew of Lucca, William of Auvergne, William of Pagula, and Ibn Khaldun. One of the earliest demographic studies in the modern period was Natural and Political Observations Made upon the Bills of Mortality by John Graunt. Among the study's findings was that one third of the children in London died before their sixteenth birthday. Mathematicians, such as Edmond Halley, developed the life table as the basis for life insurance mathematics. 
Richard Price was credited with the first textbook on life contingencies, published in 1771, followed later by Augustus de Morgan. At the end of the 18th century, Thomas Robert Malthus concluded that, if unchecked, populations would be subject to exponential growth. He feared that population growth would tend to outstrip growth in production, leading to ever-increasing famine; he is seen as the father of ideas of overpopulation. Later, more sophisticated and realistic models were presented by Benjamin Gompertz. The period 1860-1910 can be characterized as a period of transition wherein demography emerged from statistics as a separate field of interest. There are two types of data collection, direct and indirect, with several different methods of each type. Direct data come from vital statistics registries that track all births and deaths as well as certain changes in legal status such as marriage, divorce, and migration. In developed countries with good registration systems, registry statistics are the best method for estimating the number of births and deaths. A census is the other common direct method of collecting demographic data; a census is conducted by a national government and attempts to enumerate every person in a country. Analyses are conducted after a census to estimate how much over- or undercounting took place; these compare the sex ratios from the census data to those estimated from natural values and mortality data

9.
Coherent risk measure
–
A coherent risk measure is a function ϱ that satisfies properties of monotonicity, sub-additivity, homogeneity, and translational invariance. Consider a random outcome X viewed as an element of a linear space L of measurable functions, defined on an appropriate probability space. A functional ϱ: L → R ∪ {+∞} is said to be a coherent risk measure for L if it satisfies the following properties. Normalized: ϱ(0) = 0; that is, the risk of holding no assets is zero. Monotonicity: if Z1, Z2 ∈ L and Z1 ≤ Z2 a.s., then ϱ(Z1) ≥ ϱ(Z2); that is, if portfolio Z2 always has better values than portfolio Z1 under almost all scenarios, then the risk of Z2 should be less than the risk of Z1. In financial risk management, monotonicity implies a portfolio with greater future returns has less risk. Sub-additivity: if Z1, Z2 ∈ L, then ϱ(Z1 + Z2) ≤ ϱ(Z1) + ϱ(Z2); in financial risk management, sub-additivity implies diversification is beneficial. Positive homogeneity: if α ≥ 0 and Z ∈ L, then ϱ(αZ) = αϱ(Z); loosely speaking, in financial risk management, positive homogeneity implies the risk of a position is proportional to its size. Translation invariance: if A is a portfolio with guaranteed return a and Z ∈ L, then ϱ(Z + A) = ϱ(Z) − a. The portfolio A just adds cash a to the portfolio Z; in particular, if a = ϱ(Z) then ϱ(Z + A) = 0. In financial risk management, translation invariance implies that the addition of a sure amount of capital reduces the risk by the same amount. The notion of coherence has been subsequently relaxed; one such relaxation is based on a distortion function, or Wang transform function, g. The dual distortion function is g̃(x) = 1 − g(1 − x). Given a probability space, for any random variable X and any distortion function g we can define a new probability measure Q such that for any A ∈ F it follows that Q(A) = g(P(A)). It is well known that value at risk is not a coherent risk measure, as it does not respect the sub-additivity property; an immediate consequence is that value at risk might discourage diversification. 
Value at risk is, however, coherent under the assumption of elliptically distributed losses when the portfolio value is a linear function of the asset prices. However, in this case value at risk becomes equivalent to a mean-variance approach, where the risk of a portfolio is measured by the variance of the portfolio's return. The Wang transform (distortion) function for value at risk is g(x) = 1 if x ≥ 1 − α and 0 otherwise; the non-concavity of g proves the non-coherence of this risk measure. As an illustration, consider two independent bonds, each with a 4% probability of defaulting; since 4% is below 5%, each bond's 95% VaR is zero. However, if we held a portfolio that consisted of 50% of each bond by value, then the 95% VaR is 35%, since the probability of at least one of the bonds defaulting is 7.84%, which exceeds 5%
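The two-bond illustration can be reproduced by enumerating outcomes. The 70% loss-given-default below is an assumption reconstructed from the quoted figures (a single default then costs the 50/50 portfolio 35% of its value, and 7.84% = 1 − 0.96²):

```python
# Two independent bonds, each defaulting with probability 4% and (assumed)
# losing 70% of value on default. VaR_95 of each bond alone is 0 (4% < 5%),
# yet the 50/50 portfolio has VaR_95 = 35%: VaR violates sub-additivity.

def var(dist, alpha):
    """Smallest loss l with P(loss <= l) >= alpha; dist maps loss -> prob."""
    cum = 0.0
    for loss in sorted(dist):
        cum += dist[loss]
        if cum >= alpha:
            return loss

single = {0.0: 0.96, 70.0: 0.04}   # one bond's loss, in % of its value
portfolio = {                      # 50/50 mix of two independent bonds
    0.0: 0.96 * 0.96,              # neither defaults: 0.9216
    35.0: 2 * 0.96 * 0.04,         # exactly one defaults: 0.0768
    70.0: 0.04 * 0.04,             # both default: 0.0016
}

v_single = var(single, 0.95)       # 0.0
v_port = var(portfolio, 0.95)      # 35.0, exceeding v_single + v_single
```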