1.
Altman Z-score
–
The Z-score formula for predicting bankruptcy was published in 1968 by Edward I. Altman, who was at the time an Assistant Professor of Finance at New York University. The formula may be used to predict the probability that a firm will go into bankruptcy within two years. In academic studies, Z-scores are used to predict corporate defaults and as a control measure for the financial distress status of companies. The Z-score uses multiple corporate income and balance-sheet values to measure the health of a company; it is a combination of four or five common business ratios. Altman applied the method of discriminant analysis to a dataset of publicly held manufacturers. The original data sample consisted of 66 firms, half of which had filed for bankruptcy under Chapter 7; all businesses in the database were manufacturers, and small firms with assets of less than $1 million were eliminated. The original Z-score formula was as follows:

Z = 1.2 X1 + 1.4 X2 + 3.3 X3 + 0.6 X4 + 1.0 X5

where
X1 = Working Capital / Total Assets, measuring liquid assets in relation to the size of the company;
X2 = Retained Earnings / Total Assets, measuring profitability that reflects the company's age and earning power;
X3 = Earnings Before Interest and Taxes / Total Assets, measuring operating efficiency apart from tax and leveraging factors (it recognizes operating earnings as being important to long-term viability);
X4 = Market Value of Equity / Book Value of Total Liabilities, adding a market dimension that can surface security-price fluctuation as a red flag;
X5 = Sales / Total Assets, a standard measure of total asset turnover.

Altman found that the average Z-score for the bankrupt group was −0.25. Altman's work built upon research by accounting researcher William Beaver and others. From the 1930s onward, Mervyn and others had collected matched samples and assessed that various accounting ratios appeared to be valuable in predicting bankruptcy. Altman's Z-score is a version of the discriminant analysis technique of R. A. Fisher. William Beaver's work, published in 1966 and 1968, was the first to apply a statistical method; Beaver used univariate analysis to evaluate the importance of each of several accounting ratios, considering one ratio at a time
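As an illustration, the original formula above can be evaluated directly; all balance-sheet figures below are invented (say, in $ millions):

```python
# Altman Z-score with the original 1968 coefficients.
# All input figures are hypothetical.
def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales,
             total_assets, total_liabilities):
    x1 = working_capital / total_assets           # liquidity
    x2 = retained_earnings / total_assets         # accumulated profitability
    x3 = ebit / total_assets                      # operating efficiency
    x4 = market_value_equity / total_liabilities  # market dimension
    x5 = sales / total_assets                     # asset turnover
    return 1.2*x1 + 1.4*x2 + 3.3*x3 + 0.6*x4 + 1.0*x5

z = altman_z(working_capital=1.5, retained_earnings=2.0, ebit=0.8,
             market_value_equity=5.0, sales=6.0,
             total_assets=10.0, total_liabilities=4.0)
```

For these made-up inputs the five ratios are 0.15, 0.20, 0.08, 1.25, and 0.60, giving Z ≈ 2.07.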

2.
Incremental capital-output ratio
–
The incremental capital-output ratio (ICOR) is the ratio of investment to growth, which is equal to 1 divided by the marginal product of capital. The higher the ICOR, the lower the productivity, or efficiency, of capital. The ICOR can be thought of as a measure of the inefficiency with which capital is used; in most countries the ICOR is in the neighborhood of 3. It is a topic discussed in the study of economic growth. See Reinhart, Carmen M., "Regional and Global Capital Flows: Macroeconomic Causes and Consequences", pp. 42–45
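Since the ICOR is investment divided by the growth it produces, it can be sketched in a line; the figures below are invented, chosen to land on the typical value of about 3:

```python
# ICOR = investment (as a share of GDP) / GDP growth rate.
# Both figures are hypothetical.
investment_share = 0.24   # gross investment as a fraction of GDP
gdp_growth = 0.08         # annual real GDP growth rate
icor = investment_share / gdp_growth
```

An economy investing 24% of GDP to achieve 8% growth has an ICOR of 3; if the same investment bought only 6% growth, the ICOR would rise to 4, signaling less efficient use of capital.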

3.
Bias ratio
–
The bias ratio is an indicator used in finance to analyze the returns of investment portfolios and to perform due diligence. This metric measures abnormalities in the distribution of returns that indicate the presence of bias in subjective pricing. The bias ratio measures how far the returns from an investment portfolio, e.g. one managed by a hedge fund, are from an unbiased distribution. Thus the bias ratio of an equity index will usually be close to 1. However, if a fund smooths its returns using subjective pricing of assets, the bias ratio will be higher; as such, it can identify the presence of illiquid securities where they are not expected. The bias ratio was first defined by Adil Abdulali, a risk manager at the investment firm Protégé Partners. The concepts behind the bias ratio were formulated between 2001 and 2003 and used privately to screen money managers. The first public discussions on the subject took place in 2004 at New York University's Courant Institute and in 2006 at Columbia University. In 2006, the bias ratio was published in a letter to investors and made available to the public by Riskdata, a risk-management solution provider, which included it in its standard suite of analytics. The bias ratio has since been used by a number of risk-management professionals to spot suspicious funds that subsequently turned out to be frauds. The most spectacular example of this was reported in the Financial Times on 22 January 2009 under the title "Bias ratio seen to unmask Madoff".
Imagine that you are a hedge fund manager who invests in securities that are hard to value, such as mortgage-backed securities. Your peer group consists of funds with similar mandates, and all have track records with high Sharpe ratios, very few down months, and strong investor demand. You are keenly aware that your potential investors look carefully at the characteristics of returns. Furthermore, assume that no pricing service can reliably price your portfolio, and that the assets are often sui generis with no quoted market. In order to price the portfolio for return calculations, you poll dealers for prices on each security monthly; the following real-world example illustrates this theoretical construct. When pricing this portfolio, standard market practice allows a manager to discard outliers; market participants contend that outliers are difficult to characterize methodically and thus use the heuristic rule "you know it when you see it". Visible outliers are judged against the particular characteristics and liquidity of the security as well as the market environment in which quotes are solicited. After discarding outliers, a manager sums up the relevant figures and determines the net asset value (NAV). Now consider what happens when this NAV calculation results in a small monthly loss, such as −0.01%. Lo and behold, just before the CFO publishes the return, it emerges that throwing out one more quote would raise the monthly return to +0.01%. A manager with high integrity faces two pricing alternatives: close the books with the small loss, or discard the extra quote and report the small gain. In the original illustration, the smooth blue histogram represents a manager who employed Option 1, and the kinked red histogram represents a manager who chose Option 2 in those critical months
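The entry above does not reproduce the formula, but the bias ratio is commonly described as the count of returns in the interval [0, +σ] divided by one plus the count of returns in [−σ, 0), where σ is the standard deviation of the return series. A minimal sketch under that assumption:

```python
import statistics

# Bias ratio as commonly described (this formula is stated here as an
# assumption, not taken from the text above): returns in [0, +sigma]
# divided by one plus the returns in [-sigma, 0).
def bias_ratio(returns):
    sigma = statistics.pstdev(returns)
    at_or_above_zero = sum(1 for r in returns if 0 <= r <= sigma)
    just_below_zero = sum(1 for r in returns if -sigma <= r < 0)
    return at_or_above_zero / (1 + just_below_zero)

# A roughly symmetric (unbiased-looking) return series scores near 1.
br = bias_ratio([-0.02, -0.01, 0.0, 0.01, 0.02])
```

A smoothed track record, with small losses nudged into small gains, piles returns just above zero and empties the band just below it, pushing the ratio well above 1.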

4.
Cyclically adjusted price-to-earnings ratio
–
The cyclically adjusted price-to-earnings ratio, commonly known as CAPE, the Shiller P/E, or the P/E10 ratio, is a valuation measure usually applied to the US S&P 500 equity market. It is defined as price divided by the average of ten years of earnings, adjusted for inflation. It is not intended as an indicator of impending market crashes, although high CAPE values have been associated with such events. Value investors Benjamin Graham and David Dodd argued for smoothing a firm's earnings over the past five to ten years in their classic text Security Analysis, noting that one-year earnings were too volatile to offer a good idea of a firm's true earning power. In a 1988 paper, economists John Y. Campbell and Robert Shiller concluded that a long moving average of real earnings helps to forecast future real dividends, which in turn are correlated with returns on stocks. The idea is to take an average of earnings and adjust for inflation in order to forecast future returns. Shiller later popularized the ten-year version of Graham and Dodd's P/E as a way to value the stock market; he would share the Nobel prize in 2013 for his work in the empirical analysis of asset prices. The average CAPE value for the 20th century was 15.21; CAPE values above this correspond to lower subsequent returns, and vice versa. In 2014, Shiller expressed concern that the prevailing CAPE of over 25 was a level that had been surpassed since 1881 in only three previous periods: the years clustered around 1929, 1999, and 2007. Major market drops followed those peaks. The measure exhibits a significant amount of variation over time and has been criticized as not always accurate in signaling market tops or bottoms. One proposed reason for this variation is that CAPE does not take into account prevailing risk-free interest rates; a common debate is whether the inverse CAPE ratio should be divided by the yield on 10-year Treasuries.
This debate regained currency in 2014 as the CAPE ratio reached a historically high level in combination with historically very low rates on 10-year Treasuries. A high CAPE ratio has been linked to the phrase "irrational exuberance", after Fed Chairman Alan Greenspan coined the term in 1996. The CAPE ratio reached an all-time high during the 2000 dot-com bubble. It also reached a high level again during the housing bubble up to 2007, before the crash of the Great Recession. Originally derived for the US equity market, the CAPE has since been calculated for 15 other markets. Research by Norbert Keimling has demonstrated that the same relation between CAPE and future equity returns exists in every equity market so far examined. It also suggests that comparison of CAPE values can assist in identifying the best markets for future equity returns beyond the US market
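The P/E10 computation itself is simple; a minimal sketch, assuming the earnings series has already been converted to real (inflation-adjusted) terms and using invented figures:

```python
# CAPE (P/E10): current price over the ten-year average of
# inflation-adjusted earnings. All figures are hypothetical, and the
# earnings are assumed to be already expressed in real terms.
def cape(price, real_eps_last_10_years):
    avg_earnings = sum(real_eps_last_10_years) / len(real_eps_last_10_years)
    return price / avg_earnings

ratio = cape(1800.0, [90, 95, 100, 105, 110, 115, 120, 125, 130, 110])
```

Averaging over ten years is what makes the measure "cyclically adjusted": the dip in the final year of the made-up series barely moves the denominator, whereas a one-year P/E would spike.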

5.
Deleveraging
–
At the micro-economic level, deleveraging refers to the reduction of the leverage ratio, or the percentage of debt in the balance sheet of a single economic entity, such as a household or a firm. It is the opposite of leveraging, which is the practice of borrowing money to acquire assets and multiply gains. At the macro-economic level, deleveraging of an economy refers to the simultaneous reduction of debt levels in multiple sectors, including the private sectors and the government sector; it is usually measured as a decline of the debt-to-GDP ratio in the national accounts. The deleveraging of an economy following a financial crisis has significant macro-economic consequences and is often associated with severe recessions. While leverage allows a borrower to acquire assets and multiply gains in good times, during a market downturn, when the value of assets and income plummets, a highly leveraged borrower faces heavy losses due to the obligation to service high levels of debt. If the value of assets falls below the value of debt, the borrower faces a high risk of default. Deleveraging reduces the total amplification of market volatility on the borrower's balance sheet: it means giving up potential gains in good times in exchange for a lower risk of heavy loss. However, precaution is not the most common reason for deleveraging; more often it is forced, as lenders lower the leverage offered by asking for a higher level of collateral. It is estimated that from 2006 to 2008, the average down payment required for a buyer in the US increased from 5% to 25%. To deleverage, one needs to raise cash to pay down debt. Deleveraging is frustrating and painful for private-sector entities in distress: selling assets at a discount can itself lead to heavy losses. In addition, dysfunctional security and credit markets make it difficult to raise capital from the public markets, and these factors can all hinder the sources of private capital and the effort of deleveraging.
Deleveraging of an economy refers to the reduction of leverage levels in multiple private and public sectors. Almost every major financial crisis in modern history has been followed by a significant period of deleveraging. The process of deleveraging usually begins only a few years after the start of the financial crisis, mainly because the continuing rise of government debt (in the case of the Great Recession) has been offsetting the deleveraging in the private sectors of many countries. After the 2008 financial crisis, economists expected deleveraging to occur globally; instead, the total debt of all nations combined increased by $57 trillion from 2007 to 2015, and government debt increased by $25 trillion. According to the McKinsey Global Institute, from 2007 to 2015 only five developing nations reduced their debt ratios, and as of 2015 the global ratio of debt to gross domestic product had risen by 17 percentage points since 2007. According to a McKinsey Global Institute report, there are four archetypes of deleveraging processes: "belt-tightening", in which an economy reduces spending and goes through a prolonged period of austerity in order to increase net savings; "high inflation", in which high inflation mechanically increases nominal GDP growth, thus reducing the debt-to-GDP ratio; "massive default", which usually comes after a severe currency crisis; and "growing out of debt", in which rapid real GDP growth reduces the relative debt burden
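The micro-level leverage ratio described above, debt as a share of the balance sheet, can be illustrated with hypothetical figures:

```python
# Hypothetical household balance sheet: deleveraging by selling
# assets (at par, i.e. no fire-sale discount) to pay down debt.
assets = 500_000.0
debt = 400_000.0
leverage_ratio = debt / assets                    # 0.80

sold = 100_000.0                                  # sale proceeds repay debt
leverage_after = (debt - sold) / (assets - sold)  # 0.75
```

Note that repaying a quarter of the debt only lowers the ratio from 0.80 to 0.75, because the asset side shrinks too; if the assets could only be sold at a discount, the ratio would fall even less, which hints at why deleveraging in distress is slow and painful.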

6.
Beta (finance)
–
In finance, the beta of an investment indicates whether the investment is more or less volatile than the market as a whole. In general, a beta less than 1 indicates that the investment is less volatile than the market. Volatility is measured as the fluctuation of the price around the mean. Beta is a measure of the risk arising from exposure to general market movements, as opposed to idiosyncratic factors. The market portfolio of all investable assets has a beta of exactly 1. A beta below 1 can indicate either an investment with lower volatility than the market, or a volatile investment whose price movements are not highly correlated with the market. An example of the first is a Treasury bill: the price does not go up or down a lot. An example of the second is gold: the price of gold does go up and down a lot, but not in the same direction or at the same time as the market. A beta greater than 1 means that the asset is both volatile and tends to move up and down with the market; an example is a stock in a big technology company. Negative betas are possible for investments that tend to go down when the market goes up, and vice versa, but there are few fundamental investments with consistent and significant negative betas. Beta is important because it measures the risk of an investment that cannot be reduced by diversification. It does not measure the risk of an investment held on a stand-alone basis. In the capital asset pricing model, beta risk is the only kind of risk for which investors should receive an expected return higher than the risk-free rate of interest. The definition above covers only theoretical beta; the term is used in many related ways in finance. For example, the betas commonly quoted in mutual fund analyses measure the exposure to a specific fund benchmark rather than to the overall market; thus they measure the amount of risk the fund adds to a portfolio of funds of the same type. Beta decay refers to the tendency for a company with a high beta coefficient to have its beta coefficient decline toward the market beta; it is an example of regression toward the mean.
A statistical estimate of beta is calculated by a regression method. Since practical data are available as a discrete time series of samples, the statistical model is

r_a,t = α + β r_b,t + ε_t

where r_a,t is the return of the asset and r_b,t the return of the benchmark in period t. The best estimates for α and β are those for which Σ ε_t² is as small as possible. A common expression for beta is

β = Cov(r_a, r_b) / Var(r_b)

where Cov and Var are the covariance and variance operators. Beta can be computed for prices in the past, where the data are known; however, what most people are interested in is future beta, which relates to risks going forward. Estimating future beta is a difficult problem; one guess is that future beta equals historical beta
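The covariance/variance formula above can be sketched directly; the two monthly return series below are invented, with the asset moving exactly twice as much as the benchmark so that the estimate comes out at 2:

```python
# Historical beta via beta = Cov(r_a, r_b) / Var(r_b),
# using made-up monthly return series.
def beta(asset_returns, benchmark_returns):
    n = len(asset_returns)
    mean_a = sum(asset_returns) / n
    mean_b = sum(benchmark_returns) / n
    cov = sum((a - mean_a) * (b - mean_b)
              for a, b in zip(asset_returns, benchmark_returns)) / n
    var = sum((b - mean_b) ** 2 for b in benchmark_returns) / n
    return cov / var

b = beta([0.02, -0.04, 0.06, -0.02], [0.01, -0.02, 0.03, -0.01])
```

This is the historical estimate only; as the text notes, using it as a forecast of future beta is simply one (common) guess.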

7.
Greeks (finance)
–
In mathematical finance, the Greeks are the quantities representing the sensitivity of the price of derivatives, such as options, to changes in underlying parameters. The name is used because the most common of these sensitivities are denoted by Greek letters. Collectively these have also been called the risk sensitivities, risk measures, or hedge parameters. The Greeks are vital tools in risk management; for this reason, those Greeks which are particularly useful for hedging, such as delta, theta, and vega, are well defined for measuring changes in price, time, and volatility. The most common of the Greeks are the first-order derivatives Delta, Vega, Theta, and Rho, as well as the second-order derivative Gamma. The remaining sensitivities are common enough that they have standard names, but no list of them is exhaustive. The use of Greek-letter names is presumably by extension from the common finance terms alpha and beta; several names, such as "vega" and "zomma", are invented but sound similar to Greek letters. The names "color" and "charm" presumably derive from the use of those terms for exotic properties of quarks in particle physics.
Delta, Δ, measures the rate of change of the option value with respect to changes in the underlying asset's price: it is the first derivative of the value V of the option with respect to the underlying instrument's price S. The difference between the delta of a call and the delta of a put at the same strike is close to, but not in general equal to, one; by put–call parity, long a call and short a put equals a forward F. Delta values are commonly presented as a percentage of the total number of shares represented by the option contract. This is convenient because the option will behave like the number of shares indicated by the delta: for example, if a portfolio of 100 American call options on XYZ each has a delta of 0.25, the portfolio behaves like 2,500 shares of XYZ for small price movements (100 contracts of 100 shares each, times 0.25). The sign and percentage are often dropped; the sign is implicit in the option type, and the percentage is understood. The most commonly quoted deltas are 25 delta put, 25 delta call, 50 delta put, and 50 delta call. 50 delta put and 50 delta call are not quite identical, due to spot and forward differing by the discount factor, but they are often conflated.
Delta is always positive for long calls and negative for long puts. Since the delta of the underlying asset is always 1.0, a trader can delta-hedge an entire position in the underlying by buying or shorting the number of shares indicated by the total delta. For example, if the delta of a portfolio of options in XYZ, expressed as shares of the underlying, is +2.75, the trader can delta-hedge by selling short 2.75 shares of the underlying; the portfolio will then retain its total value regardless of which direction the price of XYZ moves, for small moves. Delta is close to, but not identical with, the percent moneyness of an option; for this reason some option traders use the absolute value of delta as an approximation for percent moneyness. For example, if a call option has a delta of 0.15, a trader might estimate that it has roughly a 15% chance of expiring in-the-money; similarly, if a put contract has a delta of −0.25, roughly a 25% chance. At-the-money puts and calls have deltas of approximately −0.5 and 0.5 respectively, with a slight bias towards higher deltas for ATM calls
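As a concrete sketch, the delta of a European option on a non-dividend-paying stock under the Black–Scholes model (a standard textbook formula, used here only to make the Greek concrete) is N(d1) for a call and N(d1) − 1 for a put:

```python
import math

# Black-Scholes delta for a European option on a non-dividend-paying
# stock: N(d1) for a call, N(d1) - 1 for a put.
def bs_delta(S, K, T, r, sigma, kind="call"):
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    n_d1 = 0.5 * (1.0 + math.erf(d1 / math.sqrt(2.0)))  # standard normal CDF
    return n_d1 if kind == "call" else n_d1 - 1.0

call_delta = bs_delta(S=100, K=100, T=1.0, r=0.0, sigma=0.2)
put_delta = bs_delta(S=100, K=100, T=1.0, r=0.0, sigma=0.2, kind="put")
```

For these at-the-money parameters the call delta comes out a little above 0.5 (about 0.54), illustrating the slight upward bias for ATM calls noted above; the call and put deltas at the same strike differ by exactly 1 in this model, which is the put–call parity relationship in its sharpest form.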

8.
DuPont analysis
–
DuPont analysis is an expression which breaks return on equity (ROE) into three parts. The name comes from the DuPont Corporation, which started using this formula in the 1920s; DuPont explosives salesman Donaldson Brown invented the formula in an internal efficiency report in 1912. The DuPont identity decomposes ROE as:

ROE = (Net income / Sales) × (Sales / Total assets) × (Total assets / Equity)
    = Profitability × Asset-use efficiency × Financial leverage

This analysis enables the analyst to understand the source of superior return by comparison with companies in similar industries. The DuPont identity is less useful for industries such as investment banking, and variations of it have been developed for industries where the elements are only weakly meaningful. Other industries, such as fashion, may derive a substantial portion of their competitive advantage from selling at a higher margin rather than from higher sales; for high-end fashion brands, increasing sales without sacrificing margin may be critical. The DuPont identity allows analysts to determine which of the elements is dominant in any change of ROE. Certain types of retail operations, particularly stores, may have very low profit margins on sales; groceries, in contrast, may have high turnover. The ROE of such firms may be especially dependent on performance of this metric; for example, same-store sales of retailers is considered important as an indication that the firm is deriving greater profits from existing stores. Some sectors, such as the financial sector, rely on high leverage to generate acceptable ROE, while other industries would see high levels of leverage as unacceptably risky. DuPont analysis enables third parties that rely primarily on a company's financial statements to compare leverage among similar companies.
The return on assets (ROA) ratio developed by DuPont for its own use is now used by many firms to evaluate how effectively assets are used. It measures the combined effects of profit margins and asset turnover:

ROA = (Net income / Sales) × (Sales / Total assets) = Net income / Total assets

The return on equity ratio is a measure of the rate of return to stockholders; decomposing the ROE into the various factors influencing company performance is often called the DuPont system. In the extended decomposition, the tax-burden term, Net income / Pretax income, is the proportion of the company's profits retained after paying income taxes, and the interest-burden term, Pretax income / EBIT, will be 1.00 for a firm with no debt or financial leverage. The company's operating income margin, or return on sales, is EBIT / Sales: the operating income per dollar of sales. The company's leverage ratio is Total assets / Equity, which is equal to the firm's debt-to-equity ratio plus 1; this is a measure of financial leverage
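The three-factor identity can be checked numerically; the figures below are hypothetical, and the product of the three ratios collapses to plain net income over equity:

```python
# Three-factor DuPont identity with hypothetical figures:
# ROE = profit margin x asset turnover x financial leverage.
net_income = 100.0
sales = 1_000.0
total_assets = 2_000.0
equity = 800.0

profit_margin = net_income / sales       # profitability: 0.10
asset_turnover = sales / total_assets    # asset-use efficiency: 0.50
leverage = total_assets / equity         # financial leverage: 2.50
roe = profit_margin * asset_turnover * leverage
# Sales and Total assets cancel, so roe equals net_income / equity.
```

Here ROE is 12.5%, and the decomposition shows that leverage (2.5×) contributes more to it than the thin 10% margin or the 0.5× turnover, exactly the kind of attribution the identity is used for.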