1.
Continuous-repayment mortgage
–
Analogous to continuous compounding, a continuous annuity is an ordinary annuity in which the payment interval is narrowed indefinitely. A continuous-repayment mortgage is a mortgage paid by means of a continuous annuity. Mortgages are generally settled over a period of years by a series of fixed regular payments, commonly referred to as an annuity; summation of the cash-flow elements and accumulated interest is effected by integration, as shown. It is assumed that the compounding interval and the payment interval are equal, i.e. compounding of interest always occurs at the same time as payment is deducted. Application of the equation yields a number of results relevant to the process it describes. Although this article focuses primarily on mortgages, the methods employed are relevant to any situation in which payment or saving is effected by a regular stream of fixed-interval payments. To generalize, replace n with NT, where T is the loan period in years; in this more general form of the equation we calculate x as the fixed payment corresponding to frequency N. For example, if N = 365, x corresponds to a daily fixed payment. As may be seen, the curves are virtually indistinguishable: calculations effected using the model differ from those using the Excel PV function by a mere 0.3%. To analyze the time behaviour, define the reverse-time variable z = T − t. The balance function may then be recognized as the solution to a first-order time differential equation; the key characteristics of such equations are explained in detail at RC circuits. For home owners with mortgages, the important parameter to keep in mind is the time constant of the equation, which is simply the reciprocal of the annual interest rate r. Using a table of Laplace transforms and their time-domain equivalents, P may be determined; the sum of these interest and principal payments must equal the cumulative fixed payments at time t, i.e. M_a·t.
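The fixed payments discussed above can be sketched in code. This is a minimal illustration using the standard continuous-annuity result M_a = P_0·r/(1 − e^(−rT)) and the ordinary-annuity payment at frequency N; the loan figures and function names are illustrative, not taken from the article:

```python
import math

def continuous_annual_payment_rate(P0, r, T):
    """Annual rate of payment M_a for a continuous-repayment mortgage:
    M_a = P0 * r / (1 - exp(-r*T)), principal P0, annual rate r, term T years."""
    return P0 * r / (1 - math.exp(-r * T))

def discrete_payment(P0, r, T, N):
    """Fixed payment x at frequency N per year (ordinary annuity),
    with periodic rate i = r/N over n = N*T periods:
    x = P0 * i / (1 - (1 + i)**(-N*T))."""
    i = r / N
    return P0 * i / (1 - (1 + i) ** (-N * T))

# Illustrative loan: 100,000 at 5% over 25 years
Ma = continuous_annual_payment_rate(100_000, 0.05, 25)   # continuous model
x_daily = discrete_payment(100_000, 0.05, 25, 365)       # daily fixed payment
# x_daily * 365 is very close to Ma, reflecting how the discrete
# and continuous models converge as the payment interval narrows.
```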
The total cost of a loan can be expressed in terms of s = rT. Defining a loan cost factor C(s) = s/(1 − e^(−s)) such that the total repaid C = P_0·C(s), C(s) is the cost per unit of currency loaned. When s is very large, e^(−s) is small, so C(s) ≈ s and thus loan cost C ≈ P_0·rT, i.e. simply the annual rate multiplied by the loan period and the principal. By way of example, consider a loan of 1,000,000 at 10% repaid over 20 years: C = 1,000,000 × 2/(1 − e^(−2)) ≈ 2.313 × 10^6. The product rT is easily obtained, and the behaviour is best illustrated by plotting the cost-factor function over a range of s values; the near-linear behaviour of the function for larger values of s is clear.
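The worked example above can be checked directly; this sketch just evaluates C = P_0·s/(1 − e^(−s)) with the example's numbers (the function name is illustrative):

```python
import math

def total_repayment(P0, r, T):
    """Total repaid under the continuous-repayment model:
    C = P0 * s / (1 - exp(-s)), where s = r*T."""
    s = r * T
    return P0 * s / (1 - math.exp(-s))

# Loan of 1,000,000 at 10% over 20 years (s = 2)
C = total_repayment(1_000_000, 0.10, 20)
print(C)  # ≈ 2.313e6, matching the figure in the text
```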

2.
Efficient frontier
–
The efficient frontier is a concept in modern portfolio theory introduced by Harry Markowitz in 1952. It refers to investment portfolios which occupy the "efficient" parts of the risk–return spectrum. Formally, it is the set of portfolios which satisfy the condition that no other portfolio exists with a higher expected return but the same standard deviation of return. A combination of assets, i.e. a portfolio, is referred to as "efficient" if it has the best possible expected level of return for its level of risk. Every possible combination of risky assets can be plotted in risk–expected return space, and the collection of all such portfolios defines a region in this space. In the absence of the opportunity to hold a risk-free asset, the positively sloped top boundary of this region is a portion of a hyperbola (when risk is measured by the standard deviation of return) and is called the efficient frontier.
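For two risky assets the frontier can be traced in closed form. The sketch below uses hypothetical expected returns, volatilities, and correlation; the minimum-variance weight formula is the standard two-asset result, not something specific to this article:

```python
import math

# Hypothetical two-asset inputs: expected returns, volatilities, correlation
mu1, mu2 = 0.08, 0.13
s1, s2 = 0.12, 0.20
rho = 0.3

def portfolio(w):
    """Mean and standard deviation of a portfolio with weight w in asset 1."""
    mean = w * mu1 + (1 - w) * mu2
    var = (w * s1) ** 2 + ((1 - w) * s2) ** 2 + 2 * w * (1 - w) * rho * s1 * s2
    return mean, math.sqrt(var)

# Closed-form minimum-variance weight for two assets
cov = rho * s1 * s2
w_min = (s2 ** 2 - cov) / (s1 ** 2 + s2 ** 2 - 2 * cov)
mv_mean, mv_std = portfolio(w_min)
# mv_std is below either asset's individual volatility: diversification.
# Portfolios with more weight in the higher-return asset than w_min allows
# trace the positively sloped (efficient) branch of the frontier.
```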

3.
Alternative beta
–
Alternative beta is the concept of managing volatile alternative investments, often through the use of hedge funds. Alternative beta is also referred to as alternative risk premia. Researcher Lars Jaeger says that the return from an investment mainly results from exposure to systematic risk factors. At its most basic, a fund is an investment vehicle that pools capital from a number of investors and invests in securities; it is administered by a management firm and is often structured as a limited partnership or limited liability company. For an investment that involves risk to be worthwhile, its returns must be higher than those of a risk-free investment; the risk is related to volatility. A measure of the factors influencing an investment's volatility is its beta: a measure of the risk arising from exposure to general market movements, as opposed to idiosyncratic factors. A beta below 1 can indicate either an investment with lower volatility than the market, or a volatile investment whose price movements are not highly correlated with the market. An example of the first is a Treasury bill: the price does not go up or down a lot, so it has a low beta. An example of the second is gold: the price of gold does go up and down a lot, but not in the same direction or at the same time as the market. A beta above 1 generally means that the asset is both volatile and tends to move up and down with the market; an example is a stock in a big technology company. Negative betas are possible for investments that tend to go down when the market goes up. There are few fundamental investments with consistent and significant negative betas, but some derivatives, like equity put options, can have large negative beta values. Investments with a high beta value are often called beta investments, as opposed to alpha investments, which typically have lower volatility. Separating returns into alpha and beta can also be applied to determine the amount of each, and hence the fees to charge; the consensus is to charge higher fees for alpha, since it is mostly viewed as skill-based.
Investors have started to question whether hedge funds are actually alpha investments; this issue was raised in the 1997 paper "Empirical Characteristics of Dynamic Trading Strategies: The Case of Hedge Funds" by William Fung and David Hsieh. Following this paper, several groups of academics started to explain past hedge fund returns using various systematic risk factors. Subsequent work has discussed whether investable strategies based on such factors can not only explain past returns but also replicate future ones. Traditional betas can be seen as related to investments the common investor would already be experienced with; they are typically represented through indexation, and the techniques employed here are what is called "long only". The underlying non-traditional investment risks are often seen as being riskier, as investors are less familiar with them. Viewed from this perspective, investment techniques and strategies are the means either to capture risk premia or to obtain excess returns.

4.
Beta (finance)
–
In finance, the beta of an investment indicates whether the investment is more or less volatile than the market as a whole. In general, a beta less than 1 indicates that the investment is less volatile than the market; volatility is measured as the fluctuation of the price around the mean. Beta is a measure of the risk arising from exposure to general market movements, as opposed to idiosyncratic factors. The market portfolio of all investable assets has a beta of exactly 1. A beta below 1 can indicate either an investment with lower volatility than the market, or a volatile investment whose price movements are not highly correlated with the market. An example of the first is a Treasury bill: the price does not go up or down a lot. An example of the second is gold: the price of gold does go up and down a lot, but not in the same direction or at the same time as the market. A beta greater than one means that the asset is both volatile and tends to move up and down with the market; an example is a stock in a big technology company. Negative betas are possible for investments that tend to go down when the market goes up, and vice versa; there are few fundamental investments with consistent and significant negative betas. Beta is important because it measures the risk of an investment that cannot be reduced by diversification. It does not measure the risk of an investment held on a stand-alone basis. In the capital asset pricing model, beta risk is the only kind of risk for which investors should receive an expected return higher than the risk-free rate of interest. The definition above covers only theoretical beta; the term is used in many related ways in finance. For example, the betas commonly quoted in mutual fund analyses often measure exposure to a benchmark of funds of the same type rather than to the whole market; thus they measure the amount of risk the fund adds to a portfolio of funds of the same type. Beta decay refers to the tendency for a company with a high beta coefficient to have its beta coefficient decline to the market beta; it is an example of regression toward the mean.
A statistical estimate of beta is calculated by a regression method. Since practical data are available as a discrete time series of samples, the statistical model is r_{a,t} = α + β·r_{b,t} + ε_t, where r_a is the return of the asset and r_b the return of the benchmark. The best estimates for α and β are those for which Σε_t² is as small as possible; a common expression for beta is β = Cov(r_a, r_b) / Var(r_b), where Cov and Var are the covariance and variance operators. Beta can be computed for prices in the past, where the data are known. However, what most people are interested in is future beta, which relates to risks going forward. Estimating future beta is a difficult problem; one guess is that future beta equals historical beta.
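The covariance/variance expression above translates directly into code. This is a minimal sketch on synthetic return series (the data and function name are illustrative); the asset series is built with a known beta of 2 and alpha of 0.001, which the estimator then recovers:

```python
def beta_alpha(ra, rb):
    """Estimate beta = Cov(ra, rb) / Var(rb) and alpha = mean(ra) - beta*mean(rb)
    from discrete return samples (ordinary least squares for one regressor)."""
    n = len(ra)
    ma = sum(ra) / n
    mb = sum(rb) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(ra, rb)) / n
    var = sum((b - mb) ** 2 for b in rb) / n
    beta = cov / var
    return beta, ma - beta * mb

rb = [0.01, -0.02, 0.03, 0.00, 0.015]        # benchmark returns
ra = [2 * r + 0.001 for r in rb]             # synthetic asset: beta 2, alpha 0.001
b, a = beta_alpha(ra, rb)
print(b, a)  # recovers beta = 2.0, alpha = 0.001 (the residuals are zero here)
```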

5.
Binomial options pricing model
–
In finance, the binomial options pricing model (BOPM) provides a generalizable numerical method for the valuation of options. The binomial model was first proposed by Cox, Ross and Rubinstein in 1979. Essentially, the model uses a "discrete-time" model of the varying price over time of the underlying financial instrument. In general, Georgiadis showed that binomial options pricing models do not have closed-form solutions. The binomial options pricing model approach has been widely used since it is able to handle a variety of conditions for which other models cannot easily be applied. This is largely because the BOPM is based on the description of an underlying instrument over a period of time rather than at a single point. As a consequence, it is used to value American options, which are exercisable at any time in an interval, as well as Bermudan options, which are exercisable at specific instants of time. Being relatively simple, the model is readily implementable in computer software. Although computationally slower than the Black–Scholes formula, it is more accurate, particularly for longer-dated options on securities with dividend payments. For these reasons, various versions of the model are widely used by practitioners in the options markets. When simulating a small number of time steps, Monte Carlo simulation will be more computationally time-consuming than the BOPM; however, the worst-case runtime of the BOPM is O(2^n), where n is the number of time steps in the simulation. Monte Carlo simulations will generally have a polynomial time complexity, and will be faster for large numbers of simulation steps. Monte Carlo simulations are also susceptible to sampling errors, whereas binomial techniques use discrete time units; this becomes more true the smaller the discrete units become. The binomial pricing model traces the evolution of the option's key underlying variables in discrete time. This is done by means of a binomial lattice (tree), for a number of time steps between the valuation and expiration dates.
Each node in the lattice represents a possible price of the underlying at a given point in time. Valuation is performed iteratively, starting at each of the final nodes (those reached at expiration) and working backwards; the value computed at each stage is the value of the option at that point in time. The tree of prices is produced by working forward from the valuation date to the expiration date. At each step, it is assumed that the underlying instrument will move up or down by a specific factor (u or d) per step of the tree. So, if S is the current price, then in the next period the price will be either S_up = S·u or S_down = S·d. The up and down factors are calculated using the underlying volatility σ and the duration of a step, t. From the condition that the variance of the log of the price is σ²t, we have u = e^(σ√t) and d = e^(−σ√t) = 1/u. Above is the original Cox, Ross & Rubinstein method; other techniques for generating the lattice also exist.
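The forward tree construction and backward induction described above can be sketched as follows. This is an illustrative CRR implementation (parameter values in the comments are hypothetical, not from the article); for a European option with many steps the result converges toward the Black–Scholes value:

```python
import math

def crr_price(S, K, r, sigma, T, n, call=True, american=False):
    """Cox-Ross-Rubinstein binomial option price.
    u = exp(sigma*sqrt(dt)), d = 1/u; risk-neutral up-probability
    p = (exp(r*dt) - d) / (u - d); then backward induction over the lattice."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    disc = math.exp(-r * dt)
    p = (math.exp(r * dt) - d) / (u - d)
    # Payoffs at the final (expiration) nodes: spot = S * u**j * d**(n-j)
    values = [max(0.0, (S * u**j * d**(n - j) - K) if call
                       else (K - S * u**j * d**(n - j)))
              for j in range(n + 1)]
    # Step backwards through the tree, discounting the expected value;
    # for American options, compare against immediate exercise at each node.
    for i in range(n - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            if american:
                spot = S * u**j * d**(i - j)
                exercise = (spot - K) if call else (K - spot)
                cont = max(cont, exercise)
            values[j] = cont
    return values[0]

# Example (hypothetical contract): at-the-money 1-year call,
# S=100, K=100, r=5%, sigma=20%, 500 steps -- close to the
# Black-Scholes value of about 10.45 for the same inputs.
price = crr_price(100, 100, 0.05, 0.2, 1.0, 500)
```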

6.
Financial correlation
–
Financial correlations measure the relationship between the changes of two or more financial variables over time. For example, the prices of equity stocks and fixed-interest bonds often move in opposite directions; in this case, stock and bond prices are negatively correlated. Financial correlations play a key role in modern finance: under the capital asset pricing model, an increase in diversification increases the return/risk ratio. Measures of risk include value at risk and expected shortfall. There are several statistical measures of the degree of financial correlations. The Pearson product-moment correlation coefficient is sometimes applied to financial correlations. However, the limitations of the Pearson correlation approach in finance are evident. First, linear dependencies, as assessed by the Pearson correlation coefficient, do not appear often in finance. Second, linear correlation measures are only natural dependence measures if the joint distribution of the variables is elliptical. Third, a zero Pearson product-moment correlation coefficient does not necessarily mean independence, because only the first two moments are considered: for example, Y = X² (with X symmetric about zero) leads to a Pearson correlation coefficient of zero, even though Y is fully determined by X. Since the Pearson approach is unsatisfactory for modelling financial correlations, quantitative analysts have developed specific financial correlation measures. Accurately estimating correlations requires the modelling process of the marginals to incorporate characteristics such as skewness. Not accounting for these attributes can lead to severe estimation error in the correlations. In a practical application in portfolio optimization, accurate estimation of the variance-covariance matrix is paramount; thus, forecasting with Monte Carlo simulation using the Gaussian copula and well-specified marginal distributions is effective. Steven Heston applied a correlation approach to negatively correlate the stochastic stock returns dS/S and the stochastic volatility σ.
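The Y = X² limitation is easy to demonstrate numerically. The sketch below (illustrative code, stdlib only) evaluates the Pearson coefficient on a grid symmetric about zero: the coefficient is essentially zero even though Y is a deterministic function of X:

```python
def pearson(x, y):
    """Pearson product-moment correlation coefficient of two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

x = [i / 100 for i in range(-100, 101)]  # grid symmetric about zero
y = [v * v for v in x]                   # Y = X^2: fully dependent on X
print(pearson(x, y))  # ~0: dependence invisible to linear correlation
```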
In this framework, the underlying S follows the standard geometric Brownian motion, which is also applied in the Black–Scholes–Merton model. The correlation between the two processes is introduced by correlating the two Brownian motions dz₁ and dz₂; the instantaneous correlation ρ between the Brownian motions is expressed by dz₁·dz₂ = ρ dt. The Cointelation SDE connects the SDEs above to the concepts of mean reversion and drift, which are often misunderstood by practitioners. A further financial correlation measure, mainly applied to default correlation, is the binomial correlation approach of Lucas. We define the binomial events 1_X = 1_{τ_X ≤ T} and 1_Y = 1_{τ_Y ≤ T}, where τ_X is the default time of entity X and τ_Y is the default time of entity Y. Hence, if entity X defaults before or at time T, the random indicator variable 1_X takes the value 1. Furthermore, P(X) and P(Y) are the default probabilities of X and Y respectively, and P(XY) is their joint probability of default. By construction, the equation can only model binomial events, for example default and no default. The binomial correlation approach is a limiting case of the Pearson correlation approach discussed above.
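The binomial correlation of two default indicators is simply the Pearson correlation of two Bernoulli variables, ρ = (P(XY) − P(X)P(Y)) / √(P(X)(1−P(X))·P(Y)(1−P(Y))). A minimal sketch with hypothetical default probabilities (the numbers are illustrative, not from the article):

```python
import math

def binomial_default_correlation(pX, pY, pXY):
    """Lucas binomial correlation between default indicators 1_X and 1_Y:
    rho = (P(XY) - P(X)P(Y)) / sqrt(P(X)(1-P(X)) * P(Y)(1-P(Y)))."""
    return (pXY - pX * pY) / math.sqrt(pX * (1 - pX) * pY * (1 - pY))

# Hypothetical inputs: 5% and 10% default probabilities, 2% joint default
rho = binomial_default_correlation(0.05, 0.10, 0.02)
# If the joint probability equals the product P(X)P(Y), the entities
# default independently and the correlation is zero.
```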

7.
Annual percentage rate
–
The annual percentage rate (APR) is a finance charge expressed as an annual rate. The term has formal, legal definitions in some countries or legal jurisdictions; the effective APR is the fee-plus-compound-interest rate. In some areas, the annual percentage rate is the simplified counterpart to the effective interest rate that the borrower will pay on a loan. In many countries and jurisdictions, lenders are required to disclose the cost of borrowing in some standardized way as a form of consumer protection; the APR is intended to make it easier to compare lenders and loan options. The nominal APR is calculated as the rate for a payment period multiplied by the number of payment periods in a year. The effective APR has been called the "mathematically true" interest rate for each year. When start-up fees are paid as the first payment, the balance due might accrue more interest: the fee is in effect a loan due in the first payment, and the unpaid balance is amortized as a second long-term loan, with the extra first payment dedicated primarily to paying origination fees. For example, consider a $100 loan which must be repaid after one month, plus 5% interest, plus a $10 fee. If the fee is not considered, this loan has an effective APR of approximately 80% (since 1.05^12 ≈ 1.80). If the $10 fee is considered, the monthly interest increases by 10 percentage points (to 15%), and the effective APR becomes approximately 435%. Hence there are at least two possible "effective APRs": 80% and 435%. Laws vary as to whether fees must be included in APR calculations. In the U.S., the calculation and disclosure of APR is governed by the Truth in Lending Act; the APR must be disclosed to the borrower within 3 days of applying for a mortgage. This information is typically mailed to the borrower, and the APR is found on the truth-in-lending disclosure statement. The Truth in Lending Act of 1968 resulted in honest reporting of effective APRs for more than a decade.
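The two effective APRs in the example above follow from compounding the monthly rate over twelve months; a short sketch (the function name is illustrative):

```python
def effective_apr(period_rate, periods_per_year=12):
    """Effective annual rate from a periodic rate: (1 + i)**n - 1."""
    return (1 + period_rate) ** periods_per_year - 1

# $100 for one month at 5% per month, ignoring the $10 fee:
apr_no_fee = effective_apr(0.05)    # ≈ 0.796, i.e. roughly 80%
# Folding the $10 fee into the month's charge (15% for the month):
apr_with_fee = effective_apr(0.15)  # ≈ 4.35, i.e. roughly 435%
```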
Then in the 1980s, auto makers began to exploit a loophole in the Act: APRs calculated with reduced, or eliminated, finance charges became the "below market rate" and "zero percent APR" loans that were commonly advertised for the next 30 years. "Zero percent APR or $1,000 rebate" is the most common form of these deceptive loans: the rebate is the hidden finance charge, reclassified into the car price. If the consumer doesn't accept the zero percent loan, then he or she does not accrue the extra $1,000 of interest on the loan. In reality, there is no rebate and no zero percent loan. Auto makers have been aided in this ongoing consumer deception by the regulators who administer TILA; the current form of disclosure under TILA seems designed specifically to support the auto makers' deceptive scheme. On July 30, 2009, provisions of the Mortgage Disclosure Improvement Act of 2008 came into effect; a specific clause of this act refers directly to APR disclosure on mortgages. For a fixed-rate mortgage, the APR is thus equal to its rate of return under an assumption of zero prepayment.

8.
Computational finance
–
Computational finance is a branch of applied computer science that deals with problems of practical interest in finance. Some slightly different definitions are the study of data and algorithms currently used in finance, and the mathematics of computer programs that realize financial models or systems. Computational finance emphasizes practical numerical methods rather than mathematical proofs and focuses on techniques that apply directly to economic analyses; it is an interdisciplinary field between mathematical finance and numerical methods. Two major areas are the efficient and accurate computation of fair values of financial securities and the modeling of stochastic price series. The birth of computational finance as a discipline can be traced to Harry Markowitz in the early 1950s. Markowitz conceived of the portfolio selection problem as an exercise in mean-variance optimization; this required more computer power than was available at the time. In the 1960s, hedge fund managers such as Ed Thorp pioneered the use of computers in trading. In academia, sophisticated computer processing was needed by researchers such as Eugene Fama in order to analyze large amounts of financial data in support of the efficient-market hypothesis. During the 1970s, the focus of computational finance shifted to options pricing and analyzing mortgage securitizations. In the late 1970s and early 1980s, a group of young quantitative practitioners who became known as "rocket scientists" arrived on Wall Street, and this led to an explosion of both the amount and variety of computational finance applications. Many of the new techniques came from signal processing and speech recognition rather than traditional fields of computational economics like optimization. By the end of the 1980s, the winding down of the Cold War brought a group of displaced physicists and applied mathematicians, many from behind the Iron Curtain, into finance. These people became known as "financial engineers", and this led to a second major extension of the range of computational methods used in finance, as well as a move away from personal computers to mainframes and supercomputers.
Around this time computational finance became recognized as an academic subfield. The first degree program in finance was offered by Carnegie Mellon University in 1994. Over the last 20 years, the field of finance has expanded into virtually every area of finance. Moreover, many specialized companies have grown up to computational finance software