1.
Applied mathematics
–
Applied mathematics is a branch of mathematics that deals with mathematical methods that find use in science, engineering, business, computer science, and industry. Thus, applied mathematics is a combination of mathematical science and specialized knowledge. The term applied mathematics also describes the professional specialty in which mathematicians work on practical problems by formulating and studying mathematical models. The activity of applied mathematics is thus intimately connected with research in pure mathematics. Historically, applied mathematics consisted principally of applied analysis, most notably differential equations and approximation theory. Quantitative finance is now taught in mathematics departments across universities, and mathematical finance is considered a full branch of applied mathematics. Engineering and computer science departments have also made use of applied mathematics. Today, the term applied mathematics is used in a broader sense. It includes the areas noted above as well as other areas that have become increasingly important in applications. Even fields such as number theory that are part of pure mathematics are now important in applications, such as cryptography. There is no consensus as to what the various branches of applied mathematics are; such categorizations are made difficult by the way mathematics and science change over time, and also by the way universities organize departments, courses, and degrees. Many mathematicians distinguish between applied mathematics, which is concerned with mathematical methods, and the applications of mathematics within science and engineering. Mathematicians such as Poincaré and Arnold deny the existence of applied mathematics, while non-mathematicians often blend applied mathematics with the applications of mathematics. The use and development of mathematics to solve industrial problems is also called industrial mathematics. Historically, mathematics was most important in the natural sciences and engineering.
Academic institutions are not consistent in the way they group and label courses and programs. At some schools there is a single mathematics department, whereas others have separate departments for Applied Mathematics and Mathematics. It is very common for Statistics departments to be separate at schools with graduate programs, and many applied mathematics programs consist primarily of cross-listed courses and jointly appointed faculty in departments representing applications. Some Ph.D. programs in applied mathematics require little or no coursework outside of mathematics; in some respects this difference reflects the distinction between the application of mathematics and applied mathematics. Research universities dividing their mathematics department into pure and applied sections include MIT. Brigham Young University also has an Applied and Computational Emphasis, a program that allows students to graduate with a Mathematics degree with an emphasis in Applied Math.
2.
Financial market
–
A financial market is a market in which people trade financial securities, commodities, and other fungible items of value at low transaction costs and at prices that reflect supply and demand. Securities include stocks and bonds, and commodities include precious metals or agricultural products. In economics, the term market typically means the aggregate of possible buyers and sellers of a certain good or service and the transactions between them. The term is also used for what are more strictly exchanges: organizations that facilitate the trade in financial securities. This may be a physical location or an electronic system. Another common use of the term is as a catchall for all the markets in the financial sector. These include the capital markets, which consist of stock markets (which provide financing through the issuance of shares or common stock and enable the subsequent trading thereof) and bond markets (which provide financing through the issuance of bonds and enable the subsequent trading thereof); commodity markets, which facilitate the trading of commodities; money markets, which provide short-term debt financing and investment; derivatives markets, which provide instruments for the management of financial risk; futures markets, which provide standardized forward contracts for trading products at some future date; foreign exchange markets, which facilitate the trading of foreign exchange; spot markets; and interbank markets. The capital markets may also be divided into primary and secondary markets. Newly formed securities are bought or sold in primary markets, such as during initial public offerings. Secondary markets allow investors to buy and sell existing securities; transactions in primary markets exist between issuers and investors, while secondary market transactions exist among investors. Liquidity is a crucial aspect of securities that are traded in secondary markets.
Liquidity refers to the ease with which a security can be sold without a loss of value. Securities with an active secondary market have many buyers and sellers at a given point in time. Investors benefit from liquid securities because they can sell their assets whenever they want. Financial markets attract funds from investors and channel them to corporations; they thus allow corporations to finance their operations and achieve growth. Money markets allow firms to borrow funds on a short-term basis, while capital markets allow corporations to gain long-term funding. Without financial markets, borrowers would have difficulty finding lenders themselves; intermediaries such as banks, investment banks, and boutique investment banks can help in this process. Banks take deposits from those who have money to save, and they can then lend money from this pool of deposited money to those who seek to borrow. Banks popularly lend money in the form of loans and mortgages. A good example of a financial market is a stock exchange. A company can raise money by selling shares to investors, and its existing shares can be bought or sold.
3.
Mathematical model
–
A mathematical model is a description of a system using mathematical concepts and language. The process of developing a mathematical model is termed mathematical modeling. Mathematical models are used in the sciences and engineering disciplines. Physicists, engineers, statisticians, operations research analysts, and economists use mathematical models most extensively. A model may help to explain a system, to study the effects of different components, and to make predictions about behaviour. Mathematical models can take many forms, including dynamical systems, statistical models, and differential equations. These and other types of models can overlap, with a given model involving a variety of abstract structures. In general, mathematical models may include logical models. In many cases, the quality of a scientific field depends on how well the mathematical models developed on the theoretical side agree with the results of repeatable experiments. Lack of agreement between theoretical mathematical models and experimental measurements often leads to important advances as better theories are developed. In the physical sciences, the traditional mathematical model contains four major elements: governing equations, defining equations, constitutive equations, and constraints. Mathematical models are composed of relationships and variables. Relationships can be described by operators, such as algebraic operators, functions, and differential operators. Variables are abstractions of system parameters of interest that can be quantified. A model is defined as linear if all the operators in it exhibit linearity, and is considered to be nonlinear otherwise. The definition of linearity and nonlinearity is dependent on context; for example, in a statistical linear model, it is assumed that a relationship is linear in the parameters, but it may be nonlinear in the predictor variables. Similarly, a differential equation is said to be linear if it can be written with linear differential operators. In a mathematical programming model, if the functions and constraints are represented entirely by linear equations, then the model is regarded as a linear model.
If one or more of the functions or constraints are represented with a nonlinear equation, the model is known as a nonlinear model. Nonlinearity, even in simple systems, is often associated with phenomena such as chaos. Although there are exceptions, nonlinear systems and models tend to be more difficult to study than linear ones. A common approach to nonlinear problems is linearization, but this can be problematic if one is trying to study aspects such as irreversibility, which are strongly tied to nonlinearity.
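The linearization approach mentioned above can be sketched in a few lines. The function (a pendulum-style sine term) and the expansion point are illustrative choices, not from the text:

```python
import math

def linearize(f, x0, h=1e-6):
    """First-order Taylor (linear) approximation of f around x0.

    The derivative is estimated with a central difference; the
    step size h is an arbitrary small number.
    """
    slope = (f(x0 + h) - f(x0 - h)) / (2 * h)
    return lambda x: f(x0) + slope * (x - x0)

# Linearizing sin(x) at 0 recovers the familiar small-angle model x.
approx = linearize(math.sin, 0.0)

error_near = abs(math.sin(0.1) - approx(0.1))  # tiny near the expansion point
error_far = abs(math.sin(1.5) - approx(1.5))   # large far from it
```

The pattern illustrates both the appeal and the danger noted above: the linear model is accurate near the expansion point but degrades rapidly away from it.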
4.
Numerical analysis
–
Numerical analysis is the study of algorithms that use numerical approximation for the problems of mathematical analysis. Being able to compute the sides of a triangle is important, for instance, in astronomy, carpentry, and construction. Numerical analysis continues this tradition of practical mathematical calculations. Much like the Babylonian approximation of the square root of 2, modern numerical analysis does not seek exact answers. Instead, much of numerical analysis is concerned with obtaining approximate solutions while maintaining reasonable bounds on errors. Before the advent of modern computers, numerical methods often depended on hand interpolation in large printed tables. Since the mid-20th century, computers calculate the required functions instead, yet these same interpolation formulas nevertheless continue to be used as part of the software algorithms for solving differential equations. Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of differential equations. Car companies can improve the safety of their vehicles by using computer simulations of car crashes; such simulations essentially consist of solving differential equations numerically. Hedge funds use tools from all fields of numerical analysis to attempt to calculate the value of stocks. Airlines use sophisticated optimization algorithms to decide ticket prices and airplane and crew assignments; historically, such algorithms were developed within the overlapping field of operations research. Insurance companies use numerical programs for actuarial analysis. The rest of this section outlines several important themes of numerical analysis. The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago. To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients.
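The Babylonian approximation of the square root of 2 mentioned above is itself an early numerical algorithm and is easy to sketch; the starting guess and iteration count below are arbitrary choices:

```python
def babylonian_sqrt(s, iterations=6):
    """Approximate the square root of s by the Babylonian (Heron's)
    method: repeatedly replace a guess x with the average of x and s/x.
    Each step roughly doubles the number of correct digits."""
    x = float(s)  # any positive starting guess converges
    for _ in range(iterations):
        x = 0.5 * (x + s / x)
    return x

root2 = babylonian_sqrt(2)  # approaches 1.414213562..., the clay-tablet value
```

This captures the theme of the field: an approximate answer, obtained cheaply, with an error that can be bounded and driven down by further iterations.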
The tabulated function values are no longer very useful when a computer is available, but the large listing of formulas can still be very handy. The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be carried out.
5.
Financial economics
–
Financial economics is the branch of economics characterized by a concentration on monetary activities, in which money of one type or another is likely to appear on both sides of a trade. Its concern is thus the interrelation of financial variables, such as prices, interest rates and shares. It has two main areas of focus, asset pricing and corporate finance, the first being the perspective of providers of capital and the second that of users of capital. The subject is concerned with the allocation and deployment of economic resources, and it is built on the foundations of microeconomics and decision theory. Financial econometrics is the branch of financial economics that uses econometric techniques to parameterise these relationships. Mathematical finance is related in that it will derive and extend the mathematical or numerical models suggested by financial economics; note though that the emphasis there is on mathematical consistency, as opposed to compatibility with economic theory. Financial economics is usually taught at the masters level; see Master of Financial Economics. Recently, specialist undergraduate degrees have been offered in the discipline. Note that this article provides an overview and survey of the field; for derivations and more technical discussion, see the specific articles linked. As above, the discipline essentially explores how rational investors would apply decision theory to the problem of investment. The subject is thus built on the foundations of microeconomics and decision theory, and derives several key results for the application of decision making under uncertainty to the financial markets. Underlying all of financial economics are the concepts of present value and expectation. Its history is correspondingly early: Richard Witt discusses compound interest already in 1613, in his book Arithmeticall Questions, and this was further developed by Johan de Witt; the underlying ideas of expected value originate with Blaise Pascal and Pierre de Fermat.
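The concepts of present value and compound interest underlying the field can be illustrated with a minimal sketch; the 5% rate and 10-period horizon are illustrative numbers, not from the text:

```python
def future_value(pv, rate, periods):
    """Compound interest: FV = PV * (1 + r)^n."""
    return pv * (1 + rate) ** periods

def present_value(fv, rate, periods):
    """Discounting, the inverse operation: PV = FV / (1 + r)^n."""
    return fv / (1 + rate) ** periods

fv = future_value(100.0, 0.05, 10)  # 100 compounded at 5% for 10 periods
pv = present_value(fv, 0.05, 10)    # discounting recovers the original 100
```

Discounting in this sense is what allows a decision maker to compare cash flows occurring at different times on a common footing.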
This decision method, however, fails to consider risk aversion. Choice under uncertainty here may then be characterized as the maximization of expected utility. The impetus for these ideas arises from various inconsistencies observed under the expected value framework; the development here is originally due to Daniel Bernoulli, and was later formalized by John von Neumann and Oskar Morgenstern. The concepts of arbitrage-free, or rational, pricing and equilibrium are then coupled with the above to derive classical financial economics. Rational pricing is the assumption that asset prices will reflect the arbitrage-free price of the asset, as any deviation from this price will be arbitraged away. This assumption is useful in pricing fixed income securities, particularly bonds. Intuitively, this may be seen by considering that where an arbitrage opportunity does exist, prices can be expected to change, and are therefore not in equilibrium. An arbitrage equilibrium is thus a precondition for a general economic equilibrium; the formal derivation will proceed by arbitrage arguments. All pricing models are then essentially variants of this, given specific assumptions and/or conditions. This approach is consistent with the above, but with the expectation based on the market as opposed to individual preferences. In general, this premium may be derived by the CAPM, as will be seen under Uncertainty. With the above relationship established, the further specialized Arrow–Debreu model may be derived. This important result suggests that, under certain conditions, there must be a set of prices such that aggregate supplies will equal aggregate demands for every commodity in the economy.
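The maximization of expected utility described above can be sketched numerically. The payoffs and the choice of logarithmic (concave, hence risk-averse) utility are illustrative assumptions in the spirit of Bernoulli:

```python
import math

def expected_utility(outcomes, probs, utility):
    """Expected utility: the probability-weighted sum of u(x_i)."""
    return sum(p * utility(x) for x, p in zip(outcomes, probs))

log_utility = math.log  # a standard risk-averse utility function

# Gamble: 50 or 150 with equal probability. Its expected *value* is 100,
# exactly the same as a certain payment of 100.
eu_gamble = expected_utility([50.0, 150.0], [0.5, 0.5], log_utility)
eu_certain = log_utility(100.0)
# eu_gamble < eu_certain: the risk-averse agent prefers the sure thing,
# which the expected value criterion alone cannot explain.
```

This is the inconsistency the expected value framework exhibits: two choices with identical expected value are not equally attractive once risk aversion is modelled.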
6.
Derivative (finance)
–
In finance, a derivative is a contract that derives its value from the performance of an underlying entity. This underlying entity can be an asset, index, or interest rate. Some of the more common derivatives include forwards, futures, options, swaps, and variations of these such as synthetic collateralized debt obligations and credit default swaps. Most derivatives are traded over-the-counter or on an exchange such as the Bombay Stock Exchange. Derivatives are one of the three main categories of financial instruments, the other two being stocks and debt. A more recent historical origin is the bucket shops, which were outlawed a century ago. Derivatives are contracts between two parties that specify conditions under which payments are to be made between the parties. The underlying assets include commodities, stocks, bonds, interest rates and currencies, but they can also be other derivatives. From the economic point of view, financial derivatives are cash flows that are conditioned stochastically and discounted to present value. The market risk inherent in the underlying asset is attached to the financial derivative through contractual agreements. The underlying asset does not have to be acquired; derivatives therefore allow the breakup of ownership and participation in the market value of an asset. This also provides a considerable amount of freedom regarding the contract design. That contractual freedom makes it possible to modify the participation in the performance of the underlying asset almost arbitrarily; thus, the participation in the market value of the underlying can be effectively weaker, stronger, or implemented as inverse. Hence, specifically the price risk of the underlying asset can be controlled in almost every situation. Derivatives are more common in the modern era, but their origins trace back several centuries. One of the oldest derivatives is rice futures, which have been traded on the Dojima Rice Exchange since the eighteenth century.
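The view of a derivative as stochastically conditioned cash flows discounted to present value can be made concrete with a one-period binomial sketch; all parameter values (prices, rate, strike) are illustrative assumptions:

```python
# One-period binomial model: the underlying either rises or falls,
# and the derivative's payoff is conditioned on that move.
S0, up, down = 100.0, 1.2, 0.8   # spot price and the two possible moves
r, K = 0.05, 100.0               # one-period interest rate and strike

# Risk-neutral probability: the weight that makes the discounted
# stock price a fair bet (a martingale).
q = ((1 + r) - down) / (up - down)

payoff_up = max(S0 * up - K, 0.0)      # call payoff if the stock rises
payoff_down = max(S0 * down - K, 0.0)  # call payoff if the stock falls

# The derivative's value: expected payoff, discounted one period back.
call_value = (q * payoff_up + (1 - q) * payoff_down) / (1 + r)
```

Note that the value is obtained without ever owning the underlying, which is exactly the breakup of ownership and market-value participation described above.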
Derivatives are broadly categorized by the relationship between the underlying asset and the derivative, the type of underlying asset, and the market in which they trade. Derivatives may broadly be categorized as lock or option products. Lock products obligate the contractual parties to the terms over the life of the contract. Option products provide the buyer the right, but not the obligation, to enter the contract under the terms specified. Derivatives can be used either for risk management or for speculation. Along with many other financial products and services, derivatives reform is an element of the Dodd–Frank Wall Street Reform and Consumer Protection Act. The Act delegated many rule-making details of regulatory oversight to the Commodity Futures Trading Commission. However, these are notional values, and some economists say that this value greatly exaggerates the market value and the true credit risk faced by the parties involved.
7.
Stock
–
The stock of a corporation is constituted of the equity stock of its owners. A single share of the stock represents fractional ownership of the corporation in proportion to the total number of shares. In liquidation, the stock represents the residual assets of the company that would be due to stockholders after discharge of all senior claims such as secured and unsecured debt. Stockholders' equity cannot be withdrawn from the company in a way that is intended to be detrimental to the company's creditors. The stock of a corporation is partitioned into shares, the total of which are stated at the time of business formation. Additional shares may subsequently be authorized by the existing shareholders and issued by the company. In some jurisdictions, each share of stock has a certain declared par value; in other jurisdictions, however, shares of stock may be issued without associated par value. Shares represent a fraction of ownership in a business. A business may declare different types of shares, each having distinctive ownership rules, privileges, or share values. Ownership of shares may be documented by issuance of a stock certificate. A stock certificate is a document that specifies the number of shares owned by the shareholder. Stock typically takes the form of shares of common stock or preferred stock. As a unit of ownership, common stock typically carries voting rights that can be exercised in corporate decisions. Some preferred stock can be exchanged for common stock; shares of such stock are called convertible preferred shares. New equity issues may have specific legal clauses attached that differentiate them from previous issues of the issuer. Some shares of stock may be issued without the typical voting rights, for instance, or some shares may have special rights unique to them. Often, new issues that have not been registered with a governing body may be restricted from resale for certain periods of time.
Preferred stock may be hybrid, having the fixed returns of bonds and the voting rights of common stock. Preferred shares also have preference in the payment of dividends over common stock, and have been given preference at the time of liquidation over common stock. They may have other features, such as accumulation in dividends. Rule 144 stock is an American term given to shares of stock subject to SEC Rule 144: Selling Restricted and Control Securities. Under Rule 144, restricted and controlled securities are acquired in unregistered form. Investors either purchase or take ownership of these securities through private sales from the issuing company or from an affiliate of the issuer. Investors wishing to sell these securities are subject to different rules than those selling traditional common or preferred stock; these individuals will only be allowed to liquidate their securities after meeting the specific conditions set forth by SEC Rule 144.
8.
Financial modeling
–
Financial modeling is the task of building an abstract representation of a real world financial situation. This is a mathematical model designed to represent the performance of a financial asset, a portfolio, a business, or a project. Typically, financial modeling is understood to mean an exercise in either asset pricing or corporate finance. In corporate finance and the accounting profession, financial modeling often involves financial statement forecasting. This usually entails the preparation of detailed company-specific models used for decision making purposes. Correspondingly, both characteristics are reflected in the mathematical form of these models: firstly, the models are in discrete time; secondly, they are deterministic. Modelers are sometimes referred to as number crunchers, and are often designated financial analysts. Typically, the modeler will have completed an MBA or MSF with coursework in financial modeling. Accounting qualifications and finance certifications such as the CIIA and CFA generally do not provide direct or explicit training in modeling; at the same time, numerous commercial training courses are offered, both through universities and privately. Although purpose-built software does exist, the vast proportion of the market is spreadsheet-based; also, analysts will each have their own criteria and methods for financial modeling. Microsoft Excel now has by far the dominant position, having overtaken Lotus 1-2-3 in the 1990s. Spreadsheet-based modelling can have its own problems, and several standardizations and best practices have been proposed. Spreadsheet risk is increasingly studied and managed. One critique here is that model outputs, i.e. line items, often incorporate “unrealistic implicit assumptions” and “internal inconsistencies”.
What is required, but often lacking, is that all key elements are explicitly and consistently forecasted. Related to this, modellers often additionally fail to identify crucial assumptions relating to inputs, and to explore what can go wrong. Here, in general, modellers use point values and simple arithmetic instead of probability distributions. Other critiques discuss the lack of adequate spreadsheet design skills, and of basic computer programming concepts. More serious criticism, in fact, relates to the nature of budgeting itself. The Financial Modeling World Championships, known as ModelOff, have been held since 2012. ModelOff is an online financial modeling competition which culminates in a Live Finals Event for top competitors. From 2012 to 2014 the Live Finals were held in New York City, and in 2015 in London. In quantitative finance, financial modeling entails the development of a sophisticated mathematical model. Models here deal with asset prices, market movements, and portfolio returns. The general nature of these problems is discussed under Mathematical finance, while specific techniques are listed under Outline of finance#Mathematical tools. Modellers are generally referred to as quants, and typically have advanced backgrounds in quantitative disciplines such as physics, engineering, computer science, mathematics or operations research. Although spreadsheets are used here also, custom C++, Fortran or Python, or numerical analysis software such as MATLAB, are often preferred. The complexity of these models may result in incorrect pricing or hedging or both; this model risk is the subject of ongoing research by finance academics, and is a topic of great, and growing, interest in the risk management arena.
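The critique that modellers use point values rather than probability distributions can be illustrated with a minimal Monte Carlo sketch; the revenue base and the 8% ± 5% growth assumption are purely hypothetical numbers:

```python
import random

random.seed(0)  # reproducible illustration

def simulate_revenue(base=100.0, years=5, trials=10_000,
                     growth_mean=0.08, growth_sd=0.05):
    """Forecast revenue with an uncertain annual growth rate drawn from
    a normal distribution, rather than a single point value."""
    outcomes = []
    for _ in range(trials):
        revenue = base
        for _ in range(years):
            revenue *= 1 + random.gauss(growth_mean, growth_sd)
        outcomes.append(revenue)
    return outcomes

outcomes = simulate_revenue()
point_forecast = 100.0 * 1.08 ** 5                # the single-value approach
mean_outcome = sum(outcomes) / len(outcomes)      # centre of the distribution
downside = sorted(outcomes)[len(outcomes) // 20]  # a 5th-percentile scenario
```

The point forecast and the distribution's mean roughly agree, but only the distribution reveals the downside scenarios, which is precisely the "what can go wrong" exploration the critique says is missing.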
9.
Computational finance
–
Computational finance is a branch of applied computer science that deals with problems of practical interest in finance. Some slightly different definitions are the study of data and algorithms currently used in finance, and the mathematics of computer programs that realize financial models or systems. Computational finance emphasizes practical numerical methods rather than mathematical proofs, and focuses on techniques that apply directly to economic analyses; it is an interdisciplinary field between mathematical finance and numerical methods. Two major areas are the efficient and accurate computation of fair values of financial securities and the modeling of stochastic price series. The birth of computational finance as a discipline can be traced to Harry Markowitz in the early 1950s. Markowitz conceived of the portfolio selection problem as an exercise in mean-variance optimization. This required more computing power than was available at the time. In the 1960s, computational methods were applied in practice by hedge fund managers such as Ed Thorp; in academia, sophisticated computer processing was needed by researchers such as Eugene Fama in order to analyze large amounts of financial data in support of the efficient-market hypothesis. During the 1970s, the focus of computational finance shifted to options pricing and analyzing mortgage securitizations. In the late 1970s and early 1980s, a group of young quantitative practitioners who became known as “rocket scientists” arrived on Wall Street; this led to an explosion of both the amount and variety of computational finance applications. Many of the new techniques came from signal processing and speech recognition rather than traditional fields of computational economics like optimization. By the end of the 1980s, the winding down of the Cold War brought a group of displaced physicists and applied mathematicians into finance, many from behind the Iron Curtain. These people became known as “financial engineers”, and this led to a second major extension of the range of computational methods used in finance, and also a move away from personal computers to mainframes and supercomputers.
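Markowitz's mean-variance view of portfolio selection can be sketched for two assets. The expected returns, volatilities and correlation below are illustrative assumptions, not market data, and the closed-form minimum-variance weight is the standard two-asset result:

```python
# Two-asset mean-variance trade-off in the spirit of Markowitz.
mu = [0.08, 0.12]      # expected returns of assets 0 and 1
sigma = [0.10, 0.20]   # their volatilities (standard deviations)
rho = 0.3              # correlation between the two assets

cov = rho * sigma[0] * sigma[1]  # covariance between the assets

def portfolio(w):
    """Expected return and variance for weight w in asset 0, 1-w in asset 1."""
    ret = w * mu[0] + (1 - w) * mu[1]
    var = (w * sigma[0]) ** 2 + ((1 - w) * sigma[1]) ** 2 \
        + 2 * w * (1 - w) * cov
    return ret, var

# Closed-form weight of the minimum-variance portfolio for two assets.
w_min = (sigma[1] ** 2 - cov) / (sigma[0] ** 2 + sigma[1] ** 2 - 2 * cov)
```

The minimum-variance mix has lower variance than either asset held alone, which is the diversification effect the optimization formalizes; scaling this up to many assets is what demanded more computing power than the 1950s could supply.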
Around this time, computational finance became recognized as a distinct academic subfield. The first degree program in computational finance was offered by Carnegie Mellon University in 1994. Over the last 20 years, the field of computational finance has expanded into virtually every area of finance. Moreover, many specialized companies have grown up to supply computational finance software and services.
10.
Quantitative analyst
–
A quantitative analyst is a person who specializes in the application of mathematical and statistical methods to financial and risk management problems. The occupation is similar to those in industrial mathematics in other industries. Examples include statistical arbitrage, quantitative investment management, and algorithmic trading. Quantitative finance started in 1900 with Louis Bachelier's doctoral thesis Theory of Speculation. Harry Markowitz's 1952 Ph.D. thesis Portfolio Selection and its published version was one of the first efforts in economics journals to formally adapt mathematical concepts to finance. Markowitz formalized a notion of mean return and covariances for common stocks, which allowed him to quantify the concept of diversification in a market. Although the language of finance now involves Itō calculus, management of risk in a quantifiable manner underlies much of the modern theory. In 1965 Paul Samuelson introduced stochastic calculus into the study of finance. In 1969 Robert Merton promoted continuous stochastic calculus and continuous-time processes. At the same time as Merton's work, and with Merton's assistance, Fischer Black and Myron Scholes developed the Black–Scholes model, which was awarded the 1997 Nobel Memorial Prize in Economic Sciences. It provided a solution for a practical problem, that of finding a fair price for a European call option, i.e. the right to buy one share of a given stock at a specified price. Such options are purchased by investors as a risk-hedging device. Quantitative analysts often enter the field with advanced degrees, such as Ph.D. degrees, or with financial mathematics D.E.A. degrees in the French education system. Typically, a quantitative analyst will also need extensive skills in computer programming, most commonly C, C++, Java, R, MATLAB, Mathematica, and Python. See Master of Quantitative Finance and Master of Financial Economics. In trading and sales operations, quantitative analysts work to determine prices, manage risk, and identify profitable opportunities.
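The Black–Scholes fair price for a European call mentioned above has a closed form; a minimal sketch for a non-dividend-paying stock, with illustrative parameter values, is:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call on a non-dividend stock.

    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# An at-the-money call: 1 year to expiry, 5% rate, 20% volatility.
price = black_scholes_call(S=100.0, K=100.0, T=1.0, r=0.05, sigma=0.2)
```

The formula prices the option from observable inputs plus one unobservable one, the volatility, which is why estimating and modelling volatility occupies so much quant effort.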
In the field of algorithmic trading it has reached the point where there is little meaningful difference between the roles. Front office work favours a higher speed-to-quality ratio, with a greater emphasis on solutions to specific problems than on detailed modeling. Front office quants (FOQs) typically are significantly better paid than those in back office and risk roles. Although highly skilled analysts, FOQs frequently lack software engineering experience or formal training. Quantitative analysis is used extensively by asset managers. Some, such as FQ, AQR or Barclays, rely almost exclusively on quantitative strategies, while others, such as Pimco, Blackrock or Citadel, use a mix of quantitative and fundamental methods. Major firms invest large sums in an attempt to produce standard methods of evaluating prices and risk. These differ from front office tools in that Excel is very rare, with most development being in C++, though Java is used as well. Library quants (LQs) spend more time modeling, ensuring the analytics are both efficient and correct, though there is tension between LQs and FOQs on the validity of their results. LQs are required to understand techniques such as Monte Carlo methods and finite difference methods. Often the highest paid form of quant, algorithmic trading quants make use of methods taken from signal processing, game theory, the gambling Kelly criterion, market microstructure, econometrics, and time series analysis. A core technique of risk quants is value at risk, and this is backed up with various forms of stress test, economic analysis and direct analysis of the positions. In the aftermath of the financial crisis, there surfaced the recognition that quantitative valuation methods were generally too narrow in their approach.
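The value at risk technique mentioned above can be sketched in its simplest, historical-simulation form; the simulated return series and the 95% confidence level are illustrative assumptions:

```python
import random

random.seed(42)  # reproducible illustration

def value_at_risk(returns, confidence=0.95):
    """Historical-simulation VaR: the loss threshold exceeded on only
    (1 - confidence) of the observed days."""
    losses = sorted(-r for r in returns)   # convert returns to losses
    index = int(confidence * len(losses))  # the chosen percentile
    return losses[index]

# A hypothetical series of 1000 daily returns: 0.05% drift, 1% volatility.
daily_returns = [random.gauss(0.0005, 0.01) for _ in range(1000)]
var_95 = value_at_risk(daily_returns)  # daily loss exceeded ~5% of the time
```

VaR summarizes risk as a single quantile, which is exactly why, as the paragraph notes, it is backed up with stress tests and direct position analysis rather than used alone.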
11.
Risk management
–
Risk management is the identification, assessment, and prioritization of risks, followed by the coordinated and economical application of resources to minimize, monitor, and control the probability or impact of unfortunate events. Risk management’s objective is to assure that uncertainty does not deflect the endeavor from the business goals. There are two types of events: negative events can be classified as risks, while positive events are classified as opportunities. Several risk management standards have been developed, including those of the Project Management Institute, the National Institute of Standards and Technology, and actuarial societies. Certain aspects of many of the risk management standards have come under criticism for having no measurable improvement on risk, whereas the confidence in estimates and decisions seems to increase. For example, it has been shown that one in six IT projects experiences cost overruns of 200% on average. A widely used vocabulary for risk management is defined by ISO Guide 73:2009. Intangible risk management identifies a new type of risk that has a 100% probability of occurring but is ignored by the organization due to a lack of identification ability. For example, when deficient knowledge is applied to a situation, a knowledge risk materializes; relationship risk appears when ineffective collaboration occurs. Process-engagement risk may be an issue when ineffective operational procedures are applied. These risks directly reduce the productivity of knowledge workers, decrease cost-effectiveness, profitability, service, quality, reputation, brand value, and earnings quality. Intangible risk management allows management to create immediate value from the identification and reduction of risks that reduce productivity. Risk management also faces difficulties in allocating resources; this is the idea of opportunity cost. Resources spent on risk management could have been spent on more profitable activities. Again, ideal risk management minimizes spending and also minimizes the negative effects of risks. According to the definition of risk, risk is the possibility that an event will occur; therefore, risk itself has uncertainty.
Risk management frameworks such as COSO ERM can help managers maintain control over their risks. Each company may have different internal control components, which leads to different outcomes. For the most part, these methods consist of the same broad elements, performed, more or less, in the same order. After establishing the context, the next step in the process of managing risk is to identify potential risks. Risks are about events that, when triggered, cause problems or benefits; hence, risk identification can start with the source of our problems and those of our competitors, or with the problem itself.
12.
Louis Bachelier
–
Louis Jean-Baptiste Alphonse Bachelier was a French mathematician at the turn of the 20th century. He is credited with being the first person to model the process now called Brownian motion. His thesis, which discussed the use of Brownian motion to evaluate stock options, is historically the first paper to use advanced mathematics in the study of finance; thus, Bachelier is considered a pioneer in the study of financial mathematics and stochastic processes. Bachelier was born in Le Havre. His father was a wine merchant and amateur scientist, and the vice-consul of Venezuela at Le Havre; his mother was the daughter of an important banker. After the death of his parents, Bachelier had to take over the family business, and during this time he gained a practical acquaintance with the financial markets. His studies were further delayed by military service. Bachelier arrived in Paris in 1892 to study at the Sorbonne. Defended on March 29, 1900 at the University of Paris, Bachelier's thesis was not well received because it attempted to apply mathematics to an area unfamiliar to mathematicians. However, his instructor, Henri Poincaré, is recorded as having given some positive feedback. While the thesis did not receive a mark of très honorable, despite its ultimate importance, the positive feedback from Poincaré can be attributed to his interest in mathematical ideas, not just rigorous proof. For several years following the successful defense of his thesis, Bachelier further developed the theory of diffusion processes. In 1909 he became a free professor at the Sorbonne. In 1914, he published a book, Le Jeu, la Chance et le Hasard, and his army service ended on December 31, 1918. In 1919, he found a position as an assistant professor in Besançon. He married Augustine Jeanne Maillot in September 1920 but was soon widowed. When the professor he was replacing returned in 1922, Bachelier replaced another professor at Dijon. He moved to Rennes in 1925, but was finally awarded a permanent professorship in 1927 at Besançon, where he worked for 10 years until his retirement.
Besides the setback that the war had caused him, Bachelier was blackballed in 1926 when he attempted to obtain a permanent position at Dijon, on the basis of a critical report by Paul Lévy; Lévy later learned of his error and reconciled himself with Bachelier.
13.
Fischer Black
–
Fischer Sheffey Black was an American economist, best known as one of the authors of the famous Black–Scholes equation. Black graduated from Harvard College in 1959 and received a Ph.D. in applied mathematics from Harvard University in 1964. He was initially expelled from the PhD program due to his inability to settle on a thesis topic, having switched from physics to mathematics, then to computers. Black joined the consultancy Bolt, Beranek and Newman, working on a system for artificial intelligence, and spent a summer developing his ideas at the RAND Corporation. He became a student of MIT professor Marvin Minsky and was later able to submit his research for completion of the Harvard PhD. Black then joined Arthur D. Little, where he was first exposed to economic and financial consulting. In 1971, he began to work at the University of Chicago; he left Chicago in 1975 to work at the MIT Sloan School of Management, and in 1984 he joined Goldman Sachs, where he worked until his death. Black began thinking seriously about monetary policy around 1970. In the Keynesian view of the time, central bankers had to have discretionary powers to fulfill their role properly. Monetarists, under the leadership of Milton Friedman, believed instead that central banking was the problem; Friedman held that the growth of the money supply could and should be set at a constant rate, say 3% a year. On the basis of the capital asset pricing model, Black concluded that discretionary monetary policy could not do the good that Keynesians wanted it to do, but also that it could not do the harm monetarists feared it would do. As Black put it in a letter to Friedman in January 1972, in the U.S. economy much of the public debt is in the form of Treasury bills. Each week, some of these mature, and new bills are sold.
If the Federal Reserve System tries to inject money into the private sector, the private sector will simply use it to buy up Treasury bills; if the Federal Reserve withdraws money, the private sector will allow some of its Treasury bills to mature without replacing them. In 1973, Black, along with Myron Scholes, published the paper The Pricing of Options and Corporate Liabilities; it was his most famous work and included the Black–Scholes equation. Black also developed a distinctive view of business cycles: if future tastes and technology were known, profits and wages would grow smoothly and surely over time; a boom is a period when technology matches well with demand, and a bust is a period of mismatch. This view made Black an early contributor to real business cycle theory. Economist Tyler Cowen has argued that Black's works on monetary theory, business cycles, and options are parts of his vision of a unified framework.
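The Black–Scholes equation mentioned above yields a closed-form price for a European call option. As a minimal sketch (the parameter values below are illustrative, not taken from the original paper), the formula can be computed with nothing beyond the standard library:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option.

    S: current stock price, K: strike price, T: time to expiry in years,
    r: risk-free rate, sigma: volatility (both annualized).
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# An at-the-money call, one year to expiry, 5% rate, 20% volatility:
price = black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2)
```

With these inputs the price comes out a little above 10, reflecting both the time value and the interest-rate drift; a deep in-the-money call is always worth at least its discounted intrinsic value.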
14.
Myron Scholes
–
Myron Samuel Scholes is a Canadian-American financial economist. Scholes is currently the Chairman of the Board of Economic Advisers of Stamos Capital Partners; he was previously a principal and limited partner at Long-Term Capital Management, L.P. and a managing director at Salomon Brothers. Scholes earned his PhD at the University of Chicago, and in 1997 he was awarded the Nobel Memorial Prize in Economic Sciences for a method to determine the value of derivatives. The model provides a framework for valuing options, such as calls or puts. Myron Scholes was born to a Jewish family on July 1, 1941 in Timmins, Ontario; in 1951 the family moved to Hamilton, Ontario. Scholes was a good student, although he struggled with impaired vision from his teens until finally getting an operation at the age of twenty-six. After his mother died from cancer, Scholes remained in Hamilton for undergraduate studies and earned a Bachelor's degree in economics from McMaster University in 1962. One of his professors at McMaster introduced him to the works of George Stigler and Milton Friedman, and after receiving his B.A. he decided to enroll in graduate studies in economics at the University of Chicago. He earned his MBA at the Booth School of Business in 1964 and his Ph.D. in 1969, with a dissertation written under the supervision of Eugene Fama. In 1968, after finishing his dissertation, Scholes took an academic position at the MIT Sloan School of Management. There he met Fischer Black, who was a consultant for Arthur D. Little at the time, and Robert C. Merton, who joined MIT in 1970. Over the following years Scholes, Black and Merton undertook groundbreaking research in asset pricing, including the work on their famous option pricing model; at the same time, Scholes continued collaborating with Merton Miller and Michael Jensen.
While at Chicago, Scholes also started working closely with the Center for Research in Security Prices, helping to develop its database of stock market data. In 1981 he moved to Stanford University, where he remained until he retired from teaching in 1996; since then he has held the position of Frank E. Buck Professor of Finance, Emeritus, at Stanford. While at Stanford his research concentrated on the economics of investment banking and tax planning in corporate finance. In 1997 he shared the Nobel Memorial Prize in Economic Sciences with Robert C. Merton for a new method to determine the value of derivatives; Fischer Black, who had co-authored with them the work that was awarded, died in 1995 and thus was not eligible for the prize. In 2012, he authored an article entitled "Not All Growth Is Good" in The 4% Solution: Unleashing the Economic Growth America Needs. In 1990 Scholes decided to become involved more directly with the financial markets: he went to Salomon Brothers as a consultant, later becoming a managing director. In 1994 Scholes joined several colleagues, including John Meriwether, the former vice-chairman and head of trading at Salomon Brothers, and Robert C. Merton, and co-founded a hedge fund called Long-Term Capital Management. The fund, which started operations with $1 billion of investor capital, performed extremely well in its first years, realizing annualized returns of over 40%.
15.
Robert C. Merton
–
Merton was born in New York City to a Jewish father, the sociologist Robert K. Merton, and mother Suzanne Carhart, who was from a multigenerational southern New Jersey Methodist/Quaker family. He grew up in Hastings-on-Hudson, NY. He joined the faculty of the MIT Sloan School of Management, where he taught until 1988. On June 11, 2010 it was announced that Merton would retire from Harvard. Merton also sits on the QFINANCE Strategic Advisory Board. Merton is the School of Management Distinguished Professor of Finance at the MIT Sloan School of Management and University Professor Emeritus at Harvard University. At Harvard he was the George Fisher Baker Professor of Business Administration and the John and Natty McArthur University Professor; he previously served on the finance faculty of the Sloan School from 1970 until 1988. Merton received the Nobel Memorial Prize in Economic Sciences in 1997 for expanding the Black-Scholes formula. He is a past president of the American Finance Association, a member of the National Academy of Sciences, and a fellow of the American Academy of Arts and Sciences. He has also written on the operation and regulation of financial institutions, and has been recognized for translating finance science into practice. He received the inaugural Financial Engineer of the Year Award from the International Association of Financial Engineers in 1993; Derivatives Strategy magazine named him to its Derivatives Hall of Fame, as did Risk magazine to its Risk Hall of Fame, and he also received Risk's Lifetime Achievement Award for contributions to the field of risk management. His first professional association with a hedge fund came in 1968, when his advisor at the time, Paul Samuelson, brought him on board Arbitrage Management Company to join founder Michael Goodkin. AMC is the first known attempt at computerized arbitrage trading. After a successful run as a hedge fund, AMC was sold to Stuart & Co. in 1971.
Merton married June Rose in 1966; they have three children, two sons and one daughter. In 1993, Merton became a member of the United States National Academy of Sciences. In 1997, Merton was awarded the Nobel Memorial Prize in Economic Sciences with Myron Scholes for their work on stock options, and in 1999 he was awarded a lifetime achievement award in mathematical finance. In 2005 the Baker Library at Harvard University opened The Merton Exhibit in his honor, and in 2010 Merton received the Kolmogorov Medal.
16.
Market liquidity
–
In business, economics or investment, market liquidity is a market's ability to facilitate the purchase or sale of an asset without causing a drastic change in the asset's price. Equivalently, liquidity describes an asset's ability to be sold quickly without having to reduce its price to a significant degree. Liquidity is about how big the trade-off is between the speed of the sale and the price it can be sold for. In a liquid market, the trade-off is mild: selling quickly will not reduce the price much. In a relatively illiquid market, selling an asset quickly will require cutting its price by some amount. Money, or cash, is the most liquid asset, because it can be exchanged for goods and services instantly with no loss of value: there is no wait for a buyer, there is no trade-off between speed and value, and it can be used immediately to perform economic actions like buying, selling, or paying debt, meeting immediate wants and needs. If an asset is moderately liquid, it has moderate liquidity. In an alternative definition, liquidity can mean the amount of cash and cash equivalents: if a business has moderate liquidity, it has a moderate amount of very liquid assets, and if a business has sufficient liquidity, it has a sufficient amount of liquid assets. An act of exchanging a less liquid asset for a more liquid asset is called liquidation; often liquidation is trading the less liquid asset for cash, also known as selling it. For the same asset, liquidity can change through time or between different markets, such as in different countries; the change in liquidity is based on the market liquidity for the asset at the particular time or in the particular country. The liquidity of a product can be measured by how often it is bought and sold. Liquidity is defined formally in many accounting regimes and has in recent years been more strictly defined.
For instance, the US Federal Reserve intends to apply quantitative liquidity requirements based on Basel III liquidity rules as of fiscal 2012, and bank directors will also be required to know of, and approve, major liquidity risks personally. A liquid asset has some or all of the following features: it can be sold rapidly, with minimal loss of value. The essential characteristic of a liquid market is that there are always ready and willing buyers and sellers. A market may be considered both deep and liquid if there are ready and willing buyers and sellers in large quantities. An illiquid asset is an asset which is not readily salable due to uncertainty about its value or the lack of a market in which it is regularly traded; the mortgage-related assets which resulted in the mortgage crisis are examples of illiquid assets.
17.
Supply and demand
–
In microeconomics, supply and demand is an economic model of price determination in a market. Changes in factors other than price are represented as shifts of the supply and demand curves; by contrast, responses to changes in the price of the good are represented as movements along unchanged supply and demand curves. A supply schedule is a table that shows the relationship between the price of a good and the quantity supplied. Under the assumption of perfect competition, supply is determined by marginal cost: firms will produce additional output as long as the cost of producing an extra unit of output is less than the price they would receive. A hike in the cost of raw goods would decrease supply, shifting the marginal cost curve up, while a discount would increase supply, shifting it down. By its very nature, conceptualizing a supply curve requires the firm to be a perfect competitor, that is, to have no influence over the market price. This is true because each point on the curve answers the question: if this firm is faced with this potential price, how much output will it be able and willing to sell? Economists distinguish between the supply curve of an individual firm and the market supply curve. The market supply curve is obtained by summing the quantities supplied by all suppliers at each potential price; thus, in the graph of the supply curve, individual firms' supply curves are added horizontally to obtain the market supply curve. Economists also distinguish the short-run market supply curve from the long-run market supply curve. In this context, two things are assumed constant by definition of the short run: the availability of one or more fixed inputs, and the number of firms in the industry. In the long run, firms have a chance to adjust their holdings of physical capital; furthermore, in the long run potential competitors can enter or exit the industry in response to market conditions. For both of these reasons, long-run market supply curves are generally flatter than their short-run counterparts. Among the determinants of supply are production costs: how much a good costs to be produced.
Production costs are the cost of the inputs, primarily labor, capital, energy and materials; they depend on the technology used in production, and on technological advances. Following the law of demand, the demand curve is almost always represented as downward-sloping, meaning that as price decreases, consumers will buy more of the good. Just as supply curves reflect marginal cost curves, demand curves are determined by marginal utility curves. The demand schedule is defined as the willingness and ability of a consumer to purchase a given product in a given frame of time. As mentioned, the demand curve is generally downward-sloping; two hypothetical types of goods with upward-sloping demand curves are Giffen goods and Veblen goods. By its very nature, conceptualizing a demand curve requires that the purchaser be a perfect competitor, that is, that the purchaser has no influence over the market price. This is true because each point on the curve answers the question: if this buyer is faced with this potential price, how much of the product will it purchase? If a buyer has market power, so that its decision of how much to buy influences the price, then the buyer is not faced with any given price, and the question becomes meaningless.
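The price-determination story above can be made concrete with a toy linear model. The demand and supply coefficients below are hypothetical, chosen only to illustrate how the equilibrium price equates quantity demanded and quantity supplied:

```python
def equilibrium(a, b, c, d):
    """Equilibrium price and quantity for linear demand Qd = a - b*P
    and linear supply Qs = c + d*P (all coefficients hypothetical).

    Setting Qd = Qs gives a - b*P = c + d*P, hence P = (a - c) / (b + d).
    """
    p = (a - c) / (b + d)   # price at which the market clears
    q = a - b * p           # quantity traded at that price
    return p, q

p, q = equilibrium(a=100, b=2, c=10, d=1)
# At p the quantity supplied, 10 + 1*p, equals the quantity demanded q.
```

A shift of the supply curve (a change in c or d, e.g. from a change in input costs) moves the equilibrium along the unchanged demand curve, exactly the distinction the text draws between shifts and movements.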
18.
Option (finance)
–
The strike price may be set by reference to the spot price of the underlying security or commodity on the day an option is taken out, or it may be fixed at a discount or at a premium. The seller has the corresponding obligation to fulfill the transaction, that is, to sell or buy, if the buyer exercises the option. Both calls and puts are commonly traded. When an option is exercised, the cost to the buyer of the asset acquired is the strike price plus the premium, if any. When the option expiration date passes without the option being exercised, the option expires; in any case, the premium is income to the seller, and normally a capital loss to the buyer. The owner of an option may on-sell the option to a third party in a secondary market, in either an over-the-counter transaction or on an options exchange. The market price of an American-style option normally closely follows that of the underlying stock, being the difference between the market price of the stock and the strike price of the option. The ownership of an option does not generally entitle the holder to any rights associated with the underlying asset, such as voting rights or any income from the underlying asset. Contracts similar to options have been used since ancient times. The first reputed option buyer was the ancient Greek mathematician and philosopher Thales of Miletus, who, anticipating a large olive harvest, bought the right to use a number of olive presses; when spring came and the olive harvest was larger than expected, he exercised his options. In London, puts and "refusals" (calls) first became well-known trading instruments in the 1690s during the reign of William and Mary. "Privileges" were options sold over the counter in nineteenth-century America; their exercise price was fixed at a rounded-off market price on the day or week that the option was bought, and the expiry date was generally three months after purchase. They were not traded in secondary markets. Film or theatrical producers often buy the right, but not the obligation, to dramatize a specific book or script.
Lines of credit give the potential borrower the right, but not the obligation, to borrow within a specified time period. Many choices, or embedded options, have traditionally been included in bond contracts. For example, many bonds are convertible into common stock at the buyer's option; mortgage borrowers have long had the option to repay the loan early, which corresponds to a callable bond option. Options contracts have been known for decades. The Chicago Board Options Exchange was established in 1973, which set up a regime using standardized forms and terms and trade through a guaranteed clearing house. Trading activity and academic interest have increased since then. Options are part of a larger class of financial instruments known as derivative products, or simply derivatives. A financial option is a contract between two counterparties with the terms of the option specified in a term sheet. Exchange-traded options have standardized contracts and are settled through a clearing house with fulfillment guaranteed by the Options Clearing Corporation. Since the contracts are standardized, accurate pricing models are often available. The terms of an OTC option, by contrast, are unrestricted and may be individually tailored to meet any business need.
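The exercise mechanics described above (strike, premium, expiry) reduce to simple payoff functions at expiration. The figures below are illustrative:

```python
def call_payoff(spot, strike):
    """Payoff at expiry to the holder of a call: the right to buy at the strike."""
    return max(spot - strike, 0.0)

def put_payoff(spot, strike):
    """Payoff at expiry to the holder of a put: the right to sell at the strike."""
    return max(strike - spot, 0.0)

def buyer_profit(payoff, premium):
    """Net result for the buyer: the payoff less the premium paid up front."""
    return payoff - premium

# A call struck at 50, bought for a premium of 3:
profit_exercised = buyer_profit(call_payoff(60, 50), 3)  # exercised in the money
profit_expired = buyer_profit(call_payoff(45, 50), 3)    # expires worthless
```

The second case shows the text's point that when an option expires unexercised, the premium is income to the seller and a capital loss to the buyer.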
19.
Convertible bonds
–
A convertible bond is a hybrid security with debt- and equity-like features. It originated in the mid-19th century, and was used by early speculators such as Jacob Little. Convertible bonds are most often issued by companies with a low credit rating. To compensate for having additional value through the option to convert the bond to stock, a convertible bond typically has a coupon rate lower than that of similar, non-convertible debt. The investor receives the potential upside of conversion into equity while protecting the downside with cash flow from the coupon payments; these properties lead naturally to the idea of convertible arbitrage, where a long position in the convertible bond is balanced by a short position in the underlying equity. From the issuer's perspective, the key benefit of raising money by selling convertible bonds is a reduced cash interest payment; a further advantage is that, if the bonds are converted to stock, the company's debt vanishes. In exchange for the benefit of reduced interest payments, however, existing shareholders face dilution when bondholders convert. Underwriters have been quite innovative and have provided several variations of the initial convertible structure. Vanilla convertible bonds grant the holder the right to convert into a certain number of shares determined according to a conversion price fixed in advance; they may offer regular coupon payments during the life of the security and have a maturity date at which the nominal value of the bond is redeemable by the holder. Mandatory convertibles are a common variation of the vanilla subtype, especially on the US market: a mandatory convertible forces the holder to convert into shares at maturity, hence the term "mandatory". Such securities very often bear two conversion prices, making their profiles similar to a risk reversal option strategy.
The first conversion price would limit the price at which the investor would receive the equivalent of its par value back in shares, while the second would delimit where the investor would earn more than par. Note that if the stock price is below the first conversion price, the investor would suffer a capital loss compared with its original investment. Mandatory convertibles can be compared to forward selling of equity at a premium. Reverse convertibles are a common variation, mostly issued synthetically; their negative convexity would be compensated by a usually high regular coupon payment. Packaged convertibles, or sometimes "bond + option" structures, are simply a straight bond and a call option or warrant wrapped together; usually the investor would be able to trade both legs separately. Such structures would, for instance, miss the modified-duration mitigation effect usual with plain vanilla convertible structures, and there may be more than one conversion price for non-vanilla convertible issuances. Issuance premium: the difference between the conversion price and the stock price at issuance. Conversion ratio: the number of shares each convertible bond converts into; it may be expressed per bond or on a per centum basis.
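The conversion ratio and the resulting conversion value (parity) defined above can be computed directly. The par value, conversion price and stock price below are hypothetical:

```python
def conversion_ratio(par_value, conversion_price):
    """Number of shares each bond converts into: par value / conversion price."""
    return par_value / conversion_price

def parity(par_value, conversion_price, stock_price):
    """Conversion value (parity): market value of the shares received on conversion."""
    return conversion_ratio(par_value, conversion_price) * stock_price

ratio = conversion_ratio(par_value=1000, conversion_price=25)  # shares per bond
value = parity(1000, 25, stock_price=30)                       # worth of converting now
```

Here each 1000-par bond converts into 40 shares; with the stock at 30 the conversion value of 1200 exceeds par, which is the situation in which conversion is attractive to the holder.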
20.
Brownian motion
–
Brownian motion or pedesis is the random motion of particles suspended in a fluid resulting from their collisions with the fast-moving atoms or molecules in the gas or liquid. This transport phenomenon is named after the botanist Robert Brown. The explanation of Brownian motion served as convincing evidence that atoms and molecules exist, and was further verified experimentally by Jean Perrin in 1908; Perrin was awarded the Nobel Prize in Physics in 1926 for his work on the structure of matter. Brownian motion is among the simplest of the stochastic processes, and it arises as the limit of many simpler processes, such as the random walk. This universality is closely related to the universality of the normal distribution; in both cases, it is often mathematical convenience, rather than the accuracy of the models, that motivates their use. The Roman poet Lucretius's scientific poem On the Nature of Things has a description of the Brownian motion of dust particles in verses 113–140 of Book II. He uses this as a proof of the existence of atoms: "Observe what happens when sunbeams are admitted into a building and you will see a multitude of tiny particles mingling in a multitude of ways. Their dancing is an indication of underlying movements of matter that are hidden from our sight. It originates with the atoms, which move of themselves. Then those small compound bodies that are least removed from the impetus of the atoms are set in motion by the impact of their invisible blows and in turn cannon against slightly larger bodies. So the movement mounts up from the atoms and gradually emerges to the level of our senses, so that those bodies are in motion that we see in sunbeams, moved by blows that remain invisible." Although the mingling motion of dust particles is caused largely by air currents, the glittering, tumbling motion of small dust particles is caused chiefly by true Brownian dynamics. While Jan Ingenhousz described the irregular motion of coal dust particles on the surface of alcohol in 1785, the discovery of this phenomenon is often credited to the botanist Robert Brown in 1827.
Brown was studying pollen grains of the plant Clarkia pulchella suspended in water under a microscope when he observed minute particles, ejected by the pollen grains, executing a jittery motion. By repeating the experiment with particles of inorganic matter he was able to rule out that the motion was life-related. The first person to describe the mathematics behind Brownian motion was Thorvald N. Thiele in a paper on the method of least squares published in 1880. This was followed independently by Louis Bachelier in 1900 in his PhD thesis "The theory of speculation", in which he presented a stochastic analysis of the stock and option markets. The Brownian motion model of the stock market is often cited. Albert Einstein and Marian Smoluchowski brought the solution of the problem to the attention of physicists, and their equations describing Brownian motion were subsequently verified by the experimental work of Jean Baptiste Perrin in 1908. In this way Einstein was able to determine the size of atoms, and how many atoms there are in a mole, or the molecular weight in grams, of a gas. In accordance with Avogadro's law, this volume is the same for all ideal gases: 22.414 liters at standard temperature and pressure.
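A discrete approximation of Brownian motion sums independent Gaussian increments whose variance grows linearly with the time step. This is a sketch, with the step count, time step and diffusion strength chosen arbitrarily:

```python
import random

def brownian_path(n_steps, dt, sigma=1.0, seed=0):
    """Simulate one path of Brownian motion by summing independent
    Gaussian increments, each with mean 0 and variance sigma**2 * dt."""
    rng = random.Random(seed)
    path = [0.0]
    for _ in range(n_steps):
        path.append(path[-1] + rng.gauss(0.0, sigma * dt ** 0.5))
    return path

path = brownian_path(n_steps=1000, dt=0.01)
```

As dt shrinks, the piecewise-constant path converges in distribution to the continuous Wiener process; fixing the seed makes the sample path reproducible.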
21.
Langevin equation
–
In statistical physics, a Langevin equation is a stochastic differential equation describing the time evolution of a subset of the degrees of freedom of a system. These degrees of freedom typically are collective variables changing only slowly in comparison to the other variables of the system; the fast variables are responsible for the stochastic nature of the Langevin equation. The original Langevin equation describes Brownian motion, m d²x/dt² = −λ dx/dt + η(t), where the degree of freedom of interest is the position x of the particle, m is its mass, −λ dx/dt is a viscous damping force, and η(t) is a random force with correlation function ⟨η(t) η(t′)⟩ = 2λk_BT δ(t − t′). The δ-function form of the correlations in time means that the force at a time t is assumed to be completely uncorrelated with the force at any other time. This is an approximation: the actual random force has a nonzero correlation time corresponding to the collision time of the molecules. However, the Langevin equation is used to describe the motion of a particle at a much longer time scale, and in this limit the δ-correlation and the Langevin equation become exact. Another prototypical feature of the Langevin equation is the occurrence of the damping coefficient λ in the correlation function of the random force. A strictly δ-correlated fluctuating force η isn't a function in the mathematical sense, and the Langevin equation as written requires an interpretation in this case. There is also a derivation of a generic Langevin equation from classical mechanics. This generic equation plays a central role in the theory of critical dynamics; the equation for Brownian motion above is a special case. An essential condition of the derivation is a criterion dividing the degrees of freedom into the categories slow and fast. For example, local thermodynamic equilibrium in a liquid is reached within a few collision times, but it takes much longer for densities of conserved quantities like mass and energy to relax to equilibrium. Densities of conserved quantities, and in particular their long-wavelength components, are therefore slow-variable candidates. Technically this division is realized with the Zwanzig projection operator, the essential tool in the derivation.
The derivation is not completely rigorous, because it relies on assumptions akin to assumptions required elsewhere in basic statistical mechanics. Let A = {A_i} denote the slow variables. The fluctuating force η_i obeys a Gaussian probability distribution with correlation function ⟨η_i(t) η_j(t′)⟩ = 2λ_{i,j}(A) δ(t − t′), and this implies the Onsager reciprocity relation λ_{i,j} = λ_{j,i} for the damping coefficients λ. The dependence dλ_{i,j}/dA_j of λ on A is negligible in most cases. In the Brownian motion case one would have H = p²/(2m) and A = {p} or A = {x, p}.
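The Langevin equation for a free Brownian particle can be integrated numerically with the Euler–Maruyama scheme. The sketch below (all parameter values illustrative) discretizes the velocity equation m dv/dt = −λv + η(t); the noise amplitude per step follows from the δ-correlation ⟨η(t) η(t′)⟩ = 2λk_BT δ(t − t′), which on a grid of spacing dt gives a Gaussian of variance 2λk_BT/dt:

```python
import random

def langevin_velocity(n_steps, dt, lam=1.0, kT=1.0, m=1.0, seed=0):
    """Euler-Maruyama integration of the Langevin equation for a free
    Brownian particle, m dv/dt = -lam*v + eta(t), with white noise eta
    satisfying the fluctuation-dissipation relation.  Returns the list
    of sampled velocities."""
    rng = random.Random(seed)
    noise_std = (2.0 * lam * kT / dt) ** 0.5  # discretized white-noise amplitude
    v = 0.0
    vs = []
    for _ in range(n_steps):
        eta = rng.gauss(0.0, noise_std)
        v += (-lam * v + eta) * dt / m
        vs.append(v)
    return vs

vs = langevin_velocity(n_steps=10000, dt=0.01)
var = sum(v * v for v in vs) / len(vs)  # long-run variance, close to kT/m
```

The long-run velocity variance approaching k_BT/m (equipartition) is the numerical signature that the damping coefficient and the noise strength are tied together, as the text notes.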
22.
Random walk
–
A random walk is a mathematical object, known as a stochastic or random process, that describes a path consisting of a succession of random steps on some mathematical space such as the integers. Random walks have applications to many scientific fields, including ecology, psychology, computer science, physics, and chemistry, and they explain the behaviors of many processes in these fields. As a more mathematical application, the value of π can be approximated by the use of a random walk in an agent-based modelling environment. The term random walk was first introduced by Karl Pearson in 1905. Various types of random walks are of interest, and they can differ in several ways. The time parameter can also be varied: in the simplest context the walk is in discrete time, that is, a sequence of random variables X₁, X₂, … indexed by the natural numbers. However, it is also possible to define random walks which take their steps at random times. Specific cases or limits of random walks include the Lévy flight. Random walks are a fundamental topic in discussions of Markov processes, and their mathematical study has been extensive. Several properties, including dispersal distributions, first-passage or hitting times, encounter rates, and recurrence or transience, have been introduced to quantify their behaviour. A popular random walk model is that of a random walk on a regular lattice. In a simple random walk, the location can only jump to neighboring sites of the lattice. In a simple symmetric random walk on a finite lattice, the probabilities of the location jumping to each one of its immediate neighbours are the same. The best-studied example is the random walk on the integer lattice Zᵈ. An elementary example is the random walk on the integer number line Z. This walk can be illustrated as follows: a marker is placed at zero on the number line, and a fair coin is flipped.
If it lands on heads, the marker is moved one unit to the right; if it lands on tails, the marker is moved one unit to the left. After five flips, the marker could now be on 1, −1, 3, −3, 5, or −5. With five flips, three heads and two tails, in any order, will land on 1. There are 10 ways of landing on 1, 10 ways of landing on −1, 5 ways of landing on 3, 5 ways of landing on −3, and 1 way of landing on 5. See the figure below for an illustration of the possible outcomes of 5 flips.
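The path counts quoted above can be checked by brute-force enumeration of all 2⁵ equally likely flip sequences:

```python
from itertools import product

def endpoint_counts(n_flips):
    """Count, over all 2**n_flips coin-flip sequences, how many sequences
    end the walk at each position (+1 for heads, -1 for tails)."""
    counts = {}
    for flips in product((+1, -1), repeat=n_flips):
        end = sum(flips)
        counts[end] = counts.get(end, 0) + 1
    return counts

counts = endpoint_counts(5)
# counts == {5: 1, 3: 5, 1: 10, -1: 10, -3: 5, -5: 1}
```

Each count is a binomial coefficient: landing on 1 requires exactly three heads out of five flips, and C(5, 3) = 10, matching the enumeration.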
23.
Time series
–
A time series is a series of data points indexed in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time; thus it is a sequence of discrete-time data. Examples of time series are heights of ocean tides, counts of sunspots, and the daily closing value of the Dow Jones Industrial Average. Time series are very frequently plotted via line charts. Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data; time series forecasting is the use of a model to predict future values based on previously observed values. Time series data have a natural temporal ordering. This makes time series analysis distinct from cross-sectional studies, in which there is no natural ordering of the observations. Time series analysis is also distinct from spatial data analysis, where the observations typically relate to geographical locations. A stochastic model for a time series will generally reflect the fact that observations close together in time will be more closely related than observations further apart. Methods for time series analysis may be divided into two classes: frequency-domain methods and time-domain methods. The former include spectral analysis and wavelet analysis; the latter include auto-correlation and cross-correlation analysis. In the time domain, correlation analysis can be made in a filter-like manner using scaled correlation. Additionally, time series analysis techniques may be divided into parametric and non-parametric methods. The parametric approaches assume that the underlying stationary stochastic process has a certain structure which can be described using a small number of parameters. In these approaches, the task is to estimate the parameters of the model that describes the stochastic process.
By contrast, non-parametric approaches explicitly estimate the covariance or the spectrum of the process without assuming that the process has any particular structure. Methods of time series analysis may also be divided into linear and non-linear, and univariate and multivariate. A time series is one type of panel data. Panel data is the general class, a multidimensional data set, whereas a time series data set is a one-dimensional panel. A data set may exhibit characteristics of both panel data and time series data. One way to tell is to ask what makes one data record unique from the other records. If the answer is the time data field, then this is a time series data set candidate. If determining a unique record requires a time data field and an additional identifier which is unrelated to time, then it is a panel data candidate. If the differentiation lies on the non-time identifier, then the data set is a cross-sectional data set candidate.
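Auto-correlation, one of the time-domain methods mentioned above, can be estimated directly from a sample. The trending series below is a made-up illustration of the point that observations close together in time tend to be more closely related:

```python
def autocorrelation(series, lag):
    """Sample autocorrelation of a time series at a given lag:
    lagged autocovariance divided by the variance."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[i] - mean) * (series[i + lag] - mean)
              for i in range(n - lag))
    return cov / var

trend = list(range(20))          # a strongly trending series
r1 = autocorrelation(trend, 1)   # near 1: adjacent points are highly related
```

A series of independent noise would instead give lag-1 autocorrelation near zero, which is one simple way the structure of a time series reveals itself in the time domain.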
24.
Logarithm
–
In mathematics, the logarithm is the inverse operation to exponentiation: the logarithm of a number is the exponent to which another fixed number, the base, must be raised to produce that number. In simple cases the logarithm counts factors in multiplication. For example, the base-10 logarithm of 1000 is 3, since 1000 = 10 × 10 × 10. The logarithm of x to base b, denoted logb(x), is the unique real number y such that b^y = x. For example, log2(64) = 6, as 64 = 2^6. The logarithm to base 10 is called the common logarithm and has many applications in science and engineering. The natural logarithm has the number e as its base; its use is widespread in mathematics and physics. The binary logarithm uses base 2 and is used in computer science.

Logarithms were introduced by John Napier in the early 17th century as a means to simplify calculations, and they were rapidly adopted by navigators, scientists, engineers, and others to perform computations more easily, using slide rules and logarithm tables. The present-day notion of logarithms comes from Leonhard Euler, who connected them to the exponential function in the 18th century. Logarithmic scales reduce wide-ranging quantities to tiny scopes; for example, the decibel is a unit quantifying signal power log-ratios and amplitude log-ratios, and in chemistry, pH is a logarithmic measure for the acidity of an aqueous solution. Logarithms are commonplace in scientific formulae and in measurements of the complexity of algorithms; they describe musical intervals, appear in formulas counting prime numbers, inform some models in psychophysics, and can aid in forensic accounting. In the same way as the logarithm reverses exponentiation, the complex logarithm is the inverse function of the exponential function applied to complex numbers. The discrete logarithm is another variant; it has uses in public-key cryptography. The idea of logarithms is to reverse the operation of exponentiation, that is, raising a number to a power. For example, the third power of 2 is 8, because 8 is the product of three factors of 2: 2³ = 2 × 2 × 2 = 8.
It follows that the logarithm of 8 with respect to base 2 is 3. The third power of some number b is the product of three factors equal to b. More generally, raising b to the n-th power, where n is a natural number, is done by multiplying n factors equal to b. The n-th power of b is written b^n, so that b^n = b × b × ⋯ × b (n factors). Exponentiation may be extended to b^y, where b is a positive number and the exponent y is any real number. For example, b^−1 is the reciprocal of b, that is, 1/b. The logarithm of a positive real number x with respect to base b, a positive real number not equal to 1, is the exponent by which b must be raised to yield x.
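The defining relation b^y = x and the change-of-base identity log_b(x) = ln(x)/ln(b) can be checked directly, here sketched with Python's standard math module:

```python
import math

# log_b(x) is the exponent y such that b**y == x.
assert math.isclose(math.log(64, 2), 6)    # 2**6 == 64
assert math.isclose(math.log10(1000), 3)   # 10**3 == 1000

# Change-of-base identity: log_b(x) = ln(x) / ln(b).
assert math.isclose(math.log(81) / math.log(3), 4)  # log_3(81) = 4

# The logarithm undoes exponentiation for any positive base b != 1.
b, y = 1.7, 2.5
assert math.isclose(math.log(b ** y, b), y)
```

The `isclose` comparisons hedge against the small rounding errors inherent in floating-point arithmetic, since logarithms are computed numerically rather than symbolically.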
25.
Variance
–
The variance has a central role in statistics. It is used in descriptive statistics, statistical inference, hypothesis testing, and goodness of fit. This makes it a central quantity in numerous fields such as physics, biology, chemistry, cryptography, and economics. The variance of a random variable X is the expected value of the squared deviation from the mean of X, μ = E[X]: Var(X) = E[(X − μ)²]. This definition encompasses random variables that are generated by processes that are discrete, continuous, or neither. The variance can also be thought of as the covariance of a random variable with itself, Var(X) = Cov(X, X), and it is equivalent to the second cumulant of the probability distribution that generates X. The variance is typically designated as Var(X), σ²_X, or simply σ². In computational floating point arithmetic, the expanded form E[X²] − (E[X])² should not be used, as it is numerically unstable. If a continuous distribution does not have an expected value, as is the case for the Cauchy distribution, it does not have a variance either. Many other distributions for which the expected value does exist also do not have a finite variance because the integral in the variance definition diverges. An example is a Pareto distribution whose index k satisfies 1 < k ≤ 2.

The normal distribution with parameters μ and σ is a continuous distribution whose probability density function is given by f(x) = (1/√(2πσ²)) e^(−(x − μ)²/(2σ²)). In this distribution, E[X] = μ, and the variance is related with σ via Var(X) = ∫_{−∞}^{∞} (x − μ)² (1/√(2πσ²)) e^(−(x − μ)²/(2σ²)) dx = σ². The role of the normal distribution in the central limit theorem is in part responsible for the prevalence of the variance in probability. The exponential distribution with parameter λ is a continuous distribution whose support is the semi-infinite interval [0, ∞). Its probability density function is given by f(x) = λe^(−λx), and the variance is equal to Var(X) = ∫₀^∞ (x − λ⁻¹)² λe^(−λx) dx = λ⁻². So for an exponentially distributed random variable, σ² = μ². The Poisson distribution with parameter λ is a discrete distribution for k = 0, 1, 2, ….
Its probability mass function is given by p(k) = (λ^k / k!) e^(−λ), and it has expected value μ = λ. The variance is equal to Var(X) = Σ_{k=0}^{∞} (k − λ)² (λ^k / k!) e^(−λ) = λ. So for a Poisson-distributed random variable, σ² = μ. The binomial distribution with parameters n and p is a discrete distribution for k = 0, 1, 2, …, n. Its probability mass function is given by p(k) = C(n, k) p^k (1 − p)^(n−k), and the variance is equal to Var(X) = Σ_{k=0}^{n} (k − np)² C(n, k) p^k (1 − p)^(n−k) = np(1 − p).
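The Poisson identity σ² = μ = λ above can be verified numerically. The sketch below (a hand-rolled check, not library code) sums the pmf p(k) = λ^k e^(−λ)/k! over enough terms that the truncated tail is negligible:

```python
import math

# Sketch: check numerically that a Poisson(λ) variable has Var(X) = λ,
# by summing its pmf p(k) = λ^k e^(−λ) / k! over enough terms.
lam = 4.0
pmf = []
p = math.exp(-lam)              # p(0)
for k in range(60):             # truncating the infinite sum; the tail is negligible
    pmf.append(p)
    p *= lam / (k + 1)          # recurrence: p(k+1) = p(k) · λ/(k+1)

mean = sum(k * q for k, q in enumerate(pmf))
var = sum((k - mean) ** 2 * q for k, q in enumerate(pmf))
print(round(mean, 6), round(var, 6))  # both ≈ 4.0, so σ² = μ = λ
```

The recurrence p(k+1) = p(k)·λ/(k+1) avoids computing large factorials directly, which keeps the arithmetic inside ordinary floating-point range.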
26.
Normal distribution
–
In probability theory, the normal distribution is a very common continuous probability distribution. Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known. The normal distribution is useful because of the central limit theorem. Physical quantities that are expected to be the sum of many independent processes often have distributions that are nearly normal. Moreover, many results and methods can be derived analytically in explicit form when the relevant variables are normally distributed. The normal distribution is sometimes informally called the bell curve; however, many other distributions are bell-shaped.

The probability density of the normal distribution is f(x) = (1/√(2πσ²)) e^(−(x − μ)²/(2σ²)), where μ is the mean or expectation of the distribution, σ is the standard deviation, and σ² is the variance. A random variable with a Gaussian distribution is said to be normally distributed and is called a normal deviate. The simplest case of a normal distribution is known as the standard normal distribution. The factor 1/2 in the exponent ensures that the distribution has unit variance, and this function is symmetric around x = 0, where it attains its maximum value 1/√(2π) and has inflection points at x = +1 and x = −1. Authors may differ on which normal distribution should be called the standard one; in any case, the probability density must be scaled by 1/σ so that the integral is still 1. If Z is a standard normal deviate, then X = Zσ + μ will have a normal distribution with expected value μ. Conversely, if X is a normal deviate, then Z = (X − μ)/σ will have a standard normal distribution. Every normal distribution is the exponential of a quadratic function, f(x) = e^(ax² + bx + c), where a is negative. In this form, the mean value μ is −b/(2a); for the standard normal distribution, a is −1/2, b is zero, and c is −ln(2π)/2. The standard Gaussian distribution is denoted with the Greek letter ϕ.
The alternative form of the Greek phi letter, φ, is also used quite often. The normal distribution is often denoted by N(μ, σ²); thus when a random variable X is distributed normally with mean μ and variance σ², one writes X ∼ N(μ, σ²). Some authors advocate using the precision τ as the parameter defining the width of the distribution, instead of the standard deviation σ or the variance σ².
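A minimal sketch of the standard normal density and the standardization rule described above (the helper names `phi` and `density` are illustrative, not a standard API):

```python
import math

# Standard normal density: φ(x) = e^(−x²/2) / √(2π).
def phi(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# The maximum at x = 0 equals 1/√(2π) ≈ 0.3989.
assert math.isclose(phi(0), 1 / math.sqrt(2 * math.pi))

# A general N(μ, σ²) density is φ standardized and scaled by 1/σ,
# so that the total integral remains 1.
mu, sigma = 10.0, 2.0
def density(x):
    return phi((x - mu) / sigma) / sigma

assert math.isclose(density(mu), phi(0) / sigma)  # peak shrinks by 1/σ
```

The second assertion reflects the scaling rule from the text: widening the distribution by σ lowers its peak by the same factor.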
27.
Geometric Brownian motion
–
A geometric Brownian motion (GBM) is a continuous-time stochastic process in which the logarithm of the randomly varying quantity follows a Brownian motion with drift. It is an important example of a stochastic process satisfying a stochastic differential equation (SDE), namely dS_t = μS_t dt + σS_t dW_t, where W_t is a Wiener process; in particular, the drift term μS_t dt is used to model deterministic trends, while the diffusion term σS_t dW_t is often used to model a set of unpredictable events occurring during this motion. For an arbitrary initial value S₀ the above SDE has the analytic solution S_t = S₀ exp((μ − σ²/2)t + σW_t). To arrive at this formula, we divide the SDE by S_t in order to have our chosen random variable on only one side. From there we write the equation in Itō integral form: ∫₀ᵗ dS_t/S_t = μt + σW_t. Of course, dS_t/S_t looks related to the derivative of ln S_t; however, S_t is an Itō process, which requires the use of Itō calculus. Applying Itō's formula leads to d(ln S_t) = dS_t/S_t − (1/2)(1/S_t²) dS_t dS_t, where dS_t dS_t is the quadratic variation of the SDE, also written d[S]_t or ⟨S⟩_t. In this case we have dS_t dS_t = σ²S_t² dt. Plugging this value into the above equation, taking the exponential, and multiplying both sides by S₀ gives the solution claimed above.

When deriving further properties of GBM, use can be made of the SDE of which GBM is the solution. For example, consider the stochastic process log S_t. This is an interesting process, because in the Black–Scholes model it is related to the log return of the stock price. It follows that E[log S_t] = log S₀ + (μ − σ²/2)t. This result can also be derived by applying the logarithm to the explicit solution of GBM: log S_t = log S₀ + (μ − σ²/2)t + σW_t. Taking the expectation yields the result as above, E[log S_t] = log S₀ + (μ − σ²/2)t, since E[W_t] = 0. GBM can be extended to the case where there are multiple correlated price paths. For the multivariate case, this implies that Cov(S_tⁱ, S_tʲ) = S₀ⁱ S₀ʲ e^((μ_i + μ_j)t) (e^(ρ_ij σ_i σ_j t) − 1). Geometric Brownian motion is used to model stock prices in the Black–Scholes model and is the most widely used model of stock price behavior.
Some of the arguments for using GBM to model stock prices are: the expected returns of GBM are independent of the value of the process; a GBM process only assumes positive values, just like real stock prices; a GBM process shows the same kind of roughness in its paths as we see in real stock prices; and calculations with GBM processes are relatively easy. However, in real life, stock prices often show jumps caused by unpredictable events or news, whereas in GBM the path is continuous. In an attempt to make GBM more realistic as a model for stock prices, one can relax the assumption of constant volatility: if we assume that the volatility is a deterministic function of the stock price and time, this is called a local volatility model.
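The closed-form solution S_t = S₀ exp((μ − σ²/2)t + σW_t) lends itself to a simple simulation. The sketch below (with a hypothetical helper name `gbm_path`, assuming constant drift and volatility) builds a path by applying the exact solution over small time steps, each with an independent Gaussian Brownian increment:

```python
import math
import random

# Sketch: simulate one GBM path via S_t = S0·exp((μ − σ²/2)t + σW_t),
# stepping with independent Brownian increments dW ~ N(0, dt).
def gbm_path(s0, mu, sigma, t, steps, rng):
    dt = t / steps
    s = s0
    path = [s]
    for _ in range(steps):
        dw = rng.gauss(0.0, math.sqrt(dt))       # Brownian increment
        s *= math.exp((mu - 0.5 * sigma**2) * dt + sigma * dw)
        path.append(s)
    return path

rng = random.Random(0)                            # fixed seed for repeatability
path = gbm_path(100.0, 0.05, 0.2, 1.0, 252, rng)  # one year of daily steps
print(len(path), all(s > 0 for s in path))        # 253 True: GBM stays positive
```

Because each step multiplies by an exponential, the simulated path can never become negative, illustrating one of the arguments above for using GBM as a stock-price model.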
28.
Nobel Memorial Prize in Economic Sciences
–
The prize was established in 1968 by a donation from Sweden's central bank, the Swedish National Bank, on the bank's 300th anniversary. Although it is not one of the prizes that Alfred Nobel established in his will in 1895, laureates are announced with the other Nobel Prize laureates, and receive the award at the same ceremony. Laureates in the Memorial Prize in Economics are selected by the Royal Swedish Academy of Sciences. It was first awarded in 1969 to the Dutch and Norwegian economists Jan Tinbergen and Ragnar Frisch, "for having developed and applied dynamic models for the analysis of economic processes". An endowment in perpetuity from Sveriges Riksbank pays the Nobel Foundation's administrative expenses associated with the prize. Since 2012, the monetary portion of the Prize in Economics has totalled 8 million Swedish kronor. This is equivalent to the amount given for the original Nobel Prizes. The Prize in Economics is not one of the original Nobel Prizes created by Alfred Nobel's will, which directed that his prizes go to those who "shall have conferred the greatest benefit on mankind". However, the process, selection criteria, and awards presentation of the Prize in Economic Sciences are performed in a manner similar to that of the Nobel Prizes, and laureates are announced with the Nobel Prize laureates and receive the award at the same ceremony. According to its website, the Royal Swedish Academy of Sciences administers a researcher exchange with academies in other countries and publishes six scientific journals. Members of the Academy and former laureates are also authorised to nominate candidates; all proposals and their supporting evidence must be received before February 1. The proposals are reviewed by the Prize Committee and specially appointed experts, and before the end of September the committee chooses potential laureates. If there is a tie, the chairman of the committee casts the deciding vote. Next, the potential laureates must be approved by the Royal Swedish Academy of Sciences.
Members of the Ninth Class of the Academy vote in mid-October to determine the next laureate or laureates of the Prize in Economics. The first prize in economics was awarded in 1969 to Ragnar Frisch and Jan Tinbergen; in 2009, Elinor Ostrom became the first woman awarded the prize. The prize's scope makes it available to researchers in such topics as political science and psychology; moreover, the composition of the Economics Prize Committee changed to include two non-economists. This has not been confirmed by the Economics Prize Committee, and the members of the 2007 Economics Prize Committee were still dominated by economists, as the secretary and four of the five members were professors of economics. Some critics argue that the prestige of the Prize in Economics derives in part from its association with the Nobel Prizes; among them is the Swedish human rights lawyer Peter Nobel, a great-grandson of Ludvig Nobel. Nobel criticizes the awarding institution for misusing his family's name, explaining that Alfred Nobel despised people who cared more about profits than society's well-being, and that this does not matter in the natural sciences.
29.
Stochastic process
–
In probability theory and related fields, a stochastic or random process is a mathematical object usually defined as a collection of random variables. Stochastic processes are used as mathematical models of systems and phenomena that appear to vary in a random manner. Furthermore, seemingly random changes in financial markets have motivated the use of stochastic processes in finance. Applications and the study of phenomena have in turn inspired the proposal of new stochastic processes. Examples of such processes include the Wiener process or Brownian motion process, used by Louis Bachelier to study price changes on the Paris Bourse, and the Poisson process, used by A. K. Erlang to study the number of phone calls occurring in a period of time. The term random function is also used to refer to a stochastic or random process. The terms stochastic process and random process are used interchangeably, often with no specific mathematical space for the set that indexes the random variables; but often these two terms are used when the random variables are indexed by the integers or an interval of the real line. If the random variables are indexed by the Cartesian plane or some higher-dimensional Euclidean space, the collection is usually called a random field instead. The values of a stochastic process are not always numbers and can be vectors or other mathematical objects.

The theory of stochastic processes is considered to be an important contribution to mathematics. The set used to index the random variables is called the index set. Historically, the index set was some subset of the real line, such as the natural numbers, giving the index set the interpretation of time. Each random variable in the collection takes values from the same space, known as the state space. This state space can be, for example, the integers or the real line. An increment is the amount that a stochastic process changes between two index values, often interpreted as two points in time. A stochastic process can have many outcomes, due to its randomness, and a single outcome of a stochastic process is called, among other names, a sample function or realization.
A stochastic process can be classified in different ways, for example, by its state space or its index set. One common way of classification is by the cardinality of the index set: if the index set is a countable set such as the integers, then time is said to be discrete; if the index set is some interval of the real line, then time is said to be continuous. The two types of stochastic processes are respectively referred to as discrete-time and continuous-time stochastic processes.
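A minimal sketch of a discrete-time stochastic process is the symmetric random walk, whose index set is the natural numbers and whose state space is the integers; each increment is an independent ±1 step (the helper name `random_walk` is illustrative):

```python
import random

# Sketch: a symmetric random walk, a simple discrete-time stochastic process.
# Index set: step numbers 0, 1, 2, …; state space: the integers.
def random_walk(steps, rng):
    position = 0
    walk = [position]
    for _ in range(steps):
        position += rng.choice([-1, 1])  # each increment is +1 or -1
        walk.append(position)
    return walk

walk = random_walk(10, random.Random(42))  # one sample function (realization)
print(len(walk))  # 11: the initial state plus one state per step
```

Re-running with a different seed produces a different realization, which is exactly the sense in which a stochastic process has many possible outcomes.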
30.
Expected value
–
In probability theory, the expected value of a random variable is, intuitively, the long-run average value of repetitions of the experiment it represents. For example, the expected value in rolling a six-sided die is 3.5. Less roughly, the law of large numbers states that the arithmetic mean of the values almost surely converges to the expected value as the number of repetitions approaches infinity. The expected value is also known as the expectation, mathematical expectation, EV, average, mean value, or mean. More practically, the expected value of a discrete random variable is the probability-weighted average of all possible values. In other words, each value the random variable can assume is multiplied by its probability of occurring, and the resulting products are summed. The same principle applies to a continuous random variable, except that an integral of the variable with respect to its probability density replaces the sum. The expected value does not exist for random variables having some distributions with large tails; for random variables such as these, the long tails of the distribution prevent the sum or integral from converging. The expected value is a key aspect of how one characterizes a probability distribution; by contrast, the variance is a measure of dispersion of the possible values of the random variable around the expected value. The variance itself is defined in terms of two expectations: it is the expected value of the squared deviation of the variable's value from the variable's expected value. The expected value plays important roles in a variety of contexts. In regression analysis, one desires a formula in terms of observed data that will give a good estimate of the parameter giving the effect of some explanatory variable upon a dependent variable. The formula will give different estimates using different samples of data. A formula is typically considered good in this context if it is an unbiased estimator, that is, if the expected value of the estimate can be shown to equal the true value of the desired parameter.
In decision theory, and in particular in choice under uncertainty, one example of using expected value in reaching optimal decisions is the Gordon–Loeb model of information security investment. According to the model, one can conclude that the amount a firm spends to protect information should generally be only a fraction of the expected loss. Suppose random variable X can take value x1 with probability p1, value x2 with probability p2, and so on, up to value xk with probability pk. Then the expectation of this random variable X is defined as E[X] = x1 p1 + x2 p2 + ⋯ + xk pk. If all outcomes xi are equally likely, then the weighted average turns into the simple average. This is intuitive: the expected value of a random variable is the average of all values it can take; thus the expected value is what one expects to happen on average. If the outcomes xi are not equally probable, then the simple average must be replaced with the weighted average; the intuition, however, remains the same: the expected value of X is what one expects to happen on average. Let X represent the outcome of a roll of a fair six-sided die; more specifically, X will be the number of pips showing on the top face of the die after the toss.
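The die example above reduces to a direct application of E[X] = x1·p1 + ⋯ + xk·pk, sketched here with exact rational arithmetic so no rounding intervenes:

```python
from fractions import Fraction

# E[X] = Σ x_i·p_i for a fair six-sided die: each face has probability 1/6.
faces = [1, 2, 3, 4, 5, 6]
expected_value = sum(Fraction(x, 6) for x in faces)
print(float(expected_value))  # 3.5, the long-run average of many rolls
```

Since all six outcomes are equally likely, the weighted average collapses to the simple average (1 + 2 + ⋯ + 6)/6 = 21/6 = 3.5, matching the value quoted earlier.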
31.
Blackboard bold
–
Blackboard bold is a typeface style that is often used for certain symbols in mathematical texts, in which certain lines of the symbol are doubled. The symbols usually denote number sets. One way of producing blackboard bold is to double-strike a character with a small offset on a typewriter; thus the symbols are also referred to as double struck. The style is thought to have originated from attempts to render bold letters on a blackboard, i.e. by using the edge rather than the point of the chalk. It then made its way back into print form as a style distinct from ordinary bold, possibly starting with the original 1965 edition of Gunning. Some mathematicians do not recognize blackboard bold as a style distinct from bold: Jean-Pierre Serre uses double-struck letters only when writing bold on the blackboard, and Donald Knuth also prefers boldface to blackboard bold, and consequently did not include blackboard bold in the Computer Modern fonts he created for the TeX mathematical typesetting system. The Chicago Manual of Style in 1993 advised that blackboard bold should be confined to the classroom, whereas in 2003 it stated that open-faced symbols are reserved for systems of numbers. In Unicode, a few of the common blackboard bold characters are encoded in the Basic Multilingual Plane (BMP) in the Letterlike Symbols area. The rest, however, are encoded outside the BMP, from U+1D538 to U+1D550 and U+1D552 to U+1D56B; being outside the BMP, these are relatively new and not widely supported. The following table shows all available Unicode blackboard bold characters; the symbols are nearly universal in their interpretation, unlike their normally-typeset counterparts, which are used for many different purposes. The first column shows the letter as typically rendered by the ubiquitous LaTeX markup system, the second column shows the Unicode codepoint, the third column shows the symbol itself, and the fourth column describes known typical usage in mathematical texts. In addition, a blackboard-bold Greek letter mu is used by number theorists. See also: Mathematical alphanumeric symbols; Set notation. Weisstein, Eric W.
"Doublestruck." http://www.w3.org/TR/MathML2/double-struck.html shows blackboard bold symbols together with their Unicode encodings; encodings in the Basic Multilingual Plane are highlighted in yellow.
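The split between BMP and non-BMP encodings described above can be inspected directly from any Unicode-aware language; a small Python sketch:

```python
import unicodedata

# Most blackboard bold letters are encoded outside the Basic Multilingual
# Plane, starting at U+1D538; a few common ones (e.g. ℝ) live inside the
# BMP in the Letterlike Symbols block.
double_struck_A = chr(0x1D538)
print(double_struck_A, unicodedata.name(double_struck_A))
# → 𝔸 MATHEMATICAL DOUBLE-STRUCK CAPITAL A

print(hex(ord("ℝ")))  # 0x211d, inside the BMP (Letterlike Symbols)
```

Codepoints at U+1D538 and above require surrogate pairs in UTF-16, which is why older fonts and tools often fail to render them.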
32.
Charles Dow
–
Charles Henry Dow was an American journalist who co-founded Dow Jones & Company with Edward Jones and Charles Bergstresser. Dow also founded The Wall Street Journal, which has become one of the most respected financial publications in the world. He also invented the Dow Jones Industrial Average as part of his research into market movements, and he developed a series of principles for understanding and analyzing market behavior which later became known as Dow theory, the groundwork for technical analysis. Charles Henry Dow was born in Sterling, Connecticut, on November 6, 1851. When he was six years old his father, who was a farmer, died. The family lived in the hills of eastern Connecticut, not far from Rhode Island. Dow did not have much education or training, but he managed to find work at the age of 21 with the Springfield Daily Republican, in Massachusetts, where he worked from 1872 until 1875 as a city reporter for Samuel Bowles. Dow then moved on to Rhode Island, joining the Providence Star, where he worked for two years as a night editor; he also reported for the Providence Evening Press. In 1877, Dow joined the staff of the prominent Providence Journal. George W. Danielson, the editor there, had not wanted to hire the 26-year-old, but upon learning that Dow had worked for Bowles for three years, Danielson reconsidered and gave Dow a job writing business stories. Dow specialized in articles on history, some of which were later published in pamphlet form, and he made history come alive in his writing by explaining the development of various industries. In 1877, he published a History of Steam Navigation between New York and Providence. Three years later, he published Newport, The City by the Sea, an account of Newport, Rhode Island's settlement, rise, decline, and rebirth as a summer vacation spot and the location of a naval academy, training station, and war college.
Dow reported on Newport real estate investments, recording the money earned, and he also wrote histories of public education and the prison system in the state. Danielson was so impressed with Dow's careful research that he assigned him to accompany a group of bankers and reporters to Leadville, Colorado; the bankers wanted the publicity in order to gain investors in the mines. In 1879, Dow traveled with various tycoons, geologists, and lawmakers, and he learned a great deal about the world of money on that journey as the men smoked cigars, played cards, and swapped stories. He interviewed many highly successful financiers and heard what sort of information the investors on Wall Street needed to make money. The businessmen seemed to like and trust Dow, knowing that he would quote them accurately and keep a confidence. Dow wrote nine "Leadville Letters" based on his experiences there, describing the Rocky Mountains, the mining companies, and the boomtown's gambling, saloons, and dance halls. He also wrote of raw capitalism and the information that drove investments, and he described the disappearance of the individual mine-owners and the financiers who underwrote shares in large mining consortiums. In his last letter, Dow warned, "Mining securities are not the thing for widows and orphans or country clergymen." In 1880, Dow left Providence for New York City, realizing that the ideal location for business and financial reporting was there.