The Cold War was a period of geopolitical tension between the Soviet Union and its satellite states and the United States and its allies after World War II. A common historiography of the conflict begins between 1946, the year U.S. diplomat George F. Kennan's "Long Telegram" from Moscow cemented a U.S. foreign policy of containment of Soviet expansionism threatening strategically vital regions, and the Truman Doctrine of 1947, and ends between the Revolutions of 1989, which ended communism in Eastern Europe, and the 1991 collapse of the USSR, when the nations of the Soviet Union abolished communism and restored their independence. The term "cold" is used because there was no large-scale fighting directly between the two sides, but each supported major regional conflicts known as proxy wars. The conflict split the temporary wartime alliance against Nazi Germany and its allies, leaving the USSR and the US as two superpowers with profound economic and political differences. The capitalist West was led by the United States, a federal republic with a two-party presidential system, together with the other First World nations of the Western Bloc, which were generally liberal democracies with a free press and independent organizations but were economically and politically entwined with a network of banana republics and other authoritarian regimes, most of them the Western Bloc's former colonies.
Some major Cold War frontlines, such as Indochina and the Congo, were still Western colonies in 1947. The Soviet Union, on the other hand, was a self-proclaimed Marxist–Leninist state led by its Communist Party, which in turn was dominated by a totalitarian leader with different titles over time and a small committee called the Politburo. The Party controlled the state, the press, the military, the economy, and many organizations throughout the Second World, including the Warsaw Pact and other satellites, and it funded communist parties around the world, sometimes in competition with communist China following the Sino-Soviet split of the 1960s. The two sides competed for dominance in the less-developed regions known as the Third World. In time, a neutral bloc arose in these regions with the Non-Aligned Movement, which sought good relations with both sides. Notwithstanding isolated incidents of air-to-air dogfights and shoot-downs, the two superpowers never engaged directly in full-scale armed combat. However, both were armed in preparation for a possible all-out nuclear world war.
Each side had a nuclear strategy that discouraged an attack by the other side, on the basis that such an attack would lead to the total destruction of the attacker: the doctrine of mutually assured destruction. Aside from the development of the two sides' nuclear arsenals and their deployment of conventional military forces, the struggle for dominance was expressed via proxy wars around the globe, psychological warfare, massive propaganda campaigns and espionage, far-reaching embargoes, rivalry at sports events, and technological competitions such as the Space Race. The first phase of the Cold War began in the first two years after the end of the Second World War in 1945. The USSR consolidated its control over the states of the Eastern Bloc, while the United States began a strategy of global containment to challenge Soviet power, extending military and financial aid to the countries of Western Europe and creating the NATO alliance. The Berlin Blockade was the first major crisis of the Cold War. With the victory of the Communist side in the Chinese Civil War and the outbreak of the Korean War, the conflict expanded.
The USSR and the US competed for influence in Latin America and the decolonizing states of Africa and Asia. The Soviets suppressed the Hungarian Revolution of 1956. The expansion and escalation sparked more crises, such as the Suez Crisis, the Berlin Crisis of 1961, and the Cuban Missile Crisis of 1962, the closest the two sides came to nuclear war. Meanwhile, an international peace movement took root and grew among citizens around the world, first in Japan from 1954, when people became concerned about nuclear weapons testing, but soon in Europe and the US. The peace movement, in particular the anti-nuclear movement, gained pace and popularity from the late 1950s and early 1960s, and continued to grow through the 1970s and 1980s with large protest marches and various non-parliamentary activism opposing war and calling for global nuclear disarmament. Following the Cuban Missile Crisis, a new phase began that saw the Sino-Soviet split complicate relations within the Communist sphere, while US allies, particularly France, demonstrated greater independence of action.
The USSR crushed the 1968 Prague Spring liberalization program in Czechoslovakia, while the US experienced internal turmoil from the civil rights movement and opposition to the Vietnam War, which ended with the defeat of the US-backed Republic of Vietnam, prompting further adjustments. By the 1970s, both sides had become interested in making allowances in order to create a more stable and predictable international system, ushering in a period of détente that saw the Strategic Arms Limitation Talks and the US opening relations with the People's Republic of China as a strategic counterweight to the Soviet Union. Détente collapsed at the end of the decade with the beginning of the Soviet–Afghan War in 1979. The early 1980s were another period of elevated tension, with the Soviet downing of KAL Flight 007 and the "Able Archer" NATO military exercises, both in 1983. The United States increased diplomatic and economic pressures on the Soviet Union at a time when the communist state was suffering from economic stagnation.
A personal computer is a multi-purpose computer whose size and price make it feasible for individual use. Personal computers are intended to be operated directly by an end user, rather than by a computer expert or technician. Unlike large, costly minicomputers and mainframes, personal computers are not time-shared by many people at the same time. Institutional or corporate computer owners in the 1960s had to write their own programs to do any useful work with the machines. While personal computer users may develop their own applications, these systems usually run commercial software, free-of-charge software, or free and open-source software, provided in ready-to-run form. Software for personal computers is typically developed and distributed independently from the hardware or operating system manufacturers. Many personal computer users no longer need to write their own programs to make any use of a personal computer, although end-user programming is still feasible. This contrasts with mobile systems, where software is often only available through a manufacturer-supported channel, and end-user program development may be discouraged by lack of support from the manufacturer.
Since the early 1990s, Microsoft operating systems and Intel hardware have dominated much of the personal computer market, first with MS-DOS and then with Microsoft Windows. Alternatives to Microsoft's Windows operating systems occupy a minority share of the industry; these include free and open-source Unix-like operating systems such as Linux. Advanced Micro Devices provides the main alternative to Intel's processors. The advent of personal computers and the concurrent Digital Revolution have affected the lives of people in all countries. "PC" is an initialism for "personal computer". The IBM Personal Computer incorporated the designation in its model name, and it is sometimes useful to distinguish personal computers of the "IBM Personal Computer" family from personal computers made by other manufacturers. For example, "PC" is used in contrast with "Mac", an Apple Macintosh computer. Since none of Apple's machines were mainframes or time-sharing systems, they were all "personal computers" but not "PC" computers.
The "brain" may one day come down to our level and help with our income-tax and book-keeping calculations. But this is speculation and there is no sign of it so far. In the history of computing, early experimental machines could be operated by a single attendant. For example, ENIAC which became operational in 1946 could be run by a single, albeit trained, person; this mode pre-dated the batch programming, or time-sharing modes with multiple users connected through terminals to mainframe computers. Computers intended for laboratory, instrumentation, or engineering purposes were built, could be operated by one person in an interactive fashion. Examples include such systems as the Bendix G15 and LGP-30of 1956, the Programma 101 introduced in 1964, the Soviet MIR series of computers developed from 1965 to 1969. By the early 1970s, people in academic or research institutions had the opportunity for single-person use of a computer system in interactive mode for extended durations, although these systems would still have been too expensive to be owned by a single person.
In what was to be called the Mother of All Demos, SRI researcher Douglas Engelbart in 1968 gave a preview of what would become the staples of daily working life in the 21st century: e-mail, word processing, video conferencing, and the mouse. The demonstration required technical support staff and a mainframe time-sharing computer that were far too costly for individual business use at the time. The development of the microprocessor, with widespread commercial availability starting in the mid-1970s, made computers cheap enough for small businesses and individuals to own. Early personal computers, generally called microcomputers, were often sold in kit form and in limited volumes, and were of interest mainly to hobbyists and technicians. Minimal programming was done with toggle switches to enter instructions, and output was provided by front panel lamps. Practical use required adding peripherals such as keyboards, computer displays, disk drives, and printers. The Micral N was the earliest commercial, non-kit microcomputer based on a microprocessor, the Intel 8008.
It was built starting in 1972, and a few hundred units were sold. It had been preceded by the Datapoint 2200 in 1970, for which the Intel 8008 had been commissioned, though not accepted for use; the CPU design implemented in the Datapoint 2200 became the basis for the x86 architecture used in the original IBM PC and its descendants. In 1973, the IBM Los Gatos Scientific Center developed a portable computer prototype called SCAMP based on the IBM PALM processor, with a Philips compact cassette drive, a small CRT, and a full-function keyboard. SCAMP emulated an IBM 1130 minicomputer in order to run APL/1130. In 1973, APL was generally available only on mainframe computers, and most desktop-sized microcomputers such as the Wang 2200 or HP 9800 offered only BASIC. Because SCAMP was the first to emulate APL/1130 performance on a portable, single-user computer, PC Magazine in 1983 designated SCAMP a "revolutionary concept" and "the world's first personal computer". This seminal, single-user portable computer now resides in the Smithsonian Institution in Washington, D.C.
Successful demonstrations of the 1973 SCAMP prototype led to the IBM 5100 portable microcomputer, launched in 1975 with the ability to be programmed in both APL and BASIC for engineers, analysts, and other business problem-solvers. In the late 1960s such a machine would have been nearly as large as two desks and would have weighed about half a ton.
Algorithmic trading is a method of executing a large order using automated pre-programmed trading instructions that account for variables such as time and volume, sending small slices of the order out to the market over time. These algorithms were developed so that traders do not need to watch a stock and send those slices out manually. Popular "algos" include Percentage of Volume, Pegged, VWAP, TWAP, Implementation Shortfall, and Target Close. In the twenty-first century, algorithmic trading has been gaining traction with both retail and institutional traders. Algorithmic trading in this sense is not an attempt to make a trading profit; it is a way to minimize the cost, market impact, and risk in the execution of an order. It is used by investment banks, pension funds, mutual funds, and hedge funds, because these institutional traders need to execute large orders in markets that cannot support all of the size at once. The term is also used to mean automated trading systems, which do have the goal of making a profit. Known as black box trading, these encompass trading strategies that rely on complex mathematical formulas and high-speed computer programs.
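To make the order-slicing idea concrete, the following is a minimal, hypothetical sketch of a TWAP-style slicer in Python: it splits a parent order into equal child orders spread over a time horizon. The function names, quantities, and the print-based "routing" are placeholders for illustration, not any real broker or exchange API.

```python
# Minimal TWAP (time-weighted average price) slicer sketch.
# All names and quantities are hypothetical; real systems route to venues,
# handle fills, and adapt to market conditions.

def twap_slices(total_qty: int, num_slices: int) -> list[int]:
    """Split a parent order into equal child orders, distributing any remainder."""
    base, rem = divmod(total_qty, num_slices)
    return [base + (1 if i < rem else 0) for i in range(num_slices)]

def run_twap(symbol: str, total_qty: int, horizon_minutes: int, interval_minutes: int = 5):
    """Send one child order per interval until the parent order is worked off."""
    slices = twap_slices(total_qty, horizon_minutes // interval_minutes)
    for i, qty in enumerate(slices):
        # A real implementation would wait `interval_minutes` here and submit the order.
        print(f"t+{i * interval_minutes:3d} min: send child order {symbol} x {qty}")

run_twap("XYZ", total_qty=100_000, horizon_minutes=60)
```

A Percentage of Volume algorithm would follow the same skeleton but size each child order relative to observed market volume instead of the clock.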
Such systems run strategies including market making, inter-market spreading, arbitrage, or pure speculation such as trend following. Many fall into the category of high-frequency trading (HFT), which is characterized by high turnover and high order-to-trade ratios. As a result, in February 2012 the Commodity Futures Trading Commission (CFTC) formed a special working group that included academics and industry experts to advise the CFTC on how best to define HFT. HFT strategies utilize computers that make elaborate decisions to initiate orders based on information that is received electronically, before human traders are capable of processing the information they observe. Algorithmic trading and HFT have resulted in a dramatic change of the market microstructure and of the way liquidity is provided. Profitability projections by the TABB Group, a financial services industry research firm, put US equities HFT industry profits at US$1.3 billion before expenses for 2014, down from the maximum of US$21 billion that the 300 securities firms and hedge funds specializing in this type of trading took in profits in 2008, figures the authors had called "relatively small" and "surprisingly modest" when compared to the market's overall trading volume.
In March 2014, Virtu Financial, a high-frequency trading firm, reported that over five years the firm as a whole was profitable on 1,277 out of 1,278 trading days, losing money on just one day, empirically demonstrating the law-of-large-numbers benefit of trading thousands to millions of tiny, low-risk and low-edge trades every trading day. A third of all European Union and United States stock trades in 2006 were driven by automatic programs, or algorithms. As of 2009, studies suggested HFT firms accounted for 60–73% of all US equity trading volume, with that number falling to 50% in 2012. In 2006, at the London Stock Exchange, over 40% of all orders were entered by algorithmic traders, with 60% predicted for 2007. American and European markets have a higher proportion of algorithmic trades than other markets, and estimates for 2008 range as high as an 80% proportion in some markets. Foreign exchange markets also have active algorithmic trading. Futures markets are considered easy to integrate into algorithmic trading, with about 20% of options volume expected to be computer-generated by 2010.
Bond markets are moving toward more access for algorithmic traders. Algorithmic trading and HFT have been the subject of much public debate since the U.S. Securities and Exchange Commission and the Commodity Futures Trading Commission said in reports that an algorithmic trade entered by a mutual fund company triggered a wave of selling that led to the 2010 Flash Crash. The same reports found that HFT strategies may have contributed to subsequent volatility by pulling liquidity from the market. As a result of these events, the Dow Jones Industrial Average suffered its second largest intraday point swing to that date, though prices recovered. A July 2011 report by the International Organization of Securities Commissions, an international body of securities regulators, concluded that while "algorithms and HFT technology have been used by market participants to manage their trading and risk, their usage was clearly a contributing factor in the flash crash event of May 6, 2010." However, other researchers have reached a different conclusion.
One 2010 study found that HFT did not alter trading inventory during the Flash Crash. Some algorithmic trading ahead of index fund rebalancing transfers profits from investors. Computerization of the order flow in financial markets began in the early 1970s, with some landmarks being the introduction of the New York Stock Exchange's "designated order turnaround" system, which routed orders electronically to the proper trading post, which executed them manually. The "opening automated reporting system" aided the specialist in determining the market clearing opening price. Program trading is defined by the New York Stock Exchange as an order to buy or sell 15 or more stocks valued at over US$1 million in total; in practice, this means that such trades are entered with the aid of a computer. In the 1980s, program trading became widely used in trading between the S&P 500 equity and futures markets. In stock index arbitrage a trader buys a stock index futures contract such as the S&P 500 futures and sells a portfolio of the underlying stocks.
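As an illustration of the index-arbitrage logic just described, here is a small, hypothetical Python sketch that compares a futures price with its cost-of-carry fair value and reports which leg to buy and which to sell when the mispricing exceeds an assumed transaction-cost threshold. The numbers, the continuous-compounding fair-value formula choice, and the `cost_bps` parameter are assumptions made for illustration only.

```python
import math

def futures_fair_value(spot: float, r: float, div_yield: float, t_years: float) -> float:
    """Cost-of-carry fair value of an index future: S * exp((r - q) * T)."""
    return spot * math.exp((r - div_yield) * t_years)

def arbitrage_signal(spot: float, futures_price: float, r: float, q: float,
                     t_years: float, cost_bps: float = 5.0) -> str:
    """Compare the quoted future to fair value; trade only if mispricing beats costs."""
    fair = futures_fair_value(spot, r, q, t_years)
    mispricing_bps = (futures_price - fair) / fair * 1e4
    if mispricing_bps > cost_bps:
        return "futures rich: sell futures, buy stock basket"
    if mispricing_bps < -cost_bps:
        return "futures cheap: buy futures, sell stock basket"
    return "no trade: mispricing within transaction costs"

# Illustrative numbers only: index at 5000, future at 5020, 5% rates, 1.5% dividend yield.
print(arbitrage_signal(spot=5000.0, futures_price=5020.0, r=0.05, q=0.015, t_years=0.25))
```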
Numerical analysis is the study of algorithms that use numerical approximation for the problems of mathematical analysis. Numerical analysis finds application in all fields of engineering and the physical sciences, and in the 21st century the life sciences, social sciences, medicine, and the arts have also adopted elements of scientific computation. As an aspect of mathematics and computer science that generates and implements algorithms, the field has grown with the revolution in computing power: realistic mathematical models are now routinely used in science and engineering, and increasingly complex numerical analysis is required to provide solutions to these more involved models of the world. Ordinary differential equations appear, for example, in celestial mechanics. Before the advent of modern computers, numerical methods depended on hand interpolation using large printed tables. Since the mid-20th century, computers calculate the required functions instead; the same interpolation formulas nevertheless continue to be used as part of the software algorithms for solving differential equations.
One of the earliest mathematical writings is a Babylonian tablet from the Yale Babylonian Collection, which gives a sexagesimal numerical approximation of the square root of 2, the length of the diagonal in a unit square. Being able to compute the sides of a triangle is important, for instance, in astronomy and construction. Numerical analysis continues this long tradition of practical mathematical calculations. Much like the Babylonian approximation of the square root of 2, modern numerical analysis does not seek exact answers, because exact answers are often impossible to obtain in practice. Instead, much of numerical analysis is concerned with obtaining approximate solutions while maintaining reasonable bounds on errors. The overall goal of the field is the design and analysis of techniques to give approximate but accurate solutions to hard problems, the variety of which is suggested by the following examples. Advanced numerical methods are essential in making numerical weather prediction feasible.
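As a concrete miniature of "approximate answers with error bounds", the following Python sketch uses Heron's (Babylonian) method to approximate the square root of 2 and compares it with the tablet's sexagesimal value. The tolerance and starting guess are arbitrary choices made for illustration.

```python
def babylonian_sqrt(a: float, tol: float = 1e-12) -> float:
    """Heron's (Babylonian) method: repeatedly average x and a/x.

    After the first step, sqrt(a) lies between a/x and x, so stopping when that
    bracket is narrower than `tol` bounds the error of the returned estimate."""
    x = a  # any positive starting guess works
    while abs(x - a / x) > tol:
        x = 0.5 * (x + a / x)
    return x

approx = babylonian_sqrt(2.0)
# The tablet's sexagesimal value 1;24,51,10 converted to decimal for comparison.
tablet = 1 + 24 / 60 + 51 / 60**2 + 10 / 60**3
print(approx, tablet, abs(approx - tablet))
```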
Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of ordinary differential equations. Car companies can improve the crash safety of their vehicles by using computer simulations of car crashes; such simulations essentially consist of solving partial differential equations numerically. Hedge funds use tools from all fields of numerical analysis to attempt to calculate the value of stocks and derivatives more precisely than other market participants. Airlines use sophisticated optimization algorithms to decide ticket prices, aircraft and crew assignments, and fuel needs; such algorithms were developed within the overlapping field of operations research. Insurance companies use numerical programs for actuarial analysis. The rest of this section outlines several important themes of numerical analysis. The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago. Many great mathematicians of the past were preoccupied by numerical analysis, as is obvious from the names of important algorithms like Newton's method, Lagrange interpolation polynomials, Gaussian elimination, or Euler's method.
To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients. Using these tables, often calculated out to 16 decimal places or more for some functions, one could look up values to plug into the formulas and obtain good numerical estimates of some functions. The canonical work in the field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus-page book of a very large number of commonly used formulas and functions and their values at many points. The function values are no longer very useful when a computer is available, but the large listing of formulas can still be handy. The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done.
Direct methods compute the solution to a problem in a finite number of steps. These methods would give the precise answer if they were performed in infinite-precision arithmetic. Examples include Gaussian elimination, the QR factorization method for solving systems of linear equations, and the simplex method of linear programming. In practice, finite precision is used and the result is an approximation of the true solution. In contrast to direct methods, iterative methods are not expected to terminate in a finite number of steps. Starting from an initial guess, iterative methods form successive approximations that converge to the exact solution only in the limit. A convergence test, often involving the residual, is specified in order to decide when a sufficiently accurate solution has been found. Even using infinite-precision arithmetic these methods would not, in general, reach the solution within a finite number of steps. Examples include Newton's method, the bisection method, and Jacobi iteration. In computational matrix algebra, iterative methods are generally needed for large problems.
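To illustrate the contrast, here is a minimal Python/NumPy sketch of one of the iterative methods named above, Jacobi iteration, with a residual-based convergence test, checked against a direct solve. The small test system is an invented, strictly diagonally dominant example for which Jacobi iteration is known to converge.

```python
import numpy as np

def jacobi(A: np.ndarray, b: np.ndarray, tol: float = 1e-10, max_iter: int = 10_000):
    """Jacobi iteration for Ax = b; stops when the residual norm ||b - Ax|| < tol."""
    x = np.zeros_like(b, dtype=float)
    D = np.diag(A)            # diagonal entries of A
    R = A - np.diagflat(D)    # off-diagonal part of A
    for k in range(max_iter):
        x = (b - R @ x) / D                     # one Jacobi sweep
        residual = np.linalg.norm(b - A @ x)    # convergence test on the residual
        if residual < tol:
            return x, k + 1
    return x, max_iter

# Invented strictly diagonally dominant system (guarantees Jacobi convergence).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])

x_iter, steps = jacobi(A, b)
x_direct = np.linalg.solve(A, b)   # a direct method (LU factorization) for comparison
print(steps, np.allclose(x_iter, x_direct))
```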
A time series is a series of data points indexed in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time; thus it is a sequence of discrete-time data. Examples of time series are heights of ocean tides, counts of sunspots, and the daily closing value of the Dow Jones Industrial Average. Time series are frequently plotted via line charts. Time series are used in statistics, signal processing, pattern recognition, mathematical finance, weather forecasting, earthquake prediction, electroencephalography, control engineering, communications engineering, and in any domain of applied science and engineering which involves temporal measurements. Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values. While regression analysis is often employed in such a way as to test theories that the current values of one or more independent time series affect the current value of another time series, this type of analysis is not usually called "time series analysis", which focuses on comparing values of a single time series or multiple dependent time series at different points in time.
Interrupted time series analysis is the analysis of interventions on a single time series. Time series data have a natural temporal ordering; this makes time series analysis distinct from cross-sectional studies, in which there is no natural ordering of the observations. Time series analysis is also distinct from spatial data analysis, where the observations relate to geographical locations. A stochastic model for a time series will generally reflect the fact that observations close together in time will be more closely related than observations further apart. In addition, time series models will often make use of the natural one-way ordering of time, so that values for a given period will be expressed as deriving in some way from past values, rather than from future values. Time series analysis can be applied to real-valued, continuous data, discrete numeric data, or discrete symbolic data. Methods for time series analysis may be divided into two classes: frequency-domain methods and time-domain methods; the former include spectral analysis and wavelet analysis, while the latter include auto-correlation and cross-correlation analysis.
In the time domain, correlation and analysis can be made in a filter-like manner using scaled correlation, thereby mitigating the need to operate in the frequency domain. Additionally, time series analysis techniques may be divided into parametric and non-parametric methods. The parametric approaches assume that the underlying stationary stochastic process has a certain structure which can be described using a small number of parameters, for example an autoregressive or moving-average model. In these approaches, the task is to estimate the parameters of the model that describes the stochastic process. By contrast, non-parametric approaches explicitly estimate the covariance or the spectrum of the process without assuming that the process has any particular structure. Methods of time series analysis may also be divided into linear and non-linear, and univariate and multivariate. A time series is one type of panel data. Panel data is the general class, a multidimensional data set, whereas a time series data set is a one-dimensional panel. A data set may exhibit characteristics of both panel data and time series data.
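The following minimal Python/NumPy sketch contrasts the two styles on simulated data: the parametric step estimates the single coefficient of an assumed AR(1) model, while the non-parametric step computes the sample autocorrelation function without assuming any model structure. The simulated series and its parameter are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) process x_t = phi * x_{t-1} + noise (phi chosen for illustration).
phi_true, n = 0.7, 2000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.standard_normal()

# Parametric: assume an AR(1) structure and estimate its one parameter by least squares.
phi_hat = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])

# Non-parametric: sample autocorrelation at lags 0..5, with no model assumed.
def sample_acf(series: np.ndarray, max_lag: int) -> np.ndarray:
    xc = series - series.mean()
    denom = np.dot(xc, xc)
    return np.array([np.dot(xc[k:], xc[:len(xc) - k]) / denom for k in range(max_lag + 1)])

print(f"estimated AR(1) coefficient: {phi_hat:.3f}")
print("sample ACF (lags 0-5):", np.round(sample_acf(x, 5), 3))
```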
One way to tell is to ask what makes one data record unique from the other records. If the answer is the time data field, then this is a time series data set candidate. If determining a unique record requires a time data field and an additional identifier which is unrelated to time, it is a panel data candidate. If the differentiation lies on the non-time identifier, the data set is a cross-sectional data set candidate. There are several types of motivation and data analysis available for time series which are appropriate for different purposes. In the context of statistics, quantitative finance, seismology and geophysics the primary goal of time series analysis is forecasting. In the context of signal processing, control engineering and communication engineering it is used for signal detection and estimation, while in the context of data mining, pattern recognition and machine learning time series analysis can be used for clustering, query by content, and anomaly detection as well as forecasting. The clearest way to examine a regular time series manually is with a line chart such as the one shown for tuberculosis in the United States, made with a spreadsheet program.
The number of cases was standardized to a rate per 100,000 and the percent change per year in this rate was calculated. The nearly steadily dropping line shows that the TB incidence was decreasing in most years, but the percent change in this rate varied by as much as +/- 10%, with 'surges' in 1975 and around the early 1990s. The use of both vertical axes allows the comparison of two time series in one graphic. Other techniques include autocorrelation analysis to examine serial dependence, and spectral analysis to examine cyclic behaviour which need not be related to seasonality; for example, sunspot activity varies over an 11-year cycle.
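As a small illustration of spectral analysis, the following Python/NumPy sketch computes a periodogram of a synthetic annual series containing an 11-year cycle plus noise (an invented stand-in for sunspot counts, not real data) and reports the dominant period it finds.

```python
import numpy as np

# Synthetic annual series with an 11-year cycle plus noise; purely illustrative.
rng = np.random.default_rng(1)
years = np.arange(200)
series = 50 + 40 * np.sin(2 * np.pi * years / 11) + 10 * rng.standard_normal(years.size)

# Periodogram via the FFT: power at each frequency (cycles per year).
detrended = series - series.mean()
power = np.abs(np.fft.rfft(detrended)) ** 2
freqs = np.fft.rfftfreq(series.size, d=1.0)   # sampling interval = 1 year

peak = freqs[1:][np.argmax(power[1:])]        # skip the zero-frequency term
print(f"dominant period ~ {1 / peak:.1f} years")
```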
Eugene Francis "Gene" Fama is an American economist, best known for his empirical work on portfolio theory, asset pricing, and the efficient-market hypothesis. He is the Robert R. McCormick Distinguished Service Professor of Finance at the University of Chicago Booth School of Business. In 2013, he shared the Nobel Memorial Prize in Economic Sciences jointly with Robert Shiller and Lars Peter Hansen. The Research Papers in Economics project ranked him as the 11th-most influential economist of all time based on his academic contributions. Fama was born in Boston, the son of Angelina and Francis Fama. All of his grandparents were immigrants from Italy. Fama is a Malden Catholic High School Athletic Hall of Fame honoree. He earned his undergraduate degree in Romance Languages magna cum laude in 1960 from Tufts University, where he was selected as the school's outstanding student-athlete. His M.B.A. and Ph.D. in economics and finance came from the Booth School of Business at the University of Chicago. His doctoral supervisors were Nobel prize winner Merton Miller and Harry Roberts, but Benoit Mandelbrot was also an important influence.
He has spent all of his teaching career at the University of Chicago. His Ph.D. thesis, which concluded that short-term stock price movements are unpredictable and approximate a random walk, was published in the January 1965 issue of the Journal of Business, entitled "The Behavior of Stock Market Prices". That work was subsequently rewritten into a less technical article, "Random Walks In Stock Market Prices", published in the Financial Analysts Journal in 1965 and Institutional Investor in 1968. His later work with Kenneth French showed that predictability in expected stock returns can be explained by time-varying discount rates; for example, higher average returns during recessions can be explained by a systematic increase in risk aversion, which lowers prices and increases average returns. His article "The Adjustment of Stock Prices to New Information" in the International Economic Review, 1969, was the first event study that sought to analyze how stock prices respond to an event, using price data from the newly available CRSP database.
This was the first of hundreds of such published studies. In 2013, he won the Nobel Memorial Prize in Economic Sciences. Fama is most often thought of as the father of the efficient-market hypothesis, beginning with his Ph.D. thesis. In 1965 he published an analysis of the behaviour of stock market prices that showed that they exhibited so-called fat-tail distribution properties, implying extreme movements were more common than predicted on the assumption of normality. In an article in the May 1970 issue of the Journal of Finance, entitled "Efficient Capital Markets: A Review of Theory and Empirical Work," Fama proposed two concepts that have been used on efficient markets ever since. First, Fama proposed three forms of efficiency: weak, semi-strong, and strong, distinguished by the information set assumed to be reflected in prices. In weak-form efficiency the information set is just historical prices, which are already reflected in current prices, so future returns cannot be predicted from past price trends. Semi-strong form requires that all public information is reflected in prices, such as companies' announcements or annual earnings figures.
The strong form requires that all information sets, including private information, are incorporated in prices. Second, Fama demonstrated that the notion of market efficiency could not be rejected without an accompanying rejection of the model of market equilibrium; this concept, known as the "joint hypothesis problem," has vexed researchers ever since. Market efficiency denotes how information is factored into prices, and Fama emphasizes that the hypothesis of market efficiency must be tested in the context of expected returns. The joint hypothesis problem states that when a model yields a predicted return different from the actual return, one can never be certain whether there exists an imperfection in the model or whether the market is inefficient. Researchers can only modify their models by adding different factors to eliminate any anomalies, in hopes of fully explaining the return within the model. The anomaly, known as alpha in the modeling test, thus functions as a signal to the model maker of whether it can predict returns by the factors in the model.
However, as long as there exists an alpha, neither the conclusion of a flawed model nor market inefficiency can be drawn according to the joint hypothesis. Fama stresses that market efficiency per se is not testable and can only be tested jointly with some model of equilibrium, i.e. an asset-pricing model. In recent years, Fama has become controversial again for a series of papers, co-written with Kenneth French, that cast doubt on the validity of the Capital Asset Pricing Model, which posits that a stock's beta alone should explain its average return. These papers describe two factors above and beyond a stock's market beta which can explain differences in stock returns: market capitalization and "value". They offer evidence that a variety of patterns in average returns, labeled as "anomalies" in past work, can be explained with their Fama–French three-factor model. His books include The Theory of Finance (Dryden Press, 1972) and Foundations of Finance: Portfolio Decisions and Securities Prices (Basic Books, 1976).
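To make the alpha-as-anomaly idea described above concrete, here is a small, hypothetical Python/NumPy sketch that regresses a portfolio's excess returns on three Fama–French-style factors and reads off the intercept as alpha. All series are simulated placeholders; in practice the factor data would come from a source such as Kenneth French's data library.

```python
import numpy as np

# Hypothetical monthly data (decimal returns); simulated for illustration only.
rng = np.random.default_rng(42)
n = 120
mkt_rf = 0.01 + 0.04 * rng.standard_normal(n)    # market excess return
smb    = 0.002 + 0.02 * rng.standard_normal(n)   # size factor
hml    = 0.003 + 0.02 * rng.standard_normal(n)   # value factor
# Portfolio excess return generated with known loadings and zero true alpha.
port_rf = 0.0 + 1.1 * mkt_rf + 0.4 * smb + 0.3 * hml + 0.01 * rng.standard_normal(n)

# OLS regression: port_rf = alpha + b1*MKT + b2*SMB + b3*HML + error.
X = np.column_stack([np.ones(n), mkt_rf, smb, hml])
coef, *_ = np.linalg.lstsq(X, port_rf, rcond=None)
alpha, betas = coef[0], coef[1:]
print(f"alpha per month: {alpha:.4f}, factor betas: {np.round(betas, 2)}")
```

Under the joint hypothesis problem, a nonzero estimated alpha could signal either a missing factor in the model or genuine market inefficiency; the regression itself cannot distinguish the two.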
Harry Max Markowitz is an American economist and a recipient of the 1989 John von Neumann Theory Prize and the 1990 Nobel Memorial Prize in Economic Sciences. Markowitz is a professor of finance at the Rady School of Management at the University of California, San Diego. He is best known for his pioneering work in modern portfolio theory, studying the effects of asset risk, return, and diversification on probable investment portfolio returns. Harry Markowitz was born the son of Morris and Mildred Markowitz. During high school, Markowitz developed an interest in physics and philosophy, in particular the ideas of David Hume, an interest he continued to follow during his undergraduate years at the University of Chicago. After receiving his Ph.B. in Liberal Arts, Markowitz decided to continue his studies at the University of Chicago, choosing to specialize in economics. There he had the opportunity to study under important economists, including Milton Friedman, Tjalling Koopmans, Jacob Marschak, and Leonard Savage.
While still a student, he was invited to become a member of the Cowles Commission for Research in Economics, which was in Chicago at the time. He completed his A.M. in Economics at the university in 1950. Markowitz chose to apply mathematics to the analysis of the stock market as the topic for his dissertation. Jacob Marschak, his thesis advisor, encouraged him to pursue the topic, noting that it had been a favorite interest of Alfred Cowles, the founder of the Cowles Commission. While researching the then-current understanding of stock prices, which at the time consisted in the present value model of John Burr Williams, Markowitz realized that the theory lacked an analysis of the impact of risk. This insight led to the development of his seminal theory of portfolio allocation under uncertainty, published in 1952 in the Journal of Finance. In 1952, Harry Markowitz went to work for the RAND Corporation, where he met George Dantzig. With Dantzig's help, Markowitz continued to research optimization techniques, further developing the critical line algorithm for the identification of optimal mean-variance portfolios, relying on what was later named the Markowitz frontier.
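As a rough illustration of mean-variance portfolio selection (not the critical line algorithm itself), the following Python/NumPy sketch computes, for a few target returns, the fully invested portfolio of minimum variance using the standard two-constraint Lagrangian solution; together these points trace out the frontier. The expected returns and covariance matrix are invented, and short positions are allowed.

```python
import numpy as np

# Hypothetical expected returns and covariance matrix for three assets.
mu = np.array([0.08, 0.12, 0.10])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.06]])

def min_variance_weights(mu, cov, target_return):
    """Weights of the minimum-variance, fully invested portfolio achieving
    `target_return` (short sales allowed), via the two-constraint Lagrangian."""
    inv = np.linalg.inv(cov)
    ones = np.ones(len(mu))
    A = ones @ inv @ mu
    B = mu @ inv @ mu
    C = ones @ inv @ ones
    D = B * C - A * A
    lam = (C * target_return - A) / D
    gam = (B - A * target_return) / D
    return inv @ (lam * mu + gam * ones)

# Trace a few points on the frontier: expected return vs. risk (standard deviation).
for r in (0.09, 0.10, 0.11):
    w = min_variance_weights(mu, cov, r)
    print(f"target {r:.2%}: weights {np.round(w, 2)}, risk {np.sqrt(w @ cov @ w):.3f}")
```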
In 1954, he received a PhD in Economics from the University of Chicago with a thesis on portfolio theory. The topic was so novel that, while Markowitz was defending his dissertation, Milton Friedman argued that his contribution was not economics. During 1955–1956 Markowitz spent a year at the Cowles Foundation, which had moved to Yale University, at the invitation of James Tobin. He published the critical line algorithm in a 1956 paper and used this time at the foundation to write a book on portfolio allocation, which was published in 1959. Markowitz won the Nobel Memorial Prize in Economic Sciences in 1990 while a professor of finance at Baruch College of the City University of New York. In the preceding year, he had received the John von Neumann Theory Prize from the Operations Research Society of America for his contributions to the theory of three fields: portfolio theory, sparse matrix methods, and the SIMSCRIPT simulation language. Sparse matrix methods are now widely used to solve large systems of simultaneous equations whose coefficients are mostly zero. SIMSCRIPT has been used to program computer simulations of manufacturing and computer systems as well as war games.
SIMSCRIPT included the buddy memory allocation method, developed by Markowitz. The company that would become CACI International was founded by Herb Karr and Harry Markowitz on July 17, 1962 as California Analysis Center, Inc. They had helped develop SIMSCRIPT, the first simulation programming language, at RAND, and after it was released to the public domain, CACI was founded to provide support and training for SIMSCRIPT. In 1968, Markowitz joined Arbitrage Management Company, founded by Michael Goodkin. Working with Paul Samuelson and Robert Merton, he created a hedge fund that represents the first known attempt at computerized arbitrage trading. He took over as chief executive in 1970. After a successful run as a private hedge fund, AMC was sold to Stuart & Co. in 1971. A year later, Markowitz left the company. Years later, he was involved in CACI's addition of object-oriented features to SIMSCRIPT. Markowitz now divides his time between teaching and advisory work: he serves on the Advisory Board of SkyView Investment Advisors, a traditional and alternative investment advisory firm.
Markowitz serves on the Investment Committee of LWI Financial Inc., a San Jose, California-based investment advisor. Markowitz advises and serves on the board of ProbabilityManagement.org, a 501(c)(3) non-profit founded by Dr. Sam L. Savage to reshape the communication and calculation of uncertainty. Markowitz is co-founder and Chief Architect of GuidedChoice, a 401(k) managed accounts provider and investment advisor. Markowitz's more recent work has included designing the backbone software analytics for the GuidedChoice investment solution and heading the GuidedChoice Investment Committee. He is involved in designing the next step in the retirement process: assisting retirees with w