Microeconomics is a branch of economics that studies the behaviour of individuals and firms in making decisions regarding the allocation of scarce resources, and the interactions among these individuals and firms. One goal of microeconomics is to analyze the market mechanisms that establish relative prices among goods and services and allocate limited resources among alternative uses. Microeconomics shows the conditions under which free markets lead to desirable allocations; it also analyzes market failure, where markets fail to produce efficient results. Microeconomics stands in contrast to macroeconomics, which involves "the sum total of economic activity, dealing with the issues of growth and unemployment and with national policies relating to these issues". Microeconomics also deals with the effects of economic policies on microeconomic behavior and thus on the aforementioned aspects of the economy. In the wake of the Lucas critique, much of modern macroeconomic theory has been built upon microfoundations, that is, upon basic assumptions about micro-level behavior.
Microeconomic theory begins with the study of a single rational and utility-maximizing individual. To economists, rationality means an individual possesses stable preferences that are both complete and transitive; the technical assumption that preference relations are continuous is needed to ensure the existence of a utility function. Although microeconomic theory can continue without this assumption, it would make comparative statics impossible, since there is no guarantee that the resulting utility function would be differentiable. Microeconomic theory progresses by defining a competitive budget set, which is a subset of the consumption set. It is at this point that economists make the technical assumption that preferences are locally non-satiated. Without the assumption of local non-satiation, there is no guarantee that a rational individual would maximize utility. With the necessary tools and assumptions in place, the utility maximization problem is developed; the utility maximization problem is the heart of consumer theory.
The utility maximization problem attempts to explain the action axiom by imposing rationality axioms on consumer preferences and then mathematically modeling and analyzing the consequences. The utility maximization problem serves not only as the mathematical foundation of consumer theory but as a metaphysical explanation of it as well; that is, economists use the utility maximization problem to explain not only what or how individuals make choices but why individuals make choices as well. The utility maximization problem is a constrained optimization problem in which an individual seeks to maximize utility subject to a budget constraint. Economists use the extreme value theorem to guarantee that a solution to the utility maximization problem exists: since the budget set is both bounded and closed, and hence compact, a continuous utility function attains a maximum on it. Economists call the solution to the utility maximization problem a Walrasian demand function or correspondence. The utility maximization problem has so far been developed by taking consumer tastes as the primitive.
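For the Cobb-Douglas special case, the Walrasian demand has a closed form and can be sketched in a few lines of Python. This is a minimal illustration only; the utility exponent, prices, and wealth below are hypothetical values, not figures from the text:

```python
def walrasian_demand(alpha, p1, p2, m):
    """Solve max x^alpha * y^(1-alpha) subject to p1*x + p2*y = m.

    For Cobb-Douglas utility the maximizer spends a fixed share of
    wealth on each good: x* = alpha*m/p1 and y* = (1-alpha)*m/p2.
    """
    return alpha * m / p1, (1 - alpha) * m / p2

# hypothetical preferences, prices, and wealth
x_star, y_star = walrasian_demand(alpha=0.3, p1=2.0, p2=5.0, m=100.0)
```

With alpha = 0.3 the consumer spends 30% of wealth on the first good regardless of prices, which is one reason this functional form is a standard teaching example.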
However, an alternative way to develop microeconomic theory is by taking consumer choice as the primitive. This model of microeconomic theory is referred to as revealed preference theory. The theory of supply and demand assumes that markets are competitive. This implies that there are many buyers and sellers in the market and that none of them have the capacity to influence prices of goods and services. In many real-life transactions the assumption fails, because some individual buyers or sellers have the ability to influence prices. Quite often, a sophisticated analysis is required to understand the demand-supply equation of a good. However, the theory works well in situations meeting these assumptions. Mainstream economics does not assume a priori that markets are preferable to other forms of social organization. In fact, much analysis is devoted to cases where market failures lead to resource allocation that is suboptimal and creates deadweight loss. A classic example of suboptimal resource allocation is that of a public good.
In such cases, economists may attempt to find policies that avoid waste, either directly by government control, indirectly by regulation that induces market participants to act in a manner consistent with optimal welfare, or by creating "missing markets" to enable efficient trading where none had existed. This is studied in the field of public choice theory. "Optimal welfare" takes on a Paretian norm, a mathematical application of the Kaldor–Hicks method. This can diverge from the utilitarian goal of maximizing utility because it does not consider the distribution of goods between people. Market failure in positive economics is limited in implications without mixing the beliefs of the economist with their theory. The demand for various commodities by individuals is thought of as the outcome of a utility-maximizing process, with each individual trying to maximize their own utility under a budget constraint and a given consumption set. The study of microeconomics involves several "key" areas. Supply and demand is an economic model of price determination in a competitive market.
It concludes that in a competitive market with no externalities, per-unit taxes, or price controls, the unit price for a particular good is the price at which the quantity demanded by consumers equals the quantity supplied by producers. This price results in a stable economic equilibrium. Elasticity is the measurement of how responsive an economic variable is to a change in another variable.
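The equilibrium condition above can be sketched with linear demand and supply curves. All coefficients here are hypothetical, chosen only to make the arithmetic transparent:

```python
def equilibrium(a, b, c, d):
    """Equilibrium of linear demand Qd = a - b*P and supply Qs = c + d*P.

    Setting Qd = Qs and solving gives P* = (a - c) / (b + d);
    substituting P* back into either curve gives the quantity.
    """
    p = (a - c) / (b + d)
    q = a - b * p
    return p, q

# hypothetical demand and supply coefficients
p_star, q_star = equilibrium(a=100.0, b=2.0, c=10.0, d=4.0)
```

At the resulting price, quantity demanded and quantity supplied coincide, which is exactly the stability property the model asserts.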
Agricultural economics is an applied field of economics concerned with the application of economic theory in optimizing the production and distribution of food and fiber. Agricultural economics began as a branch of economics that dealt with land usage; it focused on maximizing crop yield while maintaining a good soil ecosystem. Throughout the 20th century the discipline expanded, and its current scope is much broader. Agricultural economics today includes a variety of applied areas, having considerable overlap with conventional economics. Agricultural economists have made substantial contributions to research in economics, development economics, and environmental economics, and agricultural economics influences food policy, agricultural policy, and environmental policy. Economics has been defined as the study of resource allocation under scarcity. Agricultural economics, or the application of economic methods to optimizing the decisions made by agricultural producers, grew to prominence around the turn of the 20th century.
The field of agricultural economics can be traced back to works on land economics. Henry Charles Taylor was the greatest contributor, with the establishment of the Department of Agricultural Economics at Wisconsin in 1909. Another contributor, 1979 Nobel Economics Prize winner Theodore Schultz, was among the first to examine development economics as a problem related directly to agriculture. Schultz was also instrumental in establishing econometrics as a tool for use in analyzing agricultural economics empirically. One scholar summarizes the development of agricultural economics as follows: "Agricultural economics arose in the late 19th century, combined the theory of the firm with marketing and organization theory, and developed throughout the 20th century as an empirical branch of general economics; the discipline was linked to empirical applications of mathematical statistics and made early and significant contributions to econometric methods. In the 1960s and afterwards, as agricultural sectors in the OECD countries contracted, agricultural economists were drawn to the development problems of poor countries, to the trade and macroeconomic policy implications of agriculture in rich countries, and to a variety of production and environmental and resource problems." Agricultural economists have made many well-known contributions to the economics field with such models as the cobweb model, hedonic regression pricing models, new technology and diffusion models, multifactor productivity and efficiency theory and measurement, and the random coefficients regression.
The farm sector is cited as a prime example of the perfect competition economic paradigm. In Asia, agricultural economics was first offered by the University of the Philippines Los Baños Department of Agricultural Economics in 1919. Today, the field of agricultural economics has transformed into a more integrative discipline which covers farm management and production economics, rural finance and institutions, agricultural marketing and prices, agricultural policy and development, nutrition economics, and environmental and natural resource economics. Since the 1970s, agricultural economics has focused on seven main topics, according to a scholar in the field, among them agricultural environment and resources. In the field of environmental economics, agricultural economists have contributed in three main areas: designing incentives to control environmental externalities, estimating the value of non-market benefits from natural resources and environmental amenities, and analyzing the complex interrelationship between economic activities and environmental consequences.
With regard to natural resources, agricultural economists have developed quantitative tools for improving land management, preventing erosion, managing pests, protecting biodiversity, and preventing livestock diseases. While at one time the field of agricultural economics was focused on farm-level issues, in recent years agricultural economists have studied diverse topics related to the economics of food consumption. In addition to economists' long-standing emphasis on the effects of prices and incomes, researchers in this field have studied how information and quality attributes influence consumer behavior. Agricultural economists have contributed to understanding how households make choices between purchasing food or preparing it at home, how food prices are determined, definitions of poverty thresholds, how consumers respond to price and income changes in a consistent way, and survey and experimental tools for understanding consumer preferences. Agricultural economics research has also addressed diminishing returns in agricultural production, as well as farmers' costs and supply responses.
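Diminishing returns can be illustrated with a simple concave production function. The square-root form and the scale factor below are illustrative assumptions, not parameters from any agricultural study:

```python
def output(labor, scale=10.0):
    """A concave production function f(L) = scale * sqrt(L): output
    rises with input, but each extra unit adds less than the last."""
    return scale * labor ** 0.5

# marginal product of each successive unit of labor
marginal = [output(L + 1) - output(L) for L in range(1, 5)]
```

Because the function is concave, the entries of `marginal` shrink as labor grows, which is the diminishing-returns property referred to in the text.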
Much research has applied economic theory to farm-level decisions. Studies of risk and decision-making under uncertainty have real-world applications to crop insurance policies and to understanding how farmers in developing countries make choices about technology adoption. These topics are important for understanding prospects for producing sufficient food for a growing world population, subject to new resource and environmental challenges such as water scarcity and global climate change. Development economics is broadly concerned with the improvement of living conditions in low-income countries and the improvement of economic performance in low-income settings.
Operations research, or operational research in British usage, is a discipline that deals with the application of advanced analytical methods to help make better decisions. The term operational analysis is used in the British military as an intrinsic part of capability development and assurance; in particular, operational analysis forms part of the Combined Operational Effectiveness and Investment Appraisals, which support British defense capability acquisition decision-making. Operations research is often considered a sub-field of applied mathematics, and the terms management science and decision science are sometimes used as synonyms. Employing techniques from other mathematical sciences, such as mathematical modeling, statistical analysis, and mathematical optimization, operations research arrives at optimal or near-optimal solutions to complex decision-making problems. Because of its emphasis on human-technology interaction and its focus on practical applications, operations research overlaps with other disciplines, notably industrial engineering and operations management, and draws on psychology and organization science.
Operations research is concerned with determining the extreme values of some real-world objective: the maximum or minimum. Originating in military efforts before World War II, its techniques have grown to concern problems in a variety of industries. Operational research encompasses a wide range of problem-solving techniques and methods applied in the pursuit of improved decision-making and efficiency, such as simulation, mathematical optimization, queueing theory and other stochastic-process models, Markov decision processes, econometric methods, data envelopment analysis, neural networks, expert systems, decision analysis, the analytic hierarchy process. Nearly all of these techniques involve the construction of mathematical models that attempt to describe the system; because of the computational and statistical nature of most of these fields, OR has strong ties to computer science and analytics. Operational researchers faced with a new problem must determine which of these techniques are most appropriate given the nature of the system, the goals for improvement, constraints on time and computing power.
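As one concrete instance of the stochastic-process models listed above, the M/M/1 queue (Poisson arrivals, exponentially distributed service times, a single server) has closed-form steady-state formulas. The arrival and service rates below are hypothetical:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state metrics of an M/M/1 queue; requires utilization < 1."""
    rho = arrival_rate / service_rate            # server utilization
    if rho >= 1:
        raise ValueError("unstable queue: need arrival_rate < service_rate")
    L = rho / (1 - rho)                          # mean number in system
    W = 1 / (service_rate - arrival_rate)        # mean time in system
    return rho, L, W

# hypothetical rates: 4 arrivals and 5 service completions per hour
rho, L, W = mm1_metrics(arrival_rate=4.0, service_rate=5.0)
```

Note that the results satisfy Little's law, L = arrival_rate * W, a basic consistency check used throughout queueing theory.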
The major sub-disciplines in modern operational research, as identified by the journal Operations Research, are: computing and information technologies; financial engineering; manufacturing, service sciences, and supply chain management; policy modeling and public sector work; revenue management; simulation; stochastic models; and transportation. In the decades after the two world wars, the tools of operations research were more widely applied to problems in business and society. Since that time, operational research has expanded into a field used in industries ranging from petrochemicals to airlines, finance and government. It has moved to a focus on the development of mathematical models that can be used to analyse and optimize complex systems, and has become an area of active academic and industrial research. In the 17th century, mathematicians like Christiaan Huygens and Blaise Pascal tried to solve problems involving complex decisions with probability. Others in the 18th and 19th centuries solved these types of problems with combinatorics.
Charles Babbage's research into the cost of transportation and sorting of mail led to England's universal "Penny Post" in 1840, and to studies into the dynamical behaviour of railway vehicles in defence of the GWR's broad gauge. Beginning in the 20th century, the study of inventory management could be considered the origin of modern operations research, with the economic order quantity developed by Ford W. Harris in 1913. Operational research may have originated in the efforts of military planners during World War I. Percy Bridgman brought operational research to bear on problems in physics in the 1920s and would later attempt to extend these methods to the social sciences. Modern operational research originated at the Bawdsey Research Station in the UK in 1937, as the result of an initiative of the station's superintendent, A. P. Rowe. Rowe conceived the idea as a means to analyse and improve the working of the UK's early warning radar system, Chain Home. He analysed the operation of the radar equipment and its communication networks, expanding later to include the operating personnel's behaviour.
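Harris's economic order quantity has a simple closed form, Q* = sqrt(2DK/h), where D is annual demand, K the fixed cost per order, and h the annual holding cost per unit. A short sketch, with purely illustrative figures:

```python
import math

def eoq(demand, order_cost, holding_cost):
    """Economic order quantity: the order size minimizing the sum of
    annual ordering cost (demand/Q * K) and holding cost (h * Q / 2)."""
    return math.sqrt(2 * demand * order_cost / holding_cost)

# hypothetical inputs: 1000 units/year, $50 per order, $4/unit/year to hold
q = eoq(demand=1000.0, order_cost=50.0, holding_cost=4.0)
```

At Q*, ordering and holding costs are exactly balanced, which is why the formula still appears in introductory inventory-management courses.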
This allowed remedial action to be taken. Scientists in the United Kingdom, including Patrick Blackett, Cecil Gordon, Solly Zuckerman, C. H. Waddington, Owen Wansbrough-Jones, Frank Yates, Jacob Bronowski and Freeman Dyson, and in the United States George Dantzig, looked for ways to make better decisions in such areas as logistics and training schedules. The modern field of operational research arose during World War II. In the World War II era, operational research was defined as "a scientific method of providing executive departments with a quantitative basis for decisions regarding the operations under their control". Other names for it included quantitative management. During the Second World War, close to 1,000 men and women in Britain were engaged in operational research. About 200 operational research scientists worked for the British Army. Patrick Blackett worked for several different organizations during the war. Early in the war, while working for the Royal Aircraft Establishment, he set up a team known as the "Circus", which helped to reduce the number of anti-aircraft artillery rounds needed to shoot down an enemy aircraft from an
Econometrics is the application of statistical methods to economic data in order to give empirical content to economic relationships. More precisely, it is "the quantitative analysis of actual economic phenomena based on the concurrent development of theory and observation, related by appropriate methods of inference". An introductory economics textbook describes econometrics as allowing economists "to sift through mountains of data to extract simple relationships". The first known use of the term "econometrics" was by Polish economist Paweł Ciompa in 1910. Jan Tinbergen is considered by many to be one of the founding fathers of econometrics, and Ragnar Frisch is credited with coining the term in the sense in which it is used today. A basic tool for econometrics is the multiple linear regression model. Econometric theory uses statistical theory and mathematical statistics to evaluate and develop econometric methods. Econometricians try to find estimators that have desirable statistical properties, including unbiasedness and consistency.
Applied econometrics uses theoretical econometrics and real-world data for assessing economic theories, developing econometric models, analysing economic history, and forecasting. A basic tool for econometrics is the multiple linear regression model. In modern econometrics other statistical tools are also used, but linear regression is still the most common starting point for an analysis. Estimating a linear regression on two variables can be visualised as fitting a line through data points representing paired values of the independent and dependent variables. For example, consider Okun's law, which relates GDP growth to the unemployment rate. This relationship is represented in a linear regression where the change in the unemployment rate is a function of an intercept β₀, a given value of GDP growth multiplied by a slope coefficient β₁, and an error term ε: ΔUnemployment = β₀ + β₁·Growth + ε. The unknown parameters β₀ and β₁ can be estimated. Here β₁ is estimated to be −1.77 and β₀ is estimated to be 0.83.
This means that if GDP growth increased by one percentage point, the unemployment rate would be predicted to drop by 1.77 points. The model could then be tested for statistical significance as to whether an increase in growth is associated with a decrease in unemployment, as hypothesized. If the estimate of β₁ were not significantly different from 0, the test would fail to find evidence that changes in the growth rate and unemployment rate were related. The variance in a prediction of the dependent variable as a function of the independent variable is given in polynomial least squares. Econometric theory uses statistical theory and mathematical statistics to evaluate and develop econometric methods. Econometricians try to find estimators that have desirable statistical properties, including unbiasedness and consistency. An estimator is unbiased if its expected value is the true value of the parameter, and consistent if it converges to the true value as the sample size grows. Ordinary least squares is often used for estimation since it provides the BLUE, or "best linear unbiased estimator", under the Gauss-Markov assumptions. When these assumptions are violated or other statistical properties are desired, other estimation techniques such as maximum likelihood estimation, the generalized method of moments, or generalized least squares are used.
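The estimation step can be sketched with the textbook closed-form OLS solution for two variables. The four (growth, change-in-unemployment) pairs below are made-up illustrative data; they do not reproduce the −1.77 and 0.83 estimates cited above:

```python
def ols_simple(x, y):
    """Closed-form simple OLS: slope = cov(x, y) / var(x),
    intercept = mean(y) - slope * mean(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b1 = sxy / sxx
    b0 = my - b1 * mx
    return b0, b1

# hypothetical observations of GDP growth and change in unemployment
growth = [1.0, 2.0, 3.0, 4.0]
d_unemp = [0.5, -0.5, -1.5, -2.5]
b0, b1 = ols_simple(growth, d_unemp)
```

A negative fitted slope, as in this toy dataset, is the qualitative pattern Okun's law predicts: faster growth goes with falling unemployment.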
Estimators that incorporate prior beliefs are advocated by those who favour Bayesian statistics over traditional, classical or "frequentist" approaches. Applied econometrics uses theoretical econometrics and real-world data for assessing economic theories, developing econometric models, analysing economic history, and forecasting. Econometrics may use standard statistical models to study economic questions, but most often these involve observational data rather than data from controlled experiments. In this, the design of observational studies in econometrics is similar to the design of studies in other observational disciplines, such as astronomy, epidemiology and political science. Analysis of data from an observational study is guided by the study protocol, although exploratory data analysis may be useful for generating new hypotheses. Economics often analyses systems of equations and inequalities, such as supply and demand hypothesized to be in equilibrium, and the field of econometrics has developed methods for the identification and estimation of simultaneous-equation models.
These methods are analogous to methods used in other areas of science, such as the field of system identification in systems analysis and control theory. Such methods may allow researchers to estimate models and investigate their empirical consequences, without directly manipulating the system. One of the fundamental statistical methods used by econometricians is regression analysis. Regression methods are important i
Heterodoxy is a term that may be used in contrast with orthodoxy in schools of economic thought or methodologies, referring to schools that lie outside neoclassical economics. Heterodoxy is an umbrella term; it might, for example, include anarchist, Marxian, evolutionary, Austrian, social, post-Keynesian, and ecological economics, among others. Mainstream economics may be called conventional economics by its critics. Alternatively, mainstream economics deals with the "rationality–individualism–equilibrium nexus", while heterodox economics is more "radical" in dealing with the "institutions–history–social structure nexus". Many economists dismiss heterodox economics as "fringe" and "irrelevant", with little or no influence on the vast majority of academic mainstream economists in the English-speaking world. A recent review documented several prominent groups of heterodox economists since at least the 1990s as working together, with a resulting increase in coherence across different constituents. Along these lines, the International Confederation of Associations for Pluralism in Economics (ICAPE) does not define "heterodox economics" and has avoided defining its scope.
ICAPE defines its mission as "promoting pluralism in economics." In defining a common ground for this "critical commentary," one writer described fellow heterodox economists as trying to do three things, among them identifying shared ideas that generate a pattern of heterodox critique across topics and chapters of introductory macro texts. One study suggests four key factors as important to the study of economics by self-identified heterodox economists, among them history, natural systems, and power. A number of heterodox schools of economic thought challenged the dominance of neoclassical economics after the neoclassical revolution of the 1870s. In addition to socialist critics of capitalism, heterodox schools in this period included advocates of various forms of mercantilism, such as the American School, dissenters from neoclassical methodology such as the historical school, and advocates of unorthodox monetary theories such as social credit. Other heterodox schools active before and during the Great Depression included Technocracy and Georgism.
Physical scientists and biologists were the first individuals to use energy flows to explain social and economic development. Joseph Henry, an American physicist and first secretary of the Smithsonian Institution, remarked that the "fundamental principle of political economy is that the physical labor of man can only be ameliorated by… the transformation of matter from a crude state to an artificial condition...by expending what is called power or energy." The rise, and absorption into the mainstream, of Keynesian economics, which appeared to provide a more coherent policy response to unemployment than unorthodox monetary or trade policies, contributed to the decline of interest in these schools. After 1945, the neoclassical synthesis of Keynesian and neoclassical economics resulted in a clearly defined mainstream position based on a division of the field into microeconomics and macroeconomics. Austrians and post-Keynesians who dissented from this synthesis emerged as clearly defined heterodox schools. In addition, the Marxist and institutionalist schools remained active.
Up to 1980, the most notable themes of heterodox economics in its various forms included the rejection of the atomistic individual conception in favor of an embedded individual conception. From 1980, mainstream economics has been influenced by a number of new research programs, including behavioral economics, complexity economics, evolutionary economics, experimental economics, and neuroeconomics. As a consequence, some heterodox economists, such as John B. Davis, proposed that the definition of heterodox economics has to be adapted to this new, more complex reality: "...heterodox economics post-1980 is a complex structure, being composed out of two broadly different kinds of heterodox work, each internally differentiated with a number of research programs having different historical origins and orientations: the traditional left heterodoxy familiar to most and the 'new heterodoxy' resulting from other science imports." There is no single "heterodox economic theory". What the various heterodox schools all share, however, is a rejection of the neoclassical orthodoxy as representing the appropriate tool for understanding the workings of economic and social life.
The reasons for this rejection may vary. Some of the elements found in heterodox critiques are listed below. One of the most broadly accepted principles of neoclassical economics is the assumption of the "rationality of economic agents". Indeed, for a number of economists, the notion of rational maximizing behavior is taken to be synonymous with economic behavior; when some economists' studies do not embrace the rationality assumption, they are seen as placing their analyses outside the boundaries of the neoclassical discipline. Neoclassical economics begins with the a priori assumptions that agents are rational and that they seek to maximize their individual utility (or prof