Distribution is one of the four elements of the marketing mix. Distribution is the process of making a product or service available for the consumer or business user who needs it; this can be done directly by the producer or service provider, or through indirect channels using distributors or intermediaries. The other three elements of the marketing mix are product, pricing and promotion. Decisions about distribution need to be taken in line with a company's overall strategic vision and mission. Developing a coherent distribution plan is a central component of strategic planning. At the strategic level, there are three broad approaches to distribution, namely mass, selective or exclusive distribution; the number and type of intermediaries selected depends on the strategic approach. The overall distribution channel should add value to the consumer. Distribution is fundamentally concerned with ensuring that products reach target customers in the most direct and cost-efficient manner. In the case of services, distribution is principally concerned with access.
Although distribution as a concept is simple, in practice distribution management may involve a diverse range of activities and disciplines, including detailed logistics, warehousing and inventory management, as well as channel management, including the selection of channel members and rewarding of distributors. Prior to designing a distribution system, the planner needs to determine what the distribution channel is to achieve in broad terms; the overall approach to distributing products or services depends on a number of factors, including the type of product and especially its perishability. The process of setting out a broad statement of the aims and objectives of a distribution channel is a strategic-level decision. Strategically, there are three approaches to distribution. Mass distribution: when products are destined for a mass market, the marketer will seek out intermediaries that appeal to a broad market base. For example, snack foods and drinks are sold via a wide variety of outlets including supermarkets, convenience stores, vending machines and others.
The choice of distribution outlet is skewed towards those that can deliver mass markets in a cost-efficient manner. Selective distribution: a manufacturer may choose to restrict the number of outlets handling a product. For example, a manufacturer of premium electrical goods may choose to deal with department stores and independent outlets that can provide the added-value service level required to support the product. Dr Scholl orthopedic sandals, for example, are sold only through pharmacies, because this type of intermediary supports the desired therapeutic positioning of the product. Some of the prestige brands of cosmetics and skincare, such as Estée Lauder and Clinique, insist that sales staff are trained to use the product range; the manufacturer will only allow trained clinicians to sell their products. Exclusive distribution: in an exclusive distribution approach, a manufacturer chooses to deal with one intermediary or one type of intermediary; the advantage of an exclusive approach is that the manufacturer retains greater control over the distribution process.
In exclusive arrangements, the distributor is expected to work with the manufacturer and add value to the product through service level, after-sales care or client support services. Another definition of an exclusive arrangement is an agreement between a supplier and a retailer granting the retailer exclusive rights within a specific geographic area to carry the supplier's product. In consumer markets, another key strategic-level decision is whether to use a push or a pull strategy. In a push strategy, the marketer uses intensive advertising and incentives aimed at distributors, retailers and wholesalers, with the expectation that they will stock the product or brand and that consumers will purchase it when they see it in stores. In contrast, in a pull strategy, the marketer promotes the product directly to consumers, hoping that they will pressure retailers to stock the product or brand, thereby pulling it through the distribution channel; the choice of a push or pull strategy has important implications for promotion.
In a push strategy, the promotional mix would consist of trade advertising and sales calls, with advertising media weighted towards trade magazines and trade shows; a pull strategy would make more extensive use of consumer advertising and sales promotions, with the media mix weighted towards mass-market media such as newspapers, magazines and radio. Distribution of products takes place by means of a marketing channel, also known as a distribution channel. A marketing channel is the people and activities necessary to transfer the ownership of goods from the point of production to the point of consumption; it is the way products get to the consumer. This is accomplished through merchant retailers or wholesalers or, in the international context, by importers. In certain specialist markets, agents or brokers may become involved in the marketing channel. Typical intermediaries involved in distribution include: Wholesaler: a merchant intermediary who sells chiefly to retailers, other merchants, or industrial and commercial users for resale or business use.
Wholesalers sell in large quantities. Retailer: a merchant intermediary who sells directly to the public. There are many different types of retail outlet, from hypermarkets and supermarkets to small independent stores.
Risk is the possibility of losing something of value. Values can be gained or lost when taking a risk resulting from a given action or inaction, foreseen or unforeseen. Risk can also be defined as the intentional interaction with uncertainty. Uncertainty is a potential and uncontrollable outcome. Risk perception is the subjective judgment people make about the severity and probability of a risk, and may vary from person to person. Any human endeavour carries some risk. The Oxford English Dictionary cites the earliest use of the word in English as of 1621, and the spelling risk from 1655; it defines risk as the possibility of injury or other adverse or unwelcome circumstance. Risk has also been defined in many other ways, for example: an influence affecting strategy caused by an incentive or condition that inhibits transformation to quality excellence; an uncertain event or condition that, if it occurs, has an effect on at least one objective; the probability of something happening multiplied by the resulting cost or benefit if it does; or the probability or threat of quantifiable damage, liability, loss, or any other negative occurrence, caused by external or internal vulnerabilities, that may be avoided through preemptive action.
Finance: the possibility that an actual return on an investment will be lower than the expected return. Insurance: a situation where the probability of a variable is known but the mode of occurrence or the actual value of the occurrence is not; a risk is not a peril or a hazard. Securities trading: the probability of a loss or drop in value. Trading risk is divided into two general categories: systematic risk, which affects all securities in the same class, is linked to the overall capital-market system and therefore cannot be eliminated by diversification (also called market risk); and non-systematic risk, which is any risk that is not market-related (also called non-market risk, extra-market risk or diversifiable risk). Workplace: the product of the consequence and probability of a hazardous event or phenomenon. For example, the risk of developing cancer is estimated as the incremental probability of developing cancer over a lifetime as a result of exposure to potential carcinogens. The International Organization for Standardization publication ISO 31000 / ISO Guide 73:2002 defines risk as the 'effect of uncertainty on objectives'.
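Several of the definitions above share a common quantitative core: risk as the product of the probability of an adverse event and the magnitude of its consequence. As a minimal worked illustration (the figures here are invented for the example, not taken from any standard):

$$
R \;=\; P(\text{event}) \times C(\text{consequence}), \qquad
P = 0.02 \text{ per year},\; C = \$50{,}000 \;\Rightarrow\; R = \$1{,}000 \text{ expected loss per year}.
$$

This expected-loss form underlies both the workplace definition given above and the frequency-times-magnitude view of risk discussed below.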
In this definition, uncertainties include events and uncertainties caused by ambiguity or a lack of information. It includes both negative and positive impacts on objectives. Many definitions of risk exist in common usage; however, this definition was developed by an international committee representing over 30 countries and is based on the input of several thousand subject-matter experts. Different approaches to risk management are taken in different fields, e.g. "Risk is the unwanted subset of a set of uncertain outcomes". Risk can be seen as relating to the probability of uncertain future events. For example, according to Factor Analysis of Information Risk, risk is the probable frequency and probable magnitude of future loss; in computer science this definition is used by The Open Group. OHSAS defines risk as the combination of the probability of a hazard resulting in an adverse event and the severity of the event. In information security, risk is defined as "the potential that a given threat will exploit vulnerabilities of an asset or group of assets and thereby cause harm to the organization".
Financial risk is defined as the unpredictable variability or volatility of returns, which includes both potential better-than-expected and worse-than-expected returns. References to negative risk below should be read as also applying to positive impacts or opportunity unless the context precludes this interpretation; the related terms "threat" and "hazard" are used to mean something that could cause harm. Risk is ubiquitous in all areas of life, and risk management is something that we all must do, whether we are managing a major organisation or crossing the road. When describing risk, however, it is convenient to consider that risk practitioners operate in some specific practice areas. Economic risks can be manifested as higher expenditures than expected; the causes can be many, for instance a hike in the price of raw materials, the lapsing of deadlines for construction of a new operating facility, disruptions in a production process, the emergence of a serious competitor in the market, the loss of key personnel, a change of political regime, or natural disasters.
Risks in personal health may be reduced by primary prevention actions that decrease early causes of illness or by secondary prevention actions after a person has measured clinical signs or symptoms recognised as risk factors. Tertiary prevention reduces the negative impact of an already established disease by restoring function and reducing disease-related complications.
Theory of the firm
The theory of the firm consists of a number of economic theories that explain and predict the nature of the firm, company, or corporation, including its existence, behaviour and relationship to the market. In simplified terms, the theory of the firm aims to answer these questions: Existence. Why do firms emerge? Why are not all transactions in the economy mediated over the market? Boundaries. Why is the boundary between firms and the market located there with relation to size and output variety? Which transactions are performed internally and which are negotiated on the market? Organization. Why are firms structured in such a specific way, for example as to hierarchy or decentralization? What is the interplay of formal and informal relationships? Heterogeneity of firm actions/performances. What drives different actions and performances of firms? Evidence. What tests are there for respective theories of the firm? Firms exist as an alternative system to the market-price mechanism when it is more efficient to produce in a non-market environment.
For example, in a labor market, it might be difficult or costly for firms or organizations to engage in production when they have to hire and fire their workers depending on demand/supply conditions. It might also be costly for employees to shift companies every day looking for better alternatives, and it may be costly for companies to find new suppliers daily. Thus, firms engage in long-term contracts with their employees or long-term contracts with suppliers to minimize cost or maximize the value of property rights. The First World War period saw a change of emphasis in economic theory away from industry-level analysis, which included analyzing markets, to analysis at the level of the firm, as it became clear that perfect competition was no longer an adequate model of how firms behaved. Economic theory until then had focused on trying to understand markets alone, and there had been little study of why firms or organisations exist. Markets are guided by prices and quality, as illustrated by vegetable markets where a buyer is free to switch sellers in an exchange.
The need for a revised theory of the firm was emphasized by the empirical studies of Adolf Berle and Gardiner Means, who made it clear that ownership of a typical American corporation is spread over a wide number of shareholders, leaving control in the hands of managers who own little equity themselves. R. L. Hall and Charles J. Hitch found that executives made decisions by rule of thumb rather than in the marginalist way. According to Ronald Coase, people begin to organise their production in firms when the transaction cost of coordinating production through market exchange, given imperfect information, is greater than within the firm. Coase set out his transaction cost theory of the firm in 1937, making it one of the first attempts to define the firm theoretically in relation to the market. One aspect of its 'neoclassicism' lies in presenting an explanation of the firm consistent with constant returns to scale, rather than relying on increasing returns to scale. Another is in defining a firm in a manner which is both realistic and compatible with the idea of substitution at the margin, so that the instruments of conventional economic analysis apply.
He notes that a firm's interactions with the market may not be under its control, but its internal allocation of resources is: "Within a firm, … market transactions are eliminated and in place of the complicated market structure with exchange transactions is substituted the entrepreneur … who directs production." He asks why alternative methods of production could not achieve all production, so that either firms use internal prices for all their production, or one big firm runs the entire economy. Coase begins from the standpoint that markets could in theory carry out all production, and that what needs to be explained is the existence of the firm, with its "distinguishing mark … the supersession of the price mechanism." Coase identifies some reasons why firms might arise and dismisses each as unimportant, for example that some people prefer to work under direction and are prepared to pay for the privilege. Instead, for Coase the main reason to establish a firm is to avoid some of the transaction costs of using the price mechanism.
These include discovering relevant prices, as well as the costs of negotiating and writing enforceable contracts for each transaction. Moreover, contracts in an uncertain world will be incomplete and will have to be re-negotiated; the costs of haggling about the division of surplus, especially if there is asymmetric information and asset specificity, may be considerable. If a firm operated internally under the market system, many contracts would be required. In contrast, a real firm has few contracts, such as one defining a manager's power of direction over employees, in exchange for which the employee is paid; these kinds of contracts are drawn up in situations of uncertainty, in particular for relationships which last long periods of time. Such a situation runs counter to neo-classical economic theory; the neo-classical market is instantaneous, forbidding the development of extended agent-principal relationships, of planning, and of trust. Coase concludes that "a firm is
A natural monopoly is a monopoly in an industry in which high infrastructural costs and other barriers to entry relative to the size of the market give the largest supplier in an industry, often the first supplier in a market, an overwhelming advantage over potential competitors. This occurs in industries where capital costs predominate, creating economies of scale that are large in relation to the size of the market. Natural monopolies were discussed as a potential source of market failure by John Stuart Mill, who advocated government regulation to make them serve the public good. Two different types of cost are important in microeconomics: marginal cost and fixed cost. The marginal cost is the cost to the company of serving one more customer. In an industry where a natural monopoly does not exist, the vast majority of industries, the marginal cost decreases with economies of scale and then increases as the company experiences growing pains; along with this, the average cost of its products first decreases and then increases. A natural monopoly has a different cost structure.
A natural monopoly has a high fixed cost for a product that does not depend on output, while its marginal cost of producing one more good is roughly constant and small. All industries have costs associated with entering them, and a large portion of these costs is required for investment. Larger industries, like utilities, require an enormous initial investment; this barrier to entry reduces the number of possible entrants into the industry regardless of the earnings of the corporations within it. Natural monopolies arise where the largest supplier in an industry, often the first supplier in a market, has an overwhelming cost advantage over other actual or potential competitors. For example, the fixed cost of constructing a competing transmission network is so high, and the marginal cost of transmission for the incumbent so low, that it bars potential competitors from the monopolist's market, acting as a nearly insurmountable barrier to entry into the market place. A firm with high fixed costs requires a large number of customers in order to have a meaningful return on investment.
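The cost structure just described can be written compactly. As an illustrative sketch (the symbols are chosen here for exposition and are not taken from the text): with a large fixed cost $F$ and a small, constant marginal cost $c$, total and average costs are

$$
TC(q) = F + c\,q, \qquad AC(q) = \frac{F}{q} + c ,
$$

so average cost falls continuously as output $q$ grows, approaching $c$ only at very large volumes. An incumbent already spreading $F$ over a large customer base can therefore always undercut an entrant that must recover the same fixed cost from fewer units.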
This is where economies of scale become important. Since each firm has large initial costs, as the firm gains market share and increases its output the fixed cost is divided among a larger number of customers. Therefore, in industries with large initial investment requirements, average total cost declines as output increases over a much larger range of output levels. Companies that take advantage of economies of scale run into problems of bureaucracy; these factors interact to produce an "ideal" size for a company, at which its average cost of production is minimized. If that ideal size is large enough to supply the whole market, that market is a natural monopoly. Once a natural monopoly has been established because of the large initial cost, the larger corporation, according to the rule of economies of scale, has a lower average cost and therefore a huge advantage. With this knowledge, no firms attempt to enter the industry and an oligopoly or monopoly develops. William Baumol provided the current formal definition of a natural monopoly as "[a]n industry in which multi-firm production is more costly than production by a monopoly", and he linked the definition to the mathematical concept of subadditivity.
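Baumol's definition can be stated formally. In the single-product case (a standard textbook formulation rather than a quotation from Baumol), a cost function $C(\cdot)$ is subadditive at output $q$ if

$$
C(q) \;<\; \sum_{i=1}^{n} C(q_i) \quad \text{for every partition } q = \sum_{i=1}^{n} q_i,\; n \ge 2 ,
$$

that is, one firm can produce the industry output more cheaply than any collection of smaller firms. With the cost structure sketched above, $C(q) = F + cq$, splitting output between two firms simply duplicates the fixed cost ($2F + cq > F + cq$), so such an industry is a natural monopoly at every output level.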
Baumol noted that for a firm producing a single product, scale economies were a sufficient but not a necessary condition to prove subadditivity. The original concept of natural monopoly is attributed to John Stuart Mill, who believed that prices would reflect the costs of production in the absence of an artificial or natural monopoly. In Principles of Political Economy, Mill criticised Adam Smith's neglect of an area that could explain wage disparity. Taking up the examples of professionals such as jewellers and lawyers, he said, The superiority of reward is not here the consequence of competition, but of its absence: not a compensation for disadvantages inherent in the employment, but an extra advantage. If unskilled labourers had it in their power to compete with skilled, by taking the trouble of learning the trade, the difference of wages might not exceed what would compensate them for that trouble, at the ordinary rate at which labour is remunerated, but the fact that a course of instruction is required, of a low degree of costliness, or that the labourer must be maintained for a considerable time from other sources, suffices everywhere to exclude the great body of the labouring people from the possibility of any such competition.
So Mill's initial use of the term concerned natural abilities, in contrast to the common contemporary usage, which refers to market failure in a particular type of industry, such as rail, post or electricity. Mill's development of the idea is. All the natural monopolies which produce o
Microeconomics is a branch of economics that studies the behaviour of individuals and firms in making decisions regarding the allocation of scarce resources and the interactions among these individuals and firms. One goal of microeconomics is to analyze the market mechanisms that establish relative prices among goods and services and allocate limited resources among alternative uses. Microeconomics shows the conditions under which free markets lead to desirable allocations; it also analyzes market failure, where markets fail to produce efficient results. Microeconomics stands in contrast to macroeconomics, which involves "the sum total of economic activity, dealing with the issues of growth and unemployment and with national policies relating to these issues". Microeconomics deals with the effects of economic policies on microeconomic behavior and thus on the aforementioned aspects of the economy. In the wake of the Lucas critique, much of modern macroeconomic theory has been built upon microfoundations, i.e. based upon basic assumptions about micro-level behavior.
Microeconomic theory begins with the study of a single rational and utility-maximizing individual. To economists, rationality means an individual possesses stable preferences that are both complete and transitive. The technical assumption that preference relations are continuous is needed to ensure the existence of a utility function. Although microeconomic theory can continue without this assumption, it would make comparative statics impossible, since there is no guarantee that the resulting utility function would be differentiable. Microeconomic theory progresses by defining a competitive budget set, which is a subset of the consumption set. It is at this point that economists make the technical assumption that preferences are locally non-satiated (LNS). Without the assumption of LNS there is no guarantee that a utility-maximizing individual would spend their entire budget, so the solution need not lie on the budget line. With the necessary tools and assumptions in place, the utility maximization problem is developed; the utility maximization problem is the heart of consumer theory.
The utility maximization problem attempts to explain the action axiom by imposing rationality axioms on consumer preferences and mathematically modeling and analyzing the consequences. The utility maximization problem serves not only as the mathematical foundation of consumer theory but as a metaphysical explanation of it as well; that is, the utility maximization problem is used by economists to not only explain what or how individuals make choices but why individuals make choices as well. The utility maximization problem is a constrained optimization problem in which an individual seeks to maximize utility subject to a budget constraint. Economists use the extreme value theorem to guarantee that a solution to the utility maximization problem exists; that is, since the budget constraint is both bounded and closed, a solution to the utility maximization problem exists. Economists call the solution to the utility maximization problem a Walrasian demand function or correspondence; the utility maximization problem has so far been developed by taking consumer tastes as the primitive.
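The problem just described has a compact standard statement (the notation follows common textbook usage rather than any particular source). Given a price vector $p \gg 0$ and wealth $w$, the consumer solves

$$
x(p, w) \;=\; \arg\max_{x \in X} \; u(x) \quad \text{subject to} \quad p \cdot x \le w ,
$$

where $X$ is the consumption set. Because the budget set $\{x \in X : p \cdot x \le w\}$ is closed and bounded and $u$ is continuous, the extreme value theorem guarantees that a maximiser exists; the resulting $x(p, w)$ is the Walrasian demand function, or correspondence when the maximiser is not unique.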
However, an alternative way to develop microeconomic theory is by taking consumer choice as the primitive; this model of microeconomic theory is referred to as revealed preference theory. The theory of supply and demand assumes that markets are competitive. This implies that there are many buyers and sellers in the market and none of them has the capacity to influence prices of goods and services. In many real-life transactions, the assumption fails because some individual buyers or sellers have the ability to influence prices. Quite a sophisticated analysis is required to understand the demand-supply equation of a good model. However, the theory works well in situations meeting these assumptions. Mainstream economics does not assume a priori that markets are preferable to other forms of social organization. In fact, much analysis is devoted to cases where market failures lead to resource allocation that is suboptimal and creates deadweight loss. A classic example of suboptimal resource allocation is that of a public good.
In such cases, economists may attempt to find policies that avoid waste, either directly by government control, indirectly by regulation that induces market participants to act in a manner consistent with optimal welfare, or by creating "missing markets" to enable efficient trading where none had existed. This is studied in the field of public choice theory. "Optimal welfare" takes on a Paretian norm, which is a mathematical application of the Kaldor–Hicks method. This can diverge from the utilitarian goal of maximizing utility because it does not consider the distribution of goods between people. Market failure in positive economics is limited in implications without mixing the belief of the economist and their theory. The demand for various commodities by individuals is thought of as the outcome of a utility-maximizing process, with each individual trying to maximize their own utility under a budget constraint and a given consumption set. The study of microeconomics involves several "key" areas. Supply and demand is an economic model of price determination in a competitive market.
It concludes that in a competitive market with no externalities, per-unit taxes, or price controls, the unit price for a particular good is the price at which the quantity demanded by consumers equals the quantity supplied by producers. This price results in a stable economic equilibrium. Elasticity is the measurement of how responsive an economic variable is to a change in another variable.
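A minimal worked example of such price determination, with linear schedules invented purely for illustration: suppose the quantities demanded and supplied are

$$
Q_d = 100 - 2P, \qquad Q_s = 20 + 3P .
$$

Setting $Q_d = Q_s$ gives $100 - 2P = 20 + 3P$, hence $P^{*} = 16$ and $Q^{*} = 68$. At any higher price a surplus pushes the price down, and at any lower price a shortage pushes it up, which is what makes the equilibrium stable.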
Computer-aided manufacturing (CAM) is the use of software to control machine tools and related machinery in the manufacturing of workpieces. This is not the only definition of CAM; its primary purpose is to create a faster production process and components and tooling with more precise dimensions and material consistency, which in some cases uses only the required amount of raw material while reducing energy consumption. CAM is a subsequent computer-aided process after computer-aided design (CAD) and sometimes computer-aided engineering (CAE), as the model generated in CAD and verified in CAE can be input into CAM software, which then controls the machine tool. CAM is also used in many schools alongside computer-aided design to create objects. Traditionally, CAM has been considered a numerical control (NC) programming tool, wherein two-dimensional or three-dimensional models of components are generated in CAD. As with other "computer-aided" technologies, CAM does not eliminate the need for skilled professionals such as manufacturing engineers, NC programmers, or machinists.
CAM, in fact, both leverages the value of the most skilled manufacturing professionals through advanced productivity tools and builds the skills of new professionals through visualization and optimization tools. Early commercial applications of CAM were in large companies in the automotive and aerospace industries; for example, Pierre Bézier's work developing the CAD/CAM application UNISURF in the 1960s for car body design and tooling at Renault. CAM software was seen to have several shortcomings that necessitated an overly high level of involvement by skilled CNC machinists. Fallows created the first CAD software, but this had severe shortcomings and was promptly taken back into the development stage. CAM software would output code for the least capable machine, as each machine-tool control added on to the standard G-code set for increased flexibility. In some cases, such as improperly set-up CAM software or specific tools, the CNC machine required manual editing before the program would run properly.
None of these issues were so insurmountable that a thoughtful engineer or skilled machine operator could not overcome them for prototyping or small production runs; in high-production or high-precision shops, however, a different set of problems was encountered, where an experienced CNC machinist must both hand-code programs and run CAM software. Integration of CAD with the other components of a CAD/CAM/CAE product lifecycle management (PLM) environment requires an effective CAD data exchange; usually it has been necessary to force the CAD operator to export the data in one of the common data formats, such as IGES, STL or Parasolid, that are supported by a wide variety of software. The output from the CAM software is a simple text file of G-code/M-codes, sometimes many thousands of commands long, that is then transferred to a machine tool using a direct numerical control (DNC) program or, in modern controllers, using a common USB storage device. CAM packages could not, and still cannot, reason as a machinist can; they could not optimize toolpaths to the extent required of mass production.
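To make the nature of that output concrete, here is a minimal sketch of a post-processing step that writes such a plain-text G-code/M-code program; the part geometry, feeds and speeds are hypothetical, and real CAM post-processors are far more elaborate. The resulting file is the kind of program that would then be sent to the controller over DNC or copied to a USB device.

```python
# Minimal illustrative post-processor: emit a plain-text G-code program for
# drilling a row of holes. All dimensions, feeds and speeds are invented for
# the example; they do not come from any particular CAM system.

def drill_row_gcode(hole_xs, y=10.0, depth=-5.0, safe_z=5.0, feed=120, rpm=1200):
    lines = [
        "G21",          # millimetre units
        "G90",          # absolute positioning
        f"M03 S{rpm}",  # spindle on, clockwise
    ]
    for x in hole_xs:
        lines.append(f"G00 X{x:.3f} Y{y:.3f} Z{safe_z:.3f}")  # rapid move above the hole
        lines.append(f"G01 Z{depth:.3f} F{feed}")             # feed down to drilling depth
        lines.append(f"G00 Z{safe_z:.3f}")                    # retract to safe height
    lines += ["M05", "M30"]  # spindle off, end of program
    return "\n".join(lines)

if __name__ == "__main__":
    # Write the program to a text file, as a CAM system would before transfer.
    with open("drill_row.nc", "w") as f:
        f.write(drill_row_gcode([10.0, 25.0, 40.0]))
```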
Users would select the type of machining process and the paths to be used. While an engineer may have a working knowledge of G-code programming, small optimization and wear issues compound over time. Mass-produced items that require machining are initially created through casting or some other non-machine method; this enables hand-written and optimized G-code that could not be produced in a CAM package. At least in the United States, there is a shortage of young, skilled machinists entering the workforce able to perform at the extremes of manufacturing; as CAM software and machines become more complicated, the skills required of a machinist or machine operator advance to approach those of a computer programmer and engineer, rather than the CNC machinist being eliminated from the workforce. Typical areas of concern include high-speed machining (including streamlining of tool paths), multi-function machining, 5-axis machining, feature recognition and machining, automation of machining processes, and ease of use. Over time, the historical shortcomings of CAM are being attenuated, both by providers of niche solutions and by providers of high-end solutions.
This is occurring in three arenas: ease of use, manufacturing complexity, and integration with PLM and the extended enterprise. Ease of use: for the user who is just getting started as a CAM user, out-of-the-box capabilities providing process wizards, libraries, machine tool kits, automated feature-based machining and job-function-specific tailorable user interfaces build user confidence and speed the learning curve. User confidence is further built on 3D visualization through a closer integration with the 3D CAD environment, including error-avoiding simulations and optimizations. Manufacturing complexity: the manufacturing environment is complex; the need for CAM and PLM tools by the manufacturing engineer, NC programmer or machinist is similar to the need for computer assistance by the pilot of modern aircraft systems, and the modern machinery cannot be properly used without this assistance. Today's CAM systems support the full range of machine tools including turning, 5-axis machining, laser/plasma cutting, and wire EDM.
Today’s CAM user can generate streamlined tool paths, optimized tool axis tilt for higher feed rates, better tool life and surface finish, an
Umbrella branding is a marketing practice involving the use of a single brand name for the sale of two or more related products. Umbrella branding is used by companies with a positive brand equity. All products use the same means of identification and lack additional brand names or symbols. This marketing practice differs from brand extension in that umbrella branding involves the marketing of similar products, rather than differentiated products, under one brand name; hence, umbrella branding may be considered a type of brand extension. The practice of umbrella branding does not prevent a firm from implementing different branding approaches for different product lines. Marketers may increase the chance of success for a new product launch by using a sub-brand name and a parent brand name simultaneously. In an article by Howard Pong Yuen Lam and co-authors, the authors report the successful use of two brand names (a dual branding strategy) by practitioners in China for the launch of the Minute Maid Orange Pulp juice drink.
"A suggestive sub-brand name helps consumers recall the key benefits and features of the new product. A suggestive parent brand name communicates the benefits of the product category. A dual branding strategy addresses the problem of using only one brand name for a new product launch. After the successful launch of the first new product by a parent brand, marketers are able to launch other new products under other sub-brand names in the future to meet different consumer needs. Marketers may use the same parent brand to introduce different products to build scale for the brand, are able to differentiate the different product offerings under different subbrand names. If a company acquires a brand from another company, a marketer may position the acquired brand as a sub-brand under the parent brand if the marketer has defined the business scope of the parent brand broadly enough and with a suggestive parent brand name." Umbrella Branding is used to provide uniformity to certain product lines by grouping them under a single brand name, making them more identifiable and hence enhancing their marketability.
All products under the same corporate umbrella are expected to have uniform quality and user experience. Factors that may determine the impact of umbrella branding include the degree of commonality among the products falling under the corporate umbrella and the brand equity of the corporation. Various theories attempt to explain the consumer's decisions and judgements during product purchasing that cause umbrella branding to be a successful marketing strategy. The categorisation theory is based upon the notion that consumers tend to categorise products by associating them with brands and their past experiences with those particular brands, in order to evade the initial confusion caused by the extensive choice of products they are presented with. New information on certain products is categorised into various sections, such as product class and brand, and stored. Afterwards, consumers evaluate product quality through past experiences with the brand's products as well as the brand's equity; this theory explains the popularity of umbrella branding.
Consumers tend to evaluate new products not only by the brand's positive equity but also by whether the brand's concept is consistent with its extended products. For instance, assuming that the consumer had satisfactory past experiences with the company's products, if Apple Inc. developed and sold a new version of the MacBook, consumers would deem it more reliable and of superior quality than if Apple produced a new beverage, because of Apple's past product line. The schema congruity theory suggests that the storage of new information and the retrieval of memory are majorly influenced by past expectations. Schemas are a person's cognitive representations of the environment that guide their perceptions and actions, and they develop as the person learns new information. Nonetheless, new information is first evaluated on the basis of existing schemas. Relating the theory to consumer evaluation of products, a consumer possesses pre-existing schemas from past experiences with certain brands, and therefore new products are evaluated based on the existing schema the consumer has for the particular brand.
This theory is quite similar to the categorisation theory. Confirmation bias is a form of statistical bias describing the tendency to seek out or interpret evidence in ways that support one's existing beliefs. After a consumer forms a preference for one brand over others, any additional feature that may be common between various brands will most likely only strengthen the consumer's pre-existing preference, causing them to disregard other brands. Hence, a positive brand equity may not be as influential if a consumer has a pre-existing brand preference. Umbrella branding has become a popular marketing practice utilised by companies due to its various potential benefits; such a marketing practice may create advertising efficiencies through the reduced costs of brand development. This strategy reduces a firm's marketing costs due to the consumer-brand association through which consumers recognise certain brands, making new p