1.
Scale analysis (mathematics)
–
Scale analysis is a powerful tool used in the mathematical sciences for the simplification of equations with many terms. First the approximate magnitude of individual terms in the equations is determined; then some negligibly small terms may be ignored. On the synoptic scale we can expect horizontal velocities of about U = 10¹ m·s⁻¹, a horizontal scale L = 10⁶ m and a vertical scale H = 10⁴ m. The typical time scale is T = L/U = 10⁵ s, pressure differences in the troposphere are ΔP = 10⁴ Pa, and the density of air is ρ = 10⁰ kg·m⁻³. Other physical properties are approximately R = 6.378 × 10⁶ m, Ω = 7.292 × 10⁻⁵ rad·s⁻¹, ν = 1.46 × 10⁻⁵ m²·s⁻¹ and g = 9.81 m·s⁻². We can see that all terms in the vertical momentum equation, except the first and second on the right-hand side, are negligibly small, thus we can simplify it to the hydrostatic equilibrium equation, (1/ρ) ∂p/∂z = −g. The object of scale analysis is to use the basic principles of convective heat transfer to produce order-of-magnitude estimates for the quantities of interest. When done properly, scale analysis anticipates, within a factor of order one, the results of expensive exact analyses. Scale analysis proceeds by the following rules. Rule 1: the first step in scale analysis is to define the domain of extent in which we apply scale analysis; any scale analysis of a region that is not uniquely defined is not valid. Rule 2: one equation constitutes an equivalence between the scales of two dominant terms appearing in the equation; for example, ρ c_P ∂T/∂t = k ∂²T/∂x², where the left-hand side could be of the same order of magnitude as the right-hand side. The symbol ~ indicates that two terms are of the same order of magnitude, and > means greater than in the order-of-magnitude sense. As an example, consider the steady laminar flow of a viscous fluid inside a circular tube, where the fluid enters with a uniform velocity over the flow cross-section.
As the fluid moves down the tube, a boundary layer of low-velocity fluid forms along the wall. The hydrodynamic entrance length is that part of the tube in which the momentum boundary layer grows and the velocity distribution changes with length. The fixed velocity distribution in the developed region is called the fully developed velocity profile. Solving the momentum equation subject to the condition u = 0 at y = ±D/2 yields the well-known Hagen–Poiseuille solution for fully developed flow between parallel plates.
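The synoptic-scale estimates above can be checked numerically. The following sketch (variable names are illustrative; the values are just the order-of-magnitude estimates quoted in the text) compares the magnitudes of three terms that appear in the horizontal momentum equation, showing why the acceleration term is small compared with the Coriolis and pressure-gradient terms:

```python
# Order-of-magnitude estimates for synoptic-scale motion (values from the text).
U = 1e1            # horizontal velocity scale, m/s
L = 1e6            # horizontal length scale, m
rho = 1e0          # air density, kg/m^3
dP = 1e4           # tropospheric pressure difference, Pa
Omega = 7.292e-5   # Earth's angular velocity, rad/s

T = L / U                   # advective time scale: 1e5 s
accel = U / T               # acceleration term scale, m/s^2
coriolis = 2 * Omega * U    # Coriolis term scale, m/s^2
pressure = dP / (rho * L)   # pressure-gradient term scale, m/s^2

print(f"T ~ {T:.0e} s")
print(f"acceleration ~ {accel:.0e}, Coriolis ~ {coriolis:.0e}, "
      f"pressure gradient ~ {pressure:.0e} m/s^2")
```

The printed magnitudes show the pressure-gradient and Coriolis terms dominating the acceleration term, which is the order-of-magnitude reasoning behind dropping small terms from the momentum equations.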
2.
Big O notation
–
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Paul Bachmann and Edmund Landau. In computer science, big O notation is used to classify algorithms according to how their running time or space requirements grow as the input size grows. Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation. The letter O is used because the growth rate of a function is also referred to as the order of the function. A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function. Associated with big O notation are several related notations, using the symbols o, Ω and ω, which describe other kinds of bounds on asymptotic growth rates. Big O notation is also used in many other fields to provide similar estimates. Let f and g be two functions defined on some subset of the real numbers. One writes f(x) = O(g(x)) as x → ∞ if and only if there exist a real number M and a real number x₀ such that |f(x)| ≤ M|g(x)| for all x ≥ x₀. In many contexts, the assumption that we are interested in the growth rate as the variable x goes to infinity is left unstated. If f is a product of several factors, any constants (factors that do not depend on x) can be omitted. For example, let f(x) = 6x⁴ − 2x³ + 5, and suppose we wish to simplify this function, using O notation, to describe its growth rate as x approaches infinity. This function is the sum of three terms: 6x⁴, −2x³, and 5. Of these three terms, the one with the highest growth rate is the one with the largest exponent as a function of x, namely 6x⁴. Now one may apply the rule: 6x⁴ is a product of 6 and x⁴, in which the first factor does not depend on x. Omitting this factor results in the simplified form x⁴; thus, we say that f(x) is a big O of x⁴. Mathematically, we can write f(x) = O(x⁴). One may confirm this calculation using the formal definition: let f(x) = 6x⁴ − 2x³ + 5 and g(x) = x⁴.
Applying the formal definition from above, the statement that f(x) = O(x⁴) is equivalent to its expansion, |f(x)| ≤ M|x⁴| for all x ≥ x₀, for some suitable choice of x₀ and M. To prove this, let x₀ = 1 and M = 13; then, for all x ≥ x₀, |6x⁴ − 2x³ + 5| ≤ 6x⁴ + 2x³ + 5 ≤ 6x⁴ + 2x⁴ + 5x⁴ = 13x⁴, so |6x⁴ − 2x³ + 5| ≤ 13|x⁴|. Big O notation has two main areas of application. In mathematics, it is used to describe how closely a finite series approximates a given function. In computer science, it is useful in the analysis of algorithms. In both applications, the function g appearing within the O(...) is typically chosen to be as simple as possible, omitting constant factors and lower order terms.
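The bound |f(x)| ≤ 13|x⁴| for x ≥ 1 can also be spot-checked numerically. This is only an illustration on a finite sample of points, not a proof:

```python
# Spot-check of the bound used above: for f(x) = 6x^4 - 2x^3 + 5 and
# g(x) = x^4, we expect |f(x)| <= 13 * |x^4| for all x >= x0 = 1.
def f(x):
    return 6 * x**4 - 2 * x**3 + 5

M, x0 = 13, 1
ok = all(abs(f(x)) <= M * x**4 for x in range(x0, 1000))
print(ok)  # True on this sample of points
```

A finite check like this can refute a claimed bound but never establish it; the short inequality chain in the text is what actually proves it for all x ≥ 1.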
3.
Curve fitting
–
Curve fitting is the process of constructing a curve, or mathematical function, that has the best fit to a series of data points, possibly subject to constraints. Curve fitting can involve either interpolation, where an exact fit to the data is required, or smoothing, in which a smooth function is constructed that approximately fits the data. A related topic is regression analysis, which focuses more on questions of statistical inference such as how much uncertainty is present in a curve that is fit to data observed with random errors. Fitted curves can be used as an aid for data visualization and to infer values of a function where no data are available. Most commonly, one fits a function of the form y = f(x). Starting with a first degree polynomial equation, y = ax + b: this is a line with slope a. A line will connect any two points, so a first degree polynomial equation is an exact fit through any two points with distinct x coordinates. If the order of the equation is increased to a second degree polynomial, the result will exactly fit a simple curve to three points. If the order of the equation is increased to a third degree polynomial, it will exactly fit four points. A more general statement would be to say it will exactly fit four constraints, where each constraint can be a point, angle, or curvature. Angle and curvature constraints are most often added to the ends of a curve; identical end conditions are frequently used to ensure a smooth transition between polynomial curves contained within a single spline. Higher-order constraints, such as the change in the rate of curvature, can also be added, and many other combinations of constraints are possible for these and for higher order polynomial equations. If there are more than n + 1 constraints, the polynomial curve can still be run through those constraints, but an exact fit to all of them is not certain. In general, some method is then needed to evaluate each approximation.
The least squares method is one way to compare the deviations. There are several reasons to settle for an approximate fit when it is possible to simply increase the degree of the polynomial equation and get an exact match. Even if an exact match exists, it does not necessarily follow that it can be readily discovered: depending on the algorithm used there may be a divergent case, and this situation might require an approximate solution. There is also the desirable effect of averaging out questionable data points in a sample, rather than distorting the curve to fit them exactly. Finally, there is Runge's phenomenon: high order polynomials can be highly oscillatory. If a curve runs through two points A and B, it would be expected that the curve would run somewhat near the midpoint of A and B as well, which high order polynomial interpolants may fail to do.
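The least squares fit of a first-degree polynomial y = ax + b has a simple closed form, sketched below in pure Python. The data points here are made up for illustration:

```python
# Ordinary least-squares fit of a line y = a*x + b (a sketch; the closed-form
# slope is covariance(x, y) / variance(x), intercept from the means).
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    a = sxy / sxx           # slope
    b = mean_y - a * mean_x # intercept
    return a, b

xs = [0, 1, 2, 3, 4]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]   # roughly y = 2x + 1 with noise
a, b = fit_line(xs, ys)
print(round(a, 2), round(b, 2))
```

The fitted line does not pass exactly through any data point; instead it minimizes the sum of squared vertical deviations, which is precisely the trade-off the paragraph above describes.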
4.
Taylor polynomial
–
In calculus, Taylor's theorem gives an approximation of a k-times differentiable function around a given point by a k-th order Taylor polynomial. For analytic functions the Taylor polynomials at a point are finite-order truncations of the function's Taylor series. The exact content of Taylor's theorem is not universally agreed upon; indeed, there are several versions of it applicable in different situations, and some of them contain explicit estimates on the approximation error of the function by its Taylor polynomial. Taylor's theorem is named after the mathematician Brook Taylor, who stated a version of it in 1712, yet an explicit expression of the error was not provided until much later by Joseph-Louis Lagrange. An earlier version of the result was already mentioned in 1671 by James Gregory. Taylor's theorem is taught in introductory-level calculus courses and is one of the central elementary tools in mathematical analysis. Within pure mathematics it is the starting point of more advanced asymptotic analysis. Taylor's theorem also generalizes to multivariate and vector-valued functions f : Rⁿ → Rᵐ for any dimensions n and m, and this generalization is the basis for the definition of so-called jets, which appear in differential geometry and partial differential equations. If a real-valued function f is differentiable at the point a, then it has a linear approximation at that point. This means that there exists a function h₁ such that f(x) = f(a) + f′(a)(x − a) + h₁(x)(x − a), with lim x→a h₁(x) = 0. Here P₁(x) = f(a) + f′(a)(x − a) is the linear approximation of f at the point a. The graph of y = P₁(x) is the tangent line to the graph of f at x = a. The error in the approximation is R₁(x) = f(x) − P₁(x) = h₁(x)(x − a); note that this goes to zero a little bit faster than x − a as x tends to a. If we wanted a better approximation to f, we might instead try a quadratic polynomial instead of a linear function.
Instead of just matching one derivative of f at a, we can match two derivatives, thus producing a polynomial that has the same slope and concavity as f at a. The quadratic polynomial in question is P₂(x) = f(a) + f′(a)(x − a) + (f″(a)/2)(x − a)². Taylor's theorem ensures that the quadratic approximation is, in a sufficiently small neighborhood of the point a, a better approximation than the linear approximation. Specifically, f(x) = P₂(x) + h₂(x)(x − a)², with lim x→a h₂(x) = 0. Here the error in the approximation is R₂(x) = f(x) − P₂(x) = h₂(x)(x − a)², which, given the limiting behavior of h₂, goes to zero faster than (x − a)² as x tends to a. Similarly, we might get better approximations to f if we use polynomials of higher degree. In general, the error in approximating a function by a polynomial of degree k will go to zero a little bit faster than (x − a)ᵏ as x tends to a. A common task is then to find the smallest degree k for which the polynomial Pₖ approximates f to within a given error on a given interval.
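The claim that P₂ beats P₁ near the point can be observed numerically. The sketch below uses f(x) = eˣ at a = 0 (chosen because all its derivatives at 0 equal 1), which is an illustrative example rather than anything specific to the text:

```python
import math

# Linear (P1) and quadratic (P2) Taylor approximations of f(x) = e^x at a = 0.
# For e^x, f(0) = f'(0) = f''(0) = 1.
a = 0.0
x = 0.1
fx = math.exp(x)
P1 = 1.0 + 1.0 * (x - a)                 # f(a) + f'(a)(x - a)
P2 = P1 + 0.5 * (x - a) ** 2             # ... + f''(a)/2 (x - a)^2

err1 = abs(fx - P1)
err2 = abs(fx - P2)
print(err1 > err2)  # True: the quadratic approximation is closer
```

Repeating this with x closer to 0 shows err1 shrinking like (x − a)² while err2 shrinks like (x − a)³, matching the error behavior described above.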
5.
Scientific modelling
–
Modelling is an essential and inseparable part of many scientific disciplines, each of which has its own ideas about specific types of modelling. There is also increasing attention to modelling in fields such as science education, philosophy of science and systems theory, and there is a growing collection of methods, techniques and meta-theory about all kinds of specialized scientific modelling. A scientific model seeks to represent empirical objects, phenomena and physical processes in a logical and objective way. All models are simulacra, that is, simplified reflections of reality that, despite being approximations, can be extremely useful; building and disputing models is fundamental to the scientific enterprise. Attempts to formalize the principles of the empirical sciences use an interpretation to model reality. The aim of these attempts is to construct a formal system that will not produce theoretical consequences that are contrary to what is found in reality. Predictions or other statements drawn from such a formal system mirror or map the real world only insofar as these scientific models are true. For the scientist, a model is also a way in which the human thought processes can be amplified. Models that are rendered in software are in silico; other types of scientific models are in vivo and in vitro. Models are typically used when it is impossible or impractical to create experimental conditions in which scientists can directly measure outcomes. Direct measurement of outcomes under controlled conditions will always be more reliable than modelled estimates of outcomes. Within modelling and simulation, a model is a task-driven, purposeful simplification and abstraction of a perception of reality, shaped by physical, legal and cognitive constraints. It is task-driven because a model is captured with a certain question or task in mind.
Simplification leaves out all of the known and observed entities and their relations that are not important for the task; abstraction aggregates information that is important but is not needed in the same detail as the object of interest. Both activities, simplification and abstraction, are done purposefully; however, they are done based on a perception of reality. This perception is already a model in itself, as it comes with physical constraints. There are also constraints on what we are able to legally observe with our current tools and methods, and cognitive constraints which limit what we are able to explain with our current theories. This model comprises the propertied concepts, their behavior and their relations in formal form and is referred to as a conceptual model. In order to execute the model, it needs to be implemented as a computer simulation, and this requires more choices, such as numerical approximations or the use of heuristics. Despite all these epistemological and computational constraints, simulation has been recognized as the third pillar of scientific methods, alongside theory building and experimentation.
6.
Science
–
Science is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe. The formal sciences are often excluded as they do not depend on empirical observations; disciplines which use science, like engineering and medicine, may also be considered applied sciences. During the Islamic Golden Age, foundations for the scientific method were laid by Ibn al-Haytham in his Book of Optics. In the 17th and 18th centuries, scientists increasingly sought to formulate knowledge in terms of physical laws, and over the course of the 19th century the word science became increasingly associated with the scientific method itself as a disciplined way to study the natural world. It was during this time that scientific disciplines such as biology and chemistry took their modern shapes. Science in a broad sense existed before the modern era and in many historical civilizations, but modern science is distinct in its approach and successful in its results. Science in its original sense was a word for a type of knowledge rather than a specialized word for the pursuit of such knowledge; in particular, it was the type of knowledge which people can communicate to each other. For example, knowledge about the working of natural things was gathered long before recorded history and led to the development of complex abstract thought, as shown by the construction of calendars and techniques for making poisonous plants edible. For this reason, it is claimed that the earliest Greek philosophers of nature were the first philosophers in the strict sense; they were mainly speculators or theorists, particularly interested in astronomy. In contrast, trying to use knowledge of nature to imitate nature was seen as a more appropriate interest for artisans of lower social class.
A clear-cut distinction between formal and empirical science was made by the pre-Socratic philosopher Parmenides; although his work Peri Physeos is a poem, it may be viewed as an epistemological essay on method in natural science. Parmenides' ἐὸν may refer to a formal system or calculus which can describe nature more precisely than natural languages, and physis may be identical to ἐὸν. He criticized the older type of study of physics as too purely speculative and lacking in self-criticism, and was particularly concerned that some of the early physicists treated nature as if it could be assumed that it had no intelligent order, explaining things merely in terms of motion and matter. The study of natural things had been the realm of mythology and tradition, however. Aristotle later created a less controversial systematic programme of Socratic philosophy which was teleological, and he rejected many of the conclusions of earlier scientists. For example, in his physics the sun goes around the earth, each thing has a formal cause and a final cause, and each has a role in the rational cosmic order. Motion and change are described as the actualization of potentials already in things. While the Socratics insisted that philosophy should be used to consider the practical question of the best way to live for a human being, they did not argue for any other types of applied science.
7.
Engineering
–
The term engineering is derived from the Latin ingenium, meaning "cleverness", and ingeniare, meaning "to contrive, devise". Engineering has existed since ancient times as humans devised fundamental inventions such as the wedge, lever and wheel, each of which is essentially consistent with the modern definition of engineering. The term engineering is derived from the word engineer, which itself dates back to 1390, when an engineer originally referred to a constructor of military engines. In this context, now obsolete, an engine referred to a military machine, i.e., a mechanical contraption used in war. Notable examples of the obsolete usage which have survived to the present day are military engineering corps. The word engine itself is of even older origin, ultimately deriving from the Latin ingenium, meaning innate quality, especially mental power, hence a clever invention. The earliest civil engineer known by name is Imhotep; as one of the officials of the Pharaoh Djosèr, he probably designed and supervised the construction of the Pyramid of Djoser at Saqqara in Egypt around 2630–2611 BC. Ancient Greece developed machines in both civilian and military domains; the Antikythera mechanism, the first known mechanical computer, and the mechanical inventions of Archimedes are examples of early mechanical engineering. In the Middle Ages the trebuchet was developed. The first steam engine was built in 1698 by Thomas Savery, and its development gave rise to the Industrial Revolution in the coming decades. With the rise of engineering as a profession in the 18th century, the term became more narrowly applied to fields in which mathematics and science were applied to these ends; similarly, in addition to military and civil engineering, the fields then known as the mechanic arts became incorporated into engineering. The inventions of Thomas Newcomen and the Scottish engineer James Watt gave rise to modern mechanical engineering. The development of specialized machines and machine tools during the Industrial Revolution led to the rapid growth of mechanical engineering both in its birthplace Britain and abroad.
John Smeaton was the first self-proclaimed civil engineer and is regarded as the father of civil engineering. He was an English civil engineer responsible for the design of bridges, canals and harbours, and he was also a capable mechanical engineer and an eminent physicist. Smeaton designed the third Eddystone Lighthouse, where he pioneered the use of hydraulic lime; his lighthouse remained in use until 1877 and was dismantled and partially rebuilt at Plymouth Hoe, where it is known as Smeaton's Tower. The United States census of 1850 listed the occupation of engineer for the first time, with a count of 2,000; there were fewer than 50 engineering graduates in the U.S. before 1865. In 1870 there were a dozen U.S. mechanical engineering graduates, and in 1890 there were 6,000 engineers in civil, mining, mechanical and electrical fields. There was no chair of applied mechanism and applied mechanics at Cambridge until 1875. The theoretical work of James Clerk Maxwell and Heinrich Hertz in the late 19th century gave rise to the field of electronics.
8.
Smooth function
–
In mathematical analysis, the smoothness of a function is a property measured by the number of derivatives it has which are continuous. A smooth function is a function that has derivatives of all orders everywhere in its domain. A differentiability class is a classification of functions according to the properties of their derivatives; higher order differentiability classes correspond to the existence of more derivatives. Consider an open set on the real line and a function f defined on that set with real values. Let k be a non-negative integer. The function f is said to be of class Cᵏ if the derivatives f′, f″, ..., f⁽ᵏ⁾ exist and are continuous. The function f is said to be of class C∞, or smooth, if it has derivatives of all orders. The function f is said to be of class Cω, or analytic, if f is smooth and equals its Taylor series expansion around any point in its domain; Cω is thus strictly contained in C∞, and bump functions are examples of functions in C∞ but not in Cω. To put it differently, the class C⁰ consists of all continuous functions, and the class C¹ consists of all differentiable functions whose derivative is continuous; thus, a C¹ function is exactly a function whose derivative exists and is of class C⁰. In particular, Cᵏ is contained in Cᵏ⁻¹ for every k ≥ 1, and C∞, the class of infinitely differentiable functions, is the intersection of the sets Cᵏ as k varies over the non-negative integers. The function f(x) = x if x ≥ 0 and 0 if x < 0 is continuous but not differentiable at x = 0, so it is of class C⁰ but not of class C¹. The function g(x) = x² sin(1/x) for x ≠ 0, with g(0) = 0, is differentiable everywhere, but because cos(1/x) oscillates as x → 0, g′ is not continuous at zero; therefore, this function is differentiable but not of class C¹. The functions f(x) = |x|ᵏ⁺¹, where k is even, are continuous and k times differentiable at all x, but at x = 0 they are not (k + 1) times differentiable, so they are of class Cᵏ but not of class Cᵏ⁺¹. The exponential function is analytic, so of class Cω, and the trigonometric functions are also analytic wherever they are defined. A bump function is an example of a smooth function with compact support. Let n and m be some positive integers. If f is a function from an open subset of Rⁿ with values in Rᵐ, then f has component functions f₁, ..., fₘ.
Each of these component functions may or may not have partial derivatives, and the classes C∞ and Cω are defined as before. These criteria of differentiability can be applied to the transition functions of a differential structure; the resulting space is called a Cᵏ manifold. If one wishes to start with a coordinate-independent definition of the class Cᵏ, one may begin by considering maps between Banach spaces. A map from one Banach space to another is differentiable at a point if there is an affine map which approximates it at that point.
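The oscillation that prevents g(x) = x² sin(1/x) from being C¹ can be seen directly by sampling g′ at points approaching 0. The sample points 1/(πk) are chosen (illustratively) so that the sin term vanishes and g′ reduces to −cos(πk) = ±1:

```python
import math

# g(x) = x^2 sin(1/x), g(0) = 0, has derivative (for x != 0):
#   g'(x) = 2x sin(1/x) - cos(1/x),
# which keeps swinging between values near +1 and -1 as x -> 0.
def g_prime(x):
    return 2 * x * math.sin(1 / x) - math.cos(1 / x)

# Sample g' ever closer to 0: the values alternate near +1 and -1
# instead of settling toward a limit, so g' is not continuous at 0.
samples = [g_prime(1 / (math.pi * k)) for k in range(1, 6)]
print(samples)
```

Since g′ has no limit at 0, g is differentiable but not of class C¹, exactly as the text states.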
9.
Polynomial
–
In mathematics, a polynomial is an expression consisting of variables and coefficients that involves only the operations of addition, subtraction, multiplication, and non-negative integer exponents. An example of a polynomial of a single indeterminate x is x² − 4x + 7; an example in three variables is x³ + 2xyz² − yz + 1. Polynomials appear in a wide variety of areas of mathematics and science. In advanced mathematics, polynomials are used to construct polynomial rings and algebraic varieties, central concepts in algebra. The word polynomial joins two diverse roots: the Greek poly, meaning "many", and the Latin nomen, or "name". It was derived from the term binomial by replacing the Latin root bi- with the Greek poly-, and was first used in the 17th century. The x occurring in a polynomial is commonly called either a variable or an indeterminate. When the polynomial is considered as an expression, x is a fixed symbol which does not have any value, and it is thus correct to call it an indeterminate. However, when one considers the function defined by the polynomial, x represents the argument of the function; many authors use these two words interchangeably. It is a convention to use uppercase letters for the indeterminates. A polynomial P in the indeterminate x defines a function by substitution, and one may evaluate it over any domain where addition and multiplication are defined. In particular, when the substituted value is the indeterminate x itself, the image of x by this function is the polynomial P itself. This equality allows writing "let P(x) be a polynomial" as a shorthand for "let P be a polynomial in the indeterminate x". A polynomial is an expression that can be built from constants and indeterminates; the word indeterminate means that x represents no particular value, although any value may be substituted for it. The mapping that associates the result of this substitution to the substituted value is a function. A polynomial can be expressed concisely using summation notation as ∑_{k=0}^{n} a_k xᵏ; that is, a polynomial is a sum of a finite number of terms.
Each term consists of the product of a number, called the coefficient of the term, and a finite number of indeterminates. Because x = x¹, the degree of an indeterminate without a written exponent is one. A term with no indeterminates and a polynomial with no indeterminates are called, respectively, a constant term and a constant polynomial; the degree of a constant term and of a nonzero constant polynomial is 0. The degree of the zero polynomial, 0, is generally treated as not defined.
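The summation form ∑ a_k xᵏ translates directly into code. The sketch below evaluates the document's own example x² − 4x + 7 both term by term and by Horner's method, a standard rewriting (a₀ + x(a₁ + x(a₂ + ...))) that avoids explicit powers:

```python
# Coefficients a_0, a_1, a_2 of P(x) = x^2 - 4x + 7, lowest degree first.
coeffs = [7, -4, 1]

def eval_direct(coeffs, x):
    """Evaluate P(x) = sum over k of a_k * x^k, term by term."""
    return sum(a * x**k for k, a in enumerate(coeffs))

def eval_horner(coeffs, x):
    """Evaluate the same polynomial by Horner's method."""
    result = 0
    for a in reversed(coeffs):
        result = result * x + a
    return result

print(eval_direct(coeffs, 3), eval_horner(coeffs, 3))  # 4 4
```

Both evaluations agree: P(3) = 9 − 12 + 7 = 4.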
10.
Taylor series
–
In mathematics, a Taylor series is a representation of a function as an infinite sum of terms that are calculated from the values of the function's derivatives at a single point. The concept of a Taylor series was formulated by the Scottish mathematician James Gregory. A function can be approximated by using a finite number of terms of its Taylor series, and Taylor's theorem gives quantitative estimates on the error introduced by the use of such an approximation. The polynomial formed by taking some initial terms of the Taylor series is called a Taylor polynomial, and the Taylor series of a function is the limit of that function's Taylor polynomials as the degree increases. A function may not be equal to its Taylor series, even if its Taylor series converges at every point; a function that is equal to its Taylor series in an interval is known as an analytic function in that interval. The Taylor series of a real or complex-valued function f that is infinitely differentiable at a real or complex number a is the power series f(a) + f′(a)/1! (x − a) + f″(a)/2! (x − a)² + f‴(a)/3! (x − a)³ + ⋯, which can be written in the more compact sigma notation as ∑_{n=0}^{∞} f⁽ⁿ⁾(a)/n! (x − a)ⁿ, where n! denotes the factorial of n and f⁽ⁿ⁾(a) denotes the nth derivative of f evaluated at the point a. The derivative of order zero of f is defined to be f itself, and (x − a)⁰ and 0! are both defined to be 1. When a = 0, the series is also called a Maclaurin series. The Maclaurin series for any polynomial is the polynomial itself. The Maclaurin series for 1/(1 − x) is the geometric series 1 + x + x² + x³ + ⋯, so the Taylor series for 1/x at a = 1 is 1 − (x − 1) + (x − 1)² − (x − 1)³ + ⋯. The Taylor series for the exponential function eˣ at a = 0 is x⁰/0! + x¹/1! + x²/2! + ⋯ = 1 + x + x²/2 + x³/6 + x⁴/24 + x⁵/120 + ⋯ = ∑_{n=0}^{∞} xⁿ/n!. The above expansion holds because the derivative of eˣ with respect to x is also eˣ and e⁰ equals 1; this leaves the terms (x − 0)ⁿ in the numerator and n! in the denominator for each term in the infinite sum.
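The partial sums of the Maclaurin series for eˣ converge quickly, which the following sketch demonstrates by comparing truncations against math.exp:

```python
import math

# Partial sum of the Maclaurin series e^x = sum over n of x^n / n!,
# truncated after `terms` terms.
def maclaurin_exp(x, terms):
    return sum(x**n / math.factorial(n) for n in range(terms))

# Truncation error at x = 1 for increasingly long partial sums.
x = 1.0
errors = [abs(math.exp(x) - maclaurin_exp(x, k)) for k in (2, 4, 8)]
print(errors)  # strictly decreasing
```

Each additional pair of terms shrinks the error sharply, illustrating how the Taylor polynomials of increasing degree approach the function itself.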
The Greek philosopher Zeno considered the problem of summing an infinite series to achieve a finite result, but rejected it as an impossibility. It was through Archimedes's method of exhaustion that an infinite number of progressive subdivisions could be performed to achieve a finite result, and Liu Hui independently employed a similar method a few centuries later. In the 14th century, the earliest examples of the use of Taylor series and closely related methods were given by Madhava of Sangamagrama, and the Kerala school of astronomy and mathematics further expanded his works with various series expansions. In the 17th century, James Gregory also worked in this area and published several Maclaurin series, but it was not until 1715 that a general method for constructing these series for all functions for which they exist was finally provided by Brook Taylor. The Maclaurin series was named after Colin Maclaurin, a professor in Edinburgh, who published the special case of the Taylor result in the 18th century. If f(z) is given by a convergent power series in an open disc centred at b in the complex plane, it is said to be analytic in this disc.
11.
Scientist
–
A scientist is a person engaging in a systematic activity to acquire knowledge that describes and predicts the natural world. In a more restricted sense, a scientist may refer to an individual who uses the scientific method; the person may be an expert in one or more areas of science. The term scientist was coined by the theologian, philosopher and historian of science William Whewell; this article focuses on the more restricted use of the word. Scientists perform research toward a comprehensive understanding of nature, including physical, mathematical and social realms. Philosophers, by contrast, aim to provide an understanding of fundamental aspects of reality and experience, often pursuing inquiries with conceptual, rather than empirical, methods. When science is done with a goal toward practical utility, it is called applied science; an applied scientist may not be designing something in particular, but rather is conducting research with the aim of developing new technologies and practical methods. When science seeks to answer questions about fundamental aspects of reality it is sometimes called natural philosophy. Science and technology have continually modified human existence through the engineering process, and as a profession the scientist of today is widely recognized. Jurisprudence and mathematics are often grouped with the sciences, and some of the greatest physicists have also been creative mathematicians and lawyers. There is a continuum from the most theoretical to the most empirical scientists with no distinct boundaries; in terms of personality, interests, training and professional activity, there is little difference between applied mathematicians and theoretical physicists. Scientists can be motivated in several ways: many have a desire to understand why the world is as we see it and how it came to be.
They exhibit a strong curiosity about reality. Other motivations are recognition by their peers and prestige, or the desire to apply scientific knowledge for the benefit of people's health, the nations, the world, nature or industries. Scientists tend to be less motivated by direct financial reward for their work than people in other careers; as a result, scientific researchers often accept lower average salaries when compared with other professions which require a similar amount of training. The number of scientists varies greatly from country to country; for instance, there are only 4 full-time scientists per 10,000 workers in India, while this number is 79 for the United Kingdom and the United States. According to the US National Science Foundation, 4.7 million people with science degrees worked in the United States in 2015, across all disciplines; the figure included twice as many men as women. Of that total, 17% worked in academia, that is, at universities and undergraduate institutions; 5% of scientists worked for the federal government and about 3.5% were self-employed. Of the latter two groups, two-thirds were men. 59% of US scientists were employed in industry or business, and another 6% worked in non-profit positions.
12.
Guessing
–
A guess is a swift conclusion drawn from data directly at hand, and held as probable or tentative, while the person making the guess admittedly lacks material for a greater degree of certainty. In many of its uses, the meaning of guessing is assumed as implicitly understood. Guessing may combine elements of deduction, induction, abduction, and the purely random selection of one choice from a set of given options. Guessing may also involve the intuition of the guesser, who may have a "gut feeling" about which answer is correct without necessarily being able to articulate a reason for having this feeling. Tschaepe quotes the description of guessing given by William Whewell, who says that this process "goes on so rapidly that we cannot trace it in its successive steps". A guess that is merely a hunch or is groundless is arbitrary, and a guess made with no factual basis for its correctness may be called a wild guess. Jonathan Baron has said that "the value of a guess is 1/N + 1/N − 1/N = 1/N". Philosopher David Stove described this process as follows: a paradigm case of guessing is when captains toss a coin to start a cricket match, and this cannot be a case of knowledge, scientific or otherwise, if it is a case of guessing. If the captain knows that the coin will fall heads, it is just logically impossible for him also to guess that it will. More than that, however, guessing, at least in such a paradigm case, does not even belong on what may be called the epistemic scale. That is, if the captain, when he calls heads, is guessing, he is not, in virtue of that, believing, or inclining to think, or conjecturing, or anything of that sort; and in fact, of course, he normally is not doing any of these things when he guesses, and this is guessing, whatever else is. In such an instance, there is no reason at all for favoring heads or tails. Tschaepe also addresses the guess made in a coin flip, contending that it merely represents an extremely limited case of guessing a random number.
Guessing allows for combining abductive reasoning with deductive and inductive reasoning. In Jane Austen's Emma, however, the author has the character Emma respond to a character calling a match that she made "a lucky guess" by saying that "a lucky guess is never merely luck. There is always some talent in it." By contrast, a guess made using prior knowledge to eliminate clearly wrong possibilities may be called an informed guess or an educated guess. Uninformed guesses can be distinguished from the kind of informed guesses that lead to the development of a scientific hypothesis; Tschaepe notes that this process of guessing is distinct from that of a coin toss or picking a number. He has also noted that a guess may be required when a decision must be made. A guess, however, may also be purely a matter of selecting one possible answer from the set of possible answers. Tschaepe notes that guessing has been indicated as an important part of scientific processes, especially with regard to hypothesis-generation. Regarding scientific hypothesis-generation, Tschaepe has stated that guessing is the initial step; following the work of Charles S. Peirce, guessing is a combination of musing and logical analysis.
13.
Function (mathematics)
–
In mathematics, a function is a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output. An example is the function that relates each real number x to its square x². The output of a function f corresponding to an input x is denoted by f(x). In this example, if the input is −3, then the output is 9; likewise, if the input is 3, then the output is also 9, and we may write f(3) = 9. The input variable is sometimes referred to as the argument of the function. Functions of various kinds are the central objects of investigation in most fields of modern mathematics. There are many ways to describe or represent a function: some functions may be defined by a formula or algorithm that tells how to compute the output for a given input. Others are given by a picture, called the graph of the function. In science, functions are sometimes defined by a table that gives the outputs for selected inputs. A function could also be described implicitly, for example as the inverse to another function or as a solution of a differential equation. Sometimes the codomain is called the function's range, but more commonly the word range is used to mean, instead, specifically the set of outputs. For example, we could define a function using the rule f(x) = x² by saying that the domain and codomain are the real numbers. The image of this function is the set of non-negative real numbers. In analogy with arithmetic, it is possible to define addition, subtraction, multiplication, and division of functions. Another important operation defined on functions is function composition, where the output from one function becomes the input to another function. Linking each shape to its color is a function from X to Y: each shape is linked to a color, there is no shape that lacks a color, and no shape has more than one color. This function will be referred to as the color-of-the-shape function. The input to a function is called the argument and the output is called the value.
The set of all permitted inputs to a function is called the domain of the function. Thus, the domain of the color-of-the-shape function is the set of the four shapes. The concept of a function does not require that every possible output is the value of some argument. A second example of a function is the following: the domain is chosen to be the set of natural numbers, and the codomain is the set of integers. The function associates to any number n the number 4 − n. For example, to 1 it associates 3, and to 10 it associates −6. A third example of a function has the set of polygons as domain and the set of natural numbers as codomain.
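The two examples above can be sketched in a few lines of Python; the function name f is the article's notation, while the shape and color names in the table are illustrative stand-ins, not details from the text:

```python
# A function given by a rule: associate to any number n the number 4 - n.
def f(n):
    return 4 - n

# To 1 it associates 3, and to 10 it associates -6.
assert f(1) == 3
assert f(10) == -6

# A function given as an explicit table, like the color-of-the-shape
# function: each input is related to exactly one output.
color_of = {"circle": "red", "square": "blue", "triangle": "red"}
assert color_of["square"] == "blue"
```

Note that two inputs may share an output (both the circle and the triangle are red); the defining requirement is only that no input has more than one output.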
14.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to the exact scope and definition of mathematics. Mathematicians seek out patterns and use them to formulate new conjectures; they resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, and measurement; practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said: "The universe cannot be read until we have learned the language and become familiar with the characters in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences.
Applied mathematics has led to entirely new mathematical disciplines, such as statistics. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind. There is no clear line separating pure and applied mathematics. The history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns; in Babylonian mathematics, elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a study of mathematics in its own right with Greek mathematics. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today; the overwhelming majority of works in this ocean contain new mathematical theorems and their proofs. The word máthēma is derived from μανθάνω, while the modern Greek equivalent is μαθαίνω, both of which mean "to learn". In Greece, the word for mathematics came to have the narrower and more technical meaning "mathematical study", even in Classical times.
15.
Formula
–
In science, a formula is a concise way of expressing information symbolically, as in a mathematical or chemical formula. The informal use of the term formula in science refers to the general construct of a relationship between given quantities. The plural of formula can be spelled either as formulas or formulae. In mathematics, a formula is an entity constructed using the symbols and formation rules of a given logical language. Note that the volume V and the radius r are expressed as single symbols instead of words or phrases. This convention, while less important in a relatively simple formula, means that mathematicians can more quickly manipulate formulas which are larger and more complex. Mathematical formulas are often algebraic, closed form, and/or analytical. In chemistry, for example, H2O is the chemical formula for water, specifying that each molecule consists of two hydrogen atoms and one oxygen atom. Similarly, O−3 denotes an ozone molecule consisting of three oxygen atoms and having a net negative charge. In a general context, formulas are applied to provide a solution for real world problems. Some may be general: F = ma, which is one expression of Newton's second law, is applicable to a wide range of physical situations. Other formulas may be created to solve a particular problem. In all cases, however, formulas form the basis for calculations. Expressions are distinct from formulas in that they cannot contain an equals sign; whereas formulas are comparable to sentences, expressions are more like phrases. A chemical formula identifies each constituent element by its chemical symbol and indicates the proportionate number of atoms of each element. In empirical formulas, these proportions begin with a key element and then assign numbers of atoms of the other elements in the compound as ratios to the key element. For molecular compounds, these ratio numbers can all be expressed as whole numbers. For example, the formula of ethanol may be written C2H6O because the molecules of ethanol all contain two carbon atoms, six hydrogen atoms, and one oxygen atom.
Some types of compounds, however, cannot be written with entirely whole-number empirical formulas. An example is boron carbide, whose formula CBn is a variable non-whole-number ratio, with n ranging from over 4 to more than 6.5. When the chemical compound of the formula consists of simple molecules, chemical formulas often employ ways to suggest the structure of the molecule.
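The earlier remark about the volume V and the radius r being written as single symbols presumably refers to a formula such as the volume of a sphere, V = (4/3)πr³; taking that as the example (the formula itself is my assumption, since the text only names V and r), a minimal sketch in Python:

```python
import math

def sphere_volume(r):
    # V = (4/3) * pi * r**3, with each quantity written as a single symbol
    return (4.0 / 3.0) * math.pi * r ** 3

print(sphere_volume(1.0))  # about 4.18879
```

Written out in words ("the volume equals four thirds of pi times the cube of the radius"), the same relationship would be far more cumbersome to manipulate, which is the point of the single-symbol convention.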
16.
Line (mathematics)
–
The notion of line or straight line was introduced by ancient mathematicians to represent straight objects with negligible width and depth. Lines are an idealization of such objects; the straight line is that which is equally extended between its points. In modern mathematics, given the multitude of geometries, the concept of a line is tied to the way the geometry is described. When a geometry is described by a set of axioms, the notion of a line is left undefined; the properties of lines are then determined by the axioms which refer to them. One advantage to this approach is the flexibility it gives to users of the geometry: thus in differential geometry a line may be interpreted as a geodesic, while in some projective geometries a line is a 2-dimensional vector space. This flexibility also extends beyond mathematics and, for example, permits physicists to think of the path of a light ray as being a line. Since every definition must ultimately rest on previously understood terms, to avoid a vicious circle certain concepts must be taken as primitive concepts, terms which are given no definition. In geometry, it is frequently the case that the concept of line is taken as a primitive; in those situations where a line is a defined concept, as in coordinate geometry, some other fundamental ideas are taken as primitives instead. When the line concept is a primitive, the behaviour and properties of lines are dictated by the axioms which they must satisfy. In a non-axiomatic or simplified axiomatic treatment of geometry, the concept of a primitive notion may be too abstract to be dealt with. In this circumstance, it is possible that a description or mental image of a notion is provided to give a foundation to build the notion on, which would formally be based on the axioms. Descriptions of this type may be referred to, by some authors, as definitions in this informal style of presentation; these are not true definitions and could not be used in formal proofs of statements.
The definition of line in Euclid's Elements falls into this category. When geometry was first formalised by Euclid in the Elements, he defined a general line to be "breadthless length", with a straight line being a line "which lies evenly with the points on itself". These definitions serve little purpose since they use terms which are not, themselves, defined. In fact, Euclid did not use these definitions in this work and probably included them just to make it clear to the reader what was being discussed. In an axiomatic formulation of Euclidean geometry, such as that of Hilbert, a line is stated to have certain properties which relate it to other lines and points: for example, for any two distinct points, there is a unique line containing them, and any two distinct lines intersect in at most one point. In two dimensions, i.e. the Euclidean plane, two lines which do not intersect are called parallel; in higher dimensions, two lines that do not intersect are parallel if they are contained in a plane, or skew if they are not. Any collection of finitely many lines partitions the plane into convex polygons. Lines in a Cartesian plane or, more generally, in affine coordinates, can be described algebraically by linear equations. In two dimensions, the equation for non-vertical lines is often given in the slope-intercept form y = mx + b, where m is the slope or gradient of the line, b is the y-intercept of the line, and x is the independent variable of the function y = f(x).
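The slope-intercept form also makes the axiom that two distinct lines intersect in at most one point easy to check computationally. A sketch (the function name is mine, not from the text):

```python
def intersect(m1, b1, m2, b2):
    """Intersection of the lines y = m1*x + b1 and y = m2*x + b2.

    Returns None when the slopes are equal (parallel lines, or the
    same line, so no unique intersection); otherwise returns the
    single point (x, y) where the lines meet.
    """
    if m1 == m2:
        return None
    x = (b2 - b1) / (m1 - m2)  # solve m1*x + b1 = m2*x + b2
    return (x, m1 * x + b1)

print(intersect(1, 0, -1, 2))  # (1.0, 1.0)
print(intersect(2, 3, 2, 5))   # None
```

Because the equation m1·x + b1 = m2·x + b2 is linear in x, it has exactly one solution whenever m1 ≠ m2, which is the algebraic face of the axiom.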
17.
Slope
–
In mathematics, the slope or gradient of a line is a number that describes both the direction and the steepness of the line. The direction of a line is increasing, decreasing, horizontal or vertical. A line is increasing if it goes up from left to right; the slope is positive, i.e. m > 0. A line is decreasing if it goes down from left to right; the slope is negative, i.e. m < 0. If a line is horizontal the slope is zero; if a line is vertical the slope is undefined. The steepness, incline, or grade of a line is measured by the absolute value of the slope: a slope with a greater absolute value indicates a steeper line. Slope is calculated by finding the ratio of the vertical change to the horizontal change between two distinct points on a line. Sometimes the ratio is expressed as a quotient, rise over run, giving the same number for every two distinct points on the same line. A line that is decreasing has a negative rise. The line may be practical, as set by a road surveyor, or in a diagram that models a road or a roof, either as a description or as a plan. The rise of a road between two points is the difference between the altitude of the road at those two points, say y1 and y2; in other words, the rise is (y2 − y1) = Δy. Here the slope of the road between the two points is described as the ratio of the altitude change to the horizontal distance between any two points on the line. In mathematical language, the slope m of the line is m = (y2 − y1) / (x2 − x1). The concept of slope applies directly to grades or gradients in geography. As a generalization of this practical description, the mathematics of differential calculus defines the slope of a curve at a point as the slope of the tangent line at that point. When the curve is given by a series of points in a diagram or in a list of the coordinates of points, the slope may be calculated not at a point but between any two given points. Thereby, the simple idea of slope becomes one of the main bases of the modern world, in terms of both technology and the built environment.
This is described by the equation m = Δy / Δx = vertical change / horizontal change = rise / run. Given two points (x1, y1) and (x2, y2), the change in x from one to the other is x2 − x1 (run), while the change in y is y2 − y1 (rise). Substituting both quantities into the above equation generates the formula m = (y2 − y1) / (x2 − x1). The formula fails for a vertical line, parallel to the y-axis, for which the slope is undefined. Suppose a line runs through two points, P and Q. If the slope computed from these points is positive, the direction of the line is increasing.
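The rise-over-run formula above, including the claim that it gives the same number for every two distinct points on the same line, can be sketched directly in Python:

```python
def slope(p, q):
    """m = (y2 - y1) / (x2 - x1) for two distinct points p and q."""
    (x1, y1), (x2, y2) = p, q
    if x1 == x2:
        raise ValueError("vertical line: slope is undefined")
    return (y2 - y1) / (x2 - x1)

# Any two points on the line y = 2x + 1 give the same ratio:
print(slope((0, 1), (2, 5)))   # 2.0
print(slope((2, 5), (5, 11)))  # 2.0
```

The guard for x1 == x2 mirrors the statement in the text that the formula fails for a vertical line.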
18.
Linear approximation
–
In mathematics, a linear approximation is an approximation of a general function using a linear function. Linear approximations are widely used in the method of finite differences to produce first-order methods for solving or approximating solutions to equations. Given a twice continuously differentiable function f of one real variable, Taylor's theorem states that f(x) = f(a) + f′(a)(x − a) + R2, where R2 is the remainder term. The linear approximation is obtained by dropping the remainder: f(x) ≈ f(a) + f′(a)(x − a). This is a good approximation for x when it is close enough to a, since a curve, when closely observed, will begin to resemble a straight line. Therefore, the expression on the right-hand side is just the equation for the tangent line to the graph of f at (a, f(a)); for this reason, this process is called the tangent line approximation. If f is concave down in the interval between x and a, the approximation will be an overestimate; if f is concave up, the approximation will be an underestimate. Linear approximations for vector functions of a vector variable are obtained in the same way, with the derivative at a point replaced by the Jacobian matrix. For example, given a differentiable function f(x, y) with real values, one can approximate f(x, y) for (x, y) close to (a, b); the right-hand side of the resulting formula is the equation of the plane tangent to the graph of z = f(x, y) at (a, b). In the more general case of Banach spaces, one has f(x) ≈ f(a) + Df(a)(x − a), where Df(a) is the Fréchet derivative of f at a. In the small-angle approximation, trigonometric functions can be expressed as linear functions of the angles. Gaussian optics applies to systems in which all the optical surfaces are either flat or are portions of a sphere. For small swings, the period of a pendulum is approximately independent of the amplitude, and it is independent of the mass of the bob; this property, called isochronism, is the reason pendulums are so useful for timekeeping. Successive swings of the pendulum, even if changing in amplitude, take the same amount of time. The electrical resistivity of most materials changes with temperature; if the temperature does not vary too much, a linear approximation is typically used. The parameter α is an empirical parameter fitted from measurement data; because the linear approximation is only an approximation, α is different for different reference temperatures.
For this reason it is usual to specify the temperature that α was measured at with a suffix, such as α15. When the temperature varies over a large temperature range, the linear approximation is inadequate and a more detailed analysis and understanding should be used. Related topics include Euler's method, finite differences, finite difference methods, Newton's method, power series and Taylor series.
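The small-angle case mentioned above, with trigonometric functions expressed as linear functions of the angle, can be checked numerically. This sketch implements the tangent-line approximation f(x) ≈ f(a) + f′(a)(x − a) and applies it to sin x near a = 0, where it reduces to sin x ≈ x:

```python
import math

def linear_approx(f, df, a, x):
    """Tangent-line approximation: f(x) ~ f(a) + f'(a) * (x - a)."""
    return f(a) + df(a) * (x - a)

# sin x ~ sin 0 + cos 0 * (x - 0) = x for small x
for x in (0.1, 0.01):
    approx = linear_approx(math.sin, math.cos, 0.0, x)
    # The error shrinks rapidly as x approaches the expansion point a = 0.
    print(x, approx, math.sin(x))
```

The same helper works for any differentiable f, provided its derivative df is supplied explicitly.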
19.
Quadratic polynomial
–
A univariate quadratic function has the form f(x) = ax² + bx + c, a ≠ 0, in the single variable x. The graph of a quadratic function is a parabola whose axis of symmetry is parallel to the y-axis. If the quadratic function is set equal to zero, then the result is a quadratic equation; the solutions to the univariate equation are called the roots of the univariate function. In general there can be an arbitrarily large number of variables, in which case the resulting surface is called a quadric. The adjective quadratic comes from the Latin word quadrātum; a term like x² is called a square in algebra because it is the area of a square with side x. In general, the prefix quadri- indicates the number 4; quadratum is the Latin word for square. The coefficients of a polynomial are often taken to be real or complex numbers, but in fact a polynomial may be defined over any ring. When using the term quadratic polynomial, authors sometimes mean having degree exactly 2, and sometimes having degree at most 2. If the degree is less than 2, this may be called a degenerate case; usually the context will establish which of the two is meant. Sometimes the word order is used with the meaning of degree. A quadratic polynomial may involve a single variable x, or multiple variables such as x, y, and z. Any single-variable quadratic polynomial may be written as ax² + bx + c, where x is the variable, and a, b, and c represent the coefficients. In elementary algebra, such polynomials often arise in the form of a quadratic equation ax² + bx + c = 0. Each quadratic polynomial has an associated quadratic function, whose graph is a parabola. Such polynomials are fundamental to the study of conic sections, which are characterized by equating the expression for f to zero. Similarly, quadratic polynomials with three or more variables correspond to quadric surfaces and hypersurfaces. In linear algebra, quadratic polynomials can be generalized to the notion of a quadratic form on a vector space.
A single-variable quadratic polynomial can be written in standard form f(x) = ax² + bx + c, in factored form f(x) = a(x − r1)(x − r2), where r1 and r2 are the roots, or as f(x) = a(x − h)² + k, which is called the vertex form, where (h, k) is the vertex of the parabola; the coefficient a is the same value in all three forms. To convert the standard form to factored form, one needs only the quadratic formula to determine the two roots r1 and r2. To convert the standard form to vertex form, one needs a process called completing the square. To convert the factored form or vertex form to standard form, one needs to multiply out and expand the factors.
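The two conversions from standard form can be sketched in Python (real roots are assumed for simplicity, and the function names are mine):

```python
import math

def to_vertex_form(a, b, c):
    """Complete the square: a*x**2 + b*x + c = a*(x - h)**2 + k."""
    h = -b / (2 * a)
    k = c - b ** 2 / (4 * a)
    return h, k

def roots(a, b, c):
    """Quadratic formula; assumes a non-negative discriminant."""
    d = math.sqrt(b ** 2 - 4 * a * c)
    return (-b - d) / (2 * a), (-b + d) / (2 * a)

# x**2 - 3x + 2 = (x - 1.5)**2 - 0.25 = (x - 1)(x - 2)
print(to_vertex_form(1, -3, 2))  # (1.5, -0.25)
print(roots(1, -3, 2))           # (1.0, 2.0)
```

Multiplying out a(x − 1)(x − 2) or a(x − 1.5)² − 0.25 recovers the original standard form, which is the third conversion described above.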
20.
Parabola
–
A parabola is a two-dimensional, mirror-symmetrical curve, which is approximately U-shaped when oriented as shown in the diagram below, but which can be in any orientation in its plane. It fits any of several superficially different mathematical descriptions, which can all be proved to define curves of exactly the same shape. One description of a parabola involves a point (the focus) and a line (the directrix); the focus does not lie on the directrix. The parabola is the locus of points in that plane that are equidistant from both the directrix and the focus. Another description is as the graph of a quadratic function, y = x², for example. The line perpendicular to the directrix and passing through the focus is called the axis of symmetry. The point on the parabola that intersects the axis of symmetry is called the vertex, and is the point where the parabola is most sharply curved. The distance between the vertex and the focus, measured along the axis of symmetry, is the focal length. The latus rectum is the chord of the parabola which is parallel to the directrix and passes through the focus. Parabolas can open up, down, left, right, or in some other arbitrary direction. Any parabola can be repositioned and rescaled to fit exactly on any other parabola; that is, all parabolas are geometrically similar. If a parabola is made of material that reflects light, then light which travels parallel to the axis of symmetry is reflected to the focus; conversely, light that originates from a point source at the focus is reflected into a parallel beam, leaving the parabola parallel to the axis of symmetry. The same effects occur with sound and other forms of energy, and this reflective property is the basis of many practical uses of parabolas. The parabola has many important applications, from a parabolic antenna or parabolic microphone to automobile headlight reflectors to the design of ballistic missiles. They are frequently used in physics, engineering, and many other areas. The earliest known work on conic sections was by Menaechmus in the fourth century BC.
He discovered a way to solve the problem of doubling the cube using parabolas. The name parabola is due to Apollonius, who discovered many properties of conic sections; it means "application", referring to the "application of areas" concept that has a connection with this curve. The focus–directrix property of the parabola and other conics is due to Pappus. Galileo showed that the path of a projectile follows a parabola, a consequence of uniform acceleration due to gravity. The idea that a parabolic reflector could produce an image was already well known before the invention of the reflecting telescope. Designs were proposed in the early to mid seventeenth century by many mathematicians, including René Descartes, Marin Mersenne, and James Gregory. When Isaac Newton built the first reflecting telescope in 1668, he skipped using a parabolic mirror because of the difficulty of fabrication, opting for a spherical mirror. Parabolic mirrors are used in most modern reflecting telescopes and in satellite dishes. For a parabola with a vertical axis, vertex at the origin and focal length f, solving for y yields y = x² / (4f). The length of the chord through the focus and parallel to the directrix is called the latus rectum; one half of it is the semi-latus rectum.
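For the parabola y = x²/(4f) just given, the focus is (0, f) and the directrix is the line y = −f, so the focus–directrix property can be verified numerically. A sketch:

```python
def dist_to_focus(x, f):
    """Distance from the point (x, x**2/(4f)) on the parabola to (0, f)."""
    y = x ** 2 / (4 * f)
    return (x ** 2 + (y - f) ** 2) ** 0.5

def dist_to_directrix(x, f):
    """Distance from the same point down to the line y = -f."""
    return x ** 2 / (4 * f) + f

# Every point on the parabola is equidistant from focus and directrix.
for x in (0.0, 1.0, -2.5):
    assert abs(dist_to_focus(x, 0.5) - dist_to_directrix(x, 0.5)) < 1e-12
```

Algebraically, x² + (y − f)² = (y + f)² reduces to x² = 4fy, which is exactly the equation of the curve, so the equality is not approximate but exact.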
21.
Polynomial interpolation
–
In numerical analysis, polynomial interpolation is the interpolation of a given data set by a polynomial: given some points, find a polynomial which goes exactly through these points. Polynomials can be used to approximate complicated curves. For example, a relevant application is the evaluation of the natural logarithm and trigonometric functions: pick a few known data points, create a lookup table, and interpolate between those data points. This results in significantly faster computations. Polynomial interpolation also forms the basis for algorithms in numerical quadrature and numerical ordinary differential equations, and for Secure Multi Party Computation and Secret Sharing schemes. Polynomial interpolation is also essential to perform sub-quadratic multiplication: for example, given a = f(x) = a₀x⁰ + a₁x¹ + ⋯ and b = g(x) = b₀x⁰ + b₁x¹ + ⋯, the product ab is equivalent to W(x) = f(x)g(x). Finding points along W(x) by substituting small values of x in f(x) and g(x) yields points on the curve; interpolation based on those points will yield the terms of W(x) and subsequently the product ab. In the case of Karatsuba multiplication this technique is faster than quadratic multiplication, and this is especially true when implemented in parallel hardware. Given a set of n + 1 data points (xᵢ, yᵢ) where no two xᵢ are the same, one is looking for a polynomial p of degree at most n with the property p(xᵢ) = yᵢ, i = 0, …, n. The unisolvence theorem states that such a polynomial p exists and is unique, and this can be proved by means of the Vandermonde matrix, as described below. The theorem states that for n + 1 interpolation nodes, polynomial interpolation defines a linear bijection Lₙ : Kⁿ⁺¹ → Πₙ, where Πₙ is the space of polynomials of degree at most n. Suppose that the interpolation polynomial is in the form p(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ⋯ + a₂x² + a₁x + a₀. The statement that p interpolates the data points means that p(xᵢ) = yᵢ for all i ∈ {0, 1, …, n}. If we substitute this form of p here, we get a system of linear equations in the coefficients aₖ.
The system in matrix-vector form reads V·a = y, where V is built from the powers of the nodes xᵢ, and we have to solve this system for the coefficients aₖ to construct the interpolant p(x). The matrix on the left is commonly referred to as a Vandermonde matrix. The condition number of the Vandermonde matrix may be large, causing large errors when computing the coefficients aᵢ if the system of equations is solved using Gaussian elimination; methods with better numerical behaviour rely on constructing first a Newton interpolation of the polynomial and then converting it to the monomial form above. For uniqueness, suppose we interpolate through n + 1 data points with an at-most degree n polynomial p(x), and suppose another polynomial q(x), also of degree at most n, also interpolates the n + 1 points; consider r = p − q. Then r is a polynomial of degree at most n, since p and q are no higher than this and we are just subtracting them. At the n + 1 data points, r(xᵢ) = p(xᵢ) − q(xᵢ) = yᵢ − yᵢ = 0; therefore, r has n + 1 roots. But r is a polynomial of degree ≤ n, so it has one root too many; hence r must be the zero polynomial, and p = q.
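The interpolant whose existence and uniqueness is argued above can also be evaluated directly, without solving the Vandermonde system, using the Lagrange form of the interpolating polynomial. A pure-Python sketch:

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate at x the unique polynomial of degree <= n passing
    through the n+1 points (xs[i], ys[i]), via the Lagrange basis."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                # Basis polynomial: 1 at xs[i], 0 at every other node.
                term *= (x - xj) / (xi - xj)
        total += term
    return total

xs, ys = [0.0, 1.0, 2.0], [1.0, 3.0, 7.0]  # points on y = x**2 + x + 1
assert lagrange_interpolate(xs, ys, 1.0) == 3.0  # reproduces a node
print(lagrange_interpolate(xs, ys, 3.0))         # 13.0 = 3**2 + 3 + 1
```

Since three points determine a unique polynomial of degree at most two, the interpolant here must coincide with x² + x + 1 everywhere, which is why the value at x = 3 comes out exact.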
22.
Linearization
–
In mathematics, linearization refers to finding the linear approximation to a function at a given point. In the study of dynamical systems, linearization is a method for assessing the local stability of an equilibrium point of a system of nonlinear differential equations or discrete dynamical systems. This method is used in fields such as engineering, physics, and economics. Linearizations of a function are lines, usually lines that can be used for purposes of calculation; in short, linearization approximates the output of a function near x = a. For example, √4 = 2, but what would be a good approximation of √4.001 = √(4 + 0.001)? For any given function y = f(x), f(x) can be approximated if it is near a known differentiable point. The most basic requisite is that La(a) = f(a), where La(x) is the linearization of f(x) at x = a. The point-slope form of an equation forms an equation of a line, given a point (H, K) and slope M; the general form of this equation is y − K = M(x − H). Using the point (a, f(a)), La(x) becomes y = f(a) + M(x − a). Because differentiable functions are locally linear, the best slope to substitute in would be the slope of the line tangent to f(x) at x = a. While the concept of local linearity applies the most to points arbitrarily close to x = a, the slope M should be, most accurately, the slope of the tangent line at x = a. Visually, the accompanying diagram shows the tangent line of f(x) at x: at f(x + h), where h is any small positive or negative value, f(x + h) is very nearly the value of the tangent line at the point x + h. The final equation for the linearization of a function at x = a is y = f(a) + f′(a)(x − a), since the derivative of f(x) is f′(x) and the slope of f(x) at a is f′(a). To find √4.001, we can use the fact that √4 = 2. The linearization of f(x) = √x at x = a is y = √a + (x − a) / (2√a); substituting in a = 4, the linearization at 4 is y = 2 + (x − 4)/4. In this case x = 4.001, so √4.001 is approximately 2 + (4.001 − 4)/4 = 2.00025. The true value is close to 2.00024998, so the linearization approximation has an error of less than 1 millionth of a percent. Linearization makes it possible to use tools for studying linear systems to analyze the behavior of a nonlinear function near a given point.
The linearization of a function is the first-order term of its Taylor expansion around the point of interest. In stability analysis of autonomous systems, one can use the eigenvalues of the Jacobian matrix evaluated at a hyperbolic equilibrium point to determine the nature of that equilibrium.
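The √4.001 example worked above can be reproduced in a few lines of Python (the helper name is mine):

```python
import math

def linearize_sqrt(a, x):
    """Linearization of f(x) = sqrt(x) at a:
    y = sqrt(a) + (x - a) / (2 * sqrt(a))."""
    return math.sqrt(a) + (x - a) / (2 * math.sqrt(a))

approx = linearize_sqrt(4.0, 4.001)
print(approx)            # approximately 2.00025
print(math.sqrt(4.001))  # approximately 2.00024998
```

The gap between the two printed values is on the order of 10⁻⁸, matching the error of less than a millionth of a percent quoted in the text.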