1.
Vector field
–
In vector calculus, a vector field is an assignment of a vector to each point in a subset of space. A vector field in the plane can be visualised as a collection of arrows, each attached to a point of the plane. The elements of differential and integral calculus extend naturally to vector fields, which can usefully be thought of as representing the velocity of a flow in space. In coordinates, a vector field on a domain in n-dimensional Euclidean space can be represented as a vector-valued function that associates an n-tuple of real numbers to each point of the domain. This representation of a vector field depends on the coordinate system. Vector fields are often discussed on open subsets of Euclidean space, but they also make sense on other subsets such as surfaces, where they associate an arrow tangent to the surface at each point. More generally, vector fields are defined on manifolds, which are spaces that look like Euclidean space on small scales. In this setting, a vector field gives a tangent vector at each point of the manifold. Vector fields are one kind of tensor field. Given a subset S of Rn, a vector field is represented by a vector-valued function V: S → Rn in standard Cartesian coordinates. If each component of V is continuous, then V is a continuous vector field. A vector field can be visualized as assigning a vector to individual points within an n-dimensional space. In physics, a vector is additionally distinguished by how its coordinates change when one measures the same vector with respect to a different background coordinate system; these transformation properties distinguish a vector as a geometrically distinct entity from a simple list of scalars. Thus, given a choice of Cartesian coordinates in which the components of the vector V are known, the components of V in a new coordinate system are required to satisfy a transformation law relating the two systems. Such a transformation law is called contravariant. Given a differentiable manifold M, a vector field on M is an assignment of a tangent vector to each point in M.
More precisely, a vector field F is a mapping from M into the tangent bundle TM such that p ∘ F is the identity mapping, where p denotes the projection from TM to M; in other words, a vector field is a section of the tangent bundle. If the manifold M is smooth or analytic—that is, the change of coordinates is smooth or analytic—then one can make sense of the notion of smooth (or analytic) vector fields. The collection of all smooth vector fields on a smooth manifold M is often denoted by Γ(TM) or C∞(M, TM). A vector field for the movement of air on Earth will associate, for every point on the surface of the Earth, a vector with the wind speed and direction at that point. This can be drawn using arrows to represent the wind; the length of each arrow indicates the wind speed.
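As a concrete illustration of a vector field given in coordinates, the sketch below represents a field on R² as a Python function returning a vector at each point. The function name and the particular field are illustrative choices, not taken from the article above.

```python
def rotation_field(x, y):
    """A vector field on R^2: assigns the vector (-y, x) to each point.

    This field is everywhere tangent to circles centred at the origin,
    so it can be read as the velocity field of a rigid rotation.
    """
    return (-y, x)

# Sample the field at a few points of its domain.
for point in [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]:
    vx, vy = rotation_field(*point)
    print(f"V{point} = ({vx}, {vy})")
```

Each point of the domain gets its own arrow; plotting these arrows on a grid gives the familiar "field of arrows" picture described above.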
2.
Unit vector
–
In mathematics, a unit vector in a normed vector space is a vector of length 1. A unit vector is often denoted by a lowercase letter with a circumflex, or hat, as in û. The term direction vector is used to describe a unit vector that represents a spatial direction. 2D spatial directions represented this way are equivalent numerically to points on the unit circle, and the same construct is used to specify spatial directions in 3D, where each direction is equivalent numerically to a point on the unit sphere. The normalized vector or versor û of a nonzero vector u is the unit vector in the direction of u, i.e. û = u / ‖u‖, where ‖u‖ is the norm of u. The term normalized vector is sometimes used as a synonym for unit vector. Unit vectors are often chosen to form the basis of a vector space, and every vector in the space may then be written as a linear combination of unit vectors. By definition, in a Euclidean space the dot product of two unit vectors is a scalar value equal to the cosine of the smaller subtended angle. In three-dimensional Euclidean space, the cross product of two arbitrary unit vectors is a third vector orthogonal to both of them whose length is equal to the sine of the smaller subtended angle. Unit vectors may be used to represent the axes of a Cartesian coordinate system, and they are often denoted using normal vector notation rather than standard unit vector notation. In most contexts it can be assumed that i, j and k denote these Cartesian unit vectors; other notations, with or without hat, are also used, particularly in contexts where i, j, k might lead to confusion with another quantity. When a unit vector in space is expressed, in Cartesian notation, as a linear combination of i, j, k, the value of each component is equal to the cosine of the angle formed by the unit vector with the respective basis vector. This is one of the methods used to describe the orientation of a straight line, segment of straight line, or oriented axis.
It is important to note that ρ̂ and φ̂ are functions of φ; when differentiating or integrating in cylindrical coordinates, these unit vectors themselves must also be operated on. For a more complete description, see the Jacobian matrix. To minimize degeneracy, the polar angle is usually taken as 0 ≤ θ ≤ 180∘. It is especially important to note the context of any ordered triplet written in spherical coordinates; here, the American physics convention is used.
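The normalization û = u/‖u‖ described in this section can be sketched in a few lines of Python. The `normalize` helper below is an illustrative implementation, not from any particular library.

```python
import math

def normalize(u):
    """Return the unit vector u / ||u|| in the direction of u."""
    norm = math.sqrt(sum(c * c for c in u))
    if norm == 0:
        # The zero vector has no direction, so no unit vector exists.
        raise ValueError("cannot normalize the zero vector")
    return tuple(c / norm for c in u)

u_hat = normalize((3.0, 4.0))
print(u_hat)               # the direction of (3, 4): (0.6, 0.8)
print(math.hypot(*u_hat))  # length is 1 (up to rounding)
```

The resulting vector always has norm 1, which is exactly the defining property of a unit vector.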
3.
Surface (topology)
–
In topology and differential geometry, a surface is a two-dimensional manifold, and, as such, may be an abstract surface not embedded in any Euclidean space. For example, the Klein bottle is a surface that cannot be represented in three-dimensional Euclidean space without introducing self-intersections. In mathematics, a surface is a geometrical shape that resembles a deformed plane. The most familiar examples arise as boundaries of solid objects in ordinary three-dimensional Euclidean space R3; the exact definition of a surface may depend on the context. Typically, in geometry, a surface may cross itself, while, in topology and differential geometry, it may not. A surface is a two-dimensional space; this means that a moving point on a surface may move in two directions. In other words, around almost every point, there is a patch on which a two-dimensional coordinate system is defined. For example, the surface of the Earth resembles a two-dimensional sphere. The concept of surface is widely used in physics, engineering, computer graphics, and many other disciplines, primarily in representing the surfaces of physical objects; for example, in analyzing the aerodynamic properties of an airplane, the central consideration is the flow of air along its surface. A surface is a topological space in which every point has an open neighbourhood homeomorphic to some open subset of the Euclidean plane E2. Such a neighborhood, together with the corresponding homeomorphism, is known as a chart, and it is through this chart that the neighborhood inherits the standard coordinates on the Euclidean plane. These coordinates are known as local coordinates, and these homeomorphisms lead us to describe surfaces as being locally Euclidean. In most writings on the subject, it is assumed, explicitly or implicitly, that as a topological space a surface is also nonempty, second countable, and Hausdorff. It is also assumed that the surfaces under consideration are connected.
The rest of this article will assume, unless specified otherwise, that a surface is nonempty, Hausdorff, second countable, and connected; such homeomorphisms are also known as charts. The boundary of the upper half-plane is the x-axis; a point on the surface mapped via a chart to the x-axis is termed a boundary point. The collection of such points is known as the boundary of the surface, which is necessarily a one-manifold, that is, a union of closed curves. On the other hand, a point mapped to a point above the x-axis is an interior point; the collection of interior points is the interior of the surface, which is always non-empty. The closed disk is an example of a surface with boundary.
4.
Physics
–
Physics is the natural science that involves the study of matter and its motion and behavior through space and time, along with related concepts such as energy and force. One of the most fundamental scientific disciplines, the main goal of physics is to understand how the universe behaves. Physics is one of the oldest academic disciplines, perhaps the oldest through its inclusion of astronomy. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the fundamental mechanisms of other sciences while opening new avenues of research in areas such as mathematics. Physics also makes significant contributions through advances in new technologies that arise from theoretical breakthroughs; in recognition of the field, the United Nations named 2005 the World Year of Physics. Astronomy is the oldest of the natural sciences. The stars and planets were often a target of worship, believed to represent gods, though the explanations for these phenomena were often unscientific and lacking in evidence. According to Asger Aaboe, the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy. The most notable innovations were in the field of optics and vision, which came from the works of many scientists like Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi and Avicenna. The most notable work was The Book of Optics, written by Ibn al-Haytham, in which he was not only the first to disprove the ancient Greek idea about vision, but also came up with a new theory. In the book, he was also the first to study the phenomenon of the pinhole camera; many later European scholars and fellow polymaths, from Robert Grosseteste and Leonardo da Vinci to René Descartes, Johannes Kepler and Isaac Newton, were in his debt.
Indeed, the influence of Ibn al-Haytham's Optics ranks alongside that of Newton's work of the same title; the translation of The Book of Optics had a huge impact on Europe. From it, later European scholars were able to build the same devices as Ibn al-Haytham had, and from this such important things as eyeglasses, magnifying glasses and telescopes were developed. Physics became a separate science when early modern Europeans used experimental and quantitative methods to discover what are now considered to be the laws of physics. Newton also developed calculus, the mathematical study of change, which provided new mathematical methods for solving physical problems. The discovery of new laws in thermodynamics, chemistry, and electromagnetics resulted from greater research efforts during the Industrial Revolution as energy needs increased. However, inaccuracies in classical mechanics for very small objects and very high velocities led to the development of modern physics in the 20th century. Modern physics began in the early 20th century with the work of Max Planck in quantum theory and Albert Einstein's theory of relativity; both of these theories came about due to inaccuracies in classical mechanics in certain situations. Quantum mechanics would come to be pioneered by Werner Heisenberg and Erwin Schrödinger; from this early work, and work in related fields, the Standard Model of particle physics was derived. Areas of mathematics in general are important to this field, such as the study of probabilities; in many ways, physics stems from ancient Greek philosophy.
5.
Applied mathematics
–
Applied mathematics is a branch of mathematics that deals with mathematical methods that find use in science, engineering, business, computer science, and industry. Thus, applied mathematics is a combination of mathematical science and specialized knowledge. The term applied mathematics also describes the professional specialty in which mathematicians work on practical problems by formulating and studying mathematical models. The activity of applied mathematics is thus intimately connected with research in pure mathematics. Historically, applied mathematics consisted principally of applied analysis, most notably differential equations and approximation theory. Quantitative finance is now taught in mathematics departments across universities, and mathematical finance is considered a full branch of applied mathematics. Engineering and computer science departments have also traditionally made use of applied mathematics. Today, the term applied mathematics is used in a broader sense. It includes the areas noted above as well as other areas that have become increasingly important in applications. Even fields such as number theory that are part of pure mathematics are now important in applications. There is no consensus as to what the various branches of applied mathematics are; such categorizations are made difficult by the way mathematics and science change over time, and also by the way universities organize departments, courses, and degrees. Many mathematicians distinguish between applied mathematics, which is concerned with mathematical methods, and the applications of mathematics within science. Mathematicians such as Poincaré and Arnold deny the existence of applied mathematics; similarly, non-mathematicians often blend applied mathematics and applications of mathematics. The use and development of mathematics to solve industrial problems is also called industrial mathematics. Historically, mathematics was most important in the natural sciences and engineering.
Academic institutions are not consistent in the way they group and label courses and programs. At some schools there is a single mathematics department, whereas others have separate departments for Applied Mathematics and Mathematics. It is very common for Statistics departments to be separated at schools with graduate programs, and many applied mathematics programs consist primarily of cross-listed courses and jointly appointed faculty in departments representing applications. Some Ph.D. programs in applied mathematics require little or no coursework outside of mathematics; in some respects this difference reflects the distinction between application of mathematics and applied mathematics. Research universities dividing their mathematics department into pure and applied sections include MIT. Brigham Young University also has an Applied and Computational Emphasis, a program that allows students to graduate with a Mathematics degree with an emphasis in Applied Math.
6.
Electromagnetism
–
Electromagnetism is a branch of physics involving the study of the electromagnetic force, a type of physical interaction that occurs between electrically charged particles. The electromagnetic force usually exhibits electromagnetic fields, such as electric fields, magnetic fields, and light. The other three fundamental interactions are the strong interaction, the weak interaction, and gravitation. The word electromagnetism is a compound form of two Greek terms, ἤλεκτρον (ēlektron), amber, and μαγνῆτις λίθος (magnētis lithos), which means magnesian stone. The electromagnetic force plays a major role in determining the internal properties of most objects encountered in daily life. Ordinary matter takes its form as a result of intermolecular forces between individual atoms and molecules in matter, and these are a manifestation of the electromagnetic force. Electrons are bound by the electromagnetic force to atomic nuclei, and this force shapes their orbitals. The electromagnetic force also governs the processes involved in chemistry, which arise from interactions between the electrons of neighboring atoms. There are numerous mathematical descriptions of the electromagnetic field. In classical electrodynamics, electric fields are described in terms of electric potential and electric current. Although electromagnetism is considered one of the four fundamental forces, at high energy the weak force and electromagnetic force are unified as a single electroweak force. In the history of the universe, this unified force broke into the two separate forces as the universe cooled. Originally, electricity and magnetism were considered to be two separate forces. Magnetic poles attract or repel one another in a manner similar to positive and negative charges, and always exist as pairs: every north pole is yoked to a south pole. An electric current inside a wire creates a corresponding magnetic field outside the wire; its direction depends on the direction of the current in the wire.
A current is induced in a loop of wire when it is moved toward or away from a magnetic field, or when a magnet is moved towards or away from it. While preparing for a lecture on 21 April 1820, Hans Christian Ørsted made a surprising observation. As he was setting up his materials, he noticed a compass needle deflect away from north when the electric current from the battery he was using was switched on. At the time of discovery, Ørsted did not suggest any explanation of the phenomenon; however, three months later he began more intensive investigations.
7.
Surface integral
–
In mathematics, a surface integral is a generalization of multiple integrals to integration over surfaces. It can be thought of as the double integral analog of the line integral. Given a surface, one may integrate over it scalar fields and vector fields. Surface integrals have applications in physics, particularly within the theory of classical electromagnetism. Let such a parameterization be x(s, t), where (s, t) varies in some region T in the plane. The surface integral of a scalar field f can be expressed in the equivalent form ∬S f dΣ = ∬T f(x(s, t)) √g ds dt, where g is the determinant of the first fundamental form of the mapping x(s, t). One can recognize the vector ∂x/∂s × ∂x/∂t as a normal vector to the surface. Note that because of the presence of the cross product, the above formulas only work for surfaces embedded in three-dimensional space. This can be seen as integrating a Riemannian volume form on the parameterized surface. Now consider a vector field v on S; that is, for each x in S, v(x) is a vector. The surface integral of the norm of v can be defined according to the definition of the surface integral of a scalar field; this applies, for example, in the expression of the electric field at some fixed point due to an electrically charged surface. Alternatively, we can integrate the normal component of the vector field: imagine that we have a fluid flowing through S, such that v(x) determines the velocity of the fluid at x. The flux is defined as the quantity of fluid flowing through S per unit time. This illustration implies that if the vector field is tangent to S at each point, then the flux is zero, because the fluid just flows in parallel to S. It also implies that if v does not just flow along S, then only its normal component contributes to the flux, and we find the formula ∬S v ⋅ dΣ = ∬T v(x(s, t)) ⋅ (∂x/∂s × ∂x/∂t) ds dt. The cross product on the right-hand side of this expression is a surface normal determined by the parametrization.
This formula defines the integral on the left. We may also interpret this as a special case of integrating 2-forms, where we identify the vector field with a 1-form and then integrate its Hodge dual over the surface. The transformations of the other forms are similar. Then the integral of a 2-form f on S is given by an integral ∬D ds dt over the parameter domain D, where ∂x/∂s × ∂x/∂t is the surface element normal to S. Let us note that the surface integral of this 2-form is the same as the surface integral of the vector field which has as components f x, f y and f z.
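The flux formula ∬S v ⋅ dΣ = ∬T v(x(s, t)) ⋅ (∂x/∂s × ∂x/∂t) ds dt can be checked numerically. The sketch below is illustrative Python (the function name and the midpoint-rule discretization are my own choices); it approximates the flux of the radial field v(x) = x through the unit sphere, which by the divergence theorem should equal 3 · (4/3)π = 4π.

```python
import math

def flux_through_unit_sphere(v, n_s=200, n_t=200):
    """Approximate the flux  ∬ v · (∂x/∂s × ∂x/∂t) ds dt  over the unit
    sphere, parameterized by the polar angle s in [0, π] and the
    azimuthal angle t in [0, 2π], using a midpoint rule.
    """
    ds, dt = math.pi / n_s, 2 * math.pi / n_t
    total = 0.0
    for i in range(n_s):
        s = (i + 0.5) * ds
        for j in range(n_t):
            t = (j + 0.5) * dt
            # Point on the sphere for this parameter pair.
            x = (math.sin(s) * math.cos(t),
                 math.sin(s) * math.sin(t),
                 math.cos(s))
            # For this parameterization, ∂x/∂s × ∂x/∂t = sin(s) · x,
            # the outward surface normal scaled by the area element.
            normal = tuple(math.sin(s) * c for c in x)
            total += sum(a * b for a, b in zip(v(x), normal)) * ds * dt
    return total

flux = flux_through_unit_sphere(lambda x: x)
print(flux, "vs", 4 * math.pi)  # the two values agree closely
```

Any other vector field can be passed in for `v`; a field everywhere tangent to the sphere gives a flux near zero, matching the fluid-flow picture above.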
8.
Latin
–
Latin is a classical language belonging to the Italic branch of the Indo-European languages. The Latin alphabet is derived from the Etruscan and Greek alphabets. Latin was originally spoken in Latium, in the Italian Peninsula; through the power of the Roman Republic, it became the dominant language of the region. Vulgar Latin developed into the Romance languages, such as Italian, Portuguese, Spanish, French, and Romanian. Latin, Italian and French have contributed many words to the English language, and Latin and Ancient Greek roots are used in theology, biology, and medicine. By the late Roman Republic, Old Latin had been standardised into Classical Latin. Vulgar Latin was the colloquial form spoken during the same time and attested in inscriptions and the works of comic playwrights like Plautus and Terence. Late Latin is the written language from the 3rd century onward. Later, Early Modern Latin and Modern Latin evolved. Latin was used as the language of international communication, scholarship, and science until well into the 18th century, when it began to be supplanted by vernaculars. Ecclesiastical Latin remains the language of the Holy See and the Roman Rite of the Catholic Church. Today, many students, scholars and members of the Catholic clergy speak Latin fluently, and it is taught in primary, secondary and postsecondary educational institutions around the world. The language has been passed down through various forms. Some inscriptions have been published in an internationally agreed, monumental, multivolume series, the Corpus Inscriptionum Latinarum. Authors and publishers vary, but the format is about the same: volumes detailing inscriptions with a critical apparatus stating the provenance. The reading and interpretation of these inscriptions is the subject matter of the field of epigraphy. The works of several hundred ancient authors who wrote in Latin have survived in whole or in part, and they are in part the subject matter of the field of classics.
Modern Latin translations include The Cat in the Hat and a book of fairy tales; additional resources include phrasebooks and resources for rendering everyday phrases and concepts into Latin, such as Meissner's Latin Phrasebook. The Latin influence in English has been significant at all stages of its insular development. From the 16th to the 18th centuries, English writers cobbled together huge numbers of new words from Latin and Greek words, dubbed inkhorn terms, as if they had spilled from a pot of ink. Many of these words were used once by the author and then forgotten, but many of the most common polysyllabic English words are of Latin origin through the medium of Old French. Romance words make up respectively 59%, 20% and 14% of the English, German and Dutch vocabularies, and those figures can rise dramatically when only non-compound and non-derived words are included. Accordingly, Romance words make up roughly 35% of the vocabulary of Dutch. Roman engineering had the same effect on scientific terminology as a whole.
9.
Method of Fluxions
–
Method of Fluxions is a book by Isaac Newton. The book was completed in 1671 and published in 1736. Fluxion is Newton's term for a derivative. He originally developed the method at Woolsthorpe Manor during the closing of Cambridge during the Great Plague of London from 1665 to 1667. Leibniz, however, published his discovery of differential calculus in 1684, nine years before Newton formally published his fluxion notation form of calculus in part during 1693. For a period of time encompassing Newton's working life, the discipline of analysis was a subject of controversy in the mathematical community; lacking rigorous foundations, analysts were often forced to invoke infinitesimal, or infinitely small, quantities to justify their algebraic manipulations. Some of Newton's mathematical contemporaries, such as Isaac Barrow, were skeptical of such techniques. Method of Fluxions at the Internet Archive.
10.
Differential calculus
–
In mathematics, differential calculus is a subfield of calculus concerned with the study of the rates at which quantities change. It is one of the two traditional divisions of calculus, the other being integral calculus. The primary objects of study in differential calculus are the derivative of a function and related notions such as the differential, together with their applications. The derivative of a function at a chosen input value describes the rate of change of the function near that input value. The process of finding a derivative is called differentiation. Geometrically, the derivative at a point is the slope of the tangent line to the graph of the function at that point, provided that the derivative exists and is defined at that point. For a real-valued function of a single real variable, the derivative of the function at a point generally determines the best linear approximation to the function at that point. Differential calculus and integral calculus are connected by the fundamental theorem of calculus. Differentiation has applications to nearly all quantitative disciplines. For example, in physics, the derivative of the displacement of a moving body with respect to time is the velocity of the body, and the derivative of velocity with respect to time is acceleration. The derivative of the momentum of a body equals the force applied to the body. The reaction rate of a chemical reaction is a derivative. In operations research, derivatives determine the most efficient ways to transport materials. Derivatives are frequently used to find the maxima and minima of a function. Equations involving derivatives are called differential equations and are fundamental in describing natural phenomena. Derivatives and their generalizations appear in many fields of mathematics, such as complex analysis, functional analysis, differential geometry, measure theory, and abstract algebra. Suppose that x and y are real numbers and that y is a function of x; this relationship can be written as y = f(x).
If f(x) is the equation for a straight line, then there are two real numbers m and b such that y = mx + b. In this slope-intercept form, the term m is called the slope and can be determined from the formula m = (change in y)/(change in x) = Δy/Δx. It follows that Δy = m Δx. A general function is not a line, so it does not have a slope. Geometrically, the derivative of f at the point x = a is the slope of the tangent line to the function f at the point a; this is often denoted f′(a) in Lagrange's notation or dy/dx|x = a in Leibniz's notation. Since the derivative is the slope of the best linear approximation to f at the point a, the derivative, together with the value of f at a, determines that approximation. If every point a in the domain of f has a derivative, there is a function sending every point a to the derivative of f at a, called the derivative function; for example, if f(x) = x2, then the derivative function is f′(x) = dy/dx = 2x.
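The difference quotient Δy/Δx above can be evaluated for a shrinking Δx to approximate a derivative numerically. The short Python sketch below (the helper name `derivative` is my own) uses a symmetric difference quotient and recovers a value close to f′(3) = 2·3 = 6 for f(x) = x2.

```python
def derivative(f, a, h=1e-6):
    """Approximate f'(a) by the symmetric difference quotient Δy/Δx
    taken over the small interval [a - h, a + h]."""
    return (f(a + h) - f(a - h)) / (2 * h)

f = lambda x: x ** 2
print(derivative(f, 3.0))  # close to the exact slope 2*3 = 6
```

For a line f(x) = mx + b this quotient returns exactly m for any h, matching the slope-intercept discussion above; for general functions, the approximation improves as h shrinks (until floating-point round-off dominates).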
11.
Isaac Newton
–
His book Philosophiæ Naturalis Principia Mathematica, first published in 1687, laid the foundations of classical mechanics. Newton also made contributions to optics, and he shares credit with Gottfried Wilhelm Leibniz for developing the infinitesimal calculus. Newton's Principia formulated the laws of motion and universal gravitation that dominated scientists' view of the universe for the next three centuries. Newton's work on light was collected in his influential book Opticks. He also formulated a law of cooling and made the first theoretical calculation of the speed of sound. Newton was a fellow of Trinity College and the second Lucasian Professor of Mathematics at the University of Cambridge. Politically and personally tied to the Whig party, Newton served two brief terms as Member of Parliament for the University of Cambridge, in 1689–90 and 1701–02. He was knighted by Queen Anne in 1705, and he spent the last three decades of his life in London, serving as Warden and Master of the Royal Mint. His father, also named Isaac Newton, had died three months before. Born prematurely, he was a small child; his mother Hannah Ayscough reportedly said that he could have fit inside a quart mug. When Newton was three, his mother remarried and went to live with her new husband, the Reverend Barnabas Smith, leaving her son in the care of his maternal grandmother; Newton's mother had three children from her second marriage. From the age of twelve until he was seventeen, Newton was educated at The King's School, Grantham, which taught Latin and Greek. He was removed from school, and by October 1659 he was to be found at Woolsthorpe-by-Colsterworth. Henry Stokes, master at The King's School, persuaded his mother to send him back to school so that he might complete his education. Motivated partly by a desire for revenge against a schoolyard bully, he became the top-ranked student.
In June 1661, he was admitted to Trinity College, Cambridge. He started as a subsizar—paying his way by performing valet's duties—until he was awarded a scholarship in 1664, which guaranteed him four more years until he could get his M.A. He set down in his notebook a series of Quaestiones about mechanical philosophy as he found it. In 1665, he discovered the generalised binomial theorem and began to develop a mathematical theory that later became calculus. Soon after Newton had obtained his B.A. degree in August 1665, the university temporarily closed as a precaution against the Great Plague. In April 1667, he returned to Cambridge and in October was elected as a fellow of Trinity. Fellows were required to become ordained priests, although this was not enforced in the Restoration years; however, by 1675 the issue could not be avoided, and by then his unconventional views stood in the way. Nevertheless, Newton managed to avoid ordination by means of a special permission from Charles II. He was elected a Fellow of the Royal Society in 1672. Newton's work has been said to distinctly advance every branch of mathematics then studied, and his work on the subject usually referred to as fluxions or calculus, seen in a manuscript of October 1666, is now published among Newton's mathematical papers.
12.
James Clerk Maxwell
–
James Clerk Maxwell FRS FRSE was a Scottish scientist in the field of mathematical physics. Maxwell's equations for electromagnetism have been called the second great unification in physics, after the first one realised by Isaac Newton. With the publication of A Dynamical Theory of the Electromagnetic Field in 1865, Maxwell proposed that light is an undulation in the same medium that is the cause of electric and magnetic phenomena. The unification of light and electrical phenomena led to the prediction of the existence of radio waves. Maxwell also helped develop the Maxwell–Boltzmann distribution, a statistical means of describing aspects of the kinetic theory of gases. He is additionally known for presenting the first durable colour photograph in 1861. His discoveries helped usher in the era of modern physics, laying the foundation for such fields as special relativity and quantum mechanics. Many physicists regard Maxwell as the 19th-century scientist having the greatest influence on 20th-century physics, and his contributions to the science are considered by many to be of the same magnitude as those of Isaac Newton and Albert Einstein. In the millennium poll—a survey of the 100 most prominent physicists—Maxwell was voted the third greatest physicist of all time, behind only Newton and Einstein. On the centenary of Maxwell's birthday, Einstein described Maxwell's work as the most profound and the most fruitful that physics has experienced since the time of Newton. James Clerk Maxwell was born on 13 June 1831 at 14 India Street, Edinburgh, to John Clerk Maxwell of Middlebie, an advocate. His father was a man of comfortable means of the Clerk family of Penicuik, holders of the baronetcy of Clerk of Penicuik. His father's brother was the 6th Baronet; James was a first cousin of the artist Jemima Blackburn and a cousin of the civil engineer William Dyce Cay.
They were close friends, and Cay acted as his best man when Maxwell married. Maxwell's parents met and married when they were well into their thirties; his mother was nearly 40 when he was born. They had had one earlier child, a daughter named Elizabeth. When Maxwell was young his family moved to Glenlair House, which his parents had built on the 1,500-acre Middlebie estate. All indications suggest that Maxwell had maintained an unquenchable curiosity from an early age. By the age of three, everything that moved, shone, or made a noise drew the question "what's the go o' that?". "Show me how it doos" was never out of his mouth, and he also investigated "the hidden course of streams and bell-wires, the way the water gets from the pond through the wall". Recognising the potential of the boy, Maxwell's mother Frances took responsibility for James's early education. At eight he could recite long passages of Milton and the whole of the 119th Psalm; indeed, his knowledge of scripture was already very detailed, and he could give chapter and verse for almost any quotation from the Psalms. His mother was ill with abdominal cancer and, after an unsuccessful operation, died when he was still young.
13.
Fundamental theorem of calculus
–
The fundamental theorem of calculus is a theorem that links the concept of the derivative of a function with the concept of the function's integral. The first part of the theorem guarantees the existence of antiderivatives for continuous functions. The second part of the theorem has key practical applications, because it simplifies the computation of definite integrals. The fundamental theorem of calculus relates differentiation and integration, showing that these two operations are essentially inverses of one another. Before the discovery of this theorem, it was not recognized that these two operations were related. Ancient Greek mathematicians knew how to compute area via infinitesimals, an operation that we would now call integration. The first published statement and proof of a rudimentary form of the fundamental theorem, strongly geometric in character, was by James Gregory. Isaac Barrow proved a more generalized version of the theorem, while his student Isaac Newton completed the development of the surrounding mathematical theory. Gottfried Leibniz systematized the knowledge into a calculus for infinitesimal quantities. For a continuous function y = f(x) whose graph is plotted as a curve, each value of x has a corresponding area function A(x), representing the area beneath the curve between 0 and x. The function A(x) may not be known, but it is given that it represents the area under the curve. The area under the curve between x and x + h can be computed by finding the area between 0 and x + h, then subtracting the area between 0 and x; in other words, the area of this "sliver" would be A(x + h) − A(x). There is another way to estimate the area of this same sliver: as shown in the accompanying figure, h is multiplied by f(x) to find the area of a rectangle that is approximately the same size as the sliver. So A(x + h) − A(x) ≈ f(x)h. In fact, this becomes a perfect equality if we add the red portion of the "excess" area shown in the diagram.
So, A(x + h) − A(x) = f(x)·h + (Red Excess). Rearranging terms, f(x) = (A(x + h) − A(x))/h − (Red Excess)/h. As h approaches 0 in the limit, the last fraction can be shown to go to zero. This is true because the area of the red portion of the excess region is less than or equal to the area of the tiny black-bordered rectangle. More precisely, |f(x) − (A(x + h) − A(x))/h| = |Red Excess|/h ≤ h·|f(x + h) − f(x)|/h = |f(x + h) − f(x)|. By the continuity of f, the latter expression tends to zero as h does. Therefore, the left-hand side tends to zero as h does as well; that is, the derivative of the area function A(x) exists and is the original function f(x), so the area function is simply an antiderivative of the original function. Computing the derivative of a function and "finding the area" under its curve are opposite operations, and this is the crux of the fundamental theorem of calculus. Intuitively, the theorem states that the sum of infinitesimal changes in a quantity over time adds up to the net change in the quantity.
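The sliver argument above can be checked numerically. This is a minimal sketch (the example function f(x) = x², the step sizes, and the sample point are my choices, not from the source): the area function A(x) is approximated by a Riemann sum, and its difference quotient should approach f(x).

```python
# Sketch of the fundamental theorem of calculus for an assumed f(x) = x**2:
# the difference quotient of the area function A recovers f.

def f(x):
    return x * x

DT = 1e-4  # integration step for the Riemann sum (assumed)

def area(x):
    """Approximate A(x), the area under f between 0 and x, by a left Riemann sum."""
    n = int(round(x / DT))
    return sum(f(i * DT) for i in range(n)) * DT

x, h = 1.5, 0.01
slope = (area(x + h) - area(x - h)) / (2 * h)
print(area(x))  # close to the exact area x**3 / 3 = 1.125
print(slope)    # close to f(1.5) = 2.25, illustrating A'(x) = f(x)
```

Shrinking h (and DT with it) drives the difference quotient closer to f(x), mirroring the limit argument in the text.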
14.
Transport phenomena
–
In engineering, physics and chemistry, the study of transport phenomena concerns the exchange of mass, energy, and momentum between observed and studied systems. While it draws its theoretical foundation from principles in a number of fields, from continuum mechanics to thermodynamics, most of fundamental transport theory is a restatement of basic conservation laws. Thus, the different phenomena that lead to transport are each considered individually, with the knowledge that the sum of their contributions must equal zero; this principle is useful for calculating many relevant quantities. For example, in fluid mechanics, a common use of transport analysis is to determine the velocity profile of a fluid flowing through a rigid volume. Transport phenomena are ubiquitous throughout the engineering disciplines, and transport analysis is now considered a part of the engineering discipline as much as thermodynamics, mechanics, and electromagnetism. Transport phenomena encompass all agents of physical change in the universe; moreover, they are considered to be fundamental building blocks of the universe. However, the scope here is limited to the relationship of transport phenomena to artificial engineered systems. In physics, transport phenomena are all irreversible processes of a statistical nature stemming from the random continuous motion of molecules, mostly observed in fluids. Every aspect of transport phenomena is grounded in two primary concepts: the conservation laws and the constitutive equations. The conservation laws, which in the context of transport phenomena are formulated as continuity equations, describe how the quantity being studied must be conserved; the constitutive equations describe how the quantity in question responds to various stimuli via transport. These equations also demonstrate the connection between transport phenomena and thermodynamics, a connection that explains why transport phenomena are irreversible. 
Almost all physical phenomena ultimately involve systems seeking their lowest energy state, in keeping with the principle of minimum energy. As they approach this state, they tend to achieve true thermodynamic equilibrium, at which point there are no longer any driving forces in the system. Examples of transport processes include heat conduction, fluid flow, molecular diffusion, radiation, and electric charge transfer in semiconductors. In solid state physics, for example, the motion and interaction of electrons, holes, and phonons are studied as transport phenomena; another example is in biomedical engineering, where some transport phenomena of interest are thermoregulation, perfusion, and microfluidics. In chemical engineering, transport phenomena are studied in reactor design, analysis of molecular or diffusive transport mechanisms, and metallurgy. The transport of mass, energy, and momentum can be affected by the presence of external sources: the rate of cooling of a solid that is conducting heat depends on whether a heat source is applied, and the gravitational force acting on a rain drop counteracts the resistance or drag imparted by the surrounding air. An important principle in the study of transport phenomena is the analogy between the phenomena; the conduction of heat in a material, for example, is a diffusion of energy analogous to the diffusion of mass.
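The analogy noted above rests on the shared mathematical form of diffusive transport. The following is a minimal sketch (the diffusivity, grid, and boundary values are assumed, not from the source) of an explicit finite-difference step of the one-dimensional diffusion equation that underlies heat conduction, molecular diffusion, and momentum diffusion alike.

```python
# Explicit finite-difference steps of dT/dt = a * d2T/dx2 on a 1-D rod.
# All numbers are assumed for illustration.

a = 1e-4   # diffusivity, m^2/s
dx = 0.01  # grid spacing, m
dt = 0.1   # time step, s; stable since a*dt/dx**2 = 0.1 <= 0.5

T = [100.0] + [0.0] * 9  # hot left end, cold interior and right end

for _ in range(200):
    interior = [T[i] + a * dt / dx**2 * (T[i + 1] - 2 * T[i] + T[i - 1])
                for i in range(1, len(T) - 1)]
    T = [T[0]] + interior + [T[-1]]  # fixed-temperature boundaries

print(T)  # profile relaxes monotonically between the two fixed ends
```

Replacing temperature with concentration (and thermal diffusivity with mass diffusivity) gives the identical update rule, which is exactly the analogy the text describes.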
15.
Heat transfer
–
Heat transfer is the exchange of thermal energy between physical systems. The rate of transfer is dependent on the temperatures of the systems. The three fundamental modes of heat transfer are conduction, convection and radiation. Heat transfer, the flow of energy in the form of heat, is a process by which a system's internal energy is changed. Conduction is also known as diffusion, not to be confused with mass diffusion, which relates to the mixing of constituents of a fluid. The direction of heat transfer is from a region of high temperature to a region of lower temperature. Heat transfer changes the internal energy of the systems between which the energy flows, and it will occur in a direction that increases the entropy of the collection of systems. Heat transfer ceases when thermal equilibrium is reached, at which point all involved bodies and the surroundings reach the same temperature. Thermal expansion is the tendency of matter to change in volume in response to a change in temperature. Heat is defined in physics as the transfer of thermal energy across a well-defined boundary around a thermodynamic system. The thermodynamic free energy is the amount of work that a system can perform. Enthalpy is a thermodynamic potential, designated by the letter H. The joule is a unit used to quantify energy, work, or the amount of heat. In thermodynamic and mechanical contexts, heat transfer is calculated with the heat transfer coefficient, the proportionality between the heat flux and the thermodynamic driving force for the flow of heat. Heat flux is a quantitative, vectorial representation of heat flow through a surface. In engineering contexts, the term heat is taken as synonymous with thermal energy. This usage has its origin in the historical interpretation of heat as a fluid that can be transferred by various causes. Thermal engineering concerns the generation, use, conversion, and exchange of heat; as such, heat transfer is involved in almost every sector of the economy. 
Heat transfer is classified into various mechanisms, such as thermal conduction, thermal convection, thermal radiation, and transfer of energy by phase changes. Among the fundamental modes of transfer, advection is the transport mechanism of a fluid, and of quantities carried with it, from one location to another. Conduction, or diffusion, is the transfer of energy between objects that are in physical contact; thermal conductivity is the property of a material that quantifies its ability to conduct heat, and it is evaluated primarily in terms of Fourier's law for heat conduction.
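Fourier's law, mentioned above, can be illustrated with a short calculation. This is a hedged sketch for steady one-dimensional conduction through a flat wall; the conductivity, temperatures, and thickness are assumed values, not from the source.

```python
# Fourier's law for steady 1-D conduction: flux magnitude q = k * dT / L.
# All numbers are assumed for illustration.

k = 0.8                       # thermal conductivity of the wall, W/(m*K)
t_hot, t_cold = 293.0, 273.0  # surface temperatures, K
thickness = 0.1               # wall thickness, m

q = k * (t_hot - t_cold) / thickness  # heat flux density, W/m^2
print(q)  # about 160 W per square metre, flowing hot side to cold side
```

Doubling the conductivity or halving the thickness doubles the flux, which is the proportionality Fourier's law expresses.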
16.
Mass transfer
–
Mass transfer is the net movement of mass from one location (usually meaning a stream, phase, fraction or component) to another. Mass transfer occurs in many processes, such as absorption, evaporation, drying, precipitation, and membrane filtration, and the term is used by different scientific disciplines for different processes and mechanisms. The phrase is used in engineering for physical processes that involve diffusive and convective transport of chemical species within physical systems. Some common examples of mass transfer processes are the evaporation of water from a pond to the atmosphere and the purification of blood in the kidneys and liver. Mass transfer is often coupled to additional transport processes, for instance in industrial cooling towers; these towers couple heat transfer to mass transfer by allowing hot water to flow in contact with air, evaporating some of the water in the process. In astrophysics, mass transfer is a phenomenon in binary star systems, and may play an important role in some types of supernovae. Mass transfer finds extensive application in engineering problems; it is used in reaction engineering, separations engineering, and heat transfer engineering. The driving force for mass transfer is typically a difference in chemical potential: a chemical species moves from areas of high chemical potential to areas of low chemical potential. Thus, the theoretical extent of a given mass transfer is typically determined by the point at which the chemical potential is uniform. The rate of mass transfer can be quantified through the calculation and application of mass transfer coefficients for an overall process; these mass transfer coefficients are typically published in terms of dimensionless numbers, often including Péclet numbers, Reynolds numbers, Sherwood numbers and Schmidt numbers, among others. 
There are notable similarities in the commonly used approximate differential equations for momentum, heat, and mass transfer, and a great deal of effort has been devoted to developing analogies among these three transport processes so as to allow prediction of one from any of the others.
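The dimensionless groups named above are simple ratios of fluid properties. As a rough sketch (the air-like property values below are assumed, not from the source), they can be computed directly:

```python
# Dimensionless groups used to correlate mass transfer rates.
# All property values are assumed, roughly air-like, for illustration.

rho = 1.2    # density, kg/m^3
mu = 1.8e-5  # dynamic viscosity, Pa*s
D = 2.0e-5   # mass diffusivity of the species, m^2/s
u = 2.0      # characteristic velocity, m/s
L = 0.05     # characteristic length, m

Re = rho * u * L / mu  # Reynolds number: inertial vs viscous forces
Sc = mu / (rho * D)    # Schmidt number: momentum vs mass diffusivity
Pe = Re * Sc           # Peclet number (mass): advection vs diffusion

print(Re, Sc, Pe)
```

Published mass transfer coefficient correlations typically express the Sherwood number as a function of Re and Sc in exactly these forms.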
17.
Fluid dynamics
–
In physics and engineering, fluid dynamics is a subdiscipline of fluid mechanics that describes the flow of fluids. It has several subdisciplines, including aerodynamics and hydrodynamics; before the twentieth century, hydrodynamics was synonymous with fluid dynamics. This is still reflected in the names of some fluid dynamics topics, like magnetohydrodynamics and hydrodynamic stability. The foundational axioms of fluid dynamics are the conservation laws, specifically conservation of mass, conservation of linear momentum, and conservation of energy. These are based on classical mechanics and are modified in quantum mechanics. They are expressed using the Reynolds transport theorem. In addition to the above, fluids are assumed to obey the continuum assumption. Fluids are composed of molecules that collide with one another and with solid objects; the continuum assumption, however, treats fluids as continuous rather than discrete, so the fact that the fluid is made up of molecules is ignored. The unsimplified equations do not have a general closed-form solution, so they are primarily of use in computational fluid dynamics. The equations can be simplified in a number of ways, all of which make them easier to solve, and some of the simplifications allow simple fluid dynamics problems to be solved in closed form. Three conservation laws are used to solve fluid dynamics problems, and they may be applied to a region of the flow called a control volume. A control volume is a volume in space through which fluid is assumed to flow. The integral formulations of the conservation laws are used to describe the change of mass, momentum, or energy within the control volume. Mass continuity states that the rate of change of fluid mass inside a control volume must be equal to the net rate of flow of mass into the volume. Mass flow into the system is accounted as positive, and since the normal vector to the surface is opposite the sense of flow into the system, the term is negated. 
The first term on the right is the net rate at which momentum is convected into the volume; the second term on the right is the force due to pressure on the volume's surfaces. The first two terms on the right are negated since momentum entering the system is accounted as positive. The third term on the right is the net acceleration of the mass within the volume due to any body forces. Surface forces, such as viscous forces, are represented by F surf. The following is the integral form of the momentum conservation equation.
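The control-volume mass balance described above reduces, for a tank with one inlet and one outlet, to a one-line statement: accumulation equals inflow minus outflow. A minimal sketch with assumed values (not from the source):

```python
# Mass continuity for a control volume with one inlet and one outlet:
# d(m)/dt = rho * (Q_in - Q_out). All numbers are assumed for illustration.

rho = 1000.0   # density of water, kg/m^3
q_in = 0.002   # volumetric inflow, m^3/s
q_out = 0.0015 # volumetric outflow, m^3/s

dm_dt = rho * (q_in - q_out)  # rate of mass accumulation, kg/s
print(dm_dt)  # about 0.5 kg/s accumulating inside the volume
```

A negative result would mean the control volume is losing mass, with the same sign convention as the text (inflow positive).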
18.
Dimensional analysis
–
Converting from one dimensional unit to another is often somewhat complex. Dimensional analysis, or more specifically the factor-label method, also known as the unit-factor method, is a widely used technique for such conversions using the rules of algebra. The concept of physical dimension was introduced by Joseph Fourier in 1822. Physical quantities that are commensurable have the same dimension and can be directly compared to each other, even if they are originally expressed in differing units of measure. If physical quantities have different dimensions, they are incommensurable and cannot be compared in this way; hence, it is meaningless to ask whether a kilogram is greater than, equal to, or less than an hour. Any physically meaningful equation will have the same dimensions on its left and right sides, and checking this, known as checking for dimensional homogeneity, is a common application of dimensional analysis. Dimensional analysis is routinely used as a check of the plausibility of derived equations and computations. It is also generally used to categorize types of physical quantities and units based on their relationship to or dependence on other units. Many parameters and measurements in the sciences and engineering are expressed as a concrete number: a numerical quantity and a corresponding dimensional unit. Often a quantity is expressed in terms of other quantities; for example, speed is a combination of length and time. Compound relations with "per" are expressed with division, e.g. 60 mi / 1 h; other relations can involve multiplication, powers, or combinations thereof. A base unit is a unit that cannot be expressed as a combination of other units; for example, units for length and time are normally chosen as base units. Units for volume, however, can be factored into the base units of length, so they are derived units. Sometimes the names of units obscure the fact that they are derived units; for example, an ampere is a unit of electric current, which is equivalent to electric charge per unit time and is measured in coulombs per second, so 1 A = 1 C/s. 
Similarly, one newton is 1 kg⋅m/s2. Percentages are dimensionless quantities, since they are ratios of two quantities with the same dimensions; in other words, the % sign can be read as "1/100". Taking a derivative with respect to a quantity adds the dimension of the variable one is differentiating with respect to, in the denominator. Thus, position has the dimension L (length); the derivative of position with respect to time has dimension LT−1, length from position, time from the derivative; and the second derivative has dimension LT−2. In economics, one distinguishes between stocks and flows: a stock has units of "units", while a flow, being a time derivative of a stock, has units of "units per time". In some contexts, dimensional quantities are expressed as dimensionless quantities or percentages by omitting some dimensions.
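A dimensional-homogeneity check like the one described above can be mechanized by tracking each quantity's dimension exponents. The sketch below is illustrative only; the names and representation are my own, not a standard API.

```python
# Represent dimensions as (length, mass, time) exponent triples and
# verify that both sides of an equation carry the same dimensions.

from collections import namedtuple

Dim = namedtuple("Dim", "L M T")

def mul(a, b):
    """Dimensions of a product: exponents add."""
    return Dim(a.L + b.L, a.M + b.M, a.T + b.T)

def div(a, b):
    """Dimensions of a quotient: exponents subtract."""
    return Dim(a.L - b.L, a.M - b.M, a.T - b.T)

LENGTH, MASS, TIME = Dim(1, 0, 0), Dim(0, 1, 0), Dim(0, 0, 1)

velocity = div(LENGTH, TIME)        # L T^-1
acceleration = div(velocity, TIME)  # L T^-2 (derivative adds T below)
force = mul(MASS, acceleration)     # M L T^-2, the dimension of the newton

print(force == mul(MASS, acceleration))  # True: F = m*a is homogeneous
print(force == mul(MASS, velocity))      # False: F = m*v is not
```

The derivative rule from the text appears here directly: each division by TIME adds one to the time exponent in the denominator.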
19.
Scalar field
–
In mathematics and physics, a scalar field associates a scalar value to every point in a space. The scalar may either be a pure number or a physical quantity. Examples in physics include the temperature distribution throughout space and the pressure distribution in a fluid. These fields are the subject of scalar field theory. Mathematically, a scalar field on a region U is a real or complex-valued function or distribution on U; a scalar field is a tensor field of order zero. Physically, a scalar field is additionally distinguished by having units of measurement associated with it. Scalar fields are contrasted with other physical quantities such as vector fields; more subtly, scalar fields are often contrasted with pseudoscalar fields. In physics, scalar fields often describe the potential energy associated with a particular force. The force is a vector field, which can be obtained as the gradient of the potential energy scalar field. Examples include potential fields, such as the Newtonian gravitational potential, and temperature, humidity or pressure fields, such as those used in meteorology. In quantum field theory, a scalar field is associated with spin-0 particles. The scalar field may be real or complex valued; complex scalar fields represent charged particles. These include the charged Higgs field of the Standard Model, as well as the charged pions mediating the nuclear interaction. The mechanism by which the Higgs field gives other particles mass is known as the Higgs mechanism; a candidate for the Higgs boson was first detected at CERN in 2012. In scalar theories of gravitation, scalar fields are used to describe the gravitational field. Scalar-tensor theories represent the gravitational interaction through both a tensor and a scalar; such attempts include, for example, the Jordan theory as a generalization of the Kaluza–Klein theory. Scalar fields like the Higgs field can be found within scalar-tensor theories, using as scalar field the Higgs field of the Standard Model. 
This field interacts gravitationally and Yukawa-like with the particles that get mass through it. Scalar fields are also found within superstring theories, as dilaton fields breaking the conformal symmetry of the string, though balancing the quantum anomalies of this tensor.
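The relationship noted above, a force field obtained as the (negative) gradient of a potential energy scalar field, can be sketched numerically. The example potential below is mine (a simple harmonic bowl), not from the source, and the gradient is estimated with central differences.

```python
# Force as the negative gradient of a scalar potential, F = -grad U,
# estimated with central differences. U is an assumed example.

def U(x, y):
    """Example scalar potential: a harmonic bowl centered at the origin."""
    return 0.5 * (x**2 + y**2)

def force(x, y, h=1e-6):
    """Central-difference estimate of -grad U at (x, y)."""
    fx = -(U(x + h, y) - U(x - h, y)) / (2 * h)
    fy = -(U(x, y + h) - U(x, y - h)) / (2 * h)
    return fx, fy

print(force(1.0, 2.0))  # approximately (-1.0, -2.0): points back toward the minimum
```

For this potential the exact force is (−x, −y), so the numerical gradient can be checked against it directly.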
20.
Weathervane
–
A weather vane, wind vane, or weathercock is an instrument for showing the direction of the wind, typically used as an ornament at the highest point of a building. Although partly functional, weather vanes are generally decorative, often featuring the traditional cockerel design with letters indicating the points of the compass. Other common motifs include ships, arrows and horses, though not all weather vanes have pointers. The word vane comes from the Old English word fana, meaning flag. The oldest known weather vane crowned the Tower of the Winds in Athens; below this was a frieze adorned with the eight Greek wind deities. The eight-metre-high structure also featured sundials and a clock inside, and dates from around 50 BCE. The oldest existing weather vane with the shape of a rooster is the Gallo di Ramperto, made in 820 CE and now preserved in the Museo di Santa Giulia in Brescia, Lombardy. Pope Leo IV had a cock placed on the Old St. Peter's Basilica, the old Constantinian basilica, and as a result the cock gradually began to be used as a weather vane on church steeples; the Bayeux Tapestry of the 1070s depicts a man installing a cock on Westminster Abbey. One alternative theory about the origin of weathercocks on church steeples is that the cock was an emblem of the vigilance of the clergy calling the people to prayer. Another theory says that the cock was not originally a Christian symbol. A few churches used weather vanes in the shape of the emblems of their patron saints; the City of London has two surviving examples: the weather vane of St Peter upon Cornhill is not in the shape of a rooster but a key, while St Lawrence Jewry's weather vane is in the form of a gridiron. Early weather vanes had very ornamental pointers, but modern wind vanes are usually simple arrows that dispense with the directionals because the instrument is connected to a remote reading station; modern aerovanes combine the directional vane with an anemometer. 
Co-locating both instruments allows them to use the same axis and provides a coordinated readout. According to Guinness World Records, the world's largest weather vane is a Tío Pepe sherry advertisement located in Jerez, Spain. The city of Montague, Michigan also claims to have the largest standard-design weather vane, a ship-and-arrow design which measures 48 feet tall. A challenger for the title of world's largest weather vane is located in Whitehorse, Yukon: a retired Douglas DC-3 CF-CPY atop a swivelling support, which requires only a 5 knot wind to rotate. The term weathervane is also used for a politician who has frequent changes of opinion; the National Assembly of Quebec has banned the use of this term as a slur after its use by members of the legislature.
21.
Dot product
–
In mathematics, the dot product or scalar product is an algebraic operation that takes two equal-length sequences of numbers and returns a single number. It is sometimes called the inner product in the context of Euclidean space. Algebraically, the dot product is the sum of the products of the corresponding entries of the two sequences of numbers; geometrically, it is the product of the Euclidean magnitudes of the two vectors and the cosine of the angle between them. The dot product may thus be defined either algebraically or geometrically. The geometric definition is based on the notions of angle and distance, and the equivalence of these two definitions relies on having a Cartesian coordinate system for Euclidean space. In modern presentations of Euclidean geometry, the notions of length and angle are not primitive, so the equivalence of the two definitions of the dot product is a part of the equivalence of the classical and the modern formulations of Euclidean geometry. In Euclidean space, a Euclidean vector is an object that possesses both a magnitude and a direction. A vector can be pictured as an arrow: its magnitude is its length, and its direction is the direction that the arrow points. The magnitude of a vector a is denoted by ∥a∥. The dot product of two Euclidean vectors a and b is defined by a ⋅ b = ∥a∥ ∥b∥ cos θ, where θ is the angle between a and b. In particular, if a and b are orthogonal, then the angle between them is 90° and a ⋅ b = 0. The scalar projection of a Euclidean vector a in the direction of a Euclidean vector b is given by a_b = ∥a∥ cos θ, where θ is the angle between a and b. In terms of the geometric definition of the dot product, this can be rewritten a_b = a ⋅ b̂, where b̂ = b/∥b∥ is the unit vector in the direction of b. The dot product is thus characterized geometrically by a ⋅ b = a_b ∥b∥ = b_a ∥a∥. The dot product, defined in this manner, is homogeneous under scaling in each variable, and it also satisfies a distributive law, meaning that a ⋅ (b + c) = a ⋅ b + a ⋅ c. 
These properties may be summarized by saying that the dot product is a bilinear form. Moreover, this bilinear form is positive definite, which means that a ⋅ a is never negative and is zero if and only if a = 0. If e1, …, en are the standard basis vectors in Rn, then we may write a = [a1, …, an] = ∑_i a_i e_i and b = [b1, …, bn] = ∑_i b_i e_i. The vectors e_i form an orthonormal basis, which means that they have unit length and are at right angles to each other. Hence, since these vectors have unit length, e_i ⋅ e_i = 1, and since they form right angles with each other, e_i ⋅ e_j = 0 for i ≠ j. Thus in general we can say that e_i ⋅ e_j = δ_ij, where δ_ij is the Kronecker delta.
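The equivalence of the algebraic and geometric definitions can be checked with a short computation. The two vectors below are assumed example values, not from the source.

```python
# Algebraic vs geometric dot product for two assumed 3-D vectors.

import math

a = [1.0, 3.0, -5.0]
b = [4.0, -2.0, -1.0]

# Algebraic definition: sum of products of corresponding entries.
dot = sum(x * y for x, y in zip(a, b))

# Geometric definition: |a| |b| cos(theta) gives the same number.
norm_a = math.sqrt(sum(x * x for x in a))
norm_b = math.sqrt(sum(x * x for x in b))
theta = math.acos(dot / (norm_a * norm_b))

print(dot)                                # 3.0
print(norm_a * norm_b * math.cos(theta))  # 3.0 again, up to rounding
```

The scalar projection of a onto b is then `dot / norm_b`, matching the a_b = a ⋅ b̂ formula in the text.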
22.
Surface (mathematics)
–
In mathematics, a surface is a generalization of a plane which need not be flat, that is, the curvature is not necessarily zero; this is analogous to a curve generalizing a straight line. There are several more precise definitions, depending on the context and the mathematical tools that are used for the study. Often, a surface is defined by equations that are satisfied by the coordinates of its points; this is the case for the graph of a continuous function of two variables. The set of the zeros of a function of three variables is a surface, which is called an implicit surface; if the defining three-variate function is a polynomial, the surface is an algebraic surface. For example, the unit sphere is an algebraic surface, as it may be defined by the implicit equation x^2 + y^2 + z^2 − 1 = 0. A surface may also be defined as the image of a function of two variables into a space of dimension at least 3; in this case, one says that one has a parametric surface. For example, the unit sphere may be parametrized by the Euler angles, also called longitude u and latitude v, by x = cos u cos v, y = sin u cos v, z = sin v. Parametric equations of surfaces are often irregular at some points; for example, all but two points of the unit sphere are the image, by the above parametrization, of exactly one pair of Euler angles. For the remaining two points, one has cos v = 0, and the longitude u may take any value. Also, there are surfaces for which there cannot exist a single parametrization that covers the whole surface. Therefore, one often considers surfaces which are parametrized by several parametric equations; this allows defining surfaces in spaces of dimension higher than three, and even abstract surfaces, which are not contained in any other space. On the other hand, this excludes surfaces that have singularities. In classical geometry, a surface is generally defined as a locus of a point or a line. 
A ruled surface, for example, is the locus of a moving line satisfying some constraints. In this article, several kinds of surfaces are considered and compared, and an unambiguous terminology is thus necessary to distinguish them. Therefore, we call topological surfaces the surfaces that are manifolds of dimension two, and we call differential surfaces the surfaces that are differentiable manifolds of dimension two. Every differential surface is a topological surface, but the converse is false. For simplicity, unless otherwise stated, "surface" will mean a surface in the Euclidean space of dimension 3, or in R3. A surface that is not supposed to be included in another space is called an abstract surface. The graph of a continuous function of two variables, defined over a connected open subset of R2, is a topological surface; if the function is differentiable, the graph is a differential surface. A plane is both an algebraic surface and a differentiable surface.
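The two descriptions of the unit sphere given above, implicit and parametric, can be checked against each other numerically. The sample parameter values below are my own, chosen for illustration.

```python
# Verify that the longitude/latitude parametrization of the unit sphere,
# x = cos(u)cos(v), y = sin(u)cos(v), z = sin(v), satisfies the implicit
# equation x^2 + y^2 + z^2 - 1 = 0 at assumed sample parameters.

import math

def sphere_point(u, v):
    return (math.cos(u) * math.cos(v),
            math.sin(u) * math.cos(v),
            math.sin(v))

for u, v in [(0.0, 0.0), (1.0, 0.5), (2.5, -1.2)]:
    x, y, z = sphere_point(u, v)
    residual = x * x + y * y + z * z - 1.0
    print(round(residual, 12))  # 0.0 at every sample, up to rounding
```

Trying v = ±π/2 shows the irregularity noted in the text: every value of u maps to the same pole.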
23.
Momentum
–
In classical mechanics, linear momentum, translational momentum, or simply momentum is the product of the mass and velocity of an object, quantified in kilogram-metres per second. It is dimensionally equivalent to impulse, the product of force and time. Newton's second law of motion states that the change in linear momentum of a body is equal to the net impulse acting on it. For example, a heavy truck moving rapidly has a large momentum; if the truck were lighter or moving more slowly, it would have less momentum. Linear momentum is also a conserved quantity, meaning that if a closed system is not affected by external forces, its total linear momentum does not change. In classical mechanics, conservation of momentum is implied by Newton's laws; it also holds in special relativity and, with appropriate definitions, a linear momentum conservation law holds in electrodynamics, quantum mechanics, and quantum field theory. It is ultimately an expression of one of the fundamental symmetries of space and time. Linear momentum depends on the frame of reference: observers in different frames would find different values of the linear momentum of a system, but each would observe that the value of linear momentum does not change with time. Momentum has a direction as well as a magnitude; quantities that have both a magnitude and a direction are known as vector quantities. Because momentum has a direction, it can be used to predict the resulting direction of objects after they collide, as well as their speeds. Below, the properties of momentum are described in one dimension; the vector equations are almost identical to the scalar equations. The momentum of a particle is traditionally represented by the letter p. It is the product of two quantities, the particle's mass and its velocity: p = m v. The units of momentum are the product of the units of mass and velocity. In SI units, if the mass is in kilograms and the velocity is in metres per second, then the momentum is in kilogram metres/second; in cgs units, if the mass is in grams and the velocity in centimetres per second, then the momentum is in gram centimetres/second. 
Being a vector, momentum has magnitude and direction. For example, a 1 kg model airplane, traveling due north at 1 m/s in straight and level flight, has a momentum of 1 kg m/s due north measured from the ground. The momentum of a system of particles is the vector sum of their momenta. If two particles have masses m1 and m2, and velocities v1 and v2, the total momentum is p = p1 + p2 = m1 v1 + m2 v2. If the particles are moving, the center of mass will generally be moving as well; if the center of mass is moving at velocity v_cm, the total momentum is p = m v_cm, where m is the total mass. This is known as Euler's first law. If a force F is applied to a particle for a time interval Δt, the momentum of the particle changes by an amount Δp = F Δt.
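The one-dimensional relations above can be illustrated with a short calculation; all masses, velocities, and forces below are assumed example values, not from the source (signs carry direction).

```python
# Total momentum, center-of-mass velocity, and impulse in one dimension.
# All numbers are assumed for illustration.

m1, v1 = 2.0, 3.0   # kg, m/s (moving in the positive direction)
m2, v2 = 1.0, -1.0  # kg, m/s (moving in the negative direction)

p_total = m1 * v1 + m2 * v2  # p = p1 + p2 = m1*v1 + m2*v2
v_cm = p_total / (m1 + m2)   # center-of-mass velocity

print(p_total)            # 5.0 kg*m/s
print((m1 + m2) * v_cm)   # 5.0 again: the total momentum is m * v_cm

# Impulse: a 10 N force applied for 0.25 s changes momentum by F * dt.
F, dt = 10.0, 0.25
print(F * dt)             # 2.5 kg*m/s
```

The second print confirms Euler's first law numerically: summing individual momenta and using the center-of-mass velocity give the same total.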
24.
Viscosity
–
The viscosity of a fluid is a measure of its resistance to gradual deformation by shear stress or tensile stress. For liquids, it corresponds to the informal concept of thickness; for example, honey has a higher viscosity than water. Viscosity is the property of a fluid which opposes the relative motion between two surfaces of the fluid that are moving at different velocities. For a given velocity pattern, the stress required is proportional to the fluid's viscosity. A fluid that has no resistance to shear stress is known as an ideal or inviscid fluid; zero viscosity is observed only at very low temperatures in superfluids. Otherwise, all fluids have positive viscosity and are said to be viscous or viscid. A fluid with a relatively high viscosity, such as pitch, may appear to be a solid. The word viscosity is derived from the Latin viscum, meaning mistletoe. The dynamic viscosity of a fluid expresses its resistance to shearing flows, where adjacent layers move parallel to each other with different speeds. It can be defined through the idealized situation known as a Couette flow, in which a fluid is trapped between two parallel plates, the bottom plate fixed and the top plate moving horizontally at constant speed u. If the speed of the top plate is small enough, the fluid particles will move parallel to it, and their speed will vary linearly from zero at the bottom to u at the top. Each layer of fluid will move faster than the one just below it; in particular, the fluid will apply on the top plate a force in the direction opposite to its motion, and an equal but opposite one on the bottom plate. An external force is therefore required in order to keep the top plate moving at constant speed. The magnitude F of this force is found to be proportional to the speed u and the area A of each plate, and inversely proportional to their separation y. The proportionality factor μ in this formula is the viscosity of the fluid. The ratio u/y is called the rate of shear deformation or shear velocity, and is the derivative of the fluid speed in the direction perpendicular to the plates. Isaac Newton expressed the viscous forces by the differential equation τ = μ ∂u/∂y, where τ = F/A and u(y) is the fluid speed at height y. 
This formula assumes that the flow is moving along parallel lines; the equation can also be used where the velocity does not vary linearly with y, such as in fluid flowing through a pipe. Use of the Greek letter mu (μ) for the dynamic viscosity is common among mechanical and chemical engineers; however, the Greek letter eta (η) is also used, notably by chemists and physicists.
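For the plate geometry described above, Newton's law of viscosity gives the shear stress and the force on the top plate directly. This is a hedged sketch; the water-like viscosity and the geometry below are assumed values, not from the source.

```python
# Newton's law of viscosity for plane Couette flow: tau = mu * u / y,
# with a linear velocity profile assumed. All numbers are illustrative.

mu = 1.0e-3  # dynamic viscosity, Pa*s (roughly water at room temperature)
u = 0.5      # speed of the top plate, m/s
y = 0.01     # gap between the plates, m
A = 0.2      # area of each plate, m^2

tau = mu * u / y  # shear stress, Pa
F = tau * A       # force needed to keep the top plate moving, N
print(tau, F)     # about 0.05 Pa and 0.01 N
```

A fluid with ten times the viscosity would require ten times the force at the same speed, which is exactly the proportionality the text describes.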
25.
Heat flux
–
Heat flux or thermal flux is the rate of heat energy transfer through a given surface per unit time. The SI derived unit of heat rate is the joule per second, or watt. Heat flux density is the heat rate per unit area; in SI units, heat flux density is measured in watts per square metre (W/m2). Heat rate is a scalar quantity, while heat flux is a vectorial quantity. To define the heat flux at a point in space, one takes the limiting case where the size of the surface becomes infinitesimally small. Heat flux is often denoted φ_q, the subscript q specifying heat rather than mass or momentum flux. Fourier's law is an important application of these concepts: for most solids in usual conditions, heat is transported mainly by conduction. The measurement of heat flux can be performed in a few different ways. A commonly known, but often impractical, method is performed by measuring a temperature difference over a piece of material with known thermal conductivity; this method is analogous to a standard way to measure an electric current, where one measures the voltage drop over a known resistor. Usually this method is difficult to perform, since the thermal resistance of the material being tested is often not known; accurate values for the material's thickness and thermal conductivity would be required in order to determine its thermal resistance. Using the thermal resistance, along with temperature measurements on either side of the material, the heat flux can then be calculated indirectly. Differential thermopile heat flux sensors are often initially calibrated in order to relate their output signals to heat flux values; once a heat flux sensor is calibrated, it can be used to measure heat flux without requiring the rarely known value of thermal resistance or thermal conductivity. One of the tools in a scientist's or engineer's toolbox is the energy balance. In real-world applications one cannot know the exact heat flux at every point on the surface, but approximation schemes can be used to calculate the integral, for example Monte Carlo integration. See also: radiant flux, latent heat flux, rate of heat flow, insolation, heat flux sensor, relativistic heat conduction.
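The electrical analogy described above, measuring a temperature drop across a known thermal resistance the way one measures a voltage drop across a known resistor, can be sketched in a few lines. All material values below are assumed for illustration, not from the source.

```python
# Indirect heat flux measurement via thermal resistance: q = dT / R,
# mirroring Ohm's law I = V / R. All numbers are illustrative.

k = 50.0           # thermal conductivity of the slab, W/(m*K)
thickness = 0.02   # slab thickness, m
R = thickness / k  # thermal resistance per unit area, K*m^2/W

dT = 8.0           # measured temperature drop across the slab, K
q = dT / R         # inferred heat flux density, W/m^2
print(q)           # about 20000 W/m^2
```

The difficulty the text mentions is visible in the formula: an error in either k or the thickness propagates directly into R and hence into the inferred flux.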
26.
Heat
–
In physics, heat is the amount of energy flowing from one body to another spontaneously due to their temperature difference, or by any means other than through work or the transfer of matter. Thus, energy exchanged as heat during a process changes the internal energy of each body by equal and opposite amounts. The sign of the quantity of heat can indicate the direction of the transfer, for example from system A to system B; negation indicates energy flowing in the opposite direction. While heat flows spontaneously from hot to cold, it is possible to construct a heat pump or refrigeration system that does work to increase the difference in temperature between two systems; conversely, a heat engine reduces an existing temperature difference to do work on another system. Heat is a consequence of the microscopic motion of particles. When heat is transferred between two objects or systems, the energy of the objects' or systems' particles increases, and as this occurs, the arrangement between particles becomes more and more disordered; in other words, heat is related to the concept of entropy. Historically, many energy units for the measurement of heat have been used; the standards-based unit in the International System of Units is the joule. Heat is measured by its effect on the states of interacting bodies, for example, by the amount of ice melted or a change in temperature. The quantification of heat via the temperature change of a body is called calorimetry. In calorimetry, sensible heat is defined with respect to a specific chosen state variable of the system; sensible heat causes a change of the temperature of the system while leaving the chosen state variable unchanged. Heat transfer that occurs at a constant system temperature but changes the state variable is called latent heat with respect to the variable. For infinitesimal changes, the total incremental heat transfer is then the sum of the latent and sensible heat. 
Physicist James Clerk Maxwell, in his 1871 classic Theory of Heat, was one of many who began to build on the established idea that heat has something to do with matter in motion, an idea put forth by Benjamin Thompson in 1798. One of Maxwell's recommended books was Heat as a Mode of Motion. Maxwell outlined four stipulations for the definition of heat: it is something which may be transferred from one body to another, according to the second law of thermodynamics; it is a measurable quantity, and so can be treated mathematically; it cannot be treated as a material substance, because it may be transformed into something that is not a material substance; and heat is one of the forms of energy. This was the way of the historical pioneers of thermodynamics. Maxwell writes that convection as such is not a purely thermal phenomenon; in thermodynamics, convection in general is regarded as transport of internal energy.
27.
Volume
–
Volume is the quantity of three-dimensional space enclosed by a closed surface, for example the space that a substance or shape occupies or contains. Volume is often quantified numerically using the SI derived unit, the cubic metre; three-dimensional mathematical shapes are also assigned volumes. Volumes of some simple shapes, such as regular, straight-edged and circular ones, can be calculated using arithmetic formulas; volumes of a complicated shape can be calculated by integral calculus if a formula exists for the shape's boundary. Where a variance in shape and volume occurs, such as between different human beings, volume can be calculated using techniques such as the Body Volume Index. One-dimensional figures and two-dimensional shapes are assigned zero volume in three-dimensional space. The volume of a solid can be determined by fluid displacement; displacement of liquid can also be used to determine the volume of a gas. The combined volume of two substances is usually greater than the volume of one of the substances; however, sometimes one substance dissolves in the other, and then the combined volume is not additive. In differential geometry, volume is expressed by means of the volume form; in thermodynamics, volume is a fundamental parameter and a conjugate variable to pressure. Any unit of length gives a corresponding unit of volume: the volume of a cube whose sides have the given length. For example, a cubic centimetre is the volume of a cube whose sides are one centimetre in length. In the International System of Units, the standard unit of volume is the cubic metre. The metric system also includes the litre as a unit of volume: 1 litre = (10 cm)³ = 1000 cubic centimetres = 0.001 cubic metres, so 1 cubic metre = 1000 litres. Small amounts of liquid are often measured in millilitres, where 1 millilitre = 0.001 litres = 1 cubic centimetre. Capacity is defined by the Oxford English Dictionary as the measure applied to the content of a vessel, and to liquids, grain, or the like.
Capacity is not identical in meaning to volume, though closely related. Units of capacity are the SI litre and its derived units, and Imperial units such as gill, pint, gallon, and others; units of volume are the cubes of units of length. In SI the units of volume and capacity are closely related: one litre is exactly 1 cubic decimetre, the capacity of a cube with a 10 cm side. In other systems the conversion is not trivial; the capacity of a fuel tank is rarely stated in cubic feet, for example. The density of an object is defined as the ratio of its mass to its volume; the inverse of density is specific volume, defined as volume divided by mass. Specific volume is an important concept in thermodynamics, where the volume of a working fluid is often an important parameter of a system being studied.
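The unit relations and the density/specific-volume definitions above can be sketched numerically; the function names here are this example's own:

```python
# Unit relations from the text: 1 L = 1 dm³ = 1000 cm³ = 0.001 m³.
# Density = mass / volume; specific volume = volume / mass (its inverse).

def litres_to_cubic_metres(litres):
    return litres / 1000.0

def density(mass_kg, volume_m3):
    return mass_kg / volume_m3          # kg/m³

def specific_volume(mass_kg, volume_m3):
    return volume_m3 / mass_kg          # m³/kg, the inverse of density

v = litres_to_cubic_metres(2.0)         # 0.002 m³
print(density(2.0, v))                  # ≈ 1000 kg/m³ (water-like)
print(specific_volume(2.0, v))          # ≈ 0.001 m³/kg
```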
28.
Darcy's law
–
Darcy's law is an equation that describes the flow of a fluid through a porous medium. The law was formulated by Henry Darcy based on the results of experiments on the flow of water through beds of sand, forming the basis of hydrogeology. Although Darcy's law was determined experimentally, it has since been derived from the Navier–Stokes equations via homogenization. It is analogous to Fourier's law in the field of heat conduction, Ohm's law in the field of electrical networks, and Fick's law in diffusion theory. Morris Muskat first refined Darcy's equation for single-phase flow by including viscosity in the single equation of Darcy; the generalized multiphase flow equations of Muskat et al. provide the foundation for reservoir engineering that exists to this day. The above equation for single-phase flow is the equation for absolute permeability. The negative sign is needed because fluid flows from high pressure to low pressure; note that the elevation head must be taken into account if the inlet and outlet are at different elevations. If the change in pressure is negative, then the flow will be in the positive x direction. There have been proposals for a constitutive equation for absolute permeability. Dividing both sides of the equation by the area and using more general notation leads to q = −(κ/μ)∇p, where q is the flux and ∇p is the pressure gradient vector. This value of flux, often referred to as the Darcy flux or Darcy velocity, is not the velocity which the fluid traveling through the pores experiences; the fluid velocity is related to the Darcy flux by the porosity. The flux is divided by porosity to account for the fact that only a fraction of the total formation volume is available for flow.
The fluid velocity would be the velocity a conservative tracer would experience if carried by the fluid through the formation. A graphical illustration of the use of the steady-state groundwater flow equation is the construction of flownets, used to quantify the amount of groundwater flowing under a dam. Darcy's law is valid only for slow, viscous flow; fortunately, most groundwater flow cases fall in this category. Typically any flow with a Reynolds number less than one is clearly laminar, and experimental tests have shown that flow regimes with Reynolds numbers up to 10 may still be Darcian, as in the case of groundwater flow. For stationary, creeping, incompressible flow, Darcy's law can be derived directly from the Navier–Stokes equations. In isotropic porous media the off-diagonal elements in the permeability tensor are zero, κij = 0 for i ≠ j, and the diagonal elements are identical, κii = κ. The above equation is an equation for single-phase fluid flow in a porous medium.
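The flux form q = −(κ/μ)∇p and the porosity correction described above can be sketched in one dimension. The parameter values are illustrative assumptions (κ = 10⁻¹² m², μ = 10⁻³ Pa·s for water, a 10⁵ Pa drop over 10 m, porosity 0.25), not data from the article:

```python
# One-dimensional Darcy flux q = -(k/mu) * dp/dx, and the pore (seepage)
# velocity obtained by dividing the flux by the porosity, since only a
# fraction of the formation volume is open to flow.

def darcy_flux(k, mu, dp, dx):
    """Volumetric flux (m/s); the minus sign sends flow down the pressure gradient."""
    return -(k / mu) * (dp / dx)

def pore_velocity(flux, porosity):
    """Average velocity of the fluid actually moving through the pores."""
    return flux / porosity

# Pressure falls by 1e5 Pa over 10 m in the +x direction (dp negative):
q = darcy_flux(k=1e-12, mu=1e-3, dp=-1e5, dx=10.0)
print(q)                       # ≈ 1e-05 m/s, flow in the +x direction
print(pore_velocity(q, 0.25))  # ≈ 4e-05 m/s, faster than the Darcy flux
```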
29.
Mass
–
In physics, mass is a property of a physical body. It is the measure of a body's resistance to acceleration when a net force is applied, and it also determines the strength of the body's gravitational attraction to other bodies. The basic SI unit of mass is the kilogram. Mass is not the same as weight, even though mass is often determined by measuring the object's weight using a spring scale rather than by comparing it directly with known masses. An object on the Moon would weigh less than it does on Earth because of the lower gravity; this is because weight is a force, while mass is the property that determines the strength of this force. In Newtonian physics, mass can be generalized as the amount of matter in an object; however, at very high speeds, special relativity postulates that energy is an additional source of mass. Thus, any body having mass has an equivalent amount of energy. In addition, matter is a loosely defined term in science. There are several distinct phenomena which can be used to measure mass. Active gravitational mass measures the gravitational force exerted by an object; passive gravitational mass measures the force exerted on an object in a known gravitational field. The mass of an object also determines its acceleration in the presence of an applied force: according to Newton's second law of motion, if a body of fixed mass m is subjected to a single force F, its acceleration a is given by a = F/m. A body's mass also determines the degree to which it generates or is affected by a gravitational field; this is sometimes referred to as gravitational mass. The standard International System of Units unit of mass is the kilogram. The kilogram is 1000 grams, first defined in 1795 as the mass of one cubic decimetre of water at the melting point of ice. Then, in 1889, the kilogram was redefined as the mass of the prototype kilogram. As of January 2013, there were proposals for redefining the kilogram yet again.
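Newton's second law as stated above, a = F/m, is a one-line computation; the function name and sample values are this sketch's own:

```python
# Inertial mass: for a fixed mass m under a net force F, a = F / m.

def acceleration(force_n, mass_kg):
    """Acceleration in m/s² produced by a net force on a given mass."""
    return force_n / mass_kg

# The same 10 N force accelerates a larger mass less:
print(acceleration(10.0, 2.0))  # 5.0 m/s²
print(acceleration(10.0, 5.0))  # 2.0 m/s²
```

This illustrates mass as "resistance to acceleration": doubling the mass halves the acceleration for the same force.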
In this context, the mass has units of eV/c²; the electronvolt and its multiples, such as the MeV, are commonly used in particle physics. The atomic mass unit is 1/12 of the mass of a carbon-12 atom and is convenient for expressing the masses of atoms and molecules. Outside the SI system, other units of mass include the slug, an Imperial unit of mass, and the pound, a unit of both mass and force, used mainly in the United States.
30.
Photon
–
A photon is an elementary particle, the quantum of the electromagnetic field, including electromagnetic radiation such as light, and the force carrier for the electromagnetic force. The photon has zero rest mass and always moves at the speed of light. Like all elementary particles, photons are currently best explained by quantum mechanics and exhibit wave–particle duality, showing properties of both waves and particles. For example, a photon may be refracted by a lens and exhibit wave interference with itself, yet the quanta in a light wave cannot be spatially localized. The modern concept of the photon was developed gradually by Albert Einstein in the early 20th century to explain experimental observations that did not fit the classical wave model of light, in which light was described by Maxwell's equations. The benefit of the photon model was that it accounted for the frequency dependence of light's energy, and it accounted for other observations as well, including the properties of black-body radiation. In 1926 the optical physicist Frithiof Wolfers and the chemist Gilbert N. Lewis coined the name photon for these particles. After Arthur H. Compton won the Nobel Prize in 1927 for his scattering studies, most scientists accepted that light quanta have an independent existence. In the Standard Model of particle physics, photons and other elementary particles are described as a necessary consequence of physical laws having a certain symmetry at every point in spacetime; this symmetry determines the intrinsic properties of particles, such as charge and mass. The photon concept has been applied to photochemistry, high-resolution microscopy, and measurements of molecular distances. Recently, photons have been studied as elements of quantum computers. In 1900, the German physicist Max Planck was studying black-body radiation and suggested that the energy carried by electromagnetic waves could only be released in packets of energy.
In his 1901 article in Annalen der Physik he called these packets energy elements. The word quanta was used before 1900 to mean particles or amounts of different quantities, including electricity. In 1905, Albert Einstein suggested that electromagnetic waves could only exist as discrete wave-packets; he called such a wave-packet the light quantum. The name photon derives from the Greek word for light, φῶς. Arthur Compton used photon in 1928, referring to Gilbert N. Lewis; the name had been suggested initially as a unit related to the illumination of the eye and the resulting sensation of light, and was used later in a physiological context. Although Wolfers's and Lewis's theories were contradicted by many experiments and never accepted, the new name was adopted soon after by most physicists. In physics, a photon is usually denoted by the symbol γ.
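Planck's packets carry energy E = hf = hc/λ per photon. A quick numerical sketch using rounded standard constants; the function name and the 500 nm example are this sketch's own:

```python
# Energy of a single photon from its wavelength, via Planck's relation
# E = h * c / lambda. Constants are rounded standard values.

H = 6.626e-34   # Planck constant, J·s
C = 2.998e8     # speed of light in vacuum, m/s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_joules(wavelength_m):
    return H * C / wavelength_m

e = photon_energy_joules(500e-9)  # green light, 500 nm
print(e)       # ≈ 3.97e-19 J
print(e / EV)  # ≈ 2.48 eV
```

Shorter wavelengths give proportionally more energetic photons, which is the frequency dependence the photon model was introduced to explain.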
31.
Magnitude (astronomy)
–
In astronomy, magnitude is a logarithmic measure of the brightness of an object, measured in a specific wavelength or passband, usually in the visible or near-infrared spectrum. An imprecise but systematic determination of the magnitude of objects was introduced in ancient times by Hipparchus. Astronomers use two different definitions of magnitude: apparent magnitude and absolute magnitude. Absolute magnitude is the apparent magnitude an object would have at a standard reference distance; this distance is 10 parsecs for stars and 1 astronomical unit for planets. A minor planet's size is typically estimated from its absolute magnitude in combination with its presumed albedo. The brighter an object appears, the lower the value of its magnitude, with the brightest objects reaching negative values. The Sun has an apparent magnitude of −27, the full moon −13, the brightest planet Venus −5, and Sirius, the brightest star in the night sky, −1.46. An apparent magnitude can also be assigned to man-made objects in Earth orbit: the brightest satellite flares are ranked at −9, and the International Space Station can also appear brighter than any star. The scale is logarithmic, defined such that each step of one magnitude changes the brightness by a factor of the fifth root of 100, or approximately 2.512; for example, a magnitude 1 star is exactly a hundred times brighter than a magnitude 6 star. The magnitude system dates back roughly 2000 years to the Greek astronomer Hipparchus, who classified stars by their apparent brightness, which observers saw as size: to the unaided eye, a prominent star such as Sirius or Arcturus appears larger than a less prominent star such as Mizar, while fainter stars, seen only with the help of a telescope, were later assigned magnitudes beyond the sixth. Note that the brighter the star, the smaller the magnitude: bright first magnitude stars are "1st-class" stars. The system was a simple delineation of stellar brightness into six distinct groups, but made no allowance for the variations in brightness within a group.
Tycho Brahe concluded that first magnitude stars measured 2 arc minutes in apparent diameter, with second through sixth magnitude stars measuring 1 1⁄2′, 1 1⁄12′, 3⁄4′, 1⁄2′, and so on down the scale. The development of the telescope showed that these large sizes were illusory: stars appeared much smaller through the telescope. However, early telescopes produced a spurious disk-like image of a star that was larger for brighter stars. Early photometric measurements demonstrated that first magnitude stars are about 100 times brighter than sixth magnitude stars. Thus in 1856 Norman Pogson of Oxford proposed that a ratio of ⁵√100 ≈ 2.512 be adopted between magnitudes, so five magnitude steps corresponded precisely to a factor of 100 in brightness. Every interval of one magnitude equates to a variation in brightness of ⁵√100, or roughly 2.512 times. Consequently, a first magnitude star is about 2.5 times brighter than a second magnitude star, 2.5² times brighter than a third magnitude star, and 2.5³ times brighter than a fourth magnitude star. This is the modern system, which measures brightness, not apparent size. Using this logarithmic scale, it is possible for a star to be brighter than "first class": Arcturus and Vega are magnitude 0, and Sirius is magnitude −1.46. As mentioned above, the scale appears to work in reverse, with larger negative values being brighter.
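Pogson's ratio makes the brightness ratio of two objects 100^((m₂−m₁)/5). A small sketch; the function name is this example's own:

```python
# Brightness ratio between two objects on the magnitude scale:
# each magnitude step is a factor of 100**(1/5) ≈ 2.512, so five steps
# are exactly a factor of 100.

def brightness_ratio(m1, m2):
    """How many times brighter a magnitude-m1 object is than a magnitude-m2 one."""
    return 100.0 ** ((m2 - m1) / 5.0)

print(brightness_ratio(1.0, 6.0))    # 100.0: magnitude 1 vs. magnitude 6
print(brightness_ratio(-1.46, 0.0))  # ≈ 3.8: Sirius vs. a magnitude-0 star
```

Note the reversal described in the text: the numerically smaller (or more negative) magnitude is the brighter object.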
32.
Stellar classification
–
In astronomy, stellar classification is the classification of stars based on their spectral characteristics. Electromagnetic radiation from the star is analyzed by splitting it with a prism or diffraction grating into a spectrum exhibiting the rainbow of colors interspersed with absorption lines; each line indicates an ion of a certain chemical element, with the line strength indicating the abundance of that ion. The relative abundance of the different ions varies with the temperature of the photosphere. The spectral class of a star is a short code summarizing the ionization state, giving an objective measure of the photosphere's temperature and density. Most stars are classified under the Morgan–Keenan (MK) system using the letters O, B, A, F, G, K, and M, ordered from the hottest to the coolest. Each letter class is subdivided using a numeric digit, with 0 being hottest and 9 being coolest. The sequence has been expanded with classes for other stars and star-like objects that do not fit in the classical system, such as class D for white dwarfs. In the MK system, a luminosity class is added to the spectral class using Roman numerals; this is based on the width of certain absorption lines in the star's spectrum. The full spectral class for the Sun is then G2V, indicating a main-sequence star with a temperature around 5,800 K. The conventional color description takes into account only the peak of the stellar spectrum, so the assignment of colors can be misleading: there are no green, indigo, or violet stars, and likewise brown dwarfs do not literally appear brown. The modern classification system is known as the Morgan–Keenan classification: each star is assigned a spectral class from the older Harvard spectral classification and a luminosity class using Roman numerals as explained below, forming the star's spectral type. Spectral classes O through M, as well as the more specialized classes discussed later, are subdivided by Arabic numerals.
For example, A0 denotes the hottest stars in the A class; fractional subdivisions are also allowed, so the star Mu Normae is classified as O9.7, and the Sun is classified as G2. The conventional color descriptions are traditional in astronomy, and represent colors relative to the mean color of an A-class star, which is considered to be white. The apparent color descriptions are what the observer would see if trying to describe the stars under a dark sky without aid to the eye, or with binoculars. However, most stars in the sky, except the brightest ones, are too dim for color vision to work and appear essentially white. Red supergiants are cooler and redder than dwarfs of the same spectral type, and stars with particular spectral features, such as carbon stars, may be far redder than any black body. O-, B-, and A-type stars are called early type.
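The spectral-type codes described above (Harvard letter, numeric subclass, Roman-numeral luminosity class) are easy to pull apart mechanically. A minimal parser; the regex, field names, and the restriction to simple types like "G2V" are this sketch's own assumptions, not a complete MK grammar:

```python
# Parse simple MK spectral types of the form <class><subclass><luminosity>,
# e.g. "G2V" (the Sun) or "O9.7" (Mu Normae, no luminosity class given).

import re

SPECTRAL_TYPE = re.compile(r"^([OBAFGKM])(\d(?:\.\d)?)?(I{1,3}|IV|V)?$")

def parse_spectral_type(code):
    m = SPECTRAL_TYPE.match(code)
    if not m:
        raise ValueError(f"not a simple MK spectral type: {code!r}")
    cls, sub, lum = m.groups()
    return {"class": cls,
            "subclass": float(sub) if sub else None,
            "luminosity": lum}

print(parse_spectral_type("G2V"))   # class G, subclass 2.0, luminosity V
print(parse_spectral_type("O9.7"))  # class O, subclass 9.7, no luminosity class
```

Real catalogs also carry peculiarity suffixes and extended classes (L, T, D, C, ...), which this sketch deliberately omits.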
33.
Energy
–
In physics, energy is the property that must be transferred to an object in order to perform work on, or to heat, the object; it can be converted in form, but not created or destroyed. The SI unit of energy is the joule, which is the energy transferred to an object by the mechanical work of moving it a distance of 1 metre against a force of 1 newton. Mass and energy are closely related: for example, with a sensitive enough scale, one could measure an increase in mass after heating an object. Living organisms require available energy to stay alive, such as the energy humans get from food. Civilisation gets the energy it needs from energy resources such as fossil fuels and nuclear fuel. The processes of Earth's climate and ecosystem are driven by the radiant energy Earth receives from the Sun. The total energy of a system can be subdivided and classified in various ways: it may be convenient to distinguish gravitational energy, thermal energy, several types of potential energy, electric energy, and so on. Many of these categories overlap; for instance, thermal energy usually consists partly of kinetic and partly of potential energy. Some types of energy are a mix of both potential and kinetic energy; an example is mechanical energy, which is the sum of the kinetic and potential energy in a system. Whenever physical scientists discover that a phenomenon appears to violate the law of energy conservation, new forms are typically added that account for the discrepancy. Heat and work are special cases in that they are not properties of systems: in general we cannot measure how much heat or work is present in an object, but rather only how much energy is transferred among objects in certain ways during the occurrence of a given process. Heat and work are measured as positive or negative depending on which side of the transfer we view them from. The distinctions between different kinds of energy are not always clear-cut. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness.
The modern analog of this property, kinetic energy, differs from vis viva only by a factor of two. In 1807, Thomas Young was possibly the first to use the term energy instead of vis viva in its modern sense, and Gustave-Gaspard Coriolis described kinetic energy in 1829 in its modern sense. The law of conservation of energy was also first postulated in the early 19th century, and applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat. These developments led to the theory of conservation of energy, formalized largely by William Thomson as the field of thermodynamics.
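The "factor of two" mentioned above is simply the difference between KE = ½mv² and Leibniz's vis viva mv²; a one-liner makes it explicit (the function names are this sketch's own):

```python
# Modern kinetic energy vs. the historical vis viva quantity.

def kinetic_energy(m, v):
    return 0.5 * m * v**2   # KE = (1/2) m v², joules

def vis_viva(m, v):
    return m * v**2         # the older quantity, exactly twice KE

print(kinetic_energy(2.0, 3.0))  # 9.0 J
print(vis_viva(2.0, 3.0))        # 18.0, twice the kinetic energy
```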
34.
Isothermal process
–
An isothermal process is a change of a system in which the temperature remains constant: ΔT = 0. In contrast, an adiabatic process is one in which a system exchanges no heat with its surroundings. In other words, in an isothermal process ΔT = 0 and therefore (for an ideal gas) ΔU = 0 but Q ≠ 0, while in an adiabatic process Q = 0 but ΔT ≠ 0. Isothermal processes can occur in any kind of system that has some means of regulating the temperature, including highly structured machines; some parts of the cycles of some heat engines are carried out isothermally. In the thermodynamic analysis of chemical reactions, it is usual to first analyze what happens under isothermal conditions. Phase changes, such as melting or evaporation, are also isothermal processes when, as is usually the case, they occur at constant pressure. Isothermal processes are often used as a starting point in analyzing more complex, non-isothermal processes. Isothermal processes are of special interest for ideal gases; this is a consequence of Joule's second law, which states that the internal energy of a fixed amount of an ideal gas depends only on its temperature. Thus, in an isothermal process the internal energy of an ideal gas is constant. This is a result of the fact that in an ideal gas there are no intermolecular forces. Note that this is true only for ideal gases; the internal energy depends on pressure as well as on temperature for liquids, solids, and real gases. In the isothermal compression of a gas, work is done on the system to decrease the volume; doing work on the gas increases the internal energy and will tend to increase the temperature, so to maintain the constant temperature energy must leave the system as heat. If the gas is ideal, the amount of energy entering the environment is equal to the work done on the gas, because internal energy does not change. For details of the calculations, see calculation of work. For an adiabatic process, in which no heat flows into or out of the gas because its container is well insulated, Q = 0. If in addition there is no work done, i.e. a free expansion, there is no change in internal energy.
For an ideal gas, this means that the process is also isothermal; thus, specifying that a process is isothermal is not sufficient to specify a unique process. For the special case of a gas to which Boyle's law applies, the product pV is constant; the value of the constant is nRT, where n is the number of moles of gas present and R is the ideal gas constant. In other words, the ideal gas law pV = nRT applies. This means that p = nRT/V = constant/V holds. The family of curves generated by this equation is shown in the graph in Figure 1.
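For a reversible isothermal change of an ideal gas, integrating p = nRT/V gives the work done on the gas as W = −nRT·ln(V₂/V₁), and since ΔU = 0, the heat is Q = −W. A numerical sketch under those assumptions; the function name and sample values are this example's own:

```python
# Reversible isothermal work on an ideal gas: W_on = -n R T ln(V2/V1).
# Positive for a compression (V2 < V1); the same amount of energy then
# leaves the gas as heat, keeping the internal energy constant.

import math

R = 8.314  # ideal gas constant, J/(mol·K), rounded

def isothermal_work_on_gas(n, T, v1, v2):
    return -n * R * T * math.log(v2 / v1)

# Compress 1 mol at 300 K to half its volume:
w = isothermal_work_on_gas(1.0, 300.0, 2.0e-2, 1.0e-2)
print(w)   # ≈ +1729 J of work done on the gas
print(-w)  # ≈ -1729 J: the heat Q that must leave the gas
```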
35.
Isobaric process
–
An isobaric process is a thermodynamic process in which the pressure stays constant: ΔP = 0. The heat transferred to the system does work, but also changes the internal energy of the system. This article uses the sign convention for work in which positive work is work done on the system. Using this convention, by the first law of thermodynamics, Q = ΔU − W, where W is work, U is internal energy, and Q is heat. Pressure–volume work by the system is defined as W = −∫p dV, where Δ means change over the whole process. Since the pressure is constant, this means that W = −pΔV. Applying the ideal gas law, this becomes W = −nRΔT, assuming that the quantity of gas stays constant, e.g. there is no phase transition during a chemical reaction. According to the equipartition theorem, the change in internal energy is related to the temperature of the system by ΔU = n c_V ΔT. Substituting the last two equations into the first equation produces Q = n c_V ΔT + n R ΔT = n (c_V + R) ΔT = n c_P ΔT, where c_P is the molar specific heat at constant pressure. To find the molar specific heat capacity of the gas involved, the following relations apply. The property γ is called either the adiabatic index or the heat capacity ratio; some published sources might use k instead of γ. Molar isochoric specific heat: c_V = R/(γ − 1). Molar isobaric specific heat: c_P = γR/(γ − 1). The values for γ are γ = 7/5 for diatomic gases like air and its major components, and γ = 5/3 for monatomic gases like the noble gases. If the process moves towards the right, then it is an expansion; if the process moves towards the left, then it is a compression. The motivation for the specific sign conventions of thermodynamics comes from the early development of heat engines: when designing an engine, the goal is to have the system produce work, and the source of energy in an engine is a heat input. If the volume compresses (ΔV < 0), then W > 0; that is, during isobaric compression the gas does negative work, or equivalently the environment does positive work.
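The chain of relations above (c_V = R/(γ−1), c_P = γR/(γ−1), Q = n·c_P·ΔT, ΔU = n·c_V·ΔT, W = ΔU − Q = −pΔV) can be checked numerically. A sketch for a diatomic ideal gas; the variable names and the 1 mol / 10 K example are this sketch's own:

```python
# Isobaric heating of a diatomic ideal gas, using the sign convention
# from the text (W = -p ΔV is work done ON the gas).

R = 8.314          # ideal gas constant, J/(mol·K), rounded
GAMMA = 7.0 / 5.0  # adiabatic index for diatomic gases such as air

c_v = R / (GAMMA - 1.0)          # molar isochoric heat, ≈ 20.8 J/(mol·K)
c_p = GAMMA * R / (GAMMA - 1.0)  # molar isobaric heat,  ≈ 29.1 J/(mol·K)

n, dT = 1.0, 10.0   # heat 1 mol by 10 K at constant pressure
q = n * c_p * dT    # heat supplied to the gas
dU = n * c_v * dT   # rise in internal energy
w = dU - q          # first law Q = ΔU - W  →  W = ΔU - Q; equals -n R ΔT

print(q)   # ≈ 291 J supplied
print(dU)  # ≈ 208 J stays as internal energy
print(w)   # ≈ -83 J: the gas does positive expansion work on its surroundings
```

The split shows why c_P > c_V: at constant pressure, part of the heat input leaves again as expansion work.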
36.
Nabla symbol
–
The nabla is a triangular symbol resembling an inverted Greek delta: ∇. The nabla symbol is available in standard HTML as &nabla; and in Unicode it is the character at code point U+2207, or 8711 in decimal notation. The mathematics of ∇ received its full exposition at the hands of P. G. Tait. After receiving Smith's suggestion, Tait and James Clerk Maxwell referred to the operator as nabla in their extensive private correspondence; most of these references are of a humorous character. The one published use of the word by Maxwell is in the title to his humorous Tyndallic Ode, which is dedicated to the "Chief Musician upon Nabla", that is, Tait. William Thomson introduced the term to an American audience in an 1884 lecture; the notes were published in Britain. As one writer put it, "physical mathematics is very largely the mathematics of ∇. The name Nabla seems, therefore, ludicrously inefficient." Heaviside and Josiah Willard Gibbs are credited with the development of the version of vector calculus most popular today. Of the upside-down triangle symbol, one influential textbook remarks that there seems to be no universally recognized name for it, although owing to the frequent occurrence of the symbol some name is a practical necessity; in that book, ∇V is read simply as "del V". This book is responsible for the form in which the mathematics of the operator in question is now usually expressed, most notably in undergraduate physics, and especially electrodynamics, textbooks. The nabla is used in vector calculus as part of the names of three distinct differential operators: the gradient (∇), the divergence (∇·), and the curl (∇×). The last of these uses the cross product and thus makes sense only in three dimensions; the first two are fully general. The symbol is also used in differential geometry to denote a connection. Other uses include the backward difference operator in the calculus of finite differences, and in the computer science field of abstract interpretation, where the nabla is the usual symbol for the widening operator.
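The gradient and divergence named above can be approximated numerically with central finite differences. A pure-Python sketch; the function names, step size, and sample fields are this example's own:

```python
# Finite-difference approximations of two of the operators built from ∇:
# the gradient of a scalar field and the divergence of a vector field.

h = 1e-5  # step size for central differences

def grad(f, x, y, z):
    """∇f: tuple of partial derivatives of a scalar field f(x, y, z)."""
    return ((f(x + h, y, z) - f(x - h, y, z)) / (2 * h),
            (f(x, y + h, z) - f(x, y - h, z)) / (2 * h),
            (f(x, y, z + h) - f(x, y, z - h)) / (2 * h))

def divergence(F, x, y, z):
    """∇·F: sum of ∂Fx/∂x + ∂Fy/∂y + ∂Fz/∂z for a vector field F."""
    return ((F(x + h, y, z)[0] - F(x - h, y, z)[0]) / (2 * h) +
            (F(x, y + h, z)[1] - F(x, y - h, z)[1]) / (2 * h) +
            (F(x, y, z + h)[2] - F(x, y, z - h)[2]) / (2 * h))

f = lambda x, y, z: x**2 + y * z       # scalar field; ∇f = (2x, z, y)
F = lambda x, y, z: (x, 2 * y, 3 * z)  # vector field; ∇·F = 6 everywhere

print(grad(f, 1.0, 2.0, 3.0))        # ≈ (2.0, 3.0, 2.0)
print(divergence(F, 1.0, 2.0, 3.0))  # ≈ 6.0
```

The curl could be built the same way from cross-differences of the components, but as noted above it only makes sense in three dimensions.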