Mechanics is the area of science concerned with the behaviour of physical bodies when subjected to forces or displacements, and the subsequent effects of the bodies on their environment. The scientific discipline has its origins in Ancient Greece with the writings of Aristotle and Archimedes. During the early modern period, scientists such as Galileo and Newton laid the foundation for what is now known as classical mechanics, a branch of classical physics that deals with particles that are either at rest or are moving with velocities much less than the speed of light. It can be defined as a branch of science that deals with the motion of, and forces on, objects. Classical mechanics came first; quantum mechanics is a comparatively recent development. Classical mechanics originated with Isaac Newton's laws of motion in Philosophiæ Naturalis Principia Mathematica. Both are held to constitute the most certain knowledge that exists about physical nature.
Classical mechanics has often been viewed as a model for other so-called exact sciences. Essential in this respect is the extensive use of mathematics in theories, as well as the decisive role played by experiment in generating and testing them. Quantum mechanics is broader in scope, encompassing classical mechanics as a sub-discipline that applies under certain restricted circumstances. According to the correspondence principle, there is no contradiction or conflict between the two subjects; each pertains to specific situations. The correspondence principle states that the behavior of systems described by quantum theories reproduces classical physics in the limit of large quantum numbers. Quantum mechanics has superseded classical mechanics at the foundation level and is indispensable for the explanation and prediction of processes at the molecular and sub-atomic level. However, for macroscopic processes classical mechanics is able to solve problems that are unmanageably difficult in quantum mechanics, and hence it remains useful and widely used.
Modern descriptions of such behavior begin with a careful definition of such quantities as displacement, velocity, acceleration and force. Until about 400 years ago, however, motion was explained from a very different point of view. For example, following the ideas of the Greek philosopher and scientist Aristotle, scientists reasoned that a cannonball falls down because its natural position is in the Earth. Often cited as the father of modern science, Galileo brought together the ideas of other great thinkers of his time and began to analyze motion in terms of distance traveled from some starting position and the time that it took. He showed that the speed of falling objects increases steadily during the time of their fall. This acceleration is the same for heavy objects as for light ones, provided air friction is discounted. The English mathematician and physicist Isaac Newton improved this analysis by defining force and mass and relating these to acceleration. For objects traveling at speeds close to the speed of light, Newton's laws were superseded by Albert Einstein's theory of relativity.
For atomic and subatomic particles, Newton's laws were superseded by quantum theory. For everyday phenomena, however, Newton's three laws of motion remain the cornerstone of dynamics, the study of what causes motion. In analogy to the distinction between quantum and classical mechanics, Einstein's general and special theories of relativity have expanded the scope of Newton and Galileo's formulation of mechanics; the differences between relativistic and Newtonian mechanics become significant and even dominant as the velocity of a massive body approaches the speed of light. For instance, in Newtonian mechanics, Newton's second law of motion specifies that F = ma, whereas in relativistic mechanics, which builds on the Lorentz transformations first discovered by Hendrik Lorentz, the corresponding expression is F = γma, where γ is the Lorentz factor. Relativistic corrections have been incorporated into quantum mechanics, but general relativity has not been integrated; the two theories remain incompatible, a hurdle which must be overcome in developing a theory of everything. The main theory of mechanics in antiquity was Aristotelian mechanics.
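The Newtonian and relativistic expressions above can be compared numerically. The following is a minimal Python sketch (function names are illustrative; note also that the full relativistic equation of motion is F = d(γmv)/dt, of which F = γma is the transverse-force special case):

```python
import math

C = 299_792_458.0  # speed of light in a vacuum, m/s

def lorentz_gamma(v: float) -> float:
    """Lorentz factor γ = 1 / sqrt(1 - v²/c²)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def newtonian_force(m: float, a: float) -> float:
    """Newton's second law, F = m·a."""
    return m * a

def relativistic_force(m: float, a: float, v: float) -> float:
    """Relativistic correction F = γ·m·a, as stated in the text."""
    return lorentz_gamma(v) * m * a

# At everyday speeds γ ≈ 1, so the two expressions agree closely.
m, a = 1000.0, 2.0                        # a 1000 kg body accelerating at 2 m/s²
slow = relativistic_force(m, a, 30.0)     # roughly highway speed
fast = relativistic_force(m, a, 0.9 * C)  # 90% of light speed: γ ≈ 2.29
```

At 30 m/s the correction is invisible at double precision display, while at 0.9c the required force more than doubles, which is why the differences only "become significant and dominant" near light speed.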
A later developer in this tradition was Hipparchus. In the Middle Ages, Aristotle's theories were criticized and modified by a number of figures, beginning with John Philoponus in the 6th century. A central problem was that of projectile motion, discussed by Hipparchus and Philoponus. The Persian Islamic polymath Ibn Sīnā published his theory of motion in The Book of Healing. He said that an impetus is imparted to a projectile by the thrower, and viewed it as persistent, requiring external forces such as air resistance to dissipate it. Ibn Sīnā made a distinction between "force" and "inclination" (mayl), and argued that an object gained mayl when the object is in opposition to its natural motion. He concluded that continuation of motion is attributed to the inclination transferred to the object, and that the object will remain in motion until the mayl is spent. He claimed that a projectile in a vacuum would not stop unless it is acted upon. This conception of motion is consistent with Newton's first law, the law of inertia, which states that an object in motion will stay in motion unless acted upon by an external force.
In mathematics, a negative number is a real number that is less than zero. Negative numbers represent opposites: if positive represents a movement to the right, negative represents a movement to the left; if positive represents above sea level, negative represents below sea level; if positive represents a deposit, negative represents a withdrawal. They are often used to represent the magnitude of a loss or deficiency. A debt that is owed may be thought of as a negative asset, and a decrease in some quantity may be thought of as a negative increase. If a quantity may have either of two opposite senses, one may choose to distinguish between those senses (perhaps arbitrarily) as positive and negative. In the medical context of fighting a tumor, an expansion could be thought of as a negative shrinkage. Negative numbers are used to describe values on a scale that goes below zero, such as the Celsius and Fahrenheit scales for temperature. The laws of arithmetic for negative numbers ensure that the common-sense idea of an opposite is reflected in arithmetic.
For example, −(−3) = 3 because the opposite of an opposite is the original value. Negative numbers are usually written with a minus sign in front. For example, −3 represents a negative quantity with a magnitude of three, and is pronounced "minus three" or "negative three". To help tell the difference between a subtraction operation and a negative number, the negative sign is sometimes placed slightly higher than the minus sign. Conversely, a number that is greater than zero is called positive; the positivity of a number may be emphasized by placing a plus sign before it, e.g. +3. In general, the negativity or positivity of a number is referred to as its sign; every real number other than zero is either positive or negative. The positive whole numbers are referred to as natural numbers, while the positive and negative whole numbers, together with zero, are referred to as integers. In bookkeeping, amounts owed are often represented by red numbers, or a number in parentheses, as an alternative notation to represent negative numbers. Negative numbers appeared for the first time in history in the Nine Chapters on the Mathematical Art, which in its present form dates from the period of the Chinese Han Dynasty, but may well contain much older material.
Liu Hui established rules for adding and subtracting negative numbers. By the 7th century, Indian mathematicians such as Brahmagupta were describing the use of negative numbers. Islamic mathematicians further developed the rules of subtracting and multiplying negative numbers and solved problems with negative coefficients. Western mathematicians accepted the idea of negative numbers only around the middle of the 19th century. Prior to the concept of negative numbers, mathematicians such as Diophantus considered negative solutions to problems "false", and equations requiring negative solutions were described as absurd. Some mathematicians, like Leibniz, agreed that negative numbers were invalid but still used them in calculations. Negative numbers can be thought of as resulting from the subtraction of a larger number from a smaller one. For example, negative three is the result of subtracting three from zero: 0 − 3 = −3. In general, the subtraction of a larger number from a smaller one yields a negative result, with the magnitude of the result being the difference between the two numbers.
For example, 5 − 8 = −3, since 8 − 5 = 3. The relationship between negative numbers, positive numbers and zero is expressed in the form of a number line: numbers appearing farther to the right on this line are greater, while numbers appearing farther to the left are less. Thus zero appears in the middle, with the positive numbers to the right and the negative numbers to the left. Note that a negative number with greater magnitude is considered less. For example, even though 8 is greater than 5 (written 8 > 5), negative 8 is considered to be less than negative 5: −8 < −5. It follows that any negative number is less than any positive number, so −8 < 5 and −5 < 8. In the context of negative numbers, a number that is greater than zero is referred to as positive. Thus every real number other than zero is either positive or negative, while zero itself is not considered to have a sign. Positive numbers are sometimes written with a plus sign in front, e.g. +3 denotes a positive three. Because zero is neither positive nor negative, the term nonnegative is sometimes used to refer to a number that is either positive or zero, while nonpositive is used to refer to a number that is either negative or zero.
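The sign and ordering facts above can be checked directly in any language with signed arithmetic; a short Python sketch:

```python
# Arithmetic facts from the text, checked directly.
assert 0 - 3 == -3            # subtracting a larger number from a smaller
assert 5 - 8 == -(8 - 5)      # 5 − 8 = −3 since 8 − 5 = 3
assert -(-3) == 3             # the opposite of an opposite is the original value
assert -8 < -5                # greater magnitude, but smaller value
assert -8 < 5 and -5 < 8      # any negative is less than any positive

# Sorting places numbers in number-line order: negatives left, positives right.
number_line = sorted([3, -8, 0, 5, -5])
```

The sorted list reproduces the number line described in the text, with zero in the middle.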
Zero is a neutral number. Negative quantities arise naturally in sports statistics: goal difference in association football and hockey; plus-minus differential in ice hockey, where the difference between total goals scored for the team and against the team while a particular player is on the ice is the player's +/− rating, and players can have a negative rating; and run differential in baseball, which is negative if the team allows more runs than it scored. British football clubs are deducted points if they enter administration, and thus have a negative points total until they have earned at least that many points that season. Lap times in Formula 1 may be given as the difference compared to a previous lap, and will be positive if slower and negative if faster. In some athletics events, such as sprint races, the hurdles, the triple jump and the long jump, the wind assistance is measured and recorded.
In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point. More precisely, each point of an n-dimensional manifold has a neighbourhood that is homeomorphic to the Euclidean space of dimension n. In this more precise terminology, a manifold is referred to as an n-manifold. One-dimensional manifolds include lines and circles, but not figure eights. Two-dimensional manifolds are called surfaces. Examples include the plane, the sphere and the torus, which can all be embedded in three-dimensional real space, but also the Klein bottle and the real projective plane, which cannot be so embedded and always self-intersect when immersed in three-dimensional real space. Although a manifold locally resembles Euclidean space, meaning that every point has a neighbourhood homeomorphic to an open subset of Euclidean space, globally it may not: manifolds in general are not homeomorphic to Euclidean space. For example, the surface of the sphere is not homeomorphic to the Euclidean plane, because it has the global topological property of compactness that Euclidean space lacks, but in a region it can be charted by means of map projections of the region into the Euclidean plane.
When a region appears in two neighbouring charts, the two representations do not coincide exactly, and a transformation is needed to pass from one to the other, called a transition map. The concept of a manifold is central to many parts of geometry and modern mathematical physics because it allows complicated structures to be described and understood in terms of the simpler local topological properties of Euclidean space. Manifolds naturally arise as solution sets of systems of equations and as graphs of functions. Manifolds can be equipped with additional structure. One important class of manifolds is the class of differentiable manifolds, whose differentiable structure allows calculus to be done. A Riemannian metric on a manifold allows distances and angles to be measured. Symplectic manifolds serve as the phase spaces in the Hamiltonian formalism of classical mechanics, while four-dimensional Lorentzian manifolds model spacetime in general relativity. A surface is a two-dimensional manifold, meaning that it locally resembles the Euclidean plane near each point. For example, the surface of a globe can be described by a collection of maps, which together form an atlas of the globe.
Although no individual map is sufficient to cover the entire surface of the globe, any place on the globe will be in at least one of the charts. Many places will appear in more than one chart. For example, a map of North America will likely include parts of South America and the Arctic Circle; these regions of the globe will be described in full in separate charts, which in turn will contain parts of North America. There is a relation between adjacent charts, called a transition map, that allows them to be consistently patched together to cover the whole of the globe. Describing the coordinate charts on surfaces explicitly requires knowledge of functions of two variables, because these patching functions must map a region in the plane to another region of the plane. However, one-dimensional examples of manifolds can be described with functions of a single variable only. Manifolds have applications in computer graphics and augmented reality, given the need to associate pictures with coordinates. In an augmented-reality setting, a picture can be seen as something associated with a coordinate, and by using sensors that detect movements and rotation, one can know how the picture is oriented and placed in space.
After a line, the circle is the simplest example of a topological manifold. Topology ignores bending, so a small piece of a circle is treated the same as a small piece of a line. Consider, for instance, the top part of the unit circle, x² + y² = 1, where the y-coordinate is positive. Any point of this arc can be uniquely described by its x-coordinate. So, projection onto the first coordinate is a continuous, invertible mapping from the upper arc to the open interval (−1, 1): χ_top(x, y) = x. Such functions, along with the open regions they map, are called charts. There are similar charts for the bottom, left and right parts of the circle: χ_bottom(x, y) = x, χ_left(x, y) = y, χ_right(x, y) = y. Together, these parts cover the whole circle, and the four charts form an atlas for the circle. The top and right charts, χ_top and χ_right, overlap in their domains: their intersection lies in the quarter of the circle where both the x- and y-coordinates are positive.
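The circle's charts can be made concrete in code. The helper names below (chart_top, chart_right and their inverses) are illustrative, but the projections themselves follow the construction in the text: each chart records one coordinate of a point on its arc, and its inverse recovers the point from that coordinate.

```python
import math

def chart_top(x, y):       # upper arc (y > 0): record the x-coordinate
    return x

def chart_top_inverse(x):  # recover the point on the upper arc from x
    return (x, math.sqrt(1 - x * x))

def chart_right(x, y):     # right arc (x > 0): record the y-coordinate
    return y

def chart_right_inverse(y):
    return (math.sqrt(1 - y * y), y)

# A point on the unit circle in the overlap of the two charts:
p = (math.cos(math.pi / 4), math.sin(math.pi / 4))

# The transition map from the top chart to the right chart sends the
# x-coordinate a of a point to the y-coordinate sqrt(1 - a²) of that point.
a = chart_top(*p)
transition = math.sqrt(1 - a * a)
```

The transition value agrees with what the right chart reports for the same point, which is exactly the compatibility that lets the charts be patched into an atlas.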
In physics, motion is the change in position of an object with respect to its surroundings in a given interval of time. Motion is mathematically described in terms of displacement, velocity, acceleration and speed. Motion of a body is observed by attaching a frame of reference to an observer and measuring the change in position of the body relative to that frame. If the position of a body is not changing with respect to a given frame of reference, the body is said to be at rest, motionless, immobile, stationary, or to have a constant position with reference to its surroundings. An object's motion cannot change unless it is acted upon by a force. Momentum is a quantity used for measuring the motion of an object. An object's momentum is directly related to the object's mass and velocity, and the total momentum of all objects in an isolated system does not change with time, as described by the law of conservation of momentum. As there is no absolute frame of reference, absolute motion cannot be determined. Thus, everything in the universe can be considered to be moving.
Motion applies to objects and matter particles, to radiation, radiation fields and radiation particles, and to space, its curvature and space-time. One can also speak of motion of shapes and boundaries. So, the term motion, in general, signifies a continuous change in the configuration of a physical system. For example, one can talk about motion of a wave or about motion of a quantum particle, where the configuration consists of probabilities of occupying specific positions. In physics, motion is described through two seemingly contradictory sets of laws of mechanics. Motions of all large-scale and familiar objects in the universe are described by classical mechanics, whereas the motion of small atomic and sub-atomic objects is described by quantum mechanics. Classical mechanics is used for describing the motion of macroscopic objects, from projectiles to parts of machinery, as well as astronomical objects such as spacecraft, planets and galaxies. It produces accurate results within these domains and is one of the oldest and largest subjects in science and technology.
Classical mechanics is fundamentally based on Newton's laws of motion. These laws describe the relationship between the forces acting on a body and the motion of that body. They were first compiled by Sir Isaac Newton in his work Philosophiæ Naturalis Principia Mathematica, first published on July 5, 1687. Newton's three laws are: first, a body either is at rest or moves with constant velocity unless an outside force is applied to it; second, the net force applied to a body produces an acceleration proportional to it, F = ma; and third, whenever one body exerts a force F onto a second body, the second body exerts the force −F on the first body. F and −F are equal in magnitude and opposite in direction, and they act on different bodies. Newton's three laws of motion were the first to provide a mathematical model for understanding orbiting bodies in outer space; this explanation unified the motion of celestial bodies and the motion of objects on earth. Classical mechanics was further enhanced by Albert Einstein's special relativity and general relativity. Special relativity is concerned with the motion of objects with a high velocity, approaching the speed of light.
Uniform motion: when an object moves with a constant speed in a particular direction over regular intervals of time, it is said to be in uniform motion. For example: a bike moving in a straight line with a constant speed. Equations of uniformly accelerated motion: if v = final velocity, u = initial velocity, a = acceleration, t = time and s = displacement, then v = u + at, v² = u² + 2as, and s = ut + at²/2. Quantum mechanics is a set of principles describing physical reality at the atomic level of matter and the subatomic particles. These descriptions include the simultaneous wave-like and particle-like behavior of both matter and radiation energy, as described in the wave-particle duality. In classical mechanics, accurate measurements and predictions of the state of objects, such as location and velocity, can be calculated. In quantum mechanics, due to the Heisenberg uncertainty principle, the complete state of a subatomic particle, such as its location and velocity, cannot be simultaneously determined. In addition to describing the motion of atomic-level phenomena, quantum mechanics is useful in understanding some large-scale phenomena such as superfluidity, superconductivity and biological systems, including the function of smell receptors and the structures of proteins.
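The three kinematic equations above are mutually consistent, which a short Python check makes concrete (variable names follow the text's u, a, t, s and v):

```python
def final_velocity(u, a, t):
    """v = u + a·t"""
    return u + a * t

def displacement(u, a, t):
    """s = u·t + a·t²/2"""
    return u * t + 0.5 * a * t * t

# Example: start at u = 3 m/s, accelerate at a = 2 m/s² for t = 4 s.
u, a, t = 3.0, 2.0, 4.0
v = final_velocity(u, a, t)   # 3 + 2·4 = 11 m/s
s = displacement(u, a, t)     # 3·4 + 0.5·2·4² = 28 m
```

Substituting these values into the third relation gives v² = 121 and u² + 2as = 9 + 112 = 121, so the equations agree, as they must, since the third is derived from the first two by eliminating t.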
Humans, like all known things in the universe, are in constant motion. Many of these "imperceptible motions" are only perceivable with the help of special tools and careful observation; the larger scales of imperceptible motions are difficult for humans to perceive.
Expansion of the universe
The expansion of the universe is the increase of the distance between two distant parts of the universe with time. It is an intrinsic expansion; the universe does not require space to exist "outside" it. Technically, neither space nor objects in space move. Instead it is the metric governing the geometry of spacetime itself that changes in scale. Although light and objects within spacetime cannot travel faster than the speed of light, this limitation does not restrict the metric itself. To an observer it appears that space is expanding and all but the nearest galaxies are receding into the distance. During the inflationary epoch, about 10⁻³² of a second after the Big Bang, the universe expanded suddenly, and its volume increased by a factor of at least 10⁷⁸, equivalent to expanding an object 1 nanometre in length to one about 10.6 light-years long. A much slower and gradual expansion of space continued after this, until at around 9.8 billion years after the Big Bang it began to expand more quickly, and it is still doing so. The metric expansion of space is of a kind different from the expansions and explosions seen in daily life.
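The inflation figures quoted above can be cross-checked with simple arithmetic: a volume increase of 10⁷⁸ corresponds to a linear stretch of (10⁷⁸)^(1/3) = 10²⁶, which takes 1 nanometre to roughly 10.6 light-years.

```python
# Check the text's inflation figures: a volume factor of 1e78 is a
# linear stretch of (1e78)**(1/3) ≈ 1e26.
LIGHT_YEAR_M = 9.4607e15            # metres in one light-year

linear_factor = (1e78) ** (1 / 3)   # ≈ 1e26 (cube root of the volume factor)
stretched = 1e-9 * linear_factor    # 1 nanometre after inflation, in metres
in_light_years = stretched / LIGHT_YEAR_M
```

The result is about 10.6 light-years, matching the figure in the text.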
It seems to be a property of the universe as a whole rather than a phenomenon that applies just to one part of the universe or can be observed from "outside" it. Metric expansion is a key feature of Big Bang cosmology, is modeled mathematically with the Friedmann–Lemaître–Robertson–Walker metric, and is a generic property of the universe we inhabit. However, the model is valid only on large scales, because gravitational attraction binds matter together strongly enough that metric expansion cannot be observed on smaller scales at this time. As such, the only galaxies receding from one another as a result of metric expansion are those separated by cosmologically relevant scales larger than the length scales associated with the gravitational collapse that is possible in the age of the universe given the matter density and average expansion rate. Physicists have postulated the existence of dark energy, appearing as a cosmological constant in the simplest gravitational models, as a way to explain this acceleration.
According to the simplest extrapolation of the currently favored cosmological model, the Lambda-CDM model, this acceleration becomes more dominant in the future. In June 2016, NASA and ESA scientists reported that the universe was found to be expanding 5% to 9% faster than previously thought, based on studies using the Hubble Space Telescope. While special relativity prohibits objects from moving faster than light with respect to a local reference frame where spacetime can be treated as flat and unchanging, it does not apply to situations where spacetime curvature or evolution in time become important. These situations are described by general relativity, which allows the separation between two distant objects to increase faster than the speed of light, although the definition of "separation" is different from that used in an inertial frame. This can be seen in the existence of a cosmological event horizon: light emitted today from galaxies beyond it, about 5 gigaparsecs or 16 billion light-years away, will never reach us, although we can still see the light that these galaxies emitted in the past.
Because of the high rate of expansion, it is also possible for a distance between two objects to be greater than the value calculated by multiplying the speed of light by the age of the universe. These details are a frequent source of confusion among amateur and even professional physicists. Due to the non-intuitive nature of the subject and what has been described by some as "careless" choices of wording, certain descriptions of the metric expansion of space, and the misconceptions to which such descriptions can lead, are an ongoing subject of discussion within education and communication of scientific concepts. In 1912, Vesto Slipher discovered that light from remote galaxies was redshifted, later interpreted as galaxies receding from the Earth. In 1922, Alexander Friedmann used the Einstein field equations to provide theoretical evidence that the universe is expanding. In 1927, Georges Lemaître independently reached a similar conclusion to Friedmann on a theoretical basis, and also presented the first observational evidence for a linear relationship between distance to galaxies and their recessional velocity.
Edwin Hubble observationally confirmed Lemaître's findings two years later. Assuming the cosmological principle, these findings would imply that all galaxies are moving away from each other. Based on large quantities of experimental observation and theoretical work, the scientific consensus is that space itself is expanding, and that it expanded very rapidly within the first fraction of a second after the Big Bang. This kind of expansion is known as "metric expansion". In mathematics and physics, a "metric" means a measure of distance, and the term implies that the sense of distance within the universe is itself changing. The modern explanation for the metric expansion of space was proposed by physicist Alan Guth in 1979 while investigating the problem of why no magnetic monopoles are seen today. Guth found in his investigation that if the universe contained a field in a positive-energy false vacuum state, then, according to general relativity, it would generate an exponential expansion of space.
Frequency is the number of occurrences of a repeating event per unit of time. It is also referred to as temporal frequency, which emphasizes the contrast to spatial frequency and angular frequency. The period is the duration of time of one cycle in a repeating event, so the period is the reciprocal of the frequency. For example, if a newborn baby's heart beats at a frequency of 120 times a minute, its period (the time interval between beats) is half a second. Frequency is an important parameter used in science and engineering to specify the rate of oscillatory and vibratory phenomena, such as mechanical vibrations, audio signals, radio waves and light. For cyclical processes, such as rotation, oscillations, or waves, frequency is defined as a number of cycles per unit time. In physics and engineering disciplines, such as optics and radio, frequency is usually denoted by a Latin letter f or by the Greek letter ν (nu). The relation between the frequency f and the period T of a repeating event or oscillation is given by f = 1/T.
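The reciprocal relation f = 1/T and the heartbeat example above translate directly into code (function names are illustrative):

```python
def frequency_from_period(period_s: float) -> float:
    """f = 1/T, in hertz when T is in seconds."""
    return 1.0 / period_s

def period_from_frequency(freq_hz: float) -> float:
    """T = 1/f, in seconds when f is in hertz."""
    return 1.0 / freq_hz

# The heartbeat example from the text: 120 beats per minute.
beats_per_minute = 120
f = beats_per_minute / 60.0      # 2 Hz
T = period_from_frequency(f)     # 0.5 s between beats
```

A frequency of 120 per minute is 2 Hz, giving the half-second period stated in the text.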
The SI derived unit of frequency is the hertz, named after the German physicist Heinrich Hertz. One hertz means that an event repeats once per second; if a TV has a refresh rate of 1 hertz, the TV's screen will change its picture once a second. A previous name for this unit was cycles per second. The SI unit for period is the second. A traditional unit of measure used with rotating mechanical devices is revolutions per minute, abbreviated r/min or rpm; 60 rpm equals one hertz. As a matter of convenience, longer and slower waves, such as ocean surface waves, tend to be described by wave period rather than frequency. Short and fast waves, like audio and radio, are usually described by their frequency instead of period. Angular frequency, denoted by the Greek letter ω, is defined as the rate of change of angular displacement, θ, or the rate of change of the phase of a sinusoidal waveform, or as the rate of change of the argument of the sine function: y(t) = sin(θ(t)) = sin(ωt) = sin(2πft), so that dθ/dt = ω = 2πf. Angular frequency is measured in radians per second, but for discrete-time signals it can be expressed as radians per sampling interval, a dimensionless quantity.
Angular frequency is larger than regular frequency by a factor of 2π. Spatial frequency is analogous to temporal frequency, but the time axis is replaced by one or more spatial displacement axes, e.g. y(x) = sin(θ(x)) = sin(kx), with dθ/dx = k. The wavenumber, k, is the spatial-frequency analogue of angular temporal frequency and is measured in radians per meter. In the case of more than one spatial dimension, the wavenumber is a vector quantity. For periodic waves in nondispersive media, frequency has an inverse relationship to the wavelength, λ. Even in dispersive media, the frequency f of a sinusoidal wave is equal to the phase velocity v of the wave divided by the wavelength λ of the wave: f = v/λ. In the special case of electromagnetic waves moving through a vacuum, v = c, where c is the speed of light in a vacuum, and this expression becomes f = c/λ. When waves from a monochromatic source travel from one medium to another, their frequency remains the same; only their wavelength and speed change. Measurement of frequency can be done in the following ways: calculating the frequency of a repeating event is accomplished by counting the number of times that event occurs within a specific time period and dividing the count by the length of the time period.
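The relation f = v/λ is easy to evaluate numerically; for instance, for light in a vacuum (the 540 nm wavelength below is an arbitrary example value in the green part of the visible spectrum):

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def wave_frequency(phase_velocity: float, wavelength: float) -> float:
    """f = v / λ; for light in a vacuum, f = c / λ."""
    return phase_velocity / wavelength

# Green light with λ ≈ 540 nm has a frequency of roughly 5.6e14 Hz.
f_green = wave_frequency(C, 540e-9)
```

This illustrates why optical frequencies are quoted in hundreds of terahertz while their wavelengths are quoted in nanometres.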
For example, if 71 events occur within 15 seconds, the frequency is f = 71/(15 s) ≈ 4.73 Hz. If the number of counts is not large, it is more accurate to measure the time interval for a predetermined number of occurrences, rather than the number of occurrences within a specified time. The latter method introduces a random error into the count of between zero and one count, so on average half a count. This is called gating error and causes an average error in the calculated frequency of Δf = 1/(2T), where T is the timing interval.
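The counting method and its gating error can be sketched as follows (function names are illustrative):

```python
def frequency_by_counting(event_count: int, interval_s: float) -> float:
    """Estimate frequency by counting events in a fixed time window."""
    return event_count / interval_s

def gating_error_hz(interval_s: float) -> float:
    """Average gating error Δf = 1/(2T) for a counting window of T seconds."""
    return 1.0 / (2.0 * interval_s)

# The worked example from the text: 71 events in 15 seconds.
f = frequency_by_counting(71, 15.0)   # ≈ 4.73 Hz
err = gating_error_hz(15.0)           # ≈ 0.033 Hz
```

Because the gating error shrinks as 1/T, lengthening the counting window is the simplest way to improve the estimate, at the cost of a slower measurement.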
A social network is a social structure made up of a set of social actors (such as individuals or organizations), sets of dyadic ties, and other social interactions between actors. The social network perspective provides a set of methods for analyzing the structure of whole social entities as well as a variety of theories explaining the patterns observed in these structures. The study of these structures uses social network analysis to identify local and global patterns, locate influential entities, and examine network dynamics. The study of social networks is an inherently interdisciplinary academic field which emerged from social psychology, sociology and graph theory. Georg Simmel authored early structural theories in sociology emphasizing the dynamics of triads and the "web of group affiliations". Jacob Moreno is credited with developing the first sociograms in the 1930s to study interpersonal relationships. These approaches were mathematically formalized in the 1950s, and theories and methods of social networks became pervasive in the social and behavioral sciences by the 1980s.
Social network analysis is now one of the major paradigms in contemporary sociology, and is also employed in a number of other social and formal sciences. Together with other complex networks, it forms part of the nascent field of network science. The social network is a theoretical construct useful in the social sciences to study relationships between individuals, organizations, or entire societies. The term is used to describe a social structure determined by such interactions. The ties through which any given social unit connects represent the convergence of the various social contacts of that unit. This theoretical approach is, necessarily, relational. An axiom of the social network approach to understanding social interaction is that social phenomena should be primarily conceived and investigated through the properties of relations between and within units, instead of the properties of these units themselves. Thus, one common criticism of social network theory is that individual agency is ignored, although this may not be the case in practice.
Because many different types of relations, singular or in combination, form these network configurations, network analytics are useful to a broad range of research enterprises. In social science, these fields of study include, but are not limited to, anthropology, communication studies, geography, information science, organizational studies, social psychology and sociolinguistics. In the late 1890s, both Émile Durkheim and Ferdinand Tönnies foreshadowed the idea of social networks in their theories and research of social groups. Tönnies argued that social groups can exist as personal and direct social ties that either link individuals who share values and beliefs, or as impersonal, formal and instrumental social links. Durkheim gave a non-individualistic explanation of social facts, arguing that social phenomena arise when interacting individuals constitute a reality that can no longer be accounted for in terms of the properties of individual actors. Georg Simmel, writing at the turn of the twentieth century, pointed to the nature of networks and the effect of network size on interaction, and examined the likelihood of interaction in loosely knit networks rather than groups.
Major developments in the field can be seen in the 1930s by several groups in psychology, anthropology and mathematics working independently. In psychology, in the 1930s, Jacob L. Moreno began systematic recording and analysis of social interaction in small groups, especially classrooms and work groups. In anthropology, the foundation for social network theory is the theoretical and ethnographic work of Bronislaw Malinowski, Alfred Radcliffe-Brown and Claude Lévi-Strauss. A group of social anthropologists associated with Max Gluckman and the Manchester School, including John A. Barnes, J. Clyde Mitchell and Elizabeth Bott Spillius, are credited with performing some of the first fieldwork from which network analyses were performed, investigating community networks in southern Africa and the United Kingdom. Concomitantly, British anthropologist S. F. Nadel codified a theory of social structure that was influential in later network analysis. In sociology, the early work of Talcott Parsons set the stage for taking a relational approach to understanding social structure.
Drawing upon Parsons' theory, the work of sociologist Peter Blau provided a strong impetus for analyzing the relational ties of social units with his work on social exchange theory. By the 1970s, a growing number of scholars worked to combine the different traditions. One group consisted of sociologist Harrison White and his students at the Harvard University Department of Social Relations. Also independently active in the Harvard Social Relations department at the time were Charles Tilly, who focused on networks in political and community sociology and social movements, and Stanley Milgram, who developed the "six degrees of separation" thesis. Mark Granovetter and Barry Wellman are among the former students of White who elaborated and championed the analysis of social networks. Beginning in the late 1990s, social network analysis experienced a resurgence through the work of sociologists, political scientists and physicists such as Duncan J. Watts, Albert-László Barabási, Peter Bearman, Nicholas A. Christakis, James H. Fowler and others, developing and applying new models and methods to emerging data available about online social networks, as well as "digital traces" regarding face-to-face networks.
In general, social networks are self-organizing, emergent and complex.