Signal-flow graph

A signal-flow graph or signal-flowgraph (SFG), invented by Claude Shannon but often called a Mason graph after Samuel Jefferson Mason, who coined the term, is a specialized flow graph: a directed graph in which nodes represent system variables and branches represent functional connections between pairs of nodes. Signal-flow graph theory thus builds on that of directed graphs, which also includes that of oriented graphs; this mathematical theory of digraphs exists, of course, quite apart from its applications. SFGs are most commonly used to represent signal flow in a physical system and its controller, forming a cyber-physical system. Among their other uses are the representation of signal flow in various electronic networks and amplifiers, digital filters, state-variable filters and some other types of analog filters. In nearly all of the literature, a signal-flow graph is associated with a set of linear equations. Wai-Kai Chen wrote: "The concept of a signal-flow graph was worked out by Shannon in dealing with analog computers.
The greatest credit for the formulation of signal-flow graphs is extended to Mason. He showed how to use the signal-flow graph technique to solve some difficult electronic problems in a simple manner; the term signal flow graph was used because of its original application to electronic problems and the association with electronic signals and flowcharts of the systems under study." Lorens wrote: "Previous to Mason's work, C. E. Shannon worked out a number of the properties of what are now known as flow graphs; the paper had a restricted classification and few people had access to the material." "The rules for the evaluation of the graph determinant of a Mason graph were first given and proven by Shannon using mathematical induction. His work remained unknown even after Mason published his classical work in 1953. Three years later, Mason rediscovered the rules and proved them by considering the value of a determinant and how it changes as variables are added to the graph." Robichaud et al. identify the domain of application of SFGs as follows: "All the physical systems analogous to these networks constitute the domain of application of the techniques developed.
Trent has shown that all the physical systems which satisfy the following conditions fall into this category. The finite lumped system is composed of a number of simple parts, each of which has known dynamical properties which can be defined by equations using two types of scalar variables and parameters of the system. Variables of the first type represent quantities which can be measured, at least conceptually, by attaching an indicating instrument to two connection points of the element. Variables of the second type characterize quantities which can be measured by connecting a meter in series with the element. Relative velocities and positions, pressure differentials and voltages are typical quantities of the first class, whereas electric currents and rates of heat flow are variables of the second type. Firestone was the first to distinguish these two types of variables, with the names across variables and through variables. Variables of the first type must obey a mesh law, analogous to Kirchhoff's voltage law, whereas variables of the second type must satisfy an incidence law analogous to Kirchhoff's current law.
Physical dimensions of appropriate products of the variables of the two types must be consistent. For systems in which these conditions are satisfied, it is possible to draw a linear graph isomorphic with the dynamical properties of the system as described by the chosen variables; the techniques can be applied directly to these linear graphs as well as to electrical networks, to obtain a signal flow graph of the system." The following illustration and its meaning were introduced by Mason to illustrate basic concepts. In the simple flow graphs of the figure, a functional dependence of a node is indicated by an incoming arrow, and the node originating this influence is the beginning of that arrow. In its most general form, the signal flow graph indicates by incoming arrows only those nodes that influence the processing at the receiving node, and at each node i the incoming variables are processed according to a function associated with that node, say Fᵢ. The flowgraph in the figure represents a set of explicit relationships: x₁ = an independent variable, x₂ = F₂, x₃ = F₃. Node x₁ is an isolated node because no arrow is incoming.
These relationships define a function for every node. Each non-source node combines the input signals in some manner and broadcasts a resulting signal along each outgoing branch. "A flow graph, as defined by Mason, implies a set of functional relations …"
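As an illustration of how such node relationships can be evaluated, the following sketch iterates a small flow graph until the node signals settle. The specific functions standing in for F₂ and F₃ are hypothetical linear combinations, not taken from Mason's example.

```python
# Minimal sketch (not Mason's own method): evaluating a small signal-flow
# graph by repeated substitution. The node functions below are made-up
# linear examples standing in for the generic Fi of the text.

def solve_flow_graph(x1, tol=1e-9, max_iter=1000):
    """Propagate signals through the graph until the node values stop changing."""
    x2, x3 = 0.0, 0.0                      # initial guesses for the dependent nodes
    for _ in range(max_iter):
        # Each non-source node combines its incoming signals via its function
        new_x2 = 0.5 * x1 + 0.2 * x3       # hypothetical F2, driven by x1 and x3
        new_x3 = 0.3 * x1 + 0.4 * new_x2   # hypothetical F3, driven by x1 and x2
        if abs(new_x2 - x2) < tol and abs(new_x3 - x3) < tol:
            return new_x2, new_x3
        x2, x3 = new_x2, new_x3
    return x2, x3

print(solve_flow_graph(1.0))  # node values driven by the independent node x1
```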
Signal processing

Signal processing is a subfield of mathematics and electrical engineering that concerns the analysis and modification of signals, which are broadly defined as functions conveying "information about the behavior or attributes of some phenomenon", such as sound and biological measurements. For example, signal processing techniques are used to improve signal transmission fidelity, storage efficiency and subjective quality, and to emphasize or detect components of interest in a measured signal. According to Alan V. Oppenheim and Ronald W. Schafer, the principles of signal processing can be found in the classical numerical analysis techniques of the 17th century. Oppenheim and Schafer further state that the digital refinement of these techniques can be found in the digital control systems of the 1940s and 1950s. Analog signal processing is for signals that have not been digitized, as in legacy radio, telephone and television systems; it involves linear electronic circuits as well as non-linear ones. The former include, for instance, passive filters, active filters, additive mixers and delay lines.
Non-linear circuits include compandors, voltage-controlled filters, voltage-controlled oscillators and phase-locked loops. Continuous-time signal processing is for signals defined over a continuous time domain; its methods include the time domain, the frequency domain and the complex frequency domain. This technology covers the modeling of linear time-invariant continuous systems, the integral of the system's zero-state response, setting up the system function, and the continuous-time filtering of deterministic signals. Discrete-time signal processing is for sampled signals, defined only at discrete points in time and as such quantized in time, but not in magnitude. Analog discrete-time signal processing is a technology based on electronic devices such as sample-and-hold circuits, analog time-division multiplexers, analog delay lines and analog feedback shift registers; this technology was a predecessor of digital signal processing and is still used in advanced processing of gigahertz signals. The concept of discrete-time signal processing refers to a theoretical discipline that establishes a mathematical basis for digital signal processing, without taking quantization error into consideration.
Digital signal processing is the processing of digitized, discrete-time sampled signals. Processing is done by general-purpose computers or by digital circuits such as ASICs, field-programmable gate arrays or specialized digital signal processors. Typical arithmetical operations include fixed-point and floating-point, real-valued and complex-valued multiplication and addition. Other typical operations supported by the hardware are circular buffers and lookup tables. Examples of algorithms are the fast Fourier transform (FFT), finite impulse response (FIR) filters, infinite impulse response (IIR) filters, and adaptive filters such as the Wiener and Kalman filters. Nonlinear signal processing involves the analysis and processing of signals produced from nonlinear systems and can be in the time, frequency, or spatio-temporal domains. Nonlinear systems can produce complex behaviors including bifurcations, chaos and subharmonics which cannot be produced or analyzed using linear methods. Statistical signal processing is an approach which treats signals as stochastic processes, utilizing their statistical properties to perform signal processing tasks.
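As a small illustration of these operations, the following sketch (in Python with NumPy, using a synthetic test signal) applies a moving-average FIR filter by convolution and uses the FFT to inspect the result; the signal, sampling rate and filter length are invented for illustration.

```python
import numpy as np

# A noisy 5 Hz sinusoid sampled at 1 kHz (illustrative signal, not from the text).
fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)

# Finite impulse response (FIR) filter: a simple 20-tap moving average.
h = np.ones(20) / 20
y = np.convolve(x, h, mode="same")   # filtering = convolution with h

# Fast Fourier transform of the filtered signal to inspect its spectrum.
Y = np.fft.rfft(y)
freqs = np.fft.rfftfreq(y.size, d=1 / fs)
print(freqs[np.argmax(np.abs(Y))])   # dominant frequency, ~5 Hz
```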
Statistical techniques are used in many signal processing applications. For example, one can model the probability distribution of the noise incurred when photographing an image and construct techniques based on this model to reduce the noise in the resulting image (a minimal sketch of this idea appears after the lists that follow). Application areas include:

- Audio signal processing – for electrical signals representing sound, such as speech or music
- Speech signal processing – for processing and interpreting spoken words
- Image processing – in digital cameras and various imaging systems
- Video processing – for interpreting moving pictures
- Wireless communication – waveform generation, filtering, equalization
- Control systems
- Array processing – for processing signals from arrays of sensors
- Process control – a variety of signals are used, including the industry-standard 4-20 mA current loop
- Seismology
- Financial signal processing – analyzing financial data using signal processing techniques for prediction purposes
- Feature extraction, such as image understanding and speech recognition
- Quality improvement, such as noise reduction, image enhancement, echo cancellation
- Compression, including audio compression, image compression, video compression
- Genomics and genomic signal processing

In communication systems, signal processing may occur at OSI layer 1 (the physical layer) of the seven-layer OSI model. Typical devices include:

- Filters – for example analog or digital
- Samplers and analog-to-digital converters for signal acquisition and reconstruction, which involves measuring a physical signal, storing or transferring it as a digital signal, and later rebuilding the original signal or an approximation thereof
- Signal compressors
- Digital signal processors

Mathematical methods applied in signal processing include:

- Differential equations
- Recurrence relations
- Transform theory
- Time-frequency analysis – for processing non-stationary signals
- Spectral estimation – for determining the spectral content of a signal
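Returning to the statistical example above, the sketch below assumes additive zero-mean Gaussian noise on a synthetic grayscale image and applies a simple neighbourhood-averaging filter derived from that model; the data and parameters are invented for illustration.

```python
import numpy as np

# Assumed noise model: additive zero-mean Gaussian noise on a grayscale image.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 64), (64, 1))       # synthetic test image
noisy = clean + rng.normal(0.0, 0.1, clean.shape)     # modelled noise

# A technique built on that model: averaging over a 3x3 neighbourhood
# reduces the noise variance by roughly the number of pixels averaged.
pad = np.pad(noisy, 1, mode="edge")
denoised = sum(pad[i:i + 64, j:j + 64] for i in range(3) for j in range(3)) / 9

print(np.std(noisy - clean), np.std(denoised - clean))  # noise before/after
```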
Seismology

Seismology is the scientific study of earthquakes and the propagation of elastic waves through the Earth or through other planet-like bodies. The field includes studies of earthquake environmental effects such as tsunamis, as well as diverse seismic sources such as volcanic, oceanic and artificial processes such as explosions. A related field that uses geology to infer information regarding past earthquakes is paleoseismology. A recording of earth motion as a function of time is called a seismogram, and a seismologist is a scientist who works in seismology. Scholarly interest in earthquakes can be traced back to antiquity. Early speculations on the natural causes of earthquakes were included in the writings of Thales of Miletus, Anaximenes of Miletus and Zhang Heng. In 132 CE, Zhang Heng of China's Han dynasty designed the first known seismoscope. In the 17th century, Athanasius Kircher argued that earthquakes were caused by the movement of fire within a system of channels inside the Earth. Martin Lister and Nicolas Lemery proposed that earthquakes were caused by chemical explosions within the earth.
The Lisbon earthquake of 1755, coinciding with the general flowering of science in Europe, set in motion intensified scientific attempts to understand the behaviour and causation of earthquakes. The earliest responses include work by John Michell, who determined that earthquakes originate within the Earth and are waves of movement caused by "shifting masses of rock miles below the surface". From 1857, Robert Mallet laid the foundation of instrumental seismology and carried out seismological experiments using explosives; he is responsible for coining the word "seismology". In 1897, Emil Wiechert's theoretical calculations led him to conclude that the Earth's interior consists of a mantle of silicates surrounding a core of iron. In 1906, Richard Dixon Oldham identified the separate arrival of P-waves, S-waves and surface waves on seismograms and found the first clear evidence that the Earth has a central core. In 1910, after studying the April 1906 San Francisco earthquake, Harry Fielding Reid put forward the "elastic rebound theory", which remains the foundation for modern tectonic studies.
The development of this theory depended on the considerable progress of earlier independent streams of work on the behaviour of elastic materials and in mathematics. In 1926, Harold Jeffreys was the first to claim, based on his study of earthquake waves, that below the mantle the core of the Earth is liquid. In 1937, Inge Lehmann determined that within the Earth's liquid outer core there is a solid inner core. By the 1960s, earth science had developed to the point where a comprehensive theory of the causation of seismic events had come together in the now well-established theory of plate tectonics. Seismic waves are elastic waves that propagate in solid or fluid materials; they can be divided into body waves and surface waves. Body waves comprise pressure waves (primary waves) and shear waves (secondary waves). P-waves are longitudinal waves that involve compression and expansion in the direction that the wave is moving and are always the first waves to appear on a seismogram, as they are the fastest-moving waves through solids. S-waves are transverse waves.
S-waves are slower than P-waves; therefore, they appear later than P-waves on a seismogram. Fluids cannot support this perpendicular motion, so S-waves only travel in solids. Surface waves are the result of P- and S-waves interacting with the surface of the Earth; these waves are dispersive. The two main surface wave types are Rayleigh waves, which have both compressional and shear motions, and Love waves, which are purely shear. Rayleigh waves result from the interaction of P-waves and vertically polarized S-waves with the surface and can exist in any solid medium. Love waves are formed by horizontally polarized S-waves interacting with the surface, and can only exist if there is a change in the elastic properties with depth in a solid medium, which is always the case in seismological applications. Surface waves travel more slowly than P-waves and S-waves because they are the result of these waves traveling along indirect paths to interact with Earth's surface; because they travel along the surface of the Earth, their energy decays less rapidly than that of body waves, and thus the shaking caused by surface waves is stronger than that of body waves.
The primary surface waves are the largest signals on earthquake seismograms. Surface waves are excited when their source is close to the surface, as in a shallow earthquake or a near-surface explosion, and are much weaker for deep earthquake sources. Both body and surface waves are traveling waves; large earthquakes, however, can also set the whole Earth "ringing", and this ringing is a mixture of normal modes with discrete frequencies and periods of an hour or shorter. Motion caused by a large earthquake can be observed for up to a month after the event. The first observations of normal modes were made in the 1960s, as the advent of higher-fidelity instruments coincided with two of the largest earthquakes of the 20th century – the 1960 Valdivia earthquake and the 1964 Alaska earthquake. The normal modes of the Earth have given us some of the strongest constraints on the deep structure of the Earth.
Linear time-invariant system
Linear time-invariant theory, commonly known as LTI system theory, investigates the response of a linear and time-invariant system to an arbitrary input signal. Trajectories of these systems are measured and tracked as they move through time, but in applications like image processing and field theory the LTI systems have trajectories in spatial dimensions; these systems are then called linear translation-invariant to give the theory the most general reach. In the case of generic discrete-time systems, linear shift-invariant is the corresponding term. A good example of an LTI system is an electrical circuit made up of resistors, capacitors and inductors. LTI system theory has been used in applied mathematics and has direct applications in NMR spectroscopy, circuits, signal processing, control theory and other technical areas. The defining properties of any LTI system are linearity and time invariance. Linearity means that the relationship between the input and the output of the system is a linear map: if input x₁(t) produces response y₁(t) and input x₂(t) produces response y₂(t), then the scaled and summed input a₁x₁(t) + a₂x₂(t) produces the scaled and summed response a₁y₁(t) + a₂y₂(t), where a₁ and a₂ are real scalars.
It follows that this can be extended to an arbitrary number of terms, so for real numbers c₁, c₂, …, cₖ, the input ∑ₖ cₖ xₖ produces the output ∑ₖ cₖ yₖ. In particular, the input ∫ c_ω x_ω dω produces the output ∫ c_ω y_ω dω, where c_ω and x_ω are scalars and inputs that vary over a continuum indexed by ω; thus, if an input function can be represented by a continuum of input functions combined "linearly" as shown, then the corresponding output function can be represented by the corresponding continuum of output functions combined in the same way. Time invariance means that whether we apply an input to the system now or T seconds from now, the output will be identical except for a time delay of T seconds; that is, if the output due to input x(t) is y(t), then the output due to input x(t − T) is y(t − T). Hence, the system is time-invariant because the output does not depend on the particular time the input is applied. The fundamental result in LTI system theory is that any LTI system can be characterized by a single function called the system's impulse response. The output of the system is the convolution of the input to the system with the system's impulse response.
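A minimal numerical sketch of these two defining properties, using a made-up three-tap impulse response and NumPy's convolution, might look as follows.

```python
import numpy as np

# Sketch: a discrete-time LTI system characterized by its impulse response h.
# Hypothetical example system: a decaying echo (not from the text).
h = np.array([1.0, 0.5, 0.25])

def lti(x):
    """Output = convolution of the input with the impulse response."""
    return np.convolve(x, h)

x1 = np.array([1.0, 2.0, 3.0, 0.0])
x2 = np.array([0.0, -1.0, 1.0, 2.0])
a1, a2 = 2.0, -3.0

# Linearity: the response to a1*x1 + a2*x2 equals a1*y1 + a2*y2.
lhs = lti(a1 * x1 + a2 * x2)
rhs = a1 * lti(x1) + a2 * lti(x2)
print(np.allclose(lhs, rhs))              # True

# Time invariance: delaying the input by one sample simply delays the output.
delayed = lti(np.concatenate(([0.0], x1)))
print(np.allclose(delayed[1:], lti(x1)))  # True
```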
This method of analysis is called the time-domain point of view. The same result is true of discrete-time linear shift-invariant systems, in which signals are discrete-time samples and convolution is defined on sequences. Equivalently, any LTI system can be characterized in the frequency domain by the system's transfer function, which is the Laplace transform of the system's impulse response; as a result of the properties of these transforms, the output of the system in the frequency domain is the product of the transfer function and the transform of the input. In other words, convolution in the time domain is equivalent to multiplication in the frequency domain. For all LTI systems, the eigenfunctions, and the basis functions of the transforms, are complex exponentials; that is, if the input to a system is the complex waveform Aₛ e^(st) for some complex amplitude Aₛ and complex frequency s, the output will be some complex constant times the input, say Bₛ e^(st) for some new complex amplitude Bₛ. The ratio Bₛ/Aₛ is the transfer function at frequency s.
Since sinusoids are a sum of complex exponentials with complex-conjugate frequencies, if the input to the system is a sinusoid, then the output of the system will also be a sinusoid, perhaps with a different amplitude and a different phase, but always with the same frequency.
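The frequency-domain view can be illustrated with the same kind of toy system: the sketch below (with an invented impulse response, frequency and sampling rate) checks that a sinusoidal input emerges rescaled and phase-shifted but at the same frequency.

```python
import numpy as np

# Illustrative discrete-time LTI system and sinusoidal input (values invented).
fs, f0, N = 1000, 50, 4000
t = np.arange(N) / fs
h = np.array([1.0, 0.5, 0.25])                 # example impulse response
x = np.cos(2 * np.pi * f0 * t)                 # sinusoidal input
y = np.convolve(x, h, mode="full")[:N]         # system output

# Frequency response at f0: evaluate H(e^{jw}) for the three-tap system.
w = 2 * np.pi * f0 / fs
H = h[0] + h[1] * np.exp(-1j * w) + h[2] * np.exp(-2j * w)

# After the brief start-up transient, the output is |H| * cos(w*n + angle(H)):
# a sinusoid at the same frequency, only rescaled and phase-shifted.
expected = np.abs(H) * np.cos(2 * np.pi * f0 * t + np.angle(H))
print(np.allclose(y[10:], expected[10:]))      # True
```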
Nuclear magnetic resonance spectroscopy
Nuclear magnetic resonance spectroscopy, most commonly known as NMR spectroscopy or magnetic resonance spectroscopy, is a spectroscopic technique to observe local magnetic fields around atomic nuclei. The sample is placed in a magnetic field and the NMR signal is produced by excitation of the nuclei of the sample with radio waves into nuclear magnetic resonance, which is detected with sensitive radio receivers. The intramolecular magnetic field around an atom in a molecule changes the resonance frequency, thus giving access to details of the electronic structure of a molecule and its individual functional groups. As the fields are unique or characteristic to individual compounds, in modern organic chemistry practice NMR spectroscopy is the definitive method to identify monomolecular organic compounds. Biochemists use NMR to identify proteins and other complex molecules. Besides identification, NMR spectroscopy provides detailed information about the structure, reaction state and chemical environment of molecules. The most common types of NMR are proton and carbon-13 NMR spectroscopy, but it is applicable to any kind of sample that contains nuclei possessing spin.
NMR spectra are unique, well-resolved, analytically tractable and highly predictable for small molecules. Different functional groups are distinguishable, and identical functional groups with differing neighboring substituents still give distinguishable signals. NMR has replaced traditional wet chemistry tests such as color reagents or typical chromatography for identification. A disadvantage is that a large amount, 2–50 mg, of a purified substance is required, although it may be recovered through a workup. Preferably, the sample should be dissolved in a solvent, because NMR analysis of solids requires a dedicated magic angle spinning machine and may not give well-resolved spectra. The timescale of NMR is relatively long, and thus it is not suitable for observing fast phenomena, producing only an averaged spectrum. Although large amounts of impurities do show on an NMR spectrum, better methods exist for detecting impurities, as NMR is inherently not very sensitive (though at higher frequencies, sensitivity is higher).
Correlation spectroscopy is a development of ordinary NMR. In two-dimensional NMR, the emission is centered around a single frequency and correlated resonances are observed; this allows identifying the neighboring substituents of the observed functional group, allowing unambiguous identification of the resonances. There are also more complex 3D and 4D methods and a variety of methods designed to suppress or amplify particular types of resonances. In nuclear Overhauser effect (NOE) spectroscopy, the relaxation of the resonances is observed; as the NOE depends on the proximity of the nuclei, quantifying the NOE for each nucleus allows construction of a three-dimensional model of the molecule. NMR spectrometers are expensive. Modern NMR spectrometers have a strong and expensive liquid helium-cooled superconducting magnet, because resolution directly depends on magnetic field strength. Less expensive machines using permanent magnets, with lower resolution, are also available, and still give sufficient performance for certain applications such as reaction monitoring and quick checking of samples.
There are even benchtop nuclear magnetic resonance spectrometers, and NMR can be observed in magnetic fields weaker than a millitesla. Low-resolution NMR produces broader peaks which can overlap one another, causing issues in resolving complex structures; the use of higher-strength magnetic fields results in clearer resolution of the peaks and is the standard in industry. The Purcell group at Harvard University and the Bloch group at Stanford University independently developed NMR spectroscopy in the late 1940s and early 1950s; Edward Mills Purcell and Felix Bloch shared the 1952 Nobel Prize in Physics for their discoveries. When placed in a magnetic field, NMR-active nuclei absorb electromagnetic radiation at a frequency characteristic of the isotope. The resonant frequency, the energy of the radiation absorbed, and the intensity of the signal are proportional to the strength of the magnetic field. For example, in a 21 tesla magnetic field, hydrogen atoms resonate at 900 MHz; it is common to refer to a 21 T magnet as a 900 MHz magnet since hydrogen is the most common nucleus detected. However, different nuclei will resonate at different frequencies at this field strength, in proportion to their nuclear magnetic moments.
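That proportionality can be checked with a rough back-of-the-envelope calculation, taking the proton gyromagnetic ratio divided by 2π as roughly 42.58 MHz per tesla; the field values below are merely illustrative.

```python
# Rough check of the field-strength/frequency proportionality quoted above.
# The proton gyromagnetic ratio divided by 2*pi is about 42.58 MHz per tesla.
GAMMA_OVER_2PI_MHZ_PER_T = 42.58

for b0 in (1.41, 11.7, 21.1):          # example magnet strengths in tesla
    freq_mhz = GAMMA_OVER_2PI_MHZ_PER_T * b0
    print(f"{b0:5.2f} T  ->  ~{freq_mhz:6.0f} MHz proton resonance")

# A field of roughly 21 T corresponds to roughly 900 MHz, which is why such
# magnets are referred to as "900 MHz" instruments.
```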
An NMR spectrometer consists of a spinning sample-holder inside a strong magnet, a radio-frequency emitter, a receiver with a probe that goes inside the magnet to surround the sample, optionally gradient coils for diffusion measurements, and electronics to control the system. Spinning the sample is necessary to average out diffusional motion; however, some experiments call for a stationary sample when solution movement is an important variable. For instance, measurements of diffusion constants are done using a stationary sample with spinning off, and flow cells can be used for online analysis of process flows. The vast majority of molecules in a solution are solvent molecules, and most regular solvents are hydrocarbons and so contain NMR-active protons. In order to avoid detecting only signals from solvent hydrogen atoms, deuterated solvents are used, in which 99+% of the protons are replaced with deuterium. The most commonly used deuterated solvent is deuterochloroform, although other solvents may be used depending on the solubility of a sample.
Deuterium oxide and deuterated DMSO (DMSO-d6) are other commonly used deuterated solvents.
State-space representation

In control engineering, a state-space representation is a mathematical model of a physical system as a set of input and state variables related by first-order differential equations or difference equations. State variables are variables whose values evolve through time in a way that depends on the values they have at any given time and on the externally imposed values of input variables; output variables' values depend on the values of the state variables. The "state space" is the Euclidean space in which the variables on the axes are the state variables, and the state of the system can be represented as a vector within that space. To abstract from the number of inputs and states, these variables are expressed as vectors. Additionally, if the dynamical system is linear, time-invariant and finite-dimensional, the differential and algebraic equations may be written in matrix form. The state-space method is characterized by significant algebraization of general system theory, which makes it possible to use Kronecker vector-matrix structures; these structures can be applied efficiently to the study of systems with or without modulation.
The state-space representation provides a convenient and compact way to model and analyze systems with multiple inputs and outputs: with p inputs and q outputs, we would otherwise have to write down q × p Laplace transforms to encode all the information about a system. Unlike the frequency-domain approach, the use of the state-space representation is not limited to systems with linear components and zero initial conditions. The state-space model is used in many different areas; in econometrics, for example, it can be used for forecasting stock prices and numerous other variables. The internal state variables are the smallest possible subset of system variables that can represent the entire state of the system at any given time. The minimum number of state variables required to represent a given system, n, is usually equal to the order of the system's defining differential equation, but not necessarily. If the system is represented in transfer function form, the minimum number of state variables is equal to the order of the transfer function's denominator after it has been reduced to a proper fraction.
It is important to understand that converting a state-space realization to a transfer function form may lose some internal information about the system and may provide a description of a system which is stable even when the state-space realization is unstable at certain points. In electric circuits, the number of state variables is often, though not always, the same as the number of energy storage elements in the circuit, such as capacitors and inductors. The state variables defined must be linearly independent, i.e. no state variable can be written as a linear combination of the other state variables, or the system will not be able to be solved. The most general state-space representation of a linear system with p inputs, q outputs and n state variables is written in the following form:

ẋ(t) = A(t) x(t) + B(t) u(t)
y(t) = C(t) x(t) + D(t) u(t)

where x is called the "state vector", with x(t) ∈ ℝⁿ.
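A minimal simulation sketch of such a model, assuming constant (time-invariant) matrices and a made-up damped second-order system, could use simple forward-Euler integration as below.

```python
import numpy as np

# Sketch: simulating x'(t) = A x(t) + B u(t), y(t) = C x(t) + D u(t) with
# forward-Euler steps. The matrices describe an invented damped second-order
# system, not any particular example from the text.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

dt, steps = 0.01, 3000
x = np.zeros((2, 1))                  # state vector, x in R^n with n = 2
u = np.array([[1.0]])                 # constant (step) input
ys = []
for _ in range(steps):
    y = C @ x + D @ u                 # output equation
    ys.append(float(y))
    x = x + dt * (A @ x + B @ u)      # state equation, Euler update

print(ys[-1])                         # settles near the system's DC gain, 0.5
```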
Control theory

Control theory in control systems engineering is a subfield of mathematics that deals with the control of continuously operating dynamical systems in engineered processes and machines. The objective is to develop a control model for controlling such systems using a control action in an optimum manner, without delay or overshoot and ensuring control stability. To do this, a controller with the requisite corrective behaviour is required; this controller monitors the controlled process variable (PV) and compares it with the reference or set point (SP). The difference between the actual and desired value of the process variable, called the error signal or SP-PV error, is applied as feedback to generate a control action to bring the controlled process variable to the same value as the set point. Other aspects which are studied are controllability and observability. Control theory is the basis for the advanced type of automation that revolutionized manufacturing, aircraft and other industries; this is feedback control, which is continuous and involves taking measurements using a sensor and making calculated adjustments to keep the measured variable within a set range by means of a "final control element", such as a control valve.
Extensive use is made of a diagrammatic style known as the block diagram. In it, the transfer function, also known as the system function or network function, is a mathematical model of the relation between the input and output based on the differential equations describing the system. Control theory dates from the 19th century, when the theoretical basis for the operation of governors was first described by James Clerk Maxwell. Control theory was further advanced by Edward Routh in 1874, Charles Sturm and, in 1895, Adolf Hurwitz, who all contributed to the establishment of control stability criteria. Although a major application of control theory is in control systems engineering, which deals with the design of process control systems for industry, other applications range far beyond this; as the general theory of feedback systems, control theory is useful wherever feedback occurs. Although control systems of various types date back to antiquity, a more formal analysis of the field began with a dynamics analysis of the centrifugal governor, conducted by the physicist James Clerk Maxwell in 1868, entitled On Governors.
This described and analyzed the phenomenon of self-oscillation, in which lags in the system may lead to overcompensation and unstable behavior. This generated a flurry of interest in the topic, during which Maxwell's classmate, Edward John Routh, abstracted Maxwell's results for the general class of linear systems. Independently, Adolf Hurwitz analyzed system stability using differential equations in 1877, resulting in what is now known as the Routh–Hurwitz theorem. A notable application of dynamic control was in the area of manned flight; the Wright brothers made their first successful test flights on December 17, 1903 and were distinguished by their ability to control their flights for substantial periods. Continuous, reliable control of the airplane was necessary for flights lasting longer than a few seconds. By World War II, control theory was becoming an important area of research. Irmgard Flügge-Lotz developed the theory of discontinuous automatic control systems and applied the bang-bang principle to the development of automatic flight control equipment for aircraft.
Other areas of application for discontinuous controls included fire-control systems, guidance systems and electronics. A centrifugal governor is used to regulate the velocity of windmills. Sometimes, mechanical methods are used to improve the stability of systems; for example, ship stabilizers are fins mounted beneath the waterline and emerging laterally. In contemporary vessels, they may be gyroscopically controlled active fins, which have the capacity to change their angle of attack to counteract roll caused by wind or waves acting on the ship. The Space Race also depended on accurate spacecraft control, and control theory has since seen increasing use in fields such as economics. Fundamentally, there are two types of control loops: open loop control and closed loop control. In open loop control, the control action from the controller is independent of the "process output". A good example of this is a central heating boiler controlled only by a timer, so that heat is applied for a constant time, regardless of the temperature of the building; the control action is the timed switching on/off of the boiler, and the process variable is the building temperature, but the two are not linked.
In closed loop control, the control action from the controller is dependent on feedback from the process in the form of the value of the process variable. In the case of the boiler analogy, a closed loop would include a thermostat to compare the building temperature with the temperature set on the thermostat; this generates a controller output to maintain the building at the desired temperature by switching the boiler on and off. A closed loop controller therefore has a feedback loop which ensures the controller exerts a control action to manipulate the process variable to be the same as the "reference input" or "set point". For this reason, closed loop controllers are also called feedback controllers. The definition of a closed loop control system according to the British Standards Institution is "a control system possessing monitoring feedback, the deviation signal formed as a result of this feedback being used to control the action of a final control element in such a way as to tend to reduce the deviation to zero."
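The boiler/thermostat example can be mimicked with a toy simulation: the on/off rule below acts on the SP-PV error fed back from the process, and the building model and numbers are invented for illustration.

```python
# Toy closed-loop (on/off) temperature control, mirroring the boiler/thermostat
# example above. The building model and all numbers are made up.
set_point = 20.0        # desired temperature (SP), deg C
outside = 5.0           # ambient temperature, deg C
temp = 10.0             # initial building temperature (PV)
heater_power = 2.0      # heating rate when the boiler is on, deg C per step
loss_rate = 0.1         # fraction of the indoor/outdoor difference lost per step

for step in range(50):
    error = set_point - temp          # SP-PV error from the feedback path
    boiler_on = error > 0             # on/off control action based on feedback
    heat_in = heater_power if boiler_on else 0.0
    temp += heat_in - loss_rate * (temp - outside)

print(round(temp, 1))   # hovers around the 20 degree set point
```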