In mathematics, deconvolution is an algorithm-based process used to reverse the effects of convolution on recorded data. The concept of deconvolution is used in the techniques of signal processing and image processing; because these techniques are in turn used in many scientific and engineering disciplines, deconvolution finds many applications. In general, the objective of deconvolution is to find the solution f of a convolution equation of the form f ∗ g = h. Usually, h is some recorded signal and f is some signal that we wish to recover, but which has been convolved with some other signal g before we recorded it; the function g might represent the transfer function of an instrument or a driving force applied to a physical system. If we know g, or at least know the form of g, we can perform deterministic deconvolution. However, if we do not know g in advance, we need to estimate it; this is most often done using methods of statistical estimation. In physical measurements, the situation is closer to (f ∗ g) + ε = h, where ε is noise that has entered our recorded signal.
If we assume that a noisy signal or image is noiseless when we try to make a statistical estimate of g, our estimate will be incorrect. In turn, our estimate of f will be incorrect; the lower the signal-to-noise ratio, the worse our estimate of the deconvolved signal will be. That is the reason why inverse filtering the signal is usually not a good solution. However, if we have at least some knowledge of the type of noise in the data, we may be able to improve the estimate of f through techniques such as Wiener deconvolution. Deconvolution is performed by computing the Fourier transforms of the recorded signal h and of the transfer function g, applying deconvolution in the frequency domain, which in the absence of noise is merely F = H / G, where F, G, and H are the Fourier transforms of f, g, and h respectively, and then taking the inverse Fourier transform of F to find the estimated deconvolved signal f. The foundations for deconvolution and time-series analysis were laid by Norbert Wiener of the Massachusetts Institute of Technology in his book Extrapolation, Interpolation, and Smoothing of Stationary Time Series.
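The frequency-domain procedure described above can be sketched in Python with NumPy. The function name `wiener_deconvolve` and the `snr` regularization parameter are illustrative assumptions, not from the text; as snr grows, the Wiener estimate reduces to the naive inverse filter F = H / G.

```python
import numpy as np

def wiener_deconvolve(h, g, snr=100.0):
    """Deconvolve recorded signal h by kernel g in the frequency domain.

    Wiener estimate: F = H * conj(G) / (|G|^2 + 1/snr); with snr -> inf
    this becomes the naive inverse filter F = H / G, which fails wherever
    G is small (the reason inverse filtering is fragile in noise).
    """
    n = len(h)
    H = np.fft.fft(h, n)
    G = np.fft.fft(g, n)
    F = H * np.conj(G) / (np.abs(G) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(F))

# Toy signal: two spikes convolved (circularly) with a smoothing kernel g.
f = np.zeros(64); f[10] = 1.0; f[30] = 0.5
g = np.zeros(64); g[:3] = [0.25, 0.5, 0.25]               # known transfer function
h = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))   # h = f * g
f_est = wiener_deconvolve(h, g, snr=1e6)                  # recovers the two spikes
```

With negligible noise the spikes are recovered almost exactly; lowering `snr` trades bias for robustness when h is noisy.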
The book was based on work Wiener had done during World War II but that had been classified at the time. Some of the early attempts to apply these theories were in the fields of weather forecasting and economics; the concept of deconvolution had an early application in reflection seismology. In 1950, Enders Robinson was a graduate student at MIT; he worked with others at MIT, such as Norbert Wiener, Norman Levinson, and economist Paul Samuelson, to develop the "convolutional model" of a reflection seismogram. This model assumes that the recorded seismogram s(t) is the convolution of an Earth-reflectivity function e(t) and a seismic wavelet w(t) from a point source, where t represents recording time. Thus, our convolution equation is s(t) = (e ∗ w)(t), and the seismologist is interested in e. By the convolution theorem, this equation may be Fourier transformed to S(ω) = E(ω)W(ω) in the frequency domain. By assuming that the reflectivity is white, we can assume that the power spectrum of the reflectivity is constant, and that the power spectrum of the seismogram is therefore the spectrum of the wavelet multiplied by that constant.
Thus, |S(ω)| ≈ k|W(ω)|. If we assume that the wavelet is minimum phase, we can recover it by calculating the minimum phase equivalent of the power spectrum we just found; the reflectivity may then be recovered by designing and applying a Wiener filter that shapes the estimated wavelet to a Dirac delta function. The result may be seen as a series of scaled, shifted delta functions: e(t) = ∑_{i=1}^{N} r_i δ(t − τ_i), where N is the number of reflection events, τ_i are the reflection times of each event, and r_i are the reflection coefficients. In practice, since we are dealing with noisy, finite-bandwidth, finite-length, discretely sampled datasets, the above procedure only yields an approximation of the filter required to deconvolve the data. However, by formulating the problem as the solution of a Toeplitz matrix and using Levinson recursion, we can quickly estimate a filter with the smallest mean squared error possible. We can also do deconvolution directly in the frequency domain and get similar results; the technique is closely related to linear prediction.
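The Toeplitz formulation can be sketched as follows; SciPy's `solve_toeplitz` uses Levinson recursion internally, matching the approach described above. The toy wavelet, the reflectivity series, and the names `spiking_filter`, `nf`, and `eps` are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def spiking_filter(w, nf, eps=1e-3):
    """Least-squares filter of length nf shaping wavelet w toward a
    zero-lag spike: solves the Toeplitz normal equations R a = g
    (R = wavelet autocorrelation) via Levinson recursion.
    eps adds a small white-noise floor (pre-whitening) for stability."""
    r = np.correlate(w, w, 'full')[len(w) - 1:]            # lags 0, 1, 2, ...
    r = np.concatenate([r, np.zeros(max(0, nf - len(r)))])[:nf]
    r[0] *= (1 + eps)                                      # pre-whitening
    g = np.zeros(nf); g[0] = w[0]                          # desired output: delta at lag 0
    return solve_toeplitz(r, g)

# Minimum-phase toy wavelet and a sparse reflectivity series.
w = np.array([1.0, -0.5, 0.25])
e = np.zeros(50); e[5] = 1.0; e[20] = -0.6
s = np.convolve(e, w)                                      # seismogram s = e * w
a = spiking_filter(w, 30)
e_est = np.convolve(s, a)[:50]                             # approximate reflectivity
```

Because the wavelet here is minimum phase, its truncated inverse converges quickly and the filter recovers the two reflection coefficients to within the regularization bias.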
In optics and imaging, the term "deconvolution" is used to refer to the process of reversing the optical distortion that takes place in an optical microscope, electron microscope, telescope, or other imaging instrument, thus creating clearer images. It is usually done in the digital domain by a software algorithm, as part of a suite of microscope image processing techniques.
In information science, a digital artifact is any undesired or unintended alteration in data introduced in a digital process by an involved technique and/or technology. In anthropology and archeology, a digital artifact is an artifact of a digital nature or creation; for example, a GIF is such an artifact. A digital artifact can be of any content type, including text, audio, video, animation, or a combination of these. In information science, digital artifacts result from: Hardware malfunction: in computer graphics, visual artifacts may be generated whenever a hardware component (such as the processor, memory chip, or cabling) malfunctions and corrupts data. Examples of malfunctions include physical damage, insufficient voltage, and GPU overclocking. Common types of hardware artifacts are texture corruption and T-vertices in 3D graphics, and pixelization in MPEG-compressed video. Software malfunction: artifacts may be caused by algorithm flaws, such as in decoding/encoding audio or video, or by a poor pseudo-random number generator that would introduce artifacts distinguishable from the desired noise into statistical models.
Compression: controlled amounts of unwanted information may be generated as a result of the use of lossy compression techniques; one example is the compression artifacts seen in JPEG and MPEG compression algorithms. Aliasing: digital imprecision generated in the process of converting analog information into digital space, due to the limited granularity of the digital numbering space; in computer graphics, aliasing is seen as pixelation. Rolling shutter: the line scanning of an object that is moving too fast for the image sensor to capture a unitary image. Error diffusion: poorly weighted kernel coefficients result in undesirable visual artifacts.
Inpainting is the process of reconstructing lost or deteriorated parts of images and videos. In the museum world, in the case of a valuable painting, this task would be carried out by a skilled art conservator or art restorer. In the digital world, inpainting refers to the application of sophisticated algorithms to replace lost or corrupted parts of the image data. There are many applications of this technique. In photography and cinema, it is used for film restoration, for removing red-eye and the stamped date from photographs, and for removing objects to creative effect. This technique can also be used to replace lost blocks in the coding and transmission of images, for example in a streaming video, and it can be used to remove logos in videos. Deep-learning neural-network-based inpainting can be used for decensoring images. Inpainting is rooted in the restoration of images. Traditionally, inpainting has been done by professional restorers; the underlying methodology of their work is as follows: the global picture determines how to fill in the gap.
The purpose of inpainting is to restore the unity of the work. The structure of the gap's surroundings is supposed to be continued into the gap. Contour lines that arrive at the gap boundary are prolonged into the gap; the different regions inside a gap, as defined by the contour lines, are filled with colors matching those of the boundary. Finally, the small details are painted in, i.e. "texture" is added. With the widespread use of digital cameras and the digitization of old photos, inpainting has become an automatic process performed on digital images. Beyond scratch removal, inpainting techniques are also applied to object removal, text removal, and other automatic modifications of images and videos. Furthermore, they can be observed in applications like image compression and super-resolution. Three main groups of 2D image inpainting algorithms can be found in the literature: the first is structural inpainting, the second is texture inpainting, and the last is a combination of these two techniques.
All these inpainting methods have one thing in common: they use the information of the known or undestroyed image areas in order to fill the gap. Structural inpainting uses geometric approaches for filling in the missing information in the region which should be inpainted; these algorithms focus on the consistency of the geometric structure. Structural inpainting methods have both advantages and disadvantages; the main problem is that they cannot restore texture. Texture has a repetitive pattern, which means that a missing portion cannot be restored by simply continuing the level lines into the gap. Combined structural and textural inpainting approaches try to perform texture and structure filling in regions of missing image information. Most parts of an image consist of both texture and structure; the boundaries between image regions accumulate structural information, a complex phenomenon that arises when different textures blend together. That is why state-of-the-art inpainting methods attempt to combine structural and textural inpainting. A more traditional method is to use differential equations, such as Laplace's equation, with Dirichlet boundary conditions for continuity.
This works well for smooth, structure-dominated regions. Other methods follow isophote directions. Recent investigations have included exploring the properties of the wavelet transform to perform inpainting in the space-frequency domain, obtaining better performance than frequency-based inpainting techniques. Model-based inpainting follows the Bayesian approach, for which missing information is best fitted or estimated from the combination of models of the underlying images and the image data actually being observed. In deterministic language, this has led to various variational inpainting models. Manual computer methods include using a clone tool or healing tool to copy existing parts of the image to restore a damaged texture. Texture synthesis may also be used. Exemplar-based image inpainting attempts to automate the clone tool process; it fills "holes" in the image by searching for similar patches in a nearby source region of the image and copying the pixels from the most similar patch into the hole. By performing the fill at the patch level as opposed to the pixel level, the algorithm reduces blurring artifacts caused by prior techniques.
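The traditional differential-equation approach mentioned above can be illustrated with a minimal harmonic (Laplace-equation) inpainting sketch in NumPy: the known pixels act as Dirichlet boundary values and the hole is filled by Jacobi iteration. The function name, the toy ramp image, and the iteration count are illustrative assumptions.

```python
import numpy as np

def harmonic_inpaint(img, mask, iters=500):
    """Fill masked pixels by solving Laplace's equation with Dirichlet
    boundary values taken from the known pixels (Jacobi iteration).
    Each unknown pixel converges to the average of its four neighbors."""
    out = img.copy()
    out[mask] = img[~mask].mean()            # crude initial guess inside the hole
    for _ in range(iters):
        avg = 0.25 * (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
                      np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[mask] = avg[mask]                # update only the unknown pixels
    return out

# Toy example: a smooth horizontal gradient with a square hole punched out.
y, x = np.mgrid[0:32, 0:32]
img = x / 31.0
mask = np.zeros_like(img, bool); mask[12:20, 12:20] = True
damaged = img.copy(); damaged[mask] = 0.0
restored = harmonic_inpaint(damaged, mask)   # hole refilled with the ramp
```

Because a linear ramp is harmonic, the hole is recovered almost exactly here; on textured regions this method produces the over-smooth fills that motivate the textural and exemplar-based approaches above.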
Electron cryotomography (CryoET) is an imaging technique used to produce high-resolution three-dimensional views of samples of biological macromolecules and cells. CryoET is a specialized application of transmission electron cryomicroscopy in which samples are imaged as they are tilted, resulting in a series of 2D images that can be combined to produce a 3D reconstruction, similar to a CT scan of the human body. In contrast to other electron tomography techniques, samples are immobilized in non-crystalline ice and imaged under cryogenic conditions, allowing them to be imaged without dehydration or chemical fixation, which could otherwise disrupt or distort biological structures. In electron microscopy, samples are imaged in an ultra-high vacuum; such a vacuum is incompatible with biological samples such as cells. In room-temperature EM techniques, samples are therefore prepared by dehydration. Another approach to stabilize biological samples, however, is to freeze them. As in other electron cryomicroscopy techniques, samples for CryoET (typically small cells) are prepared in standard aqueous media and applied to an EM grid.
The grid is plunged into a cryogen so efficient that water molecules do not have time to rearrange into a crystalline lattice. The resulting water state is called "vitreous ice" and preserves native cellular structures, such as lipid membranes, that would otherwise be destroyed by freezing. Plunge-frozen samples are subsequently stored and imaged at liquid-nitrogen temperatures so that the water never warms enough to crystallize. Samples are imaged in a transmission electron microscope; as in other electron tomography techniques, the sample is tilted to different angles relative to the electron beam, and an image is acquired at each angle. This tilt-series of images can then be computationally reconstructed into a three-dimensional view of the object of interest; this is called tomographic reconstruction. In transmission electron microscopy, because electrons interact strongly with matter, resolution is limited by the thickness of the sample. The apparent thickness of the sample also increases as the sample is tilted, and thicker samples can completely block the electron beam, making the image dark or black.
Therefore, for CryoET, samples should be less than ~500 nm thick to achieve "macromolecular" resolution. For this reason, most CryoET studies have focused on purified macromolecular complexes, viruses, or small cells such as those of many species of Bacteria and Archaea. Larger cells, and even tissues, can be prepared for CryoET by thinning, either by cryo-sectioning or by focused ion beam (FIB) milling. In cryo-sectioning, frozen blocks of cells or tissue are sectioned into thin samples with a cryo-microtome. In FIB-milling, plunge-frozen samples are exposed to a focused beam of ions (typically gallium) that whittles away material from the top and bottom of a sample, leaving a thin lamella suitable for CryoET imaging. The strong interaction of electrons with matter also results in an anisotropic resolution effect. As the sample is tilted during imaging, the electron beam interacts with a greater cross-sectional area at higher tilt angles. In practice, tilt angles greater than 60–70° do not yield much information and are therefore not used.
This results in a "missing wedge" of information in the final tomogram that decreases resolution parallel to the electron beam. For structures that are present in multiple copies in one or multiple tomograms, higher resolution can be obtained by subtomogram averaging. Similar to single particle analysis, subtomogram averaging computationally combines images of identical objects to increase the signal-to-noise ratio. A major obstacle in CryoET is identifying structures of interest within complicated cellular environments. One solution is to apply correlated cryo-fluorescence light microscopy, or even super-resolution light microscopy, together with CryoET. In these techniques, a sample containing a fluorescently-tagged protein of interest is plunge-frozen and first imaged in a light microscope equipped with a special stage that allows the sample to be kept at sub-crystallization temperatures; the location of the fluorescent signal is identified, and the sample is transferred to the CryoTEM, where the same location is imaged at high resolution by CryoET.
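The tilt-series reconstruction described above can be sketched with a toy 2D example. This uses plain unfiltered back-projection (real CryoET pipelines typically use weighted back-projection or iterative methods), and the limited ±60° tilt range mimics the missing wedge; the phantom, angles, and function names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

def project(slice2d, angle):
    """Parallel-beam projection of a 2D slice at a given tilt angle (degrees):
    rotate the slice, then integrate along the beam direction."""
    return rotate(slice2d, angle, reshape=False, order=1).sum(axis=0)

def backproject(tilt_series, angles, size):
    """Unfiltered back-projection: smear each 1D projection across the
    image and rotate it back to its acquisition angle, then average."""
    recon = np.zeros((size, size))
    for proj, angle in zip(tilt_series, angles):
        smear = np.tile(proj, (size, 1))
        recon += rotate(smear, -angle, reshape=False, order=1)
    return recon / len(angles)

# Phantom: a single bright voxel; tilt range limited to +/-60 degrees.
size = 33
phantom = np.zeros((size, size)); phantom[16, 16] = 1.0
angles = np.arange(-60, 61, 3)
tilt_series = [project(phantom, a) for a in angles]
recon = backproject(tilt_series, angles, size)   # peak reappears at the center
```

Every back-projected line passes through the true target position, so the reconstruction peaks there; the unsampled angles beyond ±60° are exactly the missing wedge that elongates features along the beam axis.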
Machine learning is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model of sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task. Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is infeasible to develop an algorithm of specific instructions for performing the task. Machine learning is closely related to computational statistics, which focuses on making predictions using computers; the study of mathematical optimization delivers methods, theory, and application domains to the field of machine learning. Data mining is a field of study within machine learning that focuses on exploratory data analysis through unsupervised learning.
In its application across business problems, machine learning is also referred to as predictive analytics. The name machine learning was coined in 1959 by Arthur Samuel. Tom M. Mitchell provided a widely quoted, more formal definition of the algorithms studied in the machine learning field: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E." This definition of the tasks in which machine learning is concerned offers a fundamentally operational definition rather than defining the field in cognitive terms. This follows Alan Turing's proposal in his paper "Computing Machinery and Intelligence", in which the question "Can machines think?" is replaced with the question "Can machines do what we can do?". In Turing's proposal the various characteristics that could be possessed by a thinking machine, and the various implications of constructing one, are exposed. Machine learning tasks are classified into several broad categories.
In supervised learning, the algorithm builds a mathematical model from a set of data that contains both the inputs and the desired outputs. For example, if the task were determining whether an image contained a certain object, the training data for a supervised learning algorithm would include images with and without that object, and each image would have a label designating whether it contained the object. In special cases, the input may be only partially available, or restricted to special feedback. Semi-supervised learning algorithms develop mathematical models from incomplete training data, where a portion of the sample inputs lack labels. Classification algorithms and regression algorithms are types of supervised learning. Classification algorithms are used when the outputs are restricted to a limited set of values. For a classification algorithm that filters emails, the input would be an incoming email, and the output would be the name of the folder in which to file the email. For an algorithm that identifies spam emails, the output would be the prediction of either "spam" or "not spam", represented by the Boolean values true and false.
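A minimal supervised classification sketch in the spirit of the spam example above: a perceptron fit on labeled examples. The word-count features, the toy data, and the function name are made-up illustrations, not a real spam filter.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Minimal supervised learner: for each labeled example, nudge the
    weights whenever the current prediction disagrees with the label."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (yi - pred) * xi      # no update when pred == yi
            b += lr * (yi - pred)
    return w, b

# Toy "training data": features = [count of 'free', count of 'winner'].
X = np.array([[3, 2], [4, 1], [0, 0], [1, 0]], float)
y = np.array([1, 1, 0, 0])                  # 1 = spam, 0 = not spam
w, b = train_perceptron(X, y)
preds = [1 if x @ w + b > 0 else 0 for x in X]
```

Because this toy data is linearly separable, the perceptron converges to a weight vector that labels all four training emails correctly; real spam filters use far richer features and models.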
Regression algorithms are named for their continuous outputs, meaning they may have any value within a range. Examples of a continuous value are the temperature, length, or price of an object. In unsupervised learning, the algorithm builds a mathematical model from a set of data which contains only inputs and no desired output labels. Unsupervised learning algorithms are used to find structure in the data, like grouping or clustering of data points. Unsupervised learning can discover patterns in the data, and can group the inputs into categories, as in feature learning. Dimensionality reduction is the process of reducing the number of "features", or inputs, in a set of data. Active learning algorithms access the desired outputs for a limited set of inputs based on a budget, and optimize the choice of inputs for which they will acquire training labels; when used interactively, these can be presented to a human user for labeling. Reinforcement learning algorithms are given feedback in the form of positive or negative reinforcement in a dynamic environment, and are used in autonomous vehicles or in learning to play a game against a human opponent.
Other specialized algorithms in machine learning include topic modeling, where the computer program is given a set of natural language documents and finds other documents that cover similar topics. Machine learning algorithms can be used to find the unobservable probability density function in density estimation problems. Meta-learning algorithms learn their own inductive bias based on previous experience. In developmental robotics, robot learning algorithms generate their own sequences of learning experiences, known as a curriculum, to cumulatively acquire new skills through self-guided exploration and social interaction with humans; these robots use guidance mechanisms such as active learning, motor synergies, and imitation. Arthur Samuel, an American pioneer in the field of computer gaming and artificial intelligence, coined the term "machine learning" in 1959 while at IBM. As a scientific endeavour, machine learning grew out of the quest for artificial intelligence. In the early days of AI as an academic discipline, some researchers were interested in having machines learn from data.
They attempted to approach the problem with various symbolic methods, as well as what were then termed "neural networks". Probabilistic reasoning was also employed, for example in automated medical diagnosis.
In computer vision and computer graphics, 3D reconstruction is the process of capturing the shape and appearance of real objects. This process can be accomplished either by active or by passive methods. If the model is allowed to change its shape in time, this is referred to as non-rigid or spatio-temporal reconstruction. The research of 3D reconstruction has always been a difficult goal. Using 3D reconstruction one can determine any object's 3D profile, as well as the 3D coordinates of any point on the profile. The 3D reconstruction of objects is a scientific problem and core technology of a wide variety of fields, such as Computer Aided Geometric Design, computer graphics, computer animation, computer vision, medical imaging, computational science, virtual reality, digital media, etc. For instance, the lesion information of patients can be presented in 3D on the computer, which offers a new and accurate approach to diagnosis and thus has vital clinical value. Digital elevation models can be reconstructed using methods such as airborne laser altimetry or synthetic aperture radar.
Active methods, i.e. range data methods, given the depth map, reconstruct the 3D profile by a numerical approximation approach and build the object in a scenario based on the model. These methods actively interfere with the reconstructed object, either mechanically or radiometrically, using rangefinders in order to acquire the depth map, e.g. structured light, laser range finders and other active sensing techniques. A simple example of a mechanical method would use a depth gauge to measure a distance to a rotating object put on a turntable. More applicable radiometric methods emit radiance towards the object and measure its reflected part. Examples range from moving light sources, colored visible light, and time-of-flight lasers to microwaves or ultrasound. See 3D scanning for more details. Passive methods of 3D reconstruction do not interfere with the reconstructed object; the sensor is an image sensor in a camera sensitive to visible light, and the input to the method is a set of digital images or video. In this case we talk about image-based reconstruction, and the output is a 3D model.
By comparison to active methods, passive methods can be applied to a wider range of situations. Monocular cues methods use an image from one viewpoint to perform 3D reconstruction; they make use of 2D characteristics to measure 3D shape, which is why they are named Shape-From-X, where X can be silhouettes, shading, texture, etc. 3D reconstruction through monocular cues is simple and quick, as only one appropriate digital image is needed and thus only one camera is adequate. Technically, it avoids stereo correspondence, which is fairly complex. Shape-from-shading: through the analysis of the shading information in the image, by using Lambertian reflectance, the depth or normal information of the object surface is restored for reconstruction. Photometric stereo: this approach is more sophisticated than the shape-from-shading method. Images taken under different lighting conditions are used to solve for the depth information; it is worth mentioning that more than one image is required by this approach. Shape-from-texture: suppose an object with a smooth surface is covered by replicated texture units; its projection from 3D to 2D causes distortion and perspective effects.
The distortion and perspective measured in 2D images provide the hint for inversely solving for the depth or normal information of the object surface. Binocular stereo vision obtains the 3-dimensional geometric information of an object from multiple images, based on research into the human visual system; the results are presented in the form of depth maps. Images of an object acquired by two cameras at different viewing angles, or by a single camera at different times from different viewing angles, are used to restore its 3D geometric information and reconstruct its 3D profile and location; this is more direct than monocular methods such as shape-from-shading. The binocular stereo vision method requires two identical cameras with parallel optical axes observing the same object, acquiring two images from different points of view. In terms of trigonometric relations, depth information can be calculated from disparity. The binocular stereo vision method is well developed and contributes stably to favorable 3D reconstruction, leading to a better performance when compared to other 3D reconstruction methods.
However, it is computationally intensive, and it performs rather poorly when the baseline distance is large. The approach of using binocular stereo vision to acquire an object's 3D geometric information is based on visual disparity. The following picture provides a simple schematic diagram of horizontally sighted binocular stereo vision, where b is the baseline between the projective centers of the two cameras. The origin of the camera's coordinate system is at the optical center of the camera's lens as shown in the figure; the camera's image plane is actually behind the optical center of the lens, but to simplify the calculation, images are drawn in front of the optical center at the focal distance f. The u-axis and v-axis of the image's coordinate system O1uv are in the same directions as the x-axis and y-axis of the camera's coordinate system respectively. The origin of the image's coordinate system is located at the intersection of the imaging plane and the optical axis. Suppose a world point P has corresponding image points P1 and P2 on the left and right image planes.
Assume the two cameras are in the same plane, so the y-coordinates of P1 and P2 are identical, i.e. v1 = v2. By the trigonometric relations of similar triangles, the depth z of P then follows from the horizontal disparity d = u1 − u2 as z = b·f / (u1 − u2).
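The similar-triangles relation for this parallel-camera geometry, z = b·f / (u1 − u2), can be sketched directly; the function name and the numeric values are illustrative.

```python
def stereo_depth(u1, u2, b, f):
    """Depth from horizontal disparity under the parallel-axis stereo
    geometry described above: z = b * f / (u1 - u2).
    b = baseline between camera centers, f = focal length (in pixels
    when u1, u2 are pixel coordinates)."""
    d = u1 - u2                  # disparity; larger disparity -> closer point
    return b * f / d

# A point imaged at u1 = 40 px (left) and u2 = 30 px (right),
# baseline b = 0.1 m, focal length f = 500 px.
z = stereo_depth(40.0, 30.0, 0.1, 500.0)
```

Note the inverse relationship: as the disparity shrinks toward zero, the estimated depth diverges, which is why stereo depth accuracy degrades for distant points (and, as noted above, why a large baseline helps but also complicates the correspondence problem).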
Synthetic-aperture radar (SAR) is a form of radar that is used to create two-dimensional images or three-dimensional reconstructions of objects, such as landscapes. SAR uses the motion of the radar antenna over a target region to provide finer spatial resolution than conventional beam-scanning radars. SAR is typically mounted on a moving platform, such as an aircraft or spacecraft, and has its origins in an advanced form of side-looking airborne radar. The distance the SAR device travels over a target in the time taken for the radar pulses to return to the antenna creates the large synthetic antenna aperture. The larger the aperture, the higher the image resolution will be, regardless of whether the aperture is physical or synthetic; this allows SAR to create high-resolution images with comparatively small physical antennas. To create a SAR image, successive pulses of radio waves are transmitted to "illuminate" a target scene, and the echo of each pulse is received and recorded. The pulses are transmitted and the echoes received using a single beam-forming antenna, with wavelengths of a meter down to several millimeters.
As the SAR device on board the aircraft or spacecraft moves, the antenna location relative to the target changes with time. Signal processing of the successive recorded radar echoes allows the combining of the recordings from these multiple antenna positions; this process forms the synthetic antenna aperture and allows the creation of higher-resolution images than would otherwise be possible with a given physical antenna. As of 2010, airborne systems provide resolutions of about 10 cm, ultra-wideband systems provide resolutions of a few millimeters, and experimental terahertz SAR has provided sub-millimeter resolution in the laboratory. SAR is capable of high-resolution remote sensing, independent of flight altitude and independent of weather, as SAR can select frequencies to avoid weather-caused signal attenuation. SAR has day-and-night imaging capability, as the illumination is provided by the SAR itself. SAR images have wide application in remote sensing and mapping of the surfaces of the Earth and other planets.
Applications of SAR include topography, glaciology, and forestry, including forest height and deforestation. Volcano and earthquake monitoring use differential interferometry. SAR is also useful in environment monitoring, such as of oil spills, urban growth, and global change, and in military surveillance, including strategic policy and tactical assessment. SAR can be implemented as inverse SAR by observing a moving target over a substantial time with a stationary antenna. A synthetic-aperture radar is an imaging radar mounted on a moving platform. Electromagnetic waves are transmitted sequentially, the echoes are collected, and the system electronics digitizes and stores the data for subsequent processing; as transmission and reception occur at different times, they map to different positions. The well-ordered combination of the received signals builds a virtual aperture that is much longer than the physical antenna width; that is the source of the term "synthetic aperture", giving it the property of an imaging radar. The range direction is perpendicular to the flight track and to the azimuth direction, which is also known as the along-track direction because it is in line with the position of the object within the antenna's field of view.
The 3D processing is done in two stages. First, the azimuth and range directions are focused for the generation of 2D high-resolution images; then a digital elevation model is used to measure the phase differences between complex images, determined from different look angles, to recover the height information. This height information, along with the azimuth-range coordinates provided by 2D SAR focusing, gives the third dimension, the elevation. The first step requires only standard processing algorithms; for the second step, additional pre-processing such as image co-registration and phase calibration is used. In addition, multiple baselines can be used to extend 3D imaging to the time dimension. 4D and multi-D SAR imaging allows imaging of complex scenarios, such as urban areas, and has improved performance with respect to classical interferometric techniques such as persistent scatterer interferometry. The SAR algorithm, as given here, applies to phased arrays. A three-dimensional array of scene elements is defined, which will represent the volume of space within which targets exist.
Each element of the array is a cubical voxel representing the probability of a reflective surface being at that location in space. Initially, the SAR algorithm gives each voxel a density of zero. Then, for each captured waveform, the entire volume is iterated over. For a given waveform and voxel, the distance from the position represented by that voxel to the antenna used to capture that waveform is calculated; that distance represents a time delay into the waveform. The sample value at that position in the waveform is then added to the voxel's density value; this represents a possible echo from a target at that position. Note that there are several optional approaches here, depending on the precision of the waveform timing, among other things. For example, if phase cannot be accurately determined, only the envelope magnitude of the waveform sample might be added to the voxel. If waveform polarization and phase are known and are accurate enough, these values might be added to a more complex voxel that holds such measurements separately. After all waveforms have been iterated over all voxels, the basic SAR processing is complete.
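The voxel iteration described above can be sketched as follows for a synthetic point target, using only echo magnitudes (the no-phase variant mentioned in the text). The propagation speed, sample rate, geometry, and function name are made-up illustrative values.

```python
import numpy as np

c = 3e8                                   # propagation speed (m/s)
fs = 1e9                                  # waveform sample rate (Hz), illustrative

def backproject_sar(waveforms, antenna_positions, voxels):
    """Time-domain back-projection sketch of the voxel algorithm above.
    For each waveform, the two-way delay from each voxel to the capturing
    antenna indexes into the waveform, and that sample is accumulated
    into the voxel's density."""
    density = np.zeros(len(voxels))
    for wf, pos in zip(waveforms, antenna_positions):
        dist = np.linalg.norm(voxels - pos, axis=1)
        idx = np.round(2 * dist / c * fs).astype(int)   # two-way travel time -> sample index
        valid = idx < len(wf)
        density[valid] += wf[idx[valid]]                # envelope magnitude only, no phase
    return density

# Synthetic point target at (0, 0, 10) m; the antenna moves along x.
target = np.array([0.0, 0.0, 10.0])
voxels = np.array([[x, 0.0, 10.0] for x in (-2.0, -1.0, 0.0, 1.0, 2.0)])
antennas = [np.array([x, 0.0, 0.0]) for x in (-5.0, 0.0, 5.0)]
waveforms = []
for pos in antennas:
    wf = np.zeros(400)                                  # idealized echo: a single unit spike
    wf[int(round(2 * np.linalg.norm(target - pos) / c * fs))] = 1.0
    waveforms.append(wf)
density = backproject_sar(waveforms, antennas, voxels)  # peaks at the target voxel
```

Only the voxel at the true target position is consistent with the delay in every waveform, so its density accumulates across all antenna positions while the neighbors collect at most stray single hits; this coherent accumulation across positions is exactly what forms the synthetic aperture.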