A fractal landscape is a surface generated using a stochastic algorithm designed to produce fractal behaviour that mimics the appearance of natural terrain. In other words, the result of the procedure is not a deterministic fractal surface, but rather a random surface that exhibits fractal behaviour. Many natural phenomena exhibit some form of statistical self-similarity that can be modeled by fractal surfaces. Moreover, variations in surface texture provide important visual cues to the orientation and slopes of surfaces, and the use of self-similar fractal patterns can help create natural-looking visual effects. The modeling of the Earth's rough surfaces via fractional Brownian motion was first proposed by Benoît Mandelbrot. Because the intended result of the process is a landscape rather than a mathematical function, processes that may affect the stationarity or the overall fractal behaviour of the surface are applied to such landscapes in the interests of producing a more convincing result.
According to R. R. Shearer, the generation of natural-looking surfaces and landscapes was a major turning point in art history, where the distinction between geometric, computer-generated images and natural, man-made art became blurred. The first use of a fractal-generated landscape in a film was in 1982, for the movie Star Trek II: The Wrath of Khan, in which Loren Carpenter refined the techniques of Mandelbrot to create an alien landscape. Whether or not natural landscapes behave in a fractal manner has been the subject of some research. Technically speaking, any surface in three-dimensional space has a topological dimension of 2, and therefore any fractal surface in three-dimensional space has a Hausdorff dimension between 2 and 3. Real landscapes, however, have varying behaviour at different scales; this means that an attempt to calculate the 'overall' fractal dimension of a real landscape can result in measures of negative fractal dimension, or of fractal dimension above 3. In particular, many studies of natural phenomena, even those thought to exhibit fractal behaviour, do not do so over more than a few orders of magnitude.
For instance, Richardson's examination of the western coastline of Britain showed fractal behaviour of the coastline over only two orders of magnitude. In general, there is no reason to suppose that the geological processes that shape terrain on large scales exhibit the same mathematical behaviour as those that shape terrain on smaller scales. Real landscapes also have varying statistical behaviour from place to place, so for example sandy beaches do not exhibit the same fractal properties as mountain ranges. A fractal function, however, is statistically stationary, meaning that its bulk statistical properties are the same everywhere. Thus, any real approach to modeling landscapes requires the ability to modulate fractal behaviour spatially. Additionally, real landscapes have very few natural minima, whereas a fractal function has, on average, as many minima as maxima. Real landscapes also have features originating with the flow of water and ice over their surface, which simple fractals cannot model. It is because of these considerations that simple fractal functions are often inappropriate for modeling landscapes.
More sophisticated techniques use different fractal dimensions at different scales, and thus can better model the frequency-spectrum behaviour of real landscapes. One way to make such a landscape is to employ the random midpoint displacement algorithm, in which a square is subdivided into four smaller equal squares and the centre point is vertically offset by some random amount. The process is repeated on the four new squares, and so on, until the desired level of detail is reached. Because there are many fractal procedures capable of creating terrain data, the term "fractal landscape" has become more generic. Fractal plants can likewise be procedurally generated using L-systems in computer-generated scenes.

See also: Brownian surface; Bryce; Diamond-square algorithm; Grome; Outerra; Terragen; Octree; Quadtree.

References:
Lewis, J. P. "Is the Fractal Model Appropriate for Terrain?".
Richardson, L. F. "The Problem of Continuity". General Systems Yearbook. 6: 139–187.
Van Lawick van Pabst, Joost. "Dynamic Terrain Generation Based on Multifractal Techniques". Archived from the original on 2011-07-24.
Musgrave, Ken. "Methods for Realistic Landscape Imaging".
Perlin, Ken. A Web-Wide World, 1998.
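The random midpoint displacement scheme described above can be sketched in a few lines. The following is a minimal, illustrative implementation of the closely related diamond-square variant; the function name and parameters are hypothetical, and the displacement amplitude is simply halved at each subdivision level to approximate fractal scaling:

```python
import random

def diamond_square(n, roughness=1.0, seed=0):
    """Generate a (2**n + 1) x (2**n + 1) heightmap by midpoint displacement."""
    random.seed(seed)
    size = 2 ** n + 1
    h = [[0.0] * size for _ in range(size)]
    step, scale = size - 1, roughness
    while step > 1:
        half = step // 2
        # Diamond step: the centre of each square gets the corner average plus noise.
        for y in range(half, size, step):
            for x in range(half, size, step):
                avg = (h[y - half][x - half] + h[y - half][x + half] +
                       h[y + half][x - half] + h[y + half][x + half]) / 4.0
                h[y][x] = avg + random.uniform(-scale, scale)
        # Square step: edge midpoints get the average of their lattice neighbours.
        for y in range(0, size, half):
            for x in range((y + half) % step, size, step):
                neighbours = [h[y + dy][x + dx]
                              for dy, dx in ((-half, 0), (half, 0), (0, -half), (0, half))
                              if 0 <= y + dy < size and 0 <= x + dx < size]
                h[y][x] = sum(neighbours) / len(neighbours) + random.uniform(-scale, scale)
        step, scale = half, scale / 2.0  # recurse on the smaller squares with less noise
    return h

grid = diamond_square(3)  # a 9 x 9 heightmap
```

Halving the noise amplitude at each level corresponds to a fixed fractal dimension; the more sophisticated multi-scale techniques mentioned above would instead vary this decay rate with scale.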
Anisotropy is the property of being directionally dependent, which implies different properties in different directions, as opposed to isotropy. It can be defined as a difference, when measured along different axes, in a material's physical or mechanical properties. An example of anisotropy is light coming through a polarizer. Another is wood, which is easier to split along its grain than across it. In the field of computer graphics, an anisotropic surface changes in appearance as it rotates about its geometric normal, as is the case with velvet. Anisotropic filtering is a method of enhancing the image quality of textures on surfaces that are far away and steeply angled with respect to the point of view. Older techniques, such as bilinear and trilinear filtering, do not take into account the angle from which a surface is viewed, which can result in aliasing or blurring of textures; by reducing detail in one direction more than another, these effects can be reduced. A chemical anisotropic filter, as used to filter particles, is a filter with smaller interstitial spaces in the direction of filtration, so that the proximal regions filter out larger particles and the distal regions remove smaller particles, resulting in greater flow-through and more efficient filtration.
In NMR spectroscopy, the orientation of nuclei with respect to the applied magnetic field determines their chemical shift. In this context, anisotropic systems refer to the electron distribution of molecules with abnormally high electron density, like the pi system of benzene; this abnormal electron density affects the applied magnetic field and causes the observed chemical shift to change. In fluorescence spectroscopy, the fluorescence anisotropy, calculated from the polarization properties of fluorescence from samples excited with plane-polarized light, is used, e.g., to determine the shape of a macromolecule. Anisotropy measurements reveal the average angular displacement of the fluorophore that occurs between absorption and subsequent emission of a photon. Images of a gravity-bound or man-made environment are anisotropic in the orientation domain, with more image structure located at orientations parallel with or orthogonal to the direction of gravity. Physicists from the University of California, Berkeley reported their detection of cosine anisotropy in the cosmic microwave background radiation in 1977.
Their experiment demonstrated the Doppler shift caused by the movement of the Earth with respect to the early-Universe matter that is the source of the radiation. Cosmic anisotropy has also been seen in the alignment of galaxies' rotation axes and in the polarisation angles of quasars. Physicists use the term anisotropy to describe direction-dependent properties of materials. Magnetic anisotropy, for example, may occur in a plasma, so that its magnetic field is oriented in a preferred direction. Plasmas may also show "filamentation", which is directional. An anisotropic liquid has the fluidity of a normal liquid but an average structural ordering of its molecules along the molecular axis, unlike water or chloroform, which contain no structural ordering of the molecules; liquid crystals are examples of anisotropic liquids. Some materials conduct heat in a way that is isotropic, that is, independent of spatial orientation around the heat source. Heat conduction is more commonly anisotropic, which implies that detailed geometric modeling of the diverse materials being thermally managed is required.
The materials used to transfer and reject heat from the heat source in electronics are often anisotropic. Many crystals are anisotropic to light and exhibit properties such as birefringence; crystal optics describes light propagation in these media. An "axis of anisotropy" is defined as the axis along which isotropy is broken; some materials can have multiple such optical axes. Seismic anisotropy is the variation of seismic wavespeed with direction. It is an indicator of long-range order in a material, where features smaller than the seismic wavelength have a dominant alignment; this alignment leads to a directional variation of elastic wavespeed. Measuring the effects of anisotropy in seismic data can provide important information about processes and mineralogy in the Earth. Geological formations with distinct layers of sedimentary material can exhibit electrical anisotropy; this property is used in the gas and oil exploration industry to identify hydrocarbon-bearing sands in sequences of sand and shale. Hydrocarbon-bearing sands have high resistivity.
Formation evaluation instruments measure this conductivity/resistivity, and the results are used to help find oil and gas in wells. The hydraulic conductivity of aquifers is anisotropic for the same reason; when calculating groundwater flow to drains or to wells, the difference between horizontal and vertical permeability must be taken into account, otherwise the results may be subject to error. Most common rock-forming minerals are anisotropic, including feldspar. Anisotropy in minerals is most reliably seen in their optical properties; an example of an isotropic mineral is garnet. Anisotropy is also a well-known property in medical ultrasound imaging, describing the different resulting echogenicity of soft tissues such as tendons, whose echogenicity changes with the angle of the insonating beam.
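The horizontal/vertical contrast in layered aquifers can be made concrete with a standard result from groundwater hydraulics: flow parallel to the bedding sees the thickness-weighted arithmetic mean of the layer conductivities, while flow across the bedding sees the harmonic mean, which is dominated by the least permeable layer. A minimal sketch (the function name and the sand/clay values are hypothetical):

```python
def effective_conductivities(layers):
    """layers: list of (thickness_m, K_m_per_day) for horizontal beds.
    Returns (K_horizontal, K_vertical) of the equivalent homogeneous medium."""
    total = sum(b for b, _ in layers)
    k_h = sum(b * k for b, k in layers) / total   # arithmetic mean (flow along beds)
    k_v = total / sum(b / k for b, k in layers)   # harmonic mean (flow across beds)
    return k_h, k_v

# Hypothetical sand/clay/sand sequence: a thin clay layer makes K_h/K_v >> 1.
kh, kv = effective_conductivities([(10.0, 20.0), (2.0, 0.01), (8.0, 15.0)])
```

Even a two-metre clay bed drives the vertical conductivity down by two orders of magnitude while barely affecting the horizontal value, which is why neglecting anisotropy biases drain and well calculations.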
Bacterial growth is the asexual reproduction, or cell division, of a bacterium into two daughter cells, in a process called binary fission. Provided no mutational event occurs, the resulting daughter cells are genetically identical to the original cell; hence, bacterial growth occurs. Both daughter cells from the division do not necessarily survive, but if the number surviving exceeds unity on average, the bacterial population undergoes exponential growth. The measurement of an exponential bacterial growth curve in batch culture was traditionally a part of the training of all microbiologists, and models reconcile theory with the measurements. In autecological studies, the growth of bacteria in batch culture can be modeled with four different phases: lag phase, log phase (or exponential phase), stationary phase, and death phase. During the lag phase, bacteria adapt themselves to growth conditions; it is the period during which the individual bacteria are not yet able to divide. During the lag phase of the bacterial growth cycle, synthesis of RNA, enzymes and other molecules occurs.
During the lag phase cells change very little because they do not immediately reproduce in a new medium. This period of little to no cell division can last from one hour to several days, and during this phase cells are not dormant. The log phase is a period characterized by cell doubling: the number of new bacteria appearing per unit time is proportional to the present population. If growth is not limited, doubling will continue at a constant rate, so both the number of cells and the rate of population increase double with each consecutive time period. For this type of exponential growth, plotting the natural logarithm of cell number against time produces a straight line; the slope of this line is the specific growth rate of the organism, a measure of the number of divisions per cell per unit time. The actual rate of this growth depends upon the growth conditions, which affect the frequency of cell-division events and the probability of both daughter cells surviving. Under controlled conditions, cyanobacteria can double their population four times a day.
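The relationship between the straight-line slope and the specific growth rate described above can be sketched directly; the function names and cell counts below are illustrative:

```python
import math

def specific_growth_rate(t0, n0, t1, n1):
    """Slope of ln(cell count) versus time over an interval of exponential growth."""
    return (math.log(n1) - math.log(n0)) / (t1 - t0)

def doubling_time(mu):
    """Time for the population to double at specific growth rate mu."""
    return math.log(2) / mu

# Illustrative counts: a culture grows from 1e5 to 8e5 cells/mL in 3 hours,
# i.e. an 8-fold increase = 3 doublings.
mu = specific_growth_rate(0.0, 1e5, 3.0, 8e5)   # per-hour specific growth rate
td = doubling_time(mu)                           # doubling time: 1.0 hour
```

In practice one would fit the slope to many log-phase measurements rather than just two endpoints, but the arithmetic is the same.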
Exponential growth cannot continue indefinitely, because the medium is soon depleted of nutrients and enriched with wastes. The stationary phase is due to a growth-limiting factor such as the depletion of an essential nutrient and/or the formation of an inhibitory product such as an organic acid. Stationary phase results from a situation in which growth rate and death rate are equal: the number of new cells created is limited by the growth factor, and as a result the rate of cell growth matches the rate of cell death. The result is a “smooth,” horizontal linear part of the curve during the stationary phase. Mutations can occur during stationary phase; Bridges et al. presented evidence that DNA damage is responsible for many of the mutations arising in the genomes of stationary-phase or starving bacteria, and endogenously generated reactive oxygen species appear to be a major source of such damage. In the death phase, bacteria die; this could be caused by lack of nutrients, environmental temperature above or below the tolerance band for the species, or other injurious conditions. This basic batch-culture growth model draws out and emphasizes aspects of bacterial growth which may differ from the growth of macrofauna.
It emphasizes clonality, asexual binary division, the short development time relative to replication itself, the low death rate, the need to move from a dormant state to a reproductive state or to condition the media, and the tendency of lab-adapted strains to exhaust their nutrients. In reality, even in batch culture, the four phases are not well defined: the cells do not reproduce in synchrony without explicit and continual prompting, and their exponential-phase growth is not at a constant rate but rather a slowly decaying rate, a constant stochastic response to pressures both to reproduce and to go dormant in the face of declining nutrient concentrations and increasing waste concentrations. Near the end of the logarithmic phase of a batch culture, competence for natural genetic transformation may be induced, as in Bacillus subtilis and other bacteria. Natural genetic transformation is a form of DNA transfer that appears to be an adaptation for repairing DNA damage. Batch culture is the most common laboratory growth method in which bacterial growth is studied, but it is only one of many.
It is temporally structured: the bacterial culture is incubated in a closed vessel with a single batch of medium. In some experimental regimes, some of the bacterial culture is periodically removed and added to fresh sterile medium. In the extreme case, this leads to the continual renewal of the nutrients; this is a chemostat, also known as continuous culture. A chemostat is ideally spatially and temporally unstructured, in a steady state defined by the rates of nutrient supply and bacterial growth. In comparison to batch culture, bacteria are maintained in exponential growth phase, and the growth rate of the bacteria is known. Related devices include auxostats. When Escherichia coli is growing very slowly, with a doubling time of 16 hours, in a chemostat, most cells have only a single copy of their chromosome.
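A defining property of the chemostat steady state described above is that the specific growth rate of the culture equals the dilution rate D = F/V (flow rate of fresh medium over vessel volume), which is how the growth rate comes to be known and controlled by the experimenter. A minimal sketch, with hypothetical numbers:

```python
import math

def dilution_rate(flow_rate_l_per_h, volume_l):
    """D = F / V; at steady state the culture's specific growth rate mu equals D."""
    return flow_rate_l_per_h / volume_l

def doubling_time_h(mu):
    """Population doubling time implied by specific growth rate mu (per hour)."""
    return math.log(2) / mu

# Hypothetical chemostat: 0.5 L/h of fresh medium flowing through a 1 L vessel.
D = dilution_rate(0.5, 1.0)   # h^-1; the culture is forced to grow at mu = 0.5/h
td = doubling_time_h(D)       # steady-state doubling time, about 1.39 h
```

Turning the pump up or down thus sets the growth rate directly, up to the washout limit where D exceeds the organism's maximum growth rate.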
Physics is the natural science that studies matter, its motion and behavior through space and time, and the related entities of energy and force. Physics is one of the most fundamental scientific disciplines, and its main goal is to understand how the universe behaves. It is one of the oldest academic disciplines and, through its inclusion of astronomy, perhaps the oldest. Over much of the past two millennia, physics, chemistry and certain branches of mathematics were a part of natural philosophy, but during the scientific revolution in the 17th century these natural sciences emerged as unique research endeavors in their own right. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the fundamental mechanisms studied by other sciences and suggest new avenues of research in academic disciplines such as mathematics and philosophy. Advances in physics enable advances in new technologies.
For example, advances in the understanding of electromagnetism and nuclear physics led directly to the development of new products that have transformed modern-day society, such as television, domestic appliances and nuclear weapons. Astronomy is one of the oldest natural sciences. Early civilizations dating back to before 3000 BCE, such as the Sumerians, the ancient Egyptians and the Indus Valley Civilization, had predictive knowledge and a basic understanding of the motions of the Sun and stars; the stars and planets were worshipped, believed to represent gods. While the explanations for the observed positions of the stars were unscientific and lacking in evidence, these early observations laid the foundation for astronomy, as the stars were found to traverse great circles across the sky, which however did not explain the positions of the planets. According to Asger Aaboe, the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy.
Egyptian astronomers left monuments showing knowledge of the constellations and the motions of the celestial bodies, while the Greek poet Homer wrote of various celestial objects in his Iliad and Odyssey. Natural philosophy has its origins in Greece during the Archaic period, when pre-Socratic philosophers like Thales rejected non-naturalistic explanations for natural phenomena and proclaimed that every event had a natural cause. They proposed ideas verified by reason and observation, and many of their hypotheses proved successful in experiment. The Western Roman Empire fell in the fifth century, and this resulted in a decline in intellectual pursuits in the western part of Europe. By contrast, the Eastern Roman Empire resisted the attacks from the barbarians and continued to advance various fields of learning, including physics. In the sixth century, Isidore of Miletus created an important compilation of Archimedes' works that were copied in the Archimedes Palimpsest. In sixth-century Europe, John Philoponus, a Byzantine scholar, questioned Aristotle's teaching of physics, noting its flaws.
He introduced the theory of impetus. Aristotle's physics was not scrutinized until Philoponus appeared; unlike Aristotle, who based his physics on verbal argument, Philoponus relied on observation. On Aristotle's physics John Philoponus wrote: “But this is erroneous, our view may be corroborated by actual observation more than by any sort of verbal argument. For if you let fall from the same height two weights of which one is many times as heavy as the other, you will see that the ratio of the times required for the motion does not depend on the ratio of the weights, but that the difference in time is a small one, and so, if the difference in the weights is not considerable, that is, if one is, let us say, double the other, there will be no difference, or else an imperceptible difference, in time, though the difference in weight is by no means negligible, with one body weighing twice as much as the other.” John Philoponus' criticism of Aristotelian principles of physics served as an inspiration for Galileo Galilei ten centuries later, during the Scientific Revolution.
Galileo cited Philoponus in his works when arguing that Aristotelian physics was flawed. In the 1300s Jean Buridan, a teacher in the faculty of arts at the University of Paris, developed the concept of impetus; it was a step toward the modern idea of momentum. Islamic scholarship inherited Aristotelian physics from the Greeks and during the Islamic Golden Age developed it further, placing emphasis on observation and a priori reasoning and developing early forms of the scientific method. The most notable innovations were in the field of optics and vision, which came from the works of many scientists like Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi and Avicenna. The most notable work was The Book of Optics, written by Ibn al-Haytham, in which he conclusively disproved the ancient Greek idea about vision and came up with a new theory. In the book, he presented a study of the phenomenon of the camera obscura (his thousand-year-old version of the pinhole camera).
Diffusion-limited aggregation (DLA) is the process whereby particles undergoing a random walk due to Brownian motion cluster together to form aggregates of such particles. This theory, proposed by T. A. Witten Jr. and L. M. Sander in 1981, is applicable to aggregation in any system where diffusion is the primary means of transport. DLA can be observed in many systems, such as electrodeposition, Hele-Shaw flow, mineral deposits and dielectric breakdown. The clusters formed in DLA processes are referred to as Brownian trees, and are an example of a fractal. In 2D these fractals exhibit a dimension of 1.71 for free particles that are unrestricted by a lattice; however, computer simulation of DLA on a lattice will change the fractal dimension for a DLA in the same embedding dimension. Some variations are also observed depending on the geometry of the growth, whether it be from a single point radially outward or from a plane or line, for example. Two examples of aggregates generated using a microcomputer by allowing random walkers to adhere to an aggregate are shown on the right.
Computer simulation of DLA is one of the primary means of studying this model, and several methods are available to accomplish it. Simulations can be done on a lattice of any desired geometry and embedding dimension, or the simulation can be done more along the lines of a standard molecular-dynamics simulation, where a particle is allowed to random walk until it gets within a certain critical range, whereupon it is pulled onto the cluster. Of critical importance is that the number of particles undergoing Brownian motion in the system be kept low, so that only the diffusive nature of the system is present. The intricate and organic forms that can be generated with diffusion-limited aggregation algorithms have been explored by artists. Simutils, part of the toxiclibs open-source library for the Java programming language developed by Karsten Schmidt, allows users to apply the DLA process to pre-defined guidelines or curves in the simulation space and, via various other parameters, dynamically direct the growth of 3D forms.
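An on-lattice simulation of the kind described above can be sketched in a few dozen lines. The following is a minimal, illustrative version: walkers are released just outside the cluster and stick on first contact; the launch and kill radii are arbitrary choices, and real studies would use far more particles:

```python
import random

def dla(n_particles, seed=1):
    """On-lattice diffusion-limited aggregation: random walkers are released
    just outside the cluster (seeded at the origin) and stick on first contact."""
    random.seed(seed)
    cluster = {(0, 0)}
    r_max = 0  # distance of the farthest stuck particle from the origin
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in range(n_particles):
        launch = r_max + 3          # start a little outside the current cluster
        x, y = launch, 0
        while True:
            dx, dy = random.choice(steps)
            x, y = x + dx, y + dy
            if x * x + y * y > (launch + 10) ** 2:  # strayed too far: relaunch
                x, y = launch, 0
            # stick as soon as the walker is adjacent to any cluster site
            if any((x + sx, y + sy) in cluster for sx, sy in steps):
                cluster.add((x, y))
                r_max = max(r_max, int((x * x + y * y) ** 0.5))
                break
    return cluster

cluster = dla(30)   # a small Brownian tree of 31 occupied lattice sites
```

Relaunching strayed walkers rather than tracking them forever is the standard trick that keeps only one diffusing particle in play at a time, matching the low-concentration condition noted above.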