A fractal antenna is an antenna that uses a fractal, self-similar design to maximize the effective length, or increase the perimeter, of material that can receive or transmit electromagnetic radiation within a given total surface area or volume. Such antennas are also referred to as multilevel or space-filling-curve antennas, but the key aspect is the repetition of a motif over two or more scale sizes, or "iterations". For this reason fractal antennas are compact and multiband or wideband, with useful applications in cellular telephone and microwave communications. A fractal antenna's response differs markedly from that of traditional antenna designs in that it can operate with good-to-excellent performance at many different frequencies simultaneously. Standard antennas have to be "cut" for the frequency at which they are to be used, and thus work well only at that frequency; this makes the fractal antenna an excellent choice for multiband applications. In addition, the fractal nature of the antenna shrinks its size without the use of any added components such as inductors or capacitors.
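As a hedged illustration (not tied to any specific commercial design), the perimeter growth that motivates fractal antennas can be seen in the Koch curve, a motif often cited in fractal-antenna literature: each iteration replaces every segment with four segments one third as long, so the total length grows by a factor of 4/3 per iteration while the curve stays within the same footprint.

```python
import math

def koch_iterate(points):
    """One Koch iteration: replace each segment with four segments,
    each one third as long, adding an outward equilateral bump."""
    new_pts = [points[0]]
    c, s = math.cos(math.pi / 3), math.sin(math.pi / 3)
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = (x1 - x0) / 3.0, (y1 - y0) / 3.0
        a = (x0 + dx, y0 + dy)            # one-third point
        b = (x0 + 2 * dx, y0 + 2 * dy)    # two-thirds point
        apex = (a[0] + dx * c - dy * s,   # middle third rotated +60 degrees
                a[1] + dx * s + dy * c)
        new_pts.extend([a, apex, b, (x1, y1)])
    return new_pts

def curve_length(points):
    return sum(math.hypot(x1 - x0, y1 - y0)
               for (x0, y0), (x1, y1) in zip(points, points[1:]))

curve = [(0.0, 0.0), (1.0, 0.0)]
for _ in range(4):
    curve = koch_iterate(curve)
# Length grows as (4/3)^n while the endpoints stay 1 unit apart:
print(curve_length(curve))  # (4/3)**4 ≈ 3.1605
```

The electrical length keeps increasing with each iteration even though the antenna's physical span does not, which is the geometric effect the article describes.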
The first fractal "antennas" were, in fact, fractal "arrays": fractal arrangements of antenna elements that were not recognized at the time as having self-similarity as their attribute. Log-periodic antennas, arrow-head-shaped arrays that have been around since the 1950s and are a common form used in TV antennas, are one example. Antenna elements made from self-similar shapes were first created by Nathan Cohen, a professor at Boston University, starting in 1988. Cohen's efforts with a variety of fractal antenna designs were first published in 1995, marking the inaugural scientific publication on fractal antennas. Most varieties of fractal antennas are so-called "fractal element antennas". Many of these use the fractal structure as a virtual combination of capacitors and inductors, giving the antenna many different resonances that can be chosen and adjusted through the choice of fractal design. This complexity arises because the current on the structure has a complex arrangement caused by the inductance and self-capacitance.
In general, although their effective electrical length is longer, fractal element antennas are themselves physically smaller, again due to this reactive loading. Thus fractal element antennas are shrunken compared to conventional designs and need no additional components, assuming the structure happens to have the desired resonant input impedance. In general, the fractal dimension of a fractal antenna is a poor predictor of its performance and application; not all fractal antennas work well for a given application or set of applications. Computer search methods and antenna simulations are used to identify which fractal antenna designs best meet the needs of the application. Although the first validation of the technology was published as early as 1995, recent independent studies show advantages of fractal element technology in real-life applications, such as RFID and cell phones. One researcher, Steven Best, has stated to the contrary that fractals do not perform any better than "meandering line" antennas: "Differing antenna geometries, fractal or otherwise, do not, in a manner different than other geometries, uniquely determine the EM behavior of the antenna."
However, in the last few years dozens of studies have shown superior performance with fractals, and the work on frequency invariance cited below demonstrates that geometry is a key aspect in uniquely determining the EM behavior of frequency-independent antennas. A different and useful attribute of some fractal element antennas is their self-scaling aspect. In 1957, V. H. Rumsey presented results showing that angle-defined scaling was one of the underlying requirements for making antennas "invariant" at a number, or range, of frequencies. Work by Y. Mushiake in Japan, starting in 1948, demonstrated similar results for frequency-independent antennas having self-complementarity. It was believed that antennas had to be defined by angles for this to be true, but in 1999 it was discovered that self-similarity was one of the underlying requirements for making antennas frequency- and bandwidth-invariant. In other words, the self-similar aspect, along with origin symmetry, was the underlying requirement for frequency "independence".
Angle-defined antennas are self-similar, but other self-similar antennas are frequency-independent without being angle-defined. This analysis, based on Maxwell's equations, showed that fractal antennas offer a closed-form and unique insight into a key aspect of electromagnetic phenomena, namely the invariance property of Maxwell's equations; this is now known as the HCR Principle. Mushiake's earlier work on self-complementarity was shown to be limited to impedance smoothness, as expected from Babinet's principle, but not frequency invariance. In addition to their use as antennas, fractals have found application in other antenna-system components, including loads and ground planes. Confusion on the part of those who claim "grain of rice"-sized fractal antennas arises because such fractal structures serve as loads and counterpoises rather than bona fide antennas. Fractal inductors and fractal tuned circuits were discovered and invented along with fractal element antennas. An emerging example of such use is in metamaterials.
A recent invention demonstrates using close-packed fractal resonators to make the first wideband metamaterial invisibility cloak at microwave frequencies.
In mathematics, the Menger sponge is a fractal curve. It is a three-dimensional generalization of the one-dimensional Cantor set and the two-dimensional Sierpinski carpet. It was first described by Karl Menger in 1926, in his studies of the concept of topological dimension. The construction of a Menger sponge can be described as follows: begin with a cube; divide every face of the cube into 9 squares, like a Rubik's Cube, which sub-divides the cube into 27 smaller cubes; remove the smaller cube in the middle of each face and the smaller cube in the center of the larger cube, leaving 20 smaller cubes (this is a level-1 Menger sponge); then repeat the subdivision and removal for each of the remaining smaller cubes, and continue to iterate ad infinitum. The second iteration gives a level-2 sponge, the third a level-3 sponge, and so on. The nth stage of the Menger sponge, Mn, is made up of 20^n smaller cubes, each with side length (1/3)^n. The total volume of Mn is thus (20/27)^n, and the total surface area of Mn is given by the expression 2(20/9)^n + 4(8/9)^n.
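A short sketch tabulates the standard level-n quantities of the construction: 20^n cubes of side (1/3)^n, volume (20/27)^n, and surface area 2(20/9)^n + 4(8/9)^n, which grows without bound even as the volume tends to zero.

```python
import math

def menger_stats(n):
    """Cube count, side length, volume, and surface area of the
    level-n Menger sponge, per the closed-form expressions."""
    cubes = 20 ** n
    side = (1.0 / 3.0) ** n
    volume = (20.0 / 27.0) ** n                       # = cubes * side**3
    surface = 2 * (20.0 / 9.0) ** n + 4 * (8.0 / 9.0) ** n
    return cubes, side, volume, surface

for n in range(5):
    cubes, side, volume, surface = menger_stats(n)
    print(f"level {n}: {cubes} cubes, volume {volume:.4f}, surface {surface:.2f}")

# Hausdorff dimension: log 20 / log 3
print(math.log(20) / math.log(3))  # ≈ 2.7268
```

As a sanity check, level 0 gives one cube of volume 1 and surface area 6, exactly an ordinary cube.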
Therefore the construction's volume approaches zero. Yet any chosen surface in the construction will be punctured as the construction continues, so that the limit is neither a solid nor a surface. Each face of the construction becomes a Sierpinski carpet, and the intersection of the sponge with any diagonal of the cube or any midline of the faces is a Cantor set. The cross section of the sponge through its centroid and perpendicular to a space diagonal is a regular hexagon punctured with hexagrams arranged in six-fold symmetry; the number of these hexagrams, in descending size, is given by the recurrence a_n = 9a_(n−1) − 12a_(n−2), with a_0 = 1, a_1 = 6. The sponge's Hausdorff dimension is log 20/log 3 ≅ 2.727. The Lebesgue covering dimension of the Menger sponge is one, the same as that of any curve. Menger showed, in the 1926 construction, that the sponge is a universal curve, in that every curve is homeomorphic to a subset of the Menger sponge, where a curve means any compact metric space of Lebesgue covering dimension one.
In a similar way, the Sierpinski carpet is a universal curve for all curves that can be drawn on the two-dimensional plane. The Menger sponge, constructed in three dimensions, extends this idea to graphs that are not planar and might be embedded in any number of dimensions. The Menger sponge is a closed set; it has Lebesgue measure 0, and because it contains continuous paths it is an uncountable set. Formally, a Menger sponge can be defined as M := ⋂_(n ∈ N) M_n, where M_0 is the unit cube and M_(n+1) is obtained from M_n by applying the subdivision-and-removal step to each of its cubes. MegaMenger is a project aiming to build the largest fractal model, pioneered by Matt Parker of Queen Mary University of London and Laura Taalman of James Madison University. Each small cube is made from 6 interlocking folded business cards, giving a total of 960,000 for a level-four sponge. The outer surfaces are covered with paper or cardboard panels printed with a Sierpinski carpet design to be more aesthetically pleasing. In 2014, twenty level-three Menger sponges were constructed, which combined would form a distributed level-four Menger sponge.
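The business-card figure quoted above can be checked directly: a level-four sponge contains 20^4 = 160,000 unit cubes, and at 6 cards per cube that is 960,000 cards.

```python
cubes_level4 = 20 ** 4       # unit cubes in a level-4 Menger sponge
cards = 6 * cubes_level4     # 6 interlocking business cards per cube
print(cubes_level4, cards)   # 160000 960000
```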
Brownian motion, or pedesis, is the random motion of particles suspended in a fluid resulting from their collisions with the fast-moving molecules in the fluid. This pattern of motion alternates random fluctuations in a particle's position inside a fluid sub-domain with a relocation to another sub-domain; each relocation is followed by more fluctuations within the new closed volume. This pattern describes a fluid at thermal equilibrium, defined by a given temperature. Within such a fluid there exists no preferential direction of flow, as there is in transport phenomena. More specifically, the fluid's overall linear and angular momenta remain null over time. The kinetic energies of the molecular Brownian motions, together with those of molecular rotations and vibrations, sum up to the caloric component of a fluid's internal energy. This motion is named after the botanist Robert Brown, the most eminent microscopist of his time. In 1827, while looking through a microscope at pollen of the plant Clarkia pulchella immersed in water, he observed that the triangular-shaped pollen burst at the corners, emitting particles which he noted jiggled around in the water in random fashion.
He was not able to determine the mechanisms that caused this motion. Atoms and molecules had long been theorized as the constituents of matter, and Albert Einstein published a paper in 1905 that explained in precise detail how the motion that Brown had observed was a result of the pollen particles being moved by individual water molecules, making this one of his first big contributions to science. This explanation of Brownian motion served as convincing evidence that atoms and molecules exist, and was further verified experimentally by Jean Perrin in 1908. Perrin was awarded the Nobel Prize in Physics in 1926 "for his work on the discontinuous structure of matter". The direction of the force of atomic bombardment is constantly changing, and at different times the particle is hit more on one side than another, leading to the random nature of the motion. The many-body interactions that yield the Brownian pattern cannot be solved by a model accounting for every involved molecule; in consequence, only probabilistic models applied to molecular populations can be employed to describe it.
Two such models of statistical mechanics, due to Einstein and Smoluchowski, are presented below. Another, purely probabilistic class of models is the class of stochastic process models. There exist both simpler and more complicated stochastic processes which in the extreme may describe Brownian motion. The Roman poet Lucretius's scientific poem "On the Nature of Things" has a remarkable description of the Brownian motion of dust particles in verses 113–140 of Book II. He uses this as a proof of the existence of atoms: "Observe what happens when sunbeams are admitted into a building and shed light on its shadowy places. You will see a multitude of tiny particles mingling in a multitude of ways... their dancing is an actual indication of underlying movements of matter that are hidden from our sight... It originates with the atoms; those small compound bodies that are least removed from the impetus of the atoms are set in motion by the impact of their invisible blows and in turn cannon against larger bodies.
So the movement mounts up from the atoms and emerges to the level of our senses, so that those bodies are in motion that we see in sunbeams, moved by blows that remain invisible." Although the mingling motion of dust particles is caused largely by air currents, the glittering, tumbling motion of small dust particles is caused chiefly by true Brownian dynamics. While Jan Ingenhousz described the irregular motion of coal dust particles on the surface of alcohol in 1785, the discovery of this phenomenon is credited to the botanist Robert Brown in 1827. Brown was studying pollen grains of the plant Clarkia pulchella suspended in water under a microscope when he observed minute particles, ejected by the pollen grains, executing a jittery motion. By repeating the experiment with particles of inorganic matter he was able to rule out that the motion was life-related, although its origin was yet to be explained. The first person to describe the mathematics behind Brownian motion was Thorvald N. Thiele, in a paper on the method of least squares published in 1880.
This was followed independently by Louis Bachelier in 1900 in his PhD thesis "The theory of speculation", in which he presented a stochastic analysis of the stock and option markets. The Brownian motion model of the stock market is often cited, but Benoit Mandelbrot rejected its applicability to stock price movements, in part because these are discontinuous. Albert Einstein and Marian Smoluchowski brought the solution of the problem to the attention of physicists and presented it as a way to indirectly confirm the existence of atoms and molecules; their equations describing Brownian motion were subsequently verified by the experimental work of Jean Baptiste Perrin in 1908. There are two parts to Einstein's theory: the first part consists in the formulation of a diffusion equation for Brownian particles, in which the diffusion coefficient is related to the mean squared displacement of a Brownian particle, while the second part consists in relating the diffusion coefficient to measurable physical quantities.
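The first part can be sketched with a minimal 1-D random-walk model (a toy illustration, not Einstein's original derivation): the mean squared displacement of many independent walkers grows linearly in time, ⟨x²⟩ = 2Dt, which is what makes the diffusion coefficient D measurable.

```python
import random

def simulate_msd(n_walkers, n_steps, step=1.0, seed=42):
    """Mean squared displacement of independent 1-D random walkers.
    Each step is +/-step with equal probability, so D = step**2 / 2
    per unit time in this discrete model."""
    rng = random.Random(seed)
    positions = [0.0] * n_walkers
    for _ in range(n_steps):
        for i in range(n_walkers):
            positions[i] += step if rng.random() < 0.5 else -step
    return sum(x * x for x in positions) / n_walkers

msd = simulate_msd(n_walkers=5000, n_steps=200)
# Einstein's relation predicts <x^2> = 2 * D * t = 2 * (1/2) * 200 = 200
print(msd)
```

With 5,000 walkers the sample average sits within a few percent of the predicted value of 200.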
In this way Einstein was able to determine the size of atoms, and how many atoms there are in a mole, or the molecular weight in grams, of a gas. In accordance with Avogadro's law, this volume is the same for all ideal gases: 22.414 liters at standard temperature and pressure. The number of atoms contained in this volume is referred to as the Avogadro number, and determining this number is tantamount to knowing the mass of an atom, since the latter is obtained by dividing the mass of a mole of the gas by the Avogadro constant.
Wi-Fi is a technology for radio wireless local area networking of devices based on the IEEE 802.11 standards. Wi‑Fi is a trademark of the Wi-Fi Alliance, which restricts the use of the term Wi-Fi Certified to products that successfully complete interoperability certification testing. Devices that can use Wi-Fi technology include, among others, laptops, video game consoles and tablets, smart TVs, digital audio players, digital cameras and drones. Wi-Fi-compatible devices can connect to the Internet via a wireless access point; such an access point has a range of about 20 meters indoors and a greater range outdoors. Hotspot coverage can be as small as a single room with walls that block radio waves, or as large as many square kilometres, achieved by using multiple overlapping access points. Different versions of Wi-Fi exist, with different radio bands and speeds. Wi-Fi most commonly uses the 2.4 gigahertz UHF and 5 gigahertz SHF ISM radio bands. Each channel can be time-shared by multiple networks.
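As a small worked example of the 2.4 GHz band's channel plan (the standard 802.11 numbering; channel 14 is a special case), channel centers are spaced 5 MHz apart starting at 2412 MHz:

```python
def channel_center_mhz(channel):
    """Center frequency (MHz) of a 2.4 GHz Wi-Fi channel, per 802.11.
    Channels 1-13 sit at 2412 + 5*(n-1) MHz; channel 14 is at 2484 MHz."""
    if channel == 14:
        return 2484
    if 1 <= channel <= 13:
        return 2412 + 5 * (channel - 1)
    raise ValueError("channel out of range for the 2.4 GHz band")

# The classic non-overlapping trio for 20 MHz-wide channels:
for ch in (1, 6, 11):
    print(ch, channel_center_mhz(ch))  # 2412, 2437, 2462 MHz
```

Because adjacent channel numbers are only 5 MHz apart while a transmission is roughly 20 MHz wide, networks sharing an area typically pick channels 1, 6 and 11 to avoid overlap.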
These wavelengths work best for line-of-sight use. Many common materials absorb or reflect them, which further restricts range but can help minimise interference between different networks in crowded environments. At close range, some versions of Wi-Fi, running on suitable hardware, can achieve speeds of over 1 Gbit/s. Anyone within range with a wireless network interface controller can attempt to access a network. Wi-Fi Protected Access is a family of technologies created to protect information moving across Wi-Fi networks and includes solutions for personal and enterprise networks. Security features of WPA have included stronger protections and new security practices as the security landscape has changed over time. In 1971, ALOHAnet connected the Hawaiian Islands with a UHF wireless packet network. ALOHAnet and the ALOHA protocol were early forerunners to Ethernet and the IEEE 802.11 protocols, respectively. A 1985 ruling by the U.S. Federal Communications Commission released the ISM band for unlicensed use.
These frequency bands are the same ones used by equipment such as microwave ovens and are subject to interference. In 1991, NCR Corporation with AT&T Corporation invented the precursor to 802.11, intended for use in cashier systems, under the name WaveLAN. The Australian radio-astronomer Dr John O'Sullivan, with his colleagues Terence Percival, Graham Daniels, Diet Ostry and John Deane, developed a key patent used in Wi-Fi as a by-product of a Commonwealth Scientific and Industrial Research Organisation research project, "a failed experiment to detect exploding mini black holes the size of an atomic particle". Dr O'Sullivan and his colleagues are credited with inventing Wi-Fi. In 1992 and 1996, CSIRO obtained patents for a method used in Wi-Fi to "unsmear" the signal. The first version of the 802.11 protocol was released in 1997 and provided up to 2 Mbit/s link speeds. This was updated in 1999 with 802.11b to permit 11 Mbit/s link speeds, which proved popular. In 1999, the Wi-Fi Alliance formed as a trade association to hold the Wi-Fi trademark under which most products are sold.
Wi-Fi uses a large number of patents held by many different organizations. In April 2009, 14 technology companies agreed to pay CSIRO $1 billion for infringements of CSIRO patents; this led to Australia labeling Wi-Fi as an Australian invention, though this has been the subject of some controversy. CSIRO won a further $220 million settlement for Wi-Fi patent infringements in 2012, with global firms in the United States required to pay CSIRO licensing rights estimated to be worth an additional $1 billion in royalties. In 2016, the wireless local area network Test Bed was chosen as Australia's contribution to the exhibition A History of the World in 100 Objects, held in the National Museum of Australia. The name Wi-Fi, commercially used at least as early as August 1999, was coined by the brand-consulting firm Interbrand. The Wi-Fi Alliance had hired Interbrand to create a name "a little catchier than 'IEEE 802.11b Direct Sequence'." Phil Belanger, a founding member of the Wi-Fi Alliance who presided over the selection of the name "Wi-Fi", has stated that Interbrand invented Wi-Fi as a pun on the word hi-fi, a term for high-quality audio technology.
Interbrand also created the Wi-Fi logo. The yin-yang Wi-Fi logo indicates the certification of a product for interoperability. The Wi-Fi Alliance used the advertising slogan "The Standard for Wireless Fidelity" for a short time after the brand name was created, but while inspired by the term hi-fi, the name was never "Wireless Fidelity"; the Wi-Fi Alliance was nonetheless called the "Wireless Fidelity Alliance Inc" in some publications. Non-Wi-Fi technologies intended for fixed points, such as Motorola Canopy, are described as fixed wireless. Alternative wireless technologies include mobile phone standards such as 2G, 3G, 4G and LTE. The name is sometimes written as WiFi, Wifi, or wifi, but these spellings are not approved by the Wi-Fi Alliance. IEEE is a separate, but related, organization, and their website has stated "WiFi is a short name for Wireless Fidelity". To connect to a Wi-Fi LAN, a computer has to be equipped with a wireless network interface controller; the combination of computer and interface controller is called a station.
A service set is the set of all the devices associated with a particular Wi-Fi network. The service set can be local, extended or mesh. Each service set has an associated identifier, the 32-byte Service Set Identifier (SSID), which identifies the particular network.
In the mathematical field of topology, a homeomorphism, topological isomorphism, or bicontinuous function is a continuous function between topological spaces that has a continuous inverse function. Homeomorphisms are the isomorphisms in the category of topological spaces—that is, they are the mappings that preserve all the topological properties of a given space. Two spaces with a homeomorphism between them are called homeomorphic, and from a topological viewpoint they are the same. The word homeomorphism comes from the Greek words ὅμοιος (similar or same) and μορφή (shape), and was introduced to mathematics by Henri Poincaré in 1895. Roughly speaking, a topological space is a geometric object, and a homeomorphism is a continuous stretching and bending of the object into a new shape. Thus, a square and a circle are homeomorphic to each other. However, this description can be misleading: some continuous deformations are not homeomorphisms, such as the deformation of a line into a point, and some homeomorphisms are not continuous deformations, such as the homeomorphism between a trefoil knot and a circle.
An often-repeated mathematical joke is that topologists can't tell the difference between a coffee cup and a donut, since a sufficiently pliable donut could be reshaped into the form of a coffee cup by creating a dimple and progressively enlarging it, while preserving the donut hole in the cup's handle. A function f: X → Y between two topological spaces is a homeomorphism if it has the following properties: f is a bijection, f is continuous, and the inverse function f⁻¹ is continuous. A homeomorphism is sometimes called a bicontinuous function. If such a function exists, X and Y are homeomorphic. A self-homeomorphism is a homeomorphism from a topological space onto itself. "Being homeomorphic" is an equivalence relation on topological spaces, and its equivalence classes are called homeomorphism classes. The open interval (a, b) is homeomorphic to the real numbers R for any a < b. The unit 2-disc D² and the unit square in R² are homeomorphic. An example of a bicontinuous mapping from the square (centered at the origin) to the disc is, in polar coordinates, (ρ, θ) ↦ (ρ · max(|cos θ|, |sin θ|), θ).
The graph of a differentiable function is homeomorphic to the domain of the function. A differentiable parametrization of a curve is a homeomorphism between the domain of the parametrization and the curve. A chart of a manifold is a homeomorphism between an open subset of the manifold and an open subset of a Euclidean space. The stereographic projection is a homeomorphism between the unit sphere in R³ with a single point removed and the set of all points in R². If G is a topological group, its inversion map x ↦ x⁻¹ is a homeomorphism; likewise, for any x ∈ G, the left translation y ↦ xy, the right translation y ↦ yx, and the inner automorphism y ↦ xyx⁻¹ are homeomorphisms. Rm and Rn are not homeomorphic for m ≠ n. The Euclidean real line is not homeomorphic to the unit circle as a subspace of R², since the unit circle is compact as a subspace of Euclidean R² but the real line is not compact. The one-dimensional intervals [0, 1] and (0, 1) are not homeomorphic because no continuous bijection could be made. The third requirement, that f⁻¹ be continuous, is essential.
Consider for instance the function f: [0, 2π) → S¹ defined by f(φ) = (cos φ, sin φ). This function is bijective and continuous, but it is not a homeomorphism (S¹ is compact but [0, 2π) is not). The inverse f⁻¹ fails to be continuous at the point (1, 0): points on the circle arbitrarily close to (1, 0) have parameters both near 0 and near 2π.
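A quick numerical sketch (using ordinary Euclidean distance on the circle and on the parameter interval) shows why the inverse of the map φ ↦ (cos φ, sin φ) is discontinuous at (1, 0): two parameters far apart in [0, 2π) map to nearly identical points on S¹.

```python
import math

def f(phi):
    """Continuous bijection from [0, 2*pi) onto the unit circle."""
    return (math.cos(phi), math.sin(phi))

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

a, b = 0.01, 2 * math.pi - 0.01   # far apart as parameters...
print(abs(a - b))                 # ≈ 6.26
print(dist(f(a), f(b)))           # ≈ 0.02: nearly the same point on the circle
```

So the inverse map would have to send two nearly coincident circle points to parameters more than 6 units apart, which no continuous function can do.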
In the mathematical field of point-set topology, a continuum is a nonempty compact connected metric space, or, less frequently, a compact connected Hausdorff space. Continuum theory is the branch of topology devoted to the study of continua. A continuum that contains more than one point is called nondegenerate. A subset A of a continuum X such that A itself is a continuum is called a subcontinuum of X. A space homeomorphic to a subcontinuum of the Euclidean plane R² is called a planar continuum. A continuum X is homogeneous if for every two points x and y in X there exists a homeomorphism h: X → X such that h(x) = y. A Peano continuum is a continuum that is locally connected at each point. An indecomposable continuum is a continuum that cannot be represented as the union of two proper subcontinua. A continuum X is hereditarily indecomposable if every subcontinuum of X is indecomposable. The dimension of a continuum means its topological dimension. A one-dimensional continuum is called a curve. An arc is a space homeomorphic to the closed interval [0, 1]. If h: [0, 1] → X is a homeomorphism with h(0) = p and h(1) = q, then p and q are called the endpoints of X.
An arc is the most familiar type of continuum. It is one-dimensional, arcwise connected and locally connected. The topologist's sine curve is a subset of the plane: the union of the graph of the function f(x) = sin(1/x), 0 < x ≤ 1, with the segment −1 ≤ y ≤ 1 of the y-axis. It is a one-dimensional continuum that is not arcwise connected, and it is locally disconnected at the points along the y-axis. The Warsaw circle is obtained by "closing up" the topologist's sine curve by an arc connecting (0, −1) and (1, sin 1). It is a one-dimensional continuum whose homotopy groups are all trivial, but it is not a contractible space. An n-cell is a space homeomorphic to the closed ball in the Euclidean space Rn; it is the simplest example of an n-dimensional continuum. An n-sphere is a space homeomorphic to the standard n-sphere in (n + 1)-dimensional Euclidean space; it is an n-dimensional homogeneous continuum that is not contractible, and therefore different from an n-cell. The Hilbert cube is an infinite-dimensional continuum. Solenoids are among the simplest examples of indecomposable homogeneous continua.
They are neither arcwise connected nor locally connected. The Sierpinski carpet, also known as the Sierpinski universal curve, is a one-dimensional planar Peano continuum that contains a homeomorphic image of any one-dimensional planar continuum. The pseudo-arc is a homogeneous hereditarily indecomposable planar continuum. There are two fundamental techniques for constructing continua: nested intersections and inverse limits. If {Xn} is a nested family of continua, i.e. Xn ⊇ Xn+1, then their intersection is a continuum. If {Xn, fn} is an inverse sequence of continua Xn, called the coordinate spaces, together with continuous maps fn: Xn+1 → Xn, called the bonding maps, then its inverse limit is a continuum. A finite or countable product of continua is a continuum.
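The nested-intersection technique can be sketched concretely with the simplest nondegenerate continua, closed intervals in R (a hedged toy illustration, not a general proof): a nested family of closed intervals always has a nonempty intersection, itself a closed interval and hence a continuum, degenerating to the single point {1/2} in the limit here.

```python
from fractions import Fraction

# Nested closed intervals X_n = [a_n, b_n] with X_n ⊇ X_{n+1}.
# Each X_n is a continuum; the nested-intersection theorem says
# their intersection is a continuum too.
def nested(n):
    return (Fraction(1, 2) - Fraction(1, 2) ** n,   # a_n increases to 1/2
            Fraction(1, 2) + Fraction(1, 3) ** n)   # b_n decreases to 1/2

intervals = [nested(n) for n in range(1, 30)]
lo = max(a for a, _ in intervals)   # intersection is [sup a_n, inf b_n]
hi = min(b for _, b in intervals)
print(lo <= hi)   # True: nonempty, and as a closed interval it is connected
```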
Lebesgue covering dimension
In mathematics, the Lebesgue covering dimension or topological dimension of a topological space is one of several different ways of defining the dimension of the space in a topologically invariant way. The first formal definition of covering dimension was given by Eduard Čech, based on an earlier result of Henri Lebesgue. A modern definition is as follows. An open cover of a topological space X is a family of open sets whose union contains X; the ply, or order, of a cover is the smallest number n such that each point of the space belongs to at most n sets in the cover. A refinement of a cover C is another cover, each of whose sets is a subset of a set in C. The covering dimension of a topological space X is defined to be the minimum value of n such that every open cover C of X has an open refinement with ply n + 1 or below. If no such minimal n exists, the space is said to be of infinite covering dimension. As a special case, a topological space is zero-dimensional with respect to the covering dimension if every open cover of the space has a refinement consisting of disjoint open sets, so that any point in the space is contained in exactly one open set of this refinement.
Any given open cover of the unit circle will have a refinement consisting of a collection of open arcs. The circle has dimension one, by this definition, because any such cover can be further refined to the stage where a given point x of the circle is contained in at most two open arcs; that is, whatever collection of arcs we begin with, some can be discarded or shrunk such that the remainder still covers the circle, but with simple overlaps. Similarly, any open cover of the unit disk in the two-dimensional plane can be refined so that any point of the disk is contained in no more than three open sets, while two are in general not sufficient; the covering dimension of the disk is thus two. More generally, the n-dimensional Euclidean space En has covering dimension n. Homeomorphic spaces have the same covering dimension; that is, the covering dimension is a topological invariant. The Lebesgue covering dimension coincides with the affine dimension of a finite simplicial complex.
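The circle example can be sketched numerically (modelling open arcs as angular intervals and estimating ply by dense sampling; an approximation, not a proof): three arcs, each slightly wider than one third of the circle, cover it with ply 2, matching the dimension-one claim.

```python
import math

def ply(arcs, samples=100000):
    """Maximum number of arcs containing any sampled point of the circle.
    Arcs are (start, width) pairs in radians, taken modulo 2*pi."""
    worst = 0
    for k in range(samples):
        theta = 2 * math.pi * k / samples
        count = sum(1 for start, width in arcs
                    if (theta - start) % (2 * math.pi) < width)
        worst = max(worst, count)
    return worst

# Three open arcs, each a bit wider than 1/3 of the circle: together they
# cover the circle, yet each point lies in at most two of them.
third = 2 * math.pi / 3
arcs = [(i * third, third + 0.1) for i in range(3)]
print(ply(arcs))  # 2
```

Any attempt to cover the circle with open arcs forces such pairwise overlaps somewhere, which is exactly why the ply cannot be reduced below 2.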
The covering dimension of a normal space is equal to the large inductive dimension. The covering dimension of a normal space X is ≤ n if and only if, for any closed subset A of X and any continuous map f: A → Sⁿ, there is an extension of f to g: X → Sⁿ; here Sⁿ is the n-dimensional sphere. A normal space X satisfies the inequality 0 ≤ dim X ≤ n if and only if for every locally finite open cover U = {U_α}_(α ∈ A) of the space X there exists an open cover V of the space X which can be represented as the union of n + 1 families V_1, V_2, …, V_(n+1), where V_i = {V_(i,α)}_(α ∈ A), such that each V_i consists of disjoint sets and V_(i,α) ⊂ U_α for each i and α. The covering dimension of a paracompact Hausdorff space X is greater than or equal to its cohomological dimension, in the sense that Hⁱ(X, A) = 0 for every sheaf A of abelian groups on X and every i larger than the covering dimension of X.