Marching cubes is a computer graphics algorithm, published in the 1987 SIGGRAPH proceedings by Lorensen and Cline, for extracting a polygonal mesh of an isosurface from a three-dimensional discrete scalar field. Its applications include medical visualization of CT and MRI scan data, as well as special effects and 3-D modelling with what are called metaballs or other metasurfaces. An analogous two-dimensional method is called the marching squares algorithm. The algorithm was developed by William E. Lorensen and Harvey E. Cline as a result of their research for General Electric, where they worked on a way to efficiently visualize data from MRI devices. The premise of the algorithm is to divide the input volume into a discrete set of cubes. By assuming linear reconstruction filtering, each cube that contains a piece of a given isosurface can be identified, because the sample values at the cube vertices must span the target isosurface value. For each cube containing a section of the isosurface, a triangular mesh that approximates the behavior of the trilinear interpolant in the cube interior is generated.
The first published version of the algorithm exploited rotational and reflective symmetry, as well as sign changes, to build a table with 15 unique cases. However, due to ambiguities in the behavior of the trilinear interpolant on the cube faces and in the cube interior, the meshes extracted by Marching Cubes exhibited discontinuities and topological issues. Given a cube of the grid, a face ambiguity occurs when the vertices of one diagonal of a face are positive and the vertices of the other diagonal are negative. In this case, the signs of the face vertices alone are insufficient to determine the correct way to triangulate the isosurface. Similarly, an interior ambiguity occurs when the signs of the cube vertices are insufficient to determine the correct surface triangulation, i.e. when multiple triangulations are possible for the same cube configuration. The popularity of Marching Cubes and its widespread adoption resulted in several improvements to the algorithm to deal with these ambiguities and to track the behavior of the interpolant.
Dürst in 1988 was the first to note that the triangulation table proposed by Lorensen and Cline was incomplete, and that certain Marching Cubes cases allow multiple triangulations. In 1991, Nielson and Hamann observed the existence of ambiguities in the interpolant behavior on the faces of the cube, and proposed a test, called the Asymptotic Decider, to track the interpolant on the faces. In fact, as observed by Natarajan in 1994, this ambiguity problem also occurs inside the cube. In his work, the author proposed a disambiguation test based on the critical points of the interpolant and added four new cases to the Marching Cubes triangulation table. Even with all the improvements proposed to the algorithm and its triangulation table, the meshes generated by Marching Cubes still had topological incoherencies. Marching Cubes 33, proposed by Chernyaev in 1995, is one of the first isosurface extraction algorithms intended to preserve the topology of the trilinear interpolant. In his work, Chernyaev extends the number of cases in the triangulation lookup table to 33.
He also proposes a different approach to solve the interior ambiguities, based on the Asymptotic Decider. In 2003, Nielson proved that Chernyaev's lookup table is complete and can represent all the possible behaviors of the trilinear interpolant, and Lewiner et al. proposed an implementation of the algorithm. Also in 2003, Lopes and Brodlie extended the tests proposed by Natarajan. In 2013, Custodio et al. noted and corrected algorithmic inaccuracies that compromised the topological correctness of the mesh generated by the Marching Cubes 33 algorithm proposed by Chernyaev. The algorithm proceeds through the scalar field, taking eight neighboring locations at a time (forming an imaginary cube) and determining the polygons needed to represent the part of the isosurface that passes through this cube; the individual polygons are then fused into the desired surface. This is done by creating an index into a precalculated array of 256 possible polygon configurations within the cube, treating each of the 8 scalar values as a bit in an 8-bit integer. If a scalar's value is higher than the iso-value, the corresponding bit is set to one; if it is lower, it is set to zero.
The final value, after all eight scalars are checked, is the actual index into the polygon configuration array. Each vertex of the generated polygons is placed at the appropriate position along the cube's edge by linearly interpolating the two scalar values connected by that edge. The gradient of the scalar field at each grid point is also the normal vector of a hypothetical isosurface passing through that point; therefore, these normals may be interpolated along the edges of each cube to find the normals of the generated vertices, which are essential for shading the resulting mesh with some illumination model. An implementation of the marching cubes algorithm was patented as United States Patent 4,710,876. Another similar algorithm, called marching tetrahedra, was developed in order to circumvent the patent as well as to solve a minor ambiguity problem of marching cubes with some cube configurations. The patent expired in 2005, and it is now legal for the graphics community to use the algorithm without paying royalties, since more than 17 years have passed from its issue date.
See also: Image-based meshing; Marching tetrahedra.
Reference: Lorensen, W. E.; Cline, H. E. (1987). "Marching Cubes: A High Resolution 3D Surface Construction Algorithm". SIGGRAPH '87 Proceedings.
In mathematics and physics, a scalar field associates a scalar value to every point in a space – for instance, physical space. The scalar may either be a dimensionless mathematical number or a physical quantity. In a physical context, scalar fields are required to be independent of the choice of reference frame, meaning that any two observers using the same units will agree on the value of the scalar field at the same absolute point in space, regardless of their respective points of origin. Examples used in physics include the temperature distribution throughout space, the pressure distribution in a fluid, and spin-zero quantum fields such as the Higgs field; these fields are the subject of scalar field theory. Mathematically, a scalar field on a region U is a real- or complex-valued function or distribution on U. The region U may be a set in some Euclidean space, Minkowski space, or more generally a subset of a manifold, and it is typical in mathematics to impose further conditions on the field, such as requiring it to be continuous or continuously differentiable to some order.
A scalar field is a tensor field of order zero, and the term "scalar field" may be used to distinguish a function of this kind from a more general tensor field, density, or differential form. Physically, a scalar field is additionally distinguished by having units of measurement associated with it. In this context, a scalar field should be independent of the coordinate system used to describe the physical system—that is, any two observers using the same units must agree on the numerical value of a scalar field at any given point of physical space. Scalar fields are contrasted with other physical quantities such as vector fields, which associate a vector to every point of a region, as well as tensor fields and spinor fields. More subtly, scalar fields are contrasted with pseudoscalar fields. In physics, scalar fields often describe the potential energy associated with a particular force; the force is a vector field, which can be obtained as the gradient of the potential energy scalar field. Examples include: potential fields, such as the Newtonian gravitational potential, or the electric potential in electrostatics, are scalar fields which describe the more familiar forces.
A temperature, humidity, or pressure field, such as those used in meteorology, is another example. In quantum field theory, a scalar field is associated with spin-0 particles; the scalar field may be real- or complex-valued. Complex scalar fields represent charged particles; these include the charged Higgs field of the Standard Model, as well as the charged pions mediating the strong nuclear interaction. In the Standard Model of elementary particles, a scalar Higgs field is used to give the leptons and massive vector bosons their mass, via a combination of the Yukawa interaction and spontaneous symmetry breaking; this mechanism is known as the Higgs mechanism. A candidate for the Higgs boson was first detected at CERN in 2012. In scalar theories of gravitation, scalar fields are used to describe the gravitational field. Scalar–tensor theories represent the gravitational interaction through both a scalar and a tensor; such attempts are, for example, the Jordan theory as a generalization of the Kaluza–Klein theory and the Brans–Dicke theory. Scalar fields like the Higgs field can be found within scalar–tensor theories, using as scalar field the Higgs field of the Standard Model.
This field interacts Yukawa-like with the particles that acquire mass through it. Scalar fields are also found within superstring theories as dilaton fields, breaking the conformal symmetry of the string, though balancing the quantum anomalies of this tensor. Scalar fields are supposed to cause the accelerated expansion of the universe, helping to solve the horizon problem and giving a hypothetical reason for the non-vanishing cosmological constant of cosmology. Massless scalar fields in this context are known as inflatons; massive scalar fields have also been proposed, using for example Higgs-like fields. Related notions include vector fields, which associate a vector to every point in space; some examples of vector fields are the electromagnetic field and the Newtonian gravitational field. Tensor fields associate a tensor to every point in space; for example, in general relativity gravitation is associated with the tensor field called the Einstein tensor. In Kaluza–Klein theory, spacetime is extended to five dimensions and its Riemann curvature tensor can be separated out into ordinary four-dimensional gravitation plus an extra set, equivalent to Maxwell's equations for the electromagnetic field, plus an extra scalar field known as the "dilaton".
The dilaton scalar is found among the massless bosonic fields in string theory.
See also: Scalar field theory; Vector-valued function.
In computer science, a lookup table is an array that replaces runtime computation with a simpler array indexing operation. The savings in terms of processing time can be significant, since retrieving a value from memory is often faster than undergoing an "expensive" computation or input/output operation. The tables may be precalculated and stored in static program storage, calculated as part of a program's initialization phase, or stored in hardware in application-specific platforms. Lookup tables are also used extensively to validate input values by matching against a list of valid items in an array and, in some programming languages, may include pointer functions to process the matching input. FPGAs make extensive use of reconfigurable, hardware-implemented lookup tables to provide programmable hardware functionality. Before the advent of computers, lookup tables of values were used to speed up hand calculations of complex functions, such as in trigonometry and statistical density functions. In ancient India, Aryabhata created one of the first sine tables, which he encoded in a Sanskrit-letter-based number system.
In 493 AD, Victorius of Aquitaine wrote a 98-column multiplication table which gave the product of every number from 2 to 50 times, with rows consisting of "a list of numbers starting with one thousand, descending by hundreds to one hundred, then descending by tens to ten, then by ones to one, and then the fractions down to 1/144". Modern school children are taught to memorize "times tables" to avoid calculations of the most commonly used numbers. Early in the history of computers, input/output operations were slow in comparison to processor speeds of the time, so it made sense to reduce expensive read operations by a form of manual caching: creating either static lookup tables or dynamic prefetched arrays to contain only the most frequently occurring data items. Despite the introduction of systemwide caching that now automates this process, application-level lookup tables can still improve performance for data items that rarely, if ever, change. Lookup tables were one of the earliest functionalities implemented in computer spreadsheets, with the initial version of VisiCalc including a LOOKUP function among its original 20 functions.
This has been followed by subsequent spreadsheets, such as Microsoft Excel, and complemented by specialized VLOOKUP and HLOOKUP functions to simplify lookup in a vertical or horizontal table. For a one-dimensional array or linked list, the lookup usually determines whether or not there is a match with an 'input' data value. The simplest approach is known as a linear search or brute-force search: each element is checked for equality in turn, and the associated value, if any, is used as the result of the search. This is the slowest search method unless frequently occurring values occur early in the list. An example of a "divide and conquer" algorithm, binary search finds an element by determining which half of the table a match may be found in and repeating until either success or failure; it is only possible if the list is sorted, but it gives good performance even if the list is lengthy. For a trivial hash function lookup, the unsigned raw data value is used directly as an index into a one-dimensional table to extract a result.
For small ranges, this can be amongst the fastest lookups, even exceeding binary search speed, with zero branches and execution in constant time. One discrete problem that is expensive to solve on many computers is counting the number of bits that are set to 1 in a number, sometimes called the population function. For example, the decimal number 37 is 00100101 in binary, so it contains three bits that are set to binary 1. A simple example of C code, designed to count the 1 bits in an int, might look like this. This simple algorithm can take hundreds of cycles on a modern architecture, because it makes many branches in the loop, and branching is slow; this can be ameliorated using some other compiler optimizations. There is, however, a simple and much faster algorithmic solution using a trivial hash function table lookup: construct a static table, bits_set, with 256 entries giving the number of one bits set in each possible byte value, then use this table to find the number of ones in each byte of the integer via a trivial hash function lookup on each byte in turn, and sum the results.
This requires no branches and just four indexed memory accesses, so it is considerably faster than the earlier code. The above source can be improved by recasting 'x' as a 4-byte unsigned char array and, better, coded in-line as a single statement instead of being a function. Note that even this simple algorithm can be too slow now, because the original code might run faster from the cache of modern processors, while lookup tables do not fit well in caches and can cause slower access to memory. In data analysis applications, such as image processing, a lookup table is used to transform the input data into a more desirable output format. For example, a grayscale picture of the planet Saturn may be transformed into a color image to emphasize the differences in its rings. A classic example of reducing run-time computations using lookup tables is obtaining the result of a trigonometry calculation, such as the sine of a value. Calculating trigonometric functions can substantially slow a computing application; the same application can finish much sooner when it first precalculates the sine of a number of values and later looks the values up in a table.
Computer graphics are pictures and films created using computers. The term usually refers to computer-generated image data created with the help of specialized graphical hardware and software, and it is a vast and well-developed area of computer science. The phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often abbreviated as CG, though sometimes erroneously referred to as computer-generated imagery. Some topics in computer graphics include user interface design, sprite graphics, vector graphics, 3D modeling, shaders, GPU design, implicit surface visualization with ray tracing, and computer vision, among others. The overall methodology depends on the underlying sciences of geometry and physics. Computer graphics is responsible for displaying art and image data effectively and meaningfully to the consumer; it is also used for processing image data received from the physical world. Computer graphics development has had a significant impact on many types of media and has revolutionized animation, advertising, video games, and graphic design in general.
The term computer graphics has been used in a broad sense to describe "almost everything on computers that is not text or sound". The term refers to several different things: the representation and manipulation of image data by a computer; the various technologies used to create and manipulate images; and the sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content (see study of computer graphics). Today, computer graphics is widespread; such imagery is found in and on television, weather reports, and in a variety of medical investigations and surgical procedures. A well-constructed graph can present complex statistics in a form that is easier to understand and interpret. In the media, "such graphs are used to illustrate papers, theses" and other presentation material. Many tools have been developed to visualize data. Computer-generated imagery can be categorized into several different types: two dimensional, three dimensional, and animated graphics. As technology has improved, 3D computer graphics have become more common, but 2D computer graphics are still widely used.
Computer graphics has emerged as a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Over the past decade, other specialized fields have developed, like information visualization, and scientific visualization, more concerned with "the visualization of three dimensional phenomena, where the emphasis is on realistic renderings of volumes, illumination sources, and so forth, with a dynamic component". The precursor sciences to the development of modern computer graphics were the advances in electrical engineering and television that took place during the first half of the twentieth century. Screens could display art since the Lumiere brothers' use of mattes to create special effects for the earliest films dating from 1895, but such displays were limited and not interactive. The first cathode ray tube, the Braun tube, was invented in 1897; it in turn would permit the oscilloscope and the military control panel – the more direct precursors of the field, as they provided the first two-dimensional electronic displays that responded to programmatic or user input.
Computer graphics remained unknown as a discipline until the 1950s and the post-World War II period – during which time the discipline emerged from a combination of both pure university and laboratory academic research into more advanced computers and the United States military's further development of technologies like radar, advanced aviation, rocketry developed during the war. New kinds of displays were needed to process the wealth of information resulting from such projects, leading to the development of computer graphics as a discipline. Early projects like the Whirlwind and SAGE Projects introduced the CRT as a viable display and interaction interface and introduced the light pen as an input device. Douglas T. Ross of the Whirlwind SAGE system performed a personal experiment in which a small program he wrote captured the movement of his finger and displayed its vector on a display scope. One of the first interactive video games to feature recognizable, interactive graphics – Tennis for Two – was created for an oscilloscope by William Higinbotham to entertain visitors in 1958 at Brookhaven National Laboratory and simulated a tennis match.
In 1959, Douglas T. Ross innovated again while working at MIT on transforming mathematical statements into computer-generated 3D machine tool vectors, taking the opportunity to create a display scope image of a Disney cartoon character. Electronics pioneer Hewlett-Packard went public in 1957, after incorporating the decade prior, and established strong ties with Stanford University through its founders, who were alumni. This began the decades-long transformation of the southern San Francisco Bay Area into the world's leading computer technology hub, now known as Silicon Valley. The field of computer graphics developed with the emergence of computer graphics hardware, and further advances in computing led to greater advancements in interactive computer graphics. In 1959, the TX-2 computer was developed at MIT's Lincoln Laboratory; the TX-2 integrated a number of new man-machine interfaces. A light pen could be used to draw sketches on the computer using Ivan Sutherland's revolutionary Sketchpad software.
Using a light pen, Sketchpad allowed one to draw simple shapes on the computer screen, save them, and recall them later. The light pen itself had a small photoelectric cell in its tip.
In geometry, a heptagon is a seven-sided polygon or 7-gon. The heptagon is sometimes referred to as the septagon, using "sept-" together with the Greek suffix "-agon" meaning angle. A regular heptagon, in which all sides and all angles are equal, has internal angles of 5π/7 radians; its Schläfli symbol is {7}. The area of a regular heptagon of side length a is given by A = (7/4) a² cot(π/7) ≈ 3.634 a². This can be seen by subdividing the unit-sided heptagon into seven triangular "pie slices" with vertices at the center and at the heptagon's vertices, and then halving each triangle using the apothem as the common side; the apothem is half the cotangent of π/7, and the area of each of the 14 small triangles is one-fourth of the apothem. An exact algebraic expression for the area, starting from the cubic polynomial x³ + x² − 2x − 1, is given in complex numbers, in which the imaginary parts offset each other leaving a real-valued expression; this expression cannot be algebraically rewritten without complex components, since the indicated cubic function is casus irreducibilis.
The area of a regular heptagon inscribed in a circle of radius R is (7R²/2) sin(2π/7), while the area of the circle itself is πR². As 7 is a Pierpont prime but not a Fermat prime, the regular heptagon is not constructible with compass and straightedge, but it is constructible with a marked ruler and compass; this type of construction is called a neusis construction. It is also constructible with compass and angle trisector. The impossibility of straightedge and compass construction follows from the observation that 2cos(2π/7) ≈ 1.247 is a zero of the irreducible cubic x³ + x² − 2x − 1. This polynomial is the minimal polynomial of 2cos(2π/7), whereas the degree of the minimal polynomial for a constructible number must be a power of 2. An approximation for practical use with an error of about 0.2% is shown in the drawing; it is attributed to Albrecht Dürer. Let A lie on the circumference of the circumcircle. Draw arc BOC. Then BD = (1/2)BC gives an approximation for the edge of the heptagon. This approximation uses √3/2 ≈ 0.86603 for the side of the heptagon inscribed in the unit circle, while the exact value is 2 sin(π/7) ≈ 0.86777.
Example to illustrate the error: at a circumscribed circle radius r = 1 m, the absolute error of the first side would be approximately −1.7 mm. The regular heptagon belongs to the D7h point group, of order 28. The symmetry elements are: a 7-fold proper rotation axis C7, a 7-fold improper rotation axis S7, 7 vertical mirror planes σv, 7 2-fold rotation axes C2 in the plane of the heptagon, and a horizontal mirror plane σh in the heptagon's plane. The regular heptagon's side a, shorter diagonal b, and longer diagonal c, with a < b < c, satisfy a² = c(c − b), b² = a(c + a), c² = b(a + b), and 1/a = 1/b + 1/c (the optic equation), hence ab + ac = bc.
Three-dimensional space is a geometric setting in which three values are required to determine the position of an element; this is the informal meaning of the term dimension. In physics and mathematics, a sequence of n numbers can be understood as a location in n-dimensional space. When n = 3, the set of all such locations is called three-dimensional Euclidean space. It is represented by the symbol ℝ³, and it serves as a three-parameter model of the physical universe. However, this space is only one example of a large variety of spaces in three dimensions called 3-manifolds. In this classical example, when the three values refer to measurements in different directions, any three directions can be chosen, provided that vectors in these directions do not all lie in the same 2-space. Furthermore, in this case, these three values can be labeled by any combination of three chosen from the terms width, height, and length. In mathematics, analytic geometry describes every point in three-dimensional space by means of three coordinates.
Three coordinate axes are given, each perpendicular to the other two at the origin, the point at which they cross; they are labeled x, y, and z. Relative to these axes, the position of any point in three-dimensional space is given by an ordered triple of real numbers, each number giving the distance of that point from the origin measured along the given axis, which is equal to the distance of that point from the plane determined by the other two axes. Other popular methods of describing the location of a point in three-dimensional space include cylindrical coordinates and spherical coordinates, though there are an infinite number of possible methods (see Euclidean space). Two distinct points always determine a line. Three distinct points are either collinear or determine a unique plane. Four distinct points can either be coplanar or determine the entire space. Two distinct lines can either intersect, be parallel, or be skew. Two parallel lines, or two intersecting lines, lie in a unique plane, so skew lines are lines that do not meet and do not lie in a common plane.
Two distinct planes can either meet in a common line or be parallel. Three distinct planes, no pair of which are parallel, can either meet in a common line, meet in a unique common point, or have no point in common; in the last case, the three lines of intersection of each pair of planes are mutually parallel. A line not lying in a given plane can intersect that plane in a unique point or be parallel to the plane; in the last case, there will be lines in the plane that are parallel to the given line. A hyperplane is a subspace of one dimension less than the dimension of the full space; the hyperplanes of a three-dimensional space are the two-dimensional subspaces, that is, the planes. In terms of Cartesian coordinates, the points of a hyperplane satisfy a single linear equation, so planes in this 3-space are described by linear equations. A line can be described by a pair of independent linear equations, each representing a plane having this line as a common intersection. Varignon's theorem states that the midpoints of the sides of any quadrilateral in ℝ³ form a parallelogram and so are coplanar. A sphere in 3-space consists of the set of all points in 3-space at a fixed distance r from a central point P.
The solid enclosed by the sphere is called a ball. The volume of the ball is given by V = (4/3)πr³. Another type of sphere arises from a 4-ball, whose three-dimensional surface is the 3-sphere: the points equidistant from the origin of the Euclidean space ℝ⁴. If a point has coordinates P(x, y, z, w), then x² + y² + z² + w² = 1 characterizes those points on the unit 3-sphere centered at the origin. In three dimensions, there are nine regular polytopes: the five convex Platonic solids and the four nonconvex Kepler–Poinsot polyhedra. A surface generated by revolving a plane curve about a fixed line in its plane as an axis is called a surface of revolution; the plane curve is called the generatrix of the surface. A section of the surface, made by intersecting the surface with a plane perpendicular to the axis, is a circle. Simple examples occur when the generatrix is a line: if the generatrix line intersects the axis line, the surface of revolution is a right circular cone with vertex the point of intersection; however, if the generatrix and axis are parallel, the surface of revolution is a circular cylinder.
In analogy with the conic sections, the set of points whose Cartesian coordinates satisfy the general equation of the second degree, namely Ax² + By² + Cz² + Fxy + Gyz + Hxz + Jx + Ky + Lz + M = 0, where A, B, C, F, G, H, J, K, L, and M are real numbers and not all of A, B, C, F, G, and H are zero, is called a quadric surface. There are six types of non-degenerate quadric surfaces: the ellipsoid, the hyperboloid of one sheet, the hyperboloid of two sheets, the elliptic cone, the elliptic paraboloid, and the hyperbolic paraboloid. The degenerate quadric surfaces are the empty set, a single point, a single line, a single plane, a pair of planes, or a quadratic cylinder.
In computing, bit numbering is the convention used to identify the bit positions in a binary number or a container of such a value; the bit number is incremented by one for each subsequent bit position. The least significant bit (LSB) is the bit position in a binary integer giving the units value, that is, determining whether the number is even or odd. The LSB is sometimes referred to as the low-order bit or right-most bit, due to the convention in positional notation of writing less significant digits further to the right. It is analogous to the least significant digit of a decimal integer, the digit in the ones position. It is common to assign each bit a position number, ranging from zero to N − 1, where N is the number of bits in the binary representation used; this position number is the exponent for the corresponding bit weight in base-2. Although a few CPU manufacturers assign bit numbers the opposite way, the term least significant bit itself remains unambiguous as an alias for the unit bit. By extension, the least significant bits (plural) are the bits of the number closest to, and including, the LSB.
The least significant bits have the useful property of changing rapidly if the number changes even slightly. For example, if 1 is added to 3, the result will be 4 and three of the least significant bits will change; by contrast, the three most significant bits stay unchanged. Least significant bits are employed in pseudorandom number generators, steganographic tools, hash functions, and checksums. In digital steganography, sensitive messages may be concealed by manipulating and storing information in the least significant bits of an image or a sound file. In the context of an image, if a user were to manipulate the last two bits of a color in a pixel, the value of the color would change by at most ±3 value places, which is likely to be indistinguishable by the human eye; the user may then recover this information by extracting the least significant bits of the manipulated pixels to recover the original message. This allows the transfer of digital information to be kept concealed. LSB can also stand for least significant byte; the meaning is parallel to the above: it is the byte in that position of a multi-byte number which has the least potential value.
If the abbreviation's meaning of least significant byte isn't obvious from context, it should be stated explicitly to avoid confusion with least significant bit; to avoid this ambiguity, the less-abbreviated terms "lsbit" or "lsbyte" may be used. The most significant bit (MSB) is the bit position in a binary number having the greatest value. The MSB is sometimes referred to as the high-order bit or left-most bit, due to the convention in positional notation of writing more significant digits further to the left. The MSB can correspond to the sign bit of a signed binary number in one's or two's complement notation: "1" meaning negative and "0" meaning positive. It is common to assign each bit a position number, ranging from zero to N − 1, where N is the number of bits in the binary representation used; this position number is the exponent for the corresponding bit weight in base-2. Although a few CPU manufacturers assign bit numbers the opposite way, the MSB unambiguously remains the most significant bit. This may be one of the reasons why the term MSB is used instead of a bit number, although the primary reason is that different number representations use different numbers of bits.
By extension, the most significant bits (plural) are the bits closest to, and including, the MSB. MSB can also stand for "most significant byte"; the meaning is parallel to the above: it is the byte in that position of a multi-byte number which has the greatest potential value. To avoid this ambiguity, the less-abbreviated terms "MSbit" or "MSbyte" are used. As an illustration, consider the decimal value 149, which is 10010101 in binary: the bit holding the unit value, the LSB, is located at bit position 0, and the MSB at bit position 7. The position of the LSB is independent of how the bits are transmitted; that is rather a topic of endianness. The expressions Most Significant Bit First and Least Significant Bit First indicate the ordering of the sequence of bits in the bytes sent over a wire in a transmission protocol or in a stream. Most Significant Bit First means that the most significant bit will arrive first: hence e.g. the hexadecimal number 0x12, 00010010 in binary representation, will arrive as the sequence 0 0 0 1 0 0 1 0.
Least Significant Bit First means that the least significant bit will arrive first: hence e.g. the same hexadecimal number 0x12, again 00010010 in binary representation, will arrive as the sequence 0 1 0 0 1 0 0 0. When the bit numbering starts at zero for the least significant bit, the numbering scheme is called "LSB 0". This bit numbering method has the advantage that, for any unsigned number, the value of the number can be calculated by using exponentiation with the bit number and a base of 2: the value of an unsigned binary integer is therefore the sum over i from 0 to N − 1 of bᵢ · 2ⁱ, where bᵢ denotes the value of the bit at position i.