Software testing is an investigation conducted to provide stakeholders with information about the quality of the software product or service under test. Software testing can provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include the process of executing a program or application with the intent of finding software bugs, and verifying that the software product is fit for use. Software testing involves the execution of a software component or system component to evaluate one or more properties of interest. In general, these properties indicate the extent to which the component or system under test meets the requirements that guided its design and development, responds to all kinds of inputs, performs its functions within an acceptable time, is sufficiently usable, can be installed and run in its intended environments, and achieves the general result its stakeholders desire. As the number of possible tests for even simple software components is practically infinite, all software testing uses some strategy to select tests that are feasible for the available time and resources.
As a result, software testing attempts to execute a program or application with the intent of finding software bugs. Testing is an iterative process: when one bug is fixed, the fix can illuminate other, deeper bugs, or can create new ones. Software testing can provide objective, independent information about the quality of software and the risk of its failure to users or sponsors. Software testing can be conducted as soon as executable software (even if partially complete) exists; the overall approach to software development determines when and how testing is conducted. For example, in a phased process, most testing occurs after system requirements have been defined and implemented in testable programs. In contrast, under an agile approach, requirements and testing are developed concurrently. Although testing can determine the correctness of software under the assumption of some specific hypotheses, testing cannot identify all the defects within the software. Instead, it furnishes a criticism or comparison that compares the state and behavior of the product against test oracles—principles or mechanisms by which someone might recognize a problem.
These oracles may include specifications, comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, applicable laws, or other criteria. A primary purpose of testing is to detect software failures so that defects may be discovered and corrected. Testing cannot establish that a product functions properly under all conditions, but only that it does not function properly under specific conditions. The scope of software testing includes the examination of code, the execution of that code in various environments and conditions, and an examination of what the code does: does it do what it is supposed to do, and does it do what it needs to do? In the current culture of software development, a testing organization may be separate from the development team. There are various roles for testing team members. Information derived from software testing may be used to correct the process by which software is developed.
Every software product has a target audience. For example, the audience for video game software is different from that for banking software. Therefore, when an organization develops or otherwise invests in a software product, it can assess whether the software product will be acceptable to its end users, its target audience, its purchasers and other stakeholders. Software testing aids the process of attempting to make this assessment. Not all software defects are caused by coding errors. One common source of expensive defects is requirement gaps, i.e. unrecognized requirements that result in errors of omission by the program designer. Requirement gaps can be non-functional requirements such as testability, maintainability, usability and security. Software faults occur through the following process: a programmer makes an error (a mistake), which results in a defect (a fault, or bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong results, causing a failure. Not all defects will result in failures; for example, defects in dead code will never result in failures.
A defect can turn into a failure when the environment is changed. Examples of these changes in environment include the software being run on a new computer hardware platform, alterations in source data, or interaction with different software. A single defect may result in a wide range of failure symptoms. A fundamental problem with software testing is that testing under all combinations of inputs and preconditions is not feasible, even with a simple product; this means that the number of defects in a software product can be large, and defects that occur infrequently are difficult to find in testing. Moreover, non-functional dimensions of quality—usability, performance, reliability—can be subjective. Software developers cannot test everything, but they can use combinatorial test design to identify the minimum number of tests needed to get the coverage they want. Combinatorial test design enables users to get greater test coverage with fewer tests.
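A minimal sketch of the combinatorial idea, with invented numbers: for three boolean configuration flags, exhaustive testing needs 2^3 = 8 cases, but four well-chosen cases already exercise every pair of values for every pair of flags (pairwise, or "all-pairs", coverage).

```python
# Pairwise test-design illustration: 4 hand-picked cases cover every
# value pair of every parameter pair that 8 exhaustive cases would.
from itertools import combinations, product

flags = 3
exhaustive = list(product([0, 1], repeat=flags))               # 8 test cases
pairwise_suite = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]  # 4 test cases

def covers_all_pairs(suite, n):
    """True if, for every pair of parameters, all four value pairs appear."""
    for i, j in combinations(range(n), 2):
        seen = {(t[i], t[j]) for t in suite}
        if seen != set(product([0, 1], repeat=2)):
            return False
    return True

print(len(exhaustive), len(pairwise_suite))     # 8 4
print(covers_all_pairs(pairwise_suite, flags))  # True
```

The saving grows quickly with the number of parameters, which is why pairwise designs are a common compromise when exhaustive input coverage is infeasible.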
Order of magnitude
An order of magnitude is an approximate measure of the number of digits that a number has in the commonly used base-ten number system. It is equal to the whole-number floor of the base-10 logarithm. For example, the order of magnitude of 1500 is 3, because 1500 = 1.5 × 10^3. Differences in order of magnitude can be measured on a base-10 logarithmic scale in "decades". Examples of numbers of different magnitudes can be found at Orders of magnitude. The order of magnitude of a number is the smallest power of 10 used to represent that number. To work out the order of magnitude of a number N, the number is first expressed in the following form: N = a × 10^b, where 1/√10 ≤ a < √10. Then b represents the order of magnitude of the number; the order of magnitude can be any integer. The table below enumerates the order of magnitude of some numbers in light of this definition: The geometric mean of 10^b and 10^(b+1) is √10 × 10^b, meaning that a value of exactly 10^b represents a geometric "halfway point" within the range of possible values of a.
Some use a simpler definition where 0.5 < a ≤ 5, because the arithmetic mean of 10^b and 10^(b+c) approaches 5 × 10^(b+c−1) for increasing c. This definition has the effect of lowering the values of b slightly. Yet others restrict a to values where 1 ≤ a < 10, making the order of magnitude of a number equal to its exponent part in scientific notation. Orders of magnitude are used to make approximate comparisons. If numbers differ by one order of magnitude, x is about ten times different in quantity from y. If values differ by two orders of magnitude, they differ by a factor of about 100. Two numbers of the same order of magnitude have roughly the same scale: the larger value is less than ten times the smaller value. The order of magnitude of a number is, intuitively speaking, the number of powers of 10 contained in the number. More precisely, the order of magnitude of a number can be defined in terms of the common logarithm as the integer part of the logarithm, obtained by truncation. For example, the number 4,000,000 has a base-10 logarithm of 6.602.
When truncating, its order of magnitude is 6; a number of this order of magnitude is between 10^6 and 10^7. In a similar example, with the phrase "He had a seven-figure income", the order of magnitude is the number of figures minus one, so it is easily determined without a calculator to be 6. An order of magnitude is an approximate position on a logarithmic scale. An order-of-magnitude estimate of a variable, whose precise value is unknown, is an estimate rounded to the nearest power of ten. For example, an order-of-magnitude estimate for a variable between about 3 billion and 30 billion is 10 billion. To round a number to its nearest order of magnitude, one rounds its logarithm to the nearest integer. Thus 4,000,000, which has a logarithm of 6.602, has 7 as its nearest order of magnitude, because "nearest" implies rounding rather than truncation. For a number written in scientific notation, this logarithmic rounding scale requires rounding up to the next power of ten when the multiplier is greater than the square root of ten. For example, the nearest order of magnitude for 1.7 × 10^8 is 8, whereas the nearest order of magnitude for 3.7 × 10^8 is 9.
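The two conventions just described, truncating the logarithm versus rounding it, can be sketched in a few lines (the function names are mine, not standard terminology):

```python
# Order of magnitude by truncating log10, and "nearest" order of
# magnitude by rounding log10, as described in the text above.
import math

def order_of_magnitude(n):
    """Integer part of log10(n): truncation (floor, for n >= 1)."""
    return math.floor(math.log10(n))

def nearest_order_of_magnitude(n):
    """log10(n) rounded to the nearest integer; multipliers above
    sqrt(10) ~ 3.162 therefore round up to the next power of ten."""
    return round(math.log10(n))

print(order_of_magnitude(1500))               # 3
print(order_of_magnitude(4_000_000))          # 6  (log10 is about 6.602)
print(nearest_order_of_magnitude(4_000_000))  # 7
print(nearest_order_of_magnitude(1.7e8))      # 8
print(nearest_order_of_magnitude(3.7e8))      # 9
```

Note how 1.7 × 10^8 and 3.7 × 10^8 fall on opposite sides of √10, reproducing the example in the text.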
An order-of-magnitude estimate is sometimes called a zeroth-order approximation. An order-of-magnitude difference between two values is a factor of 10. For example, the mass of the planet Saturn is 95 times that of Earth, so Saturn is two orders of magnitude more massive than Earth. Order-of-magnitude differences are called decades when measured on a logarithmic scale. Other orders of magnitude may be calculated using bases other than 10. The ancient Greeks ranked the nighttime brightness of celestial bodies on 6 levels in which each level was the fifth root of one hundred (about 2.512) times as bright as the nearest weaker level; the brightest level being 5 such orders of magnitude brighter than the weakest means it is (100^(1/5))^5, or a factor of 100, times brighter. The different decimal numeral systems of the world use a larger base to better envision the size of a number, and have created names for the powers of this larger base. The table shows what number the order of magnitude aims at for base 10 and for base 1,000,000. It can be seen that the order of magnitude is included in the number name in this example, because bi- means 2 and tri- means 3, and the suffix -illion tells that the base is 1,000,000.
But the number names billion, trillion themselves (here with other meaning than in the first cha
The Stanford dragon is a computer graphics 3D test model created with a Cyberware 3030 Model Shop Color 3D Scanner at Stanford University. The dragon consists of data describing 871,414 triangles determined by 3D scanning a real figurine; the data set is used to test various graphics algorithms, including polygonal simplification and surface smoothing. It first appeared in 1996; the model is available in different file formats on the Internet for free. See also: List of common 3D test models; Stanford bunny. The Stanford 3D Scanning Repository provides the Stanford dragon model for download. The Large Geometric Models Archive at Georgia Tech provides the Stanford dragon model for download in standard file formats. MrBluesummers.com 3dsMax Resources offers the Stanford dragon model for download in OBJ format.
In physics, energy is the quantitative property that must be transferred to an object in order to perform work on, or to heat, the object. Energy is a conserved quantity; the SI unit of energy is the joule, the energy transferred to an object by the work of moving it a distance of 1 metre against a force of 1 newton. Common forms of energy include the kinetic energy of a moving object, the potential energy stored by an object's position in a force field, the elastic energy stored by stretching solid objects, the chemical energy released when a fuel burns, the radiant energy carried by light, the thermal energy due to an object's temperature. Mass and energy are related. Due to mass–energy equivalence, any object that has mass when stationary has an equivalent amount of energy whose form is called rest energy, any additional energy acquired by the object above that rest energy will increase the object's total mass just as it increases its total energy. For example, after heating an object, its increase in energy could be measured as a small increase in mass, with a sensitive enough scale.
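The closing example above can be made quantitative. The sketch below uses E = mc^2 with assumed, illustrative numbers (1 kg of water warmed by 10 K, using the common specific heat figure of about 4184 J/(kg·K)) to show why the mass increase from heating is far too small for ordinary scales.

```python
# Worked example of mass-energy equivalence: the tiny mass gained by
# heating an object, delta_m = delta_E / c^2.
C = 299_792_458.0  # speed of light in m/s (exact SI value)

def mass_increase(delta_energy_joules):
    """Mass equivalent of an added energy, from E = m c^2."""
    return delta_energy_joules / C**2

# Assumed scenario: 1 kg of water, specific heat ~4184 J/(kg*K), warmed 10 K.
delta_E = 4184.0 * 1.0 * 10.0  # about 41.8 kJ of thermal energy
print(mass_increase(delta_E))  # roughly 4.7e-13 kg, far below any scale's precision
```

Dividing by c^2 (about 9 × 10^16 m^2/s^2) is what makes everyday energy changes correspond to imperceptible mass changes.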
Living organisms require exergy to stay alive, such as the energy humans obtain from food. Human civilization requires energy to function, which it gets from energy resources such as fossil fuels, nuclear fuel, or renewable energy; the processes of Earth's climate and ecosystem are driven by the radiant energy Earth receives from the sun and the geothermal energy contained within the earth. The total energy of a system can be subdivided and classified into potential energy, kinetic energy, or combinations of the two in various ways. Kinetic energy is determined by the movement of an object—or the composite motion of the components of an object—and potential energy reflects the potential of an object to have motion; it is a function of the position of an object within a field, or may be stored in the field itself. While these two categories are sufficient to describe all forms of energy, it is often convenient to refer to particular combinations of potential and kinetic energy as a form in their own right. For example, macroscopic mechanical energy is the sum of translational and rotational kinetic and potential energy in a system, and neglects the kinetic energy due to temperature; nuclear energy combines potentials from the nuclear force and the weak force, among others.
The word energy derives from the Ancient Greek energeia ('activity, operation'), which appears for the first time in the work of Aristotle in the 4th century BC. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness and pleasure. In the late 17th century, Gottfried Leibniz proposed the idea of the Latin vis viva, or living force, which he defined as the product of the mass of an object and its velocity squared. To account for slowing due to friction, Leibniz theorized that thermal energy consisted of the random motion of the constituent parts of matter, although it would be more than a century until this was generally accepted. The modern analog of this property, kinetic energy, differs from vis viva only by a factor of two. In 1807, Thomas Young was the first to use the term "energy" instead of vis viva, in its modern sense. Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853 William Rankine coined the term "potential energy".
The law of conservation of energy was first postulated in the early 19th century, and applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity, such as momentum. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat; these developments led to the theory of conservation of energy, formalized largely by William Thomson as the field of thermodynamics. Thermodynamics aided the rapid development of explanations of chemical processes by Rudolf Clausius, Josiah Willard Gibbs and Walther Nernst; it also led to a mathematical formulation of the concept of entropy by Clausius and to the introduction of laws of radiant energy by Jožef Stefan. According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics do not change over time. Thus, since 1918, theorists have understood that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time.
In 1843, James Prescott Joule independently discovered the mechanical equivalent of heat in a series of experiments. The most famous of them used the "Joule apparatus": a descending weight, attached to a string, caused rotation of a paddle immersed in water insulated from heat transfer. It showed that the gravitational potential energy lost by the weight in descending was equal to the internal energy gained by the water through friction with the paddle. In the International System of Units, the unit of energy is the joule, named after James Prescott Joule; it is a derived unit, equal to the energy expended in applying a force of one newton through a distance of one metre. However, energy is also expressed in many other units not part of the SI, such as ergs, British thermal units, kilowatt-hours and kilocalories, which require a conversion factor when expressed in SI units. The SI unit of energy rate (energy per unit time) is the watt, a joule per second. Thus, one joule is one watt-second, and 3600 joules equal one watt-hour.
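The unit relationships above can be collected into a small conversion table. This is only a sketch; the erg, watt-hour and kilowatt-hour factors are exact, while the BTU and kilocalorie values assume the common International Table and thermochemical definitions respectively (other definitions differ slightly).

```python
# Conversion factors from the non-SI energy units mentioned above to joules.
TO_JOULES = {
    "erg":  1e-7,     # 1 erg = 10^-7 J (exact)
    "BTU":  1055.06,  # International Table British thermal unit (approx.)
    "kcal": 4184.0,   # thermochemical kilocalorie
    "Wh":   3600.0,   # one watt for one hour: 3600 J (exact)
    "kWh":  3.6e6,    # 1 kilowatt-hour = 3,600,000 J (exact)
}

def to_joules(value, unit):
    """Convert an energy value in the given unit to joules."""
    return value * TO_JOULES[unit]

print(to_joules(1, "Wh"))   # 3600.0
print(to_joules(2, "kWh"))  # 7200000.0
```

Keeping all quantities in joules internally and converting only at the boundaries avoids compounding rounding errors from repeated conversions.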
The Chinese room argument holds that a program cannot give a computer a "mind", "understanding" or "consciousness", regardless of how intelligently or human-like the program may make the computer behave. The argument was first presented by philosopher John Searle in his paper "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980, and it has been widely discussed in the years since. The centerpiece of the argument is a thought experiment known as the Chinese room. The argument is directed against the philosophical positions of functionalism and computationalism, which hold that the mind may be viewed as an information-processing system operating on formal symbols. The argument is intended to refute a position Searle calls Strong AI: that the appropriately programmed computer with the right inputs and outputs would thereby have a mind in the same sense human beings have minds. Although it was presented in reaction to the statements of artificial intelligence researchers, it is not an argument against the goals of AI research, because it does not limit the amount of intelligence a machine can display.
The argument applies only to digital computers running programs and does not apply to machines in general. Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being. The question Searle wants to answer is this: does the machine literally "understand" Chinese, or is it merely simulating the ability to understand Chinese? Searle calls the first position "strong AI" and the latter "weak AI".
Searle supposes that he is in a closed room and has a book with an English version of the computer program, along with sufficient papers, pencils and filing cabinets. Searle could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output. If the computer had passed the Turing test this way, it follows, says Searle, that he would do so as well, simply by running the program manually. Searle asserts that there is no essential difference between the roles of the computer and himself in the experiment: each follows a program, step by step, producing behavior that is then interpreted by the user as demonstrating intelligent conversation. However, Searle himself would not be able to understand the conversation. Therefore, he argues, it follows that the computer would not be able to understand the conversation either. Searle argues that, without "understanding", we cannot describe what the machine is doing as "thinking", and, since it does not think, it does not have a "mind" in anything like the normal sense of the word.
Therefore, he concludes, the "strong AI" hypothesis is false. Gottfried Leibniz made a similar argument in 1714 against mechanism. Leibniz used the thought experiment of expanding the brain until it was the size of a mill; he found it difficult to imagine that a "mind" capable of "perception" could be constructed using only mechanical processes. In the 1961 short story "The Game" by Anatoly Dneprov, a stadium of people act as switches and memory cells implementing a program to translate a sentence of Portuguese, a language that none of them knows. In 1974, Lawrence Davis imagined duplicating the brain using telephone lines and offices staffed by people, and in 1978 Ned Block envisioned the entire population of China involved in such a brain simulation; this thought experiment is called the China brain, also the "Chinese Nation" or the "Chinese Gym". The Chinese Room Argument was introduced in Searle's 1980 paper "Minds, Brains, and Programs", published in Behavioral and Brain Sciences. It became the journal's "most influential target article", generating an enormous number of commentaries and responses in the ensuing decades, and Searle has continued to defend and refine the argument in many papers, popular articles and books.
David Cole writes that "the Chinese Room argument has been the most discussed philosophical argument in cognitive science to appear in the past 25 years". Most of the discussion consists of attempts to refute it. "The overwhelming majority", notes BBS editor Stevan Harnad, "still think that the Chinese Room Argument is dead wrong". The sheer volume of the literature that has grown up around it inspired Pat Hayes to comment that the field of cognitive science ought to be redefined as "the ongoing research program of showing Searle's Chinese Room Argument to be false". Searle's argument has become "something of a classic in cognitive science", according to Harnad. Varol Akman agrees, and has described the original paper as "an exemplar of philosophical clarity and purity". Although the Chinese Room argument was presented in reaction to the statements of AI researchers, philosophers have come to view it as an important part of the philosophy of mind. It is a challenge to functionalism and the computational theory of mind, and is related to such questions as the mind–body problem, the problem of other minds, the symbol-grounding problem, and the hard problem of consciousness.
Test Card F
Test Card F is a test card that was created by the BBC and used on television in the United Kingdom and in countries elsewhere in the world for more than four decades. Like other test cards, it was shown while no programmes were being broadcast. It was the first test card to be transmitted in colour in the UK and the first to feature a person, and has become an iconic British image subject to parody. The central image on the card shows eight-year-old Carole Hersee playing noughts and crosses with a clown doll, Bubbles the Clown, surrounded by various greyscales and colour test signals used to assess the quality of the transmitted picture. It was first broadcast on 2 July 1967 on BBC2. The card was developed by George Hersee, father of the girl in the central image. It was broadcast during downtime on BBC1 until that channel began to broadcast 24 hours a day in November 1997, and on BBC Two until its downtime was replaced by Pages from Ceefax in 1998, after which it was only seen during engineering work; it was last seen in this role in 1999.
The card was seen on ITV in the 1970s, used in conjunction with Test Card G. In the digital age, Test Card F and its variants are infrequently broadcast, as downtime in schedules has become a thing of the past. Several variations of Test Card F have been screened, among them Test Card J, and Test Card W and its high-definition variant, sometimes erroneously referred to as Test Card X. Up until the UK's digital switchover, the test card made an annual appearance during the annual RBS Test Transmissions and, until 2013, during the BBC HD preview loop, which used Test Card W. Virtually all the designs and patterns on the card have some significance. Along the top are 95%-saturation colour bars in descending order of luminance—white, yellow, cyan, green, magenta, red, blue and black. There are triangles on each of the four sides of the card to check for correct overscanning of the picture. Standard greyscale and frequency-response tests are found on the left and right of the central picture. On the updated Test Card J, the X on the noughts-and-crosses board is an indicator for aligning the centre of the screen.
The blocks of colour on the sides would cause the picture to tear horizontally if the sync circuits were not adjusted properly. The spaced lines in various parts of the screen allowed focus to be checked from centre to edge. If brightness and contrast were set incorrectly, not all parts of the greyscale would be distinct. The black bar on a white background revealed signal reflections. The castellations along the top and bottom revealed possible setup problems. A child was depicted so that a wrong skin colour would be obvious, and one not subject to changing make-up fashions. The juxtaposed garish colours of the clown were such that a common transmission error called chrominance/luminance delay inequality would make the clown's yellow buttons turn white. Modern circuitry using large-scale integration is much less susceptible to most of these problems, and some of them are associated with cathode-ray tubes; the test card has therefore become far less important. The name of the broadcasting channel appeared in the space underneath the letter F—a serif F denoting an original optical version of the test card.
Test Card F was a photographic slide made up of two transparencies in perfect registration—one containing the colour information and the other the monochrome background. The card was converted to electronic form in 1984. A sound of some kind is transmitted in the background; it is sometimes music, either a composition commissioned by the station itself or "royalty-free" stock music. Composers whose music has been used include Roger Roger, Johnny Pearson, Neil Richardson, Frank Chacksfield, Syd Dale, John Cameron, Brian Bennett, Keith Mansfield and Alan Hawkshaw. However, during more recent years in which the test card has only been played during engineering tests on the BBC, it has been more common to hear a steady tone of various pitches accompanied by a female talking clock, test card music having largely ceased to be heard in the 1980s. Along with his Test Card F co-star Carole Hersee, Bubbles has appeared for an estimated total of 70,000 hours on television, equivalent to nearly eight whole years, more than any living person other than Carole.
Bubbles's original body colour was blue and white, but the BBC engineers decided that green was needed within the scene, as the other two television primary colours, red and blue, were already shown. A green wrap was made to cover his body, and this can be seen in Test Card J and Test Card W, along with more of his body shown in the photograph—revealing the fact that he is holding a piece of chalk, not visible in Test Card F. However, the shade of green material chosen was too subtle for the engineers' liking, and so Bubbles' body colour in Test Card F was retouched to make it more saturated and to give it a higher luminance value on screen. Test Card F was used in 30 countries outside the UK. Notable overseas users included: NRK in Norway in the 1970s; SVT in Sweden in the 1970s; STW-9 in Perth, Australia; TCN-9 in Sydney, Australia; Rad
The Utah teapot, or the Newell teapot, is a 3D test model that has become a standard reference object and an in-joke within the computer graphics community. It is a mathematical model of an ordinary teapot that appears solid and convex. A teapot primitive is considered the equivalent of a "Hello, World" program, as a way to create an easy 3D scene with a somewhat complex model acting as a basic geometry reference for scene and light setup. Some programming libraries, such as the OpenGL Utility Toolkit (GLUT), have functions dedicated to drawing teapots. The teapot model was created in 1975 by early computer graphics researcher Martin Newell, a member of the pioneering graphics program at the University of Utah. For his work, Newell needed a simple mathematical model of a familiar object. His wife, Sandra Newell, suggested modelling their tea service, since they were sitting down for tea at the time. He sketched the teapot free-hand using a pencil. Following that, he went back to the computer laboratory and edited Bézier control points on a Tektronix storage tube, again by hand.
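The teapot's surface is defined by bicubic Bézier patches, each a 4×4 grid of control points. As a sketch of how such a patch is evaluated, the code below computes a surface point at parameters (u, v) using the cubic Bernstein basis; the control points here are an invented flat grid, not Newell's actual teapot data.

```python
# Evaluate a bicubic Bezier patch (the primitive the teapot is built from)
# at parameters (u, v) in [0, 1] x [0, 1] via the cubic Bernstein basis.

def bernstein3(t):
    """The four cubic Bernstein basis polynomials evaluated at t."""
    s = 1.0 - t
    return [s**3, 3*s*s*t, 3*s*t*t, t**3]

def eval_patch(ctrl, u, v):
    """Evaluate a 4x4 patch (ctrl[i][j] = (x, y, z)) at (u, v)."""
    bu, bv = bernstein3(u), bernstein3(v)
    point = [0.0, 0.0, 0.0]
    for i in range(4):
        for j in range(4):
            w = bu[i] * bv[j]  # tensor-product basis weight
            for k in range(3):
                point[k] += w * ctrl[i][j][k]
    return tuple(point)

# Illustrative control net: a flat unit square in the z = 0 plane.
ctrl = [[(i / 3, j / 3, 0.0) for j in range(4)] for i in range(4)]
print(eval_patch(ctrl, 0.5, 0.5))  # approximately (0.5, 0.5, 0.0)
```

A renderer tessellates each patch by sampling (u, v) on a grid and triangulating the resulting points, which is essentially what GLUT's teapot routine does internally.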
The teapot shape contained a number of elements that made it ideal for the graphics experiments of the time: it was round, contained saddle points, had a genus greater than zero because of the hole in the handle, could project a shadow on itself, and could be displayed without a surface texture. Newell made the mathematical data that described the teapot's geometry publicly available, and soon other researchers began to use the same data for their computer graphics experiments. These researchers needed something with roughly the same characteristics that Newell had needed, and using the teapot data meant they did not have to laboriously enter geometric data for some other object. Although technical progress has meant that the act of rendering the teapot is no longer the challenge it was in 1975, the teapot has continued to be used as a reference object for advanced graphics techniques. Over the following decades, editions of computer graphics journals featured versions of the teapot: faceted or smooth-shaded, bumpy, refractive, leopard-skin and furry teapots were created.
Having no surface to represent its base, the original teapot model was not intended to be seen from below; later versions of the data set fixed this. The real teapot is about 33% taller than the computer model: Jim Blinn stated that he scaled the model on the vertical axis during a demo in the lab to demonstrate that they could manipulate it, and they preferred the appearance of this new version enough to save the file out of that preference. The original, physical teapot was purchased from ZCMI in 1974. It was donated to the Boston Computer Museum in 1984, where it was on display until 1990. It now resides in the ephemera collection at the Computer History Museum in Mountain View, California, where it is catalogued as "Teapot used for Computer Graphics rendering" and bears the catalogue number X00398.1984. Versions of the teapot are still sold today by Friesland Porzellan in Germany, the original makers of the teapot, who were once part of the Melitta Group. Versions of the teapot model—or sample scenes containing it—are distributed with or available for nearly every current rendering and modelling program and many graphics APIs, including AutoCAD, Lightwave 3D, MODO, POV-Ray, 3ds Max, and the OpenGL and Direct3D helper libraries.
Some RenderMan-compliant renderers support the teapot as a built-in geometry by calling RiGeometry. Along with the expected cubes and spheres, the GLUT library provides the function glutSolidTeapot as a graphics primitive, as does its Direct3D counterpart D3DX; however, version 11 of DirectX no longer provides this functionality. Mac OS X Tiger and Leopard include the teapot as part of Quartz Composer. BeOS included a small demo of a rotating 3D teapot, intended to show off the platform's multimedia facilities. Teapot scenes are used for renderer self-tests and benchmarks. One famous ray-traced image, by James Arvo and David Kirk in 1987, shows six stone columns, five of which are surmounted by the Platonic solids; the sixth column supports a teapot. The image is titled "The Six Platonic Solids", with Arvo and Kirk calling the teapot "the newly discovered Teapotahedron". This image appeared on the covers of several computer graphics journals. The Utah teapot sometimes appears in the "Pipes" screensaver shipped with Microsoft Windows, but only in versions prior to Windows XP, and it has been included in the "polyhedra" XScreenSaver hack since 2008.
Jim Blinn proved an amusing version of the Pythagorean theorem: construct a teapot on each side of a right triangle, and the area of the teapot on the hypotenuse is equal to the sum of the areas of the teapots on the other two sides. Loren Carpenter's 1980 CGI film Vol Libre features the teapot, appearing at the beginning and end of the film in the foreground with a fractal-rendered mountainscape behind it. The Vulkan and OpenGL graphics APIs feature the Utah teapot, along with the Stanford dragon and the Stanford bunny, on their badges. With the advent of the first computer-generated short films, and later full-length feature films, it has become an in-joke to hide the Utah teapot in one of the film's scenes. For example, in the movie Toy Story, the Utah teapot appears in a short tea-party scene. The teapot also appears in The Simpsons episode "Treehouse of Horror VI", in which Homer discovers the "third dimension".