A software bug is an error, failure or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways. The process of finding and fixing bugs is termed "debugging" and often uses formal techniques or tools to pinpoint bugs. Since the 1950s, some computer systems have been designed to deter, detect or auto-correct various computer bugs during operations. Most bugs arise from mistakes and errors made in either a program's source code or its design, or in components and operating systems used by such programs. A few are caused by compilers producing incorrect code. A program that contains a large number of bugs, or bugs that seriously interfere with its functionality, is said to be buggy. Bugs can trigger errors that may cause the program to crash or freeze the computer. Other bugs qualify as security bugs and might, for example, enable a malicious user to bypass access controls in order to obtain unauthorized privileges. Some software bugs have been linked to disasters.
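For example, the following short Python function contains a classic off-by-one bug (a made-up illustration, not drawn from any real incident): the loop bound excludes the last element, so the program runs without crashing but silently returns an incorrect result.

    def total(prices):
        """Intended to sum all prices, but contains an off-by-one bug."""
        result = 0
        for i in range(len(prices) - 1):  # bug: skips the last element
            result += prices[i]
        return result

    print(total([10, 20, 30]))  # prints 30, but the expected result is 60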
Bugs in code that controlled the Therac-25 radiation therapy machine were directly responsible for patient deaths in the 1980s. In 1996, the European Space Agency's US$1 billion prototype Ariane 5 rocket had to be destroyed less than a minute after launch due to a bug in the on-board guidance computer program. In June 1994, a Royal Air Force Chinook helicopter crashed into the Mull of Kintyre, killing 29; the crash was initially dismissed as pilot error, but an investigation by Computer Weekly convinced a House of Lords inquiry that it may have been caused by a software bug in the aircraft's engine-control computer. In 2002, a study commissioned by the US Department of Commerce's National Institute of Standards and Technology concluded that "software bugs, or errors, are so prevalent and so detrimental that they cost the US economy an estimated $59 billion annually, or about 0.6 percent of the gross domestic product". The term "bug" to describe defects has been a part of engineering jargon since the 1870s and predates electronic computers and computer software.
For instance, Thomas Edison wrote the following words in a letter to an associate in 1878: "It has been just so in all of my inventions. The first step is an intuition, and comes with a burst, then difficulties arise—this thing gives out and [it is] then that 'Bugs'—as such little faults and difficulties are called—show themselves and months of intense watching, study and labor are requisite before commercial success or failure is certainly reached." The Middle English word bugge is the basis for the terms "bugbear" and "bugaboo", both used for a monster. Baffle Ball, the first mechanical pinball game, was advertised as being "free of bugs" in 1931. Problems with military gear during World War II were referred to as bugs. In a book published in 1942, Louise Dickinson Rich, speaking of a powered ice cutting machine, said, "Ice sawing was suspended until the creator could be brought in to take the bugs out of his darling." Isaac Asimov used the term "bug" to relate to issues with a robot in his short story "Catch That Rabbit", published in 1944.
The term "bug" was used in an account by computer pioneer Grace Hopper, who publicized the cause of a malfunction in an early electromechanical computer. A typical version of the story is: In 1946, when Hopper was released from active duty, she joined the Harvard Faculty at the Computation Laboratory where she continued her work on the Mark II and Mark III. Operators traced an error in the Mark II to a moth trapped in a relay; this bug was removed and taped to the log book. Stemming from the first bug, today we call errors or glitches in a program a bug. Hopper did not find the bug, as she acknowledged; the date in the log book was September 9, 1947. The operators who found it, including William "Bill" Burke of the Naval Weapons Laboratory, Virginia, were familiar with the engineering term and amusedly kept the insect with the notation "First actual case of bug being found." Hopper loved to recount the story. This log book, complete with attached moth, is part of the collection of the Smithsonian National Museum of American History.
The related term "debug" appears to predate its usage in computing: the Oxford English Dictionary's etymology of the word contains an attestation from 1945, in the context of aircraft engines. The concept that software might contain errors dates back to Ada Lovelace's 1843 notes on the analytical engine, in which she speaks of the possibility of program "cards" for Charles Babbage's analytical engine being erroneous:... an analysing process must have been performed in order to furnish the Analytical Engine with the necessary operative data. Granted that the actual mechanism is unerring in its processes, the cards may give it wrong orders; the first documented use of the term "bug" for a technical malfunction was by Thomas Edison. The Open Technology Institute, run by the group, New America, released a report "Bugs in the System" in August 2016 stating that U. S. policymakers should make reforms to help researchers address software bugs. The report "highlights the need for reform in the field of software vulnerability discovery and disclosure."
One of the report’s authors said that Congress has not done enough to address cyber software vulnerability, even though it has passed a number of bills to combat the larger issue of cyber security. Government researchers and cyber security experts are typically the ones who discover software flaws.
Software testing is an investigation conducted to provide stakeholders with information about the quality of the software product or service under test. Software testing can provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include the process of executing a program or application with the intent of finding software bugs, and verifying that the software product is fit for use. Software testing involves the execution of a software component or system component to evaluate one or more properties of interest. In general, these properties indicate the extent to which the component or system under test: meets the requirements that guided its design and development, responds correctly to all kinds of inputs, performs its functions within an acceptable time, is sufficiently usable, can be installed and run in its intended environments, and achieves the general result its stakeholders desire. As the number of possible tests for even simple software components is practically infinite, all software testing uses some strategy to select tests that are feasible for the available time and resources.
As a result, software testing attempts to execute a program or application with the intent of finding software bugs. Testing is an iterative process: when one bug is fixed, the fix can illuminate other, deeper bugs, or can even create new ones. Software testing can provide objective, independent information about the quality of software and the risk of its failure to users or sponsors. Software testing can be conducted as soon as executable software (even if partially complete) exists; the overall approach to software development determines when and how testing is conducted. For example, in a phased process, most testing occurs after system requirements have been defined and implemented in testable programs. In contrast, under an agile approach, requirements and testing are done concurrently. Although testing can determine the correctness of software under the assumption of some specific hypotheses, testing cannot identify all the defects within the software. Instead, it furnishes a criticism or comparison that compares the state and behavior of the product against test oracles—principles or mechanisms by which someone might recognize a problem.
These oracles may include specifications, comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, applicable laws, or other criteria. A primary purpose of testing is to detect software failures so that defects may be discovered and corrected. Testing cannot establish that a product functions properly under all conditions, but only that it does not function properly under specific conditions. The scope of software testing includes the examination of code, the execution of that code in various environments and conditions, and the examination of other aspects of the code: does it do what it is supposed to do, and does it do what it needs to do? In the current culture of software development, a testing organization may be separate from the development team. There are various roles for testing team members. Information derived from software testing may be used to correct the process by which software is developed.
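As a small illustration of the oracle idea (the function names here are invented for the example, not taken from any standard), the following Python test uses a trusted but slower reference implementation as the oracle for the implementation under test:

    import unittest

    def fast_mean(values):
        """Implementation under test."""
        return sum(values) / len(values)

    def reference_mean(values):
        """Trusted oracle: an independent, obviously-correct computation."""
        total = 0.0
        for v in values:
            total += v
        return total / len(values)

    class MeanTest(unittest.TestCase):
        def test_against_oracle(self):
            # Any disagreement with the oracle signals a problem.
            for case in ([1.0], [1.0, 2.0, 3.0], [-5.0, 5.0]):
                self.assertAlmostEqual(fast_mean(case), reference_mean(case))

    if __name__ == "__main__":
        unittest.main()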
Every software product has a target audience. For example, the audience for video game software is different from that for banking software. Therefore, when an organization develops or otherwise invests in a software product, it can assess whether the software product will be acceptable to its end users, its target audience, its purchasers and other stakeholders. Software testing aids the process of attempting to make this assessment. Not all software defects are caused by coding errors. One common source of expensive defects is requirement gaps, i.e. unrecognized requirements that result in errors of omission by the program designer. Requirement gaps can be non-functional requirements such as testability, maintainability, usability and security. Software faults occur through the following process: a programmer makes an error (mistake), which results in a defect (fault, bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong results, causing a failure. Not all defects will result in failures. For example, defects in dead code will never result in failures.
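A minimal, purely illustrative sketch of that error-defect-failure chain (the function names are invented for this example): the first defect only becomes a failure for inputs that execute the faulty branch, while the second sits in dead code and can never cause a failure.

    def shipping_cost(weight_kg):
        """Contains a defect: the heavy-parcel branch uses the wrong rate."""
        if weight_kg > 20:
            return weight_kg * 1.0   # defect: the rate should be 2.0
        return weight_kg * 0.5       # correct branch

    def unused_discount(price):      # dead code: never called anywhere
        return price * 1.1           # also a defect, but it can never fail

    print(shipping_cost(5))   # 2.5  -- correct; the defect is not executed
    print(shipping_cost(40))  # 40.0 -- a failure; the result should be 80.0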
A defect can turn into a failure when the environment is changed. Examples of these changes in environment include the software being run on a new computer hardware platform, alterations in source data, or interaction with different software. A single defect may result in a wide range of failure symptoms. A fundamental problem with software testing is that testing under all combinations of inputs and preconditions is not feasible, even with a simple product. This means that the number of defects in a software product can be very large, and defects that occur infrequently are difficult to find in testing. Moreover, non-functional dimensions of quality—usability, performance, reliability—can be subjective. Software developers can't test everything, but they can use combinatorial test design to identify the minimum number of tests needed to get the coverage they want. Combinatorial test design enables users to get greater test coverage with fewer tests. Whether they are looking for speed or test depth, they can use combinatorial test design methods to build structured variation into their test cases.
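As a sketch of the idea (a greedy all-pairs reduction written for this article, not any production tool), the following Python code covers every pair of parameter values with far fewer test cases than the full cross product:

    from itertools import combinations, product

    def pairs_of(case):
        """All (position, value) pairs appearing in one test case."""
        return set(combinations(enumerate(case), 2))

    def pairwise_suite(value_lists):
        """Greedy covering: repeatedly pick the candidate case that
        covers the most still-uncovered value pairs."""
        candidates = list(product(*value_lists))
        uncovered = set().union(*(pairs_of(c) for c in candidates))
        suite = []
        while uncovered:
            best = max(candidates, key=lambda c: len(pairs_of(c) & uncovered))
            suite.append(best)
            uncovered -= pairs_of(best)
        return suite

    # Four parameters: the full cross product has 3 * 3 * 2 * 2 = 36 cases,
    # but every pair of values is covered by a much smaller suite.
    suite = pairwise_suite([["chrome", "firefox", "safari"],
                            ["windows", "mac", "linux"],
                            ["en", "de"],
                            ["wifi", "wired"]])
    for case in suite:
        print(case)
    print(len(suite), "test cases instead of 36")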
NDepend is a static analysis tool for .NET managed code. The tool supports a large number of code metrics and allows users to visualize dependencies using directed graphs and a dependency matrix. The tool also performs comparisons of code base snapshots and validation of architectural and quality rules. User-defined rules can be written using LINQ queries; this feature is named CQLinq. The tool comes with a large number of predefined CQLinq code rules. Code rules can be checked automatically during continuous integration. The main features of NDepend are:
- Dependency visualization
- Software metrics
- Declarative code rules over LINQ queries (CQLinq)
- Integration with CruiseControl and TeamCity
- Optional code constraints in the source code using CLI attributes
- Version comparison of two versions of the same assembly
- NDepend.API
Recent releases have added, among other things:
- Improvements to code queries, reports, the dashboard, the default rules-set, and the baseline experience
- Enhanced Visual Studio integration, with planned support for VS2017 / dev15
- .NET Core support
- Better issues management, quality gates, and smart technical debt estimation
- New support for Visual Studio Team Services and a new licensing and release scheme
- Integration with Visual Studio 2015
- Colored code metric view and intuitive display of code coverage percentage
- Rule files shareable amongst projects
- Descriptions and how-to-fix guidance for default rules, and fewer false positives from them
- Removal of compiler-generated code from results, and async support
- Integration with TFS, SonarQube, and TeamCity
- Support for Visual Studio themes and for high DPI resolution
All recent versions of the tool propose live code queries and code rules through LINQ queries; this is one of the main innovations of NDepend. For example, a rule that warns when a base class uses its derivatives:

    // <Name>Base class should not use derivatives</Name>
    warnif count > 0
    from baseClass in JustMyCode.Types
    where baseClass.IsClass && baseClass.NbChildren > 0 // <-- for optimization!
    let derivedClassesUsed = baseClass.DerivedTypes.UsedBy(baseClass)
    where derivedClassesUsed.Count() > 0
    select new { baseClass, derivedClassesUsed }

And a rule that warns when already-complex methods become more complex:

    // <Name>Avoid making complex methods more complex</Name>
    warnif count > 0
    from m in JustMyCode.Methods
    where !m.IsAbstract && m.IsPresentInBothBuilds() && m.CodeWasChanged()
    let oldCC = m.OlderVersion().CyclomaticComplexity
    where oldCC > 6 && m.CyclomaticComplexity > oldCC
    select new { m, oldCC, newCC = m.CyclomaticComplexity }

Additionally, the tool proposes a live CQLinq query editor with code completion and embedded documentation.
Graph drawing is an area of mathematics and computer science combining methods from geometric graph theory and information visualization to derive two-dimensional depictions of graphs arising from applications such as social network analysis, cartography and bioinformatics. A drawing of a graph or network diagram is a pictorial representation of the vertices and edges of a graph. This drawing should not be confused with the graph itself: different layouts can correspond to the same graph. In the abstract, all that matters is which pairs of vertices are connected by edges. In the concrete, the arrangement of these vertices and edges within a drawing affects its understandability, fabrication cost, and aesthetics. The problem gets worse if the graph changes over time by adding and deleting edges and the goal is to preserve the user's mental map. Graphs are drawn as node–link diagrams in which the vertices are represented as disks, boxes, or textual labels and the edges are represented as line segments, polylines, or curves in the Euclidean plane.
Node–link diagrams can be traced back to the 13th-century work of Ramon Llull, who drew diagrams of this type for complete graphs in order to analyze all pairwise combinations among sets of metaphysical concepts. In the case of directed graphs, arrowheads are a commonly used graphical convention to show their orientation. Upward planar drawing uses the convention that every edge is oriented from a lower vertex to a higher vertex, making arrowheads unnecessary. Alternative conventions to node–link diagrams include adjacency representations such as circle packings, in which vertices are represented by disjoint regions in the plane and edges are represented by adjacencies between regions. Many different quality measures have been defined for graph drawings, in an attempt to find objective means of evaluating their aesthetics and usability. In addition to guiding the choice between different layout methods for the same graph, some layout methods attempt to directly optimize these measures. The crossing number of a drawing is the number of pairs of edges that cross each other.
If the graph is planar, it is convenient to draw it without any edge intersections. However, nonplanar graphs arise in applications, so graph drawing algorithms must allow for edge crossings. The area of a drawing is the size of its smallest bounding box, relative to the closest distance between any two vertices. Drawings with smaller area are preferable to those with larger area, because they allow the features of the drawing to be shown at greater size and therefore more legibly. The aspect ratio of the bounding box may also be important. Symmetry display is the problem of finding symmetry groups within a given graph, and finding a drawing that displays as much of the symmetry as possible. Some layout methods automatically lead to symmetric drawings. It is important that edges have shapes that are as simple as possible, to make it easier for the eye to follow them. In polyline drawings, the complexity of an edge may be measured by its number of bends, and many methods aim to provide drawings with few total bends or few bends per edge.
For spline curves, the complexity of an edge may be measured by the number of control points on the edge. Several commonly used quality measures concern lengths of edges: it is desirable to minimize the total length of the edges as well as the maximum length of any edge. Additionally, it may be preferable for the lengths of edges to be uniform rather than varied. Angular resolution is a measure of the sharpest angles in a graph drawing. If a graph has vertices with high degree, it will necessarily have small angular resolution, but the angular resolution can be bounded below by a function of the degree. The slope number of a graph is the minimum number of distinct edge slopes needed in a drawing with straight line segment edges. Cubic graphs have slope number at most four, but graphs of degree five may have unbounded slope number. There are many different graph layout strategies. In force-based layout systems, the graph drawing software modifies an initial vertex placement by continuously moving the vertices according to a system of forces based on physical metaphors related to systems of springs or molecular mechanics.
These systems combine attractive forces between adjacent vertices with repulsive forces between all pairs of vertices, in order to seek a layout in which edge lengths are small while vertices are well-separated. These systems may perform gradient descent based minimization of an energy function, or they may translate the forces directly into velocities or accelerations for the moving vertices. Spectral layout methods use as coordinates the eigenvectors of a matrix such as the Laplacian derived from the adjacency matrix of the graph.
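A minimal sketch of such a force-based system in Python (written purely for illustration; the constants and update rule are simplistic choices, not those of any particular published algorithm):

    import math
    import random

    def force_layout(vertices, edges, steps=200, k=1.0, step_size=0.02):
        """Spring-embedder style layout: attraction along edges,
        repulsion between all pairs of vertices."""
        pos = {v: [random.random(), random.random()] for v in vertices}
        for _ in range(steps):
            force = {v: [0.0, 0.0] for v in vertices}
            # Repulsive force between every pair of vertices.
            for i, u in enumerate(vertices):
                for v in vertices[i + 1:]:
                    dx = pos[u][0] - pos[v][0]
                    dy = pos[u][1] - pos[v][1]
                    d = math.hypot(dx, dy) or 1e-9
                    f = k * k / d  # repulsion decays with distance
                    force[u][0] += f * dx / d
                    force[u][1] += f * dy / d
                    force[v][0] -= f * dx / d
                    force[v][1] -= f * dy / d
            # Attractive force along each edge.
            for u, v in edges:
                dx = pos[u][0] - pos[v][0]
                dy = pos[u][1] - pos[v][1]
                d = math.hypot(dx, dy) or 1e-9
                f = d * d / k  # attraction grows with distance
                force[u][0] -= f * dx / d
                force[u][1] -= f * dy / d
                force[v][0] += f * dx / d
                force[v][1] += f * dy / d
            # Translate forces directly into small, capped displacements.
            for v in vertices:
                fx, fy = force[v]
                mag = math.hypot(fx, fy) or 1e-9
                move = min(step_size * mag, 0.05)  # cap movement for stability
                pos[v][0] += move * fx / mag
                pos[v][1] += move * fy / mag
        return pos

    # A 4-cycle settles into a roughly square layout.
    print(force_layout(["a", "b", "c", "d"],
                       [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]))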
Visualization or visualisation is any technique for creating images, diagrams, or animations to communicate a message. Visualization through visual imagery has been an effective way to communicate both abstract and concrete ideas since the dawn of humanity. Examples from history include cave paintings, Egyptian hieroglyphs, Greek geometry, and Leonardo da Vinci's revolutionary methods of technical drawing for engineering and scientific purposes. Visualization today has ever-expanding applications in science, engineering, interactive multimedia, and more. A typical visualization application is the field of computer graphics. The invention of computer graphics may be the most important development in visualization since the invention of central perspective in the Renaissance period. The development of animation also helped advance visualization. The use of visualization to present information is not a new phenomenon: it has been used in maps, scientific drawings, and data plots for over a thousand years. Examples from cartography include Ptolemy's Geographia, a map of China, and Minard's map of Napoleon's invasion of Russia from a century and a half ago.
Most of the concepts learned in devising these images carry over in a straightforward manner to computer visualization. Edward Tufte has written three critically acclaimed books on the subject. Computer graphics has from its beginning been used to study scientific problems. However, in its early days the lack of graphics power limited its usefulness. The recent emphasis on visualization started in 1987 with the publication of Visualization in Scientific Computing, a special issue of Computer Graphics. Since then, there have been several conferences and workshops, co-sponsored by the IEEE Computer Society and ACM SIGGRAPH, devoted to the general topic and to special areas in the field, for example volume visualization. Most people are familiar with the digital animations produced to present meteorological data during weather reports on television, though few can distinguish between those models of reality and the satellite photos that are also shown on such programs. TV also offers scientific visualizations when it shows computer-drawn and animated reconstructions of road or airplane accidents.
Some of the most popular examples of scientific visualizations are computer-generated images that show real spacecraft in action, out in the void far beyond Earth, or on other planets. Dynamic forms of visualization, such as educational animation or timelines, have the potential to enhance learning about systems that change over time. Apart from the distinction between interactive visualizations and animation, the most useful categorization is between abstract and model-based scientific visualizations. The abstract visualizations show conceptual constructs in 2D or 3D; these generated shapes are arbitrary. The model-based visualizations either place overlays of data on real or digitally constructed images of reality, or make a digital construction of a real object directly from the scientific data. Scientific visualization is usually done with specialized software, though there are a few exceptions, noted below. Some of these specialized programs have been released as open source software, often having their origins in universities, within an academic environment where sharing software tools and giving access to the source code is common.
There are also many proprietary software packages of scientific visualization tools. Models and frameworks for building visualizations include the data flow models popularized by systems such as AVS, IRIS Explorer, and the VTK toolkit, and data state models in spreadsheet systems such as the Spreadsheet for Visualization and the Spreadsheet for Images. As a subject in computer science, scientific visualization is the use of interactive, sensory representations, typically visual, of abstract data to reinforce cognition, hypothesis building, and reasoning. Data visualization is a related subcategory of visualization dealing with statistical graphics and geographic or spatial data abstracted in schematic form. Scientific visualization is the transformation, selection, or representation of data from simulations or experiments, with an implicit or explicit geometric structure, to allow the exploration and understanding of the data. Scientific visualization focuses and emphasizes the representation of higher order data using graphics and animation techniques.
It is an important part of visualization and maybe the first one, as the visualization of experiments and phenomena is as old as science itself. Traditional areas of scientific visualization are flow visualization, medical visualization, astrophysical visualization, and chemical visualization. There are several different techniques to visualize scientific data, with isosurface reconstruction and direct volume rendering being the more common. Educational visualization is using a simulation to create an image of something so it can be taught about. This is useful when teaching about a topic that is difficult to otherwise see, for example atomic structure, because atoms are far too small to be studied without expensive and difficult-to-use scientific equipment. Information visualization concentrates on the use of computer-supported tools to explore large amounts of abstract data. The term "information visualization" was coined by the User Interface Research Group at Xerox PARC, which included Jock Mackinlay. Practical application of information visualization in computer programs involves selecting and representing abstract data in a form that facilitates human interaction for exploration and understanding.
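As a small illustration of isosurface reconstruction (a sketch assuming the scikit-image library is available; the scalar field here is a synthetic sphere, not real scientific data):

    import numpy as np
    from skimage import measure

    # Synthetic scalar field: distance from the center of a 64^3 grid.
    x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
    field = np.sqrt(x**2 + y**2 + z**2)

    # Extract the isosurface where field == 0.5 (a sphere of radius 0.5),
    # as a triangle mesh suitable for rendering.
    verts, faces, normals, values = measure.marching_cubes(field, level=0.5)
    print(f"{len(verts)} vertices, {len(faces)} triangles")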