Rapid application development
Rapid application development (RAD), also called rapid-application building, is both a general term for adaptive software development approaches and the name of James Martin's approach to rapid development. In general, RAD approaches to software development put less emphasis on planning and more emphasis on an adaptive process. Prototypes are used in addition to, or sometimes in place of, design specifications. RAD is well suited for developing software driven by user interface requirements; graphical user interface builders are often called rapid application development tools. Other approaches to rapid development include the adaptive, agile, and unified models. Rapid application development was a response to the plan-driven waterfall processes developed in the 1970s and 1980s, such as the Structured Systems Analysis and Design Method. One of the problems with those methods was that they were based on a traditional engineering model used to design and build things like bridges and buildings. Software is an inherently different kind of artifact.
Software can radically change the entire process used to solve a problem, so knowledge gained from the development process itself can feed back into the requirements and design of the solution. Plan-driven approaches attempt to rigidly define the requirements, the solution, and the plan to implement it, and have a process that discourages changes. RAD approaches, on the other hand, recognize that software development is a knowledge-intensive process and provide flexible processes that help take advantage of knowledge gained during the project to improve or adapt the solution. The first such RAD alternative, developed by Barry Boehm, was known as the spiral model. Boehm's and subsequent RAD approaches emphasized developing prototypes as well as, or instead of, rigorous design specifications. Prototypes had several advantages over traditional specifications. Risk reduction: a prototype could test some of the most difficult potential parts of the system early in the life-cycle, providing valuable information about the feasibility of a design and preventing the team from pursuing solutions that turn out to be too complex or time-consuming to implement.
This benefit of finding problems earlier rather than later in the life-cycle was a key benefit of the RAD approach: the earlier a problem is found, the cheaper it is to fix. Users are better at reacting than at creating specifications: in the waterfall model it was common for a user to sign off on a set of requirements, only to realize when presented with an implemented system that the design lacked critical features or was too complex. In general, most users give much more useful feedback when they can experience a prototype of the running system than when asked to abstractly define what that system should be. Prototypes can evolve into the completed product: one approach used in some RAD methods was to build the system as a series of prototypes that evolve from minimal functionality to moderately useful to the final completed system. The advantage of this, besides the two advantages above, was that users could get useful business functionality much earlier in the process. Starting with the ideas of Barry Boehm and others, James Martin developed the rapid application development approach during the 1980s at IBM and formalized it in 1991 by publishing the book Rapid Application Development.
This has resulted in some confusion over the term RAD among IT professionals: it is important to distinguish between RAD as a general alternative to the waterfall model and RAD as the specific method created by Martin. The Martin method was tailored toward knowledge-intensive and UI-intensive business systems. These ideas were further developed and improved upon by RAD pioneers such as James Kerr and Richard Hunter, who together wrote the seminal book on the subject, Inside RAD, which followed a RAD project manager as he drove and refined the RAD methodology in real time on an actual RAD project. These practitioners, and those like them, helped RAD gain popularity as an alternative to traditional systems project life-cycle approaches. The RAD approach matured during the period of peak interest in business re-engineering. The idea of business process re-engineering was to radically rethink core business processes, such as sales and customer support, with the new capabilities of information technology in mind.
RAD was often an essential part of larger business re-engineering programs. The rapid prototyping approach of RAD was a key tool to help users and analysts "think outside the box" about innovative ways that technology might radically reinvent a core business process. The James Martin approach to RAD divides the process into four distinct phases. Requirements planning phase – combines elements of the system planning and systems analysis phases of the Systems Development Life Cycle (SDLC). Users and IT staff members discuss and agree on business needs, project scope, and system requirements; the phase ends when the team agrees on the key issues and obtains management authorization to continue. User design phase – during this phase, users interact with systems analysts and develop models and prototypes that represent all system processes and outputs. The RAD groups or subgroups use a combination of Joint Application Development (JAD) techniques and CASE tools to translate user needs into working models. User design is a continuous, interactive process that allows users to understand and approve a working model of the system that meets their needs.
Construction phase – focuses on program and application development tasks similar to the SDLC. In RAD, users c
Kanban is a lean method for managing and improving work across human systems. It aims to manage work by balancing demands with available capacity and by improving the handling of system-level bottlenecks. Work items are visualized to give participants a view of progress and process, from start to finish, via a Kanban board, and work is pulled as capacity permits rather than being pushed into the process. In knowledge work and in software development, the aim is to provide a visual process-management system that aids decision-making about what and how much to produce. The underlying Kanban method originated in lean manufacturing; it is now also used in software development and other technology-related work, and has been combined with other methods or frameworks such as Scrum. David Anderson's 2010 book describes the method's evolution from a 2004 project at Microsoft, which used a theory-of-constraints approach and incorporated a drum-buffer-rope, to a 2006–2007 project at Corbis in which the kanban method was identified.
In 2009, Don Reinertsen published a book on second-generation lean product development which describes the adoption of the kanban system and the use of data collection and an economic model for management decision-making. Another early contribution came from Corey Ladas, whose 2009 book Scrumban suggested that kanban could improve Scrum for software development; Ladas saw Scrumban as the transition from Scrum to Kanban. Jim Benson and Tonianne DeMaria Barry published Personal Kanban, applying Kanban to individuals and small teams, in 2011. In Kanban from the Inside, Mike Burrows explained kanban's principles and underlying values and related them to earlier theories and models. Kanban Change Leadership, by Klaus Leopold and Siegfried Kaltenecker, explained the method from the perspective of change management and provided guidance for change initiatives. A condensed guide to the method was published in 2016, incorporating improvements and extensions from the early kanban projects. Although Kanban does not require that a team or organization use a Kanban board, boards are commonly used to visualize the flow of work.
A Kanban board shows how work moves from left to right; each column represents a stage within the value stream. In some cases each column also has a work-in-progress (WIP) limit, meaning the column can hold only a fixed number of work items; the aim is to encourage focus and make system constraints evident. Boards are designed for the context in which they are used and may show work item types, columns delineating workflow activities, explicit policies, and swimlanes. The aim is to make the general workflow and the progress of individual items clear to participants and stakeholders. Although the kanban method is most associated with software development and software teams, it has been applied to other aspects of knowledge work. Business functions which have used kanban include human resources and recruitment, marketing, and organizational strategy and executive leadership. Further reading: Kanban: Successful Evolutionary Change for Your Technology Business, David J. Anderson.
Blue Hole Press, 2010. ISBN 978-0984521401.
Scrumban: Essays on Kanban Systems for Lean Software Development, Corey Ladas. Modus Cooperandi Press, 2009. ISBN 978-0578002149.
Agile Project Management with Kanban, Eric Brechner. ISBN 978-0735698956.
Kanban in Action, Marcus Hammarberg and Joakim Sunden. ISBN 978-1-617291-05-0.
Lean from the Trenches: Managing Large-Scale Projects with Kanban, Henrik Kniberg. ISBN 978-1-93435-685-2.
Stop Starting, Start Finishing!, Arne Roock and Claudia Leschik. ISBN 978-0985305161.
Real-World Kanban: Do Less, Accomplish More with Lean Thinking, Mattias Skarin. ISBN 978-1680500776.
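The pull system and per-column WIP limits described above can be sketched in a few lines of code. The following is a minimal illustration, not a tool the Kanban literature prescribes; the column names, limits, and class name are all made up for the example.

```python
from collections import OrderedDict

class KanbanBoard:
    """A toy Kanban board: ordered columns, each with an optional WIP limit."""

    def __init__(self, columns):
        # columns: list of (name, wip_limit); a limit of None means unlimited
        self.columns = OrderedDict(
            (name, {"limit": limit, "items": []}) for name, limit in columns)

    def add(self, column, item):
        col = self.columns[column]
        if col["limit"] is not None and len(col["items"]) >= col["limit"]:
            raise ValueError(f"WIP limit reached in '{column}'")
        col["items"].append(item)

    def pull(self, source, target):
        """Pull the oldest item from source into target, respecting the
        target's WIP limit -- work is pulled as capacity permits."""
        if not self.columns[source]["items"]:
            raise ValueError(f"nothing to pull from '{source}'")
        item = self.columns[source]["items"][0]
        self.add(target, item)               # raises if target is at its limit
        self.columns[source]["items"].pop(0)
        return item

board = KanbanBoard([("To do", None), ("In progress", 2), ("Done", None)])
for task in ["A", "B", "C"]:
    board.add("To do", task)
board.pull("To do", "In progress")
board.pull("To do", "In progress")
# A third pull would raise: "In progress" is at its WIP limit of 2.
```

The WIP limit is what makes this a pull system rather than a push system: a downstream column at capacity refuses new work, which makes the bottleneck visible instead of letting work pile up invisibly.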
Test-driven development (TDD) is a software development process that relies on the repetition of a short development cycle: requirements are turned into specific test cases, then the software is improved so that the tests pass. This is opposed to software development that allows software to be added that is not proven to meet requirements. American software engineer Kent Beck, who is credited with having developed or "rediscovered" the technique, stated in 2003 that TDD encourages simple designs and inspires confidence. Test-driven development is related to the test-first programming concepts of extreme programming, begun in 1999, but more recently has created more general interest in its own right. Programmers also apply the concept to improving and debugging legacy code developed with older techniques. The following sequence is based on the book Test-Driven Development by Example. 1. Add a test. In test-driven development, each new feature begins with writing a test. Write a test that defines a function or improvements of a function, which should be succinct.
To write a test, the developer must understand the feature's specification and requirements. The developer can accomplish this through use cases and user stories to cover the requirements and exception conditions, and can write the test in whatever testing framework is appropriate to the software environment; it could even be a modified version of an existing test. This is a differentiating feature of test-driven development versus writing unit tests after the code is written: it makes the developer focus on the requirements before writing the code, a subtle but important difference. 2. Run all tests and see if the new test fails. This validates that the test harness is working correctly, shows that the new test does not mistakenly pass without requiring any new code, and rules out the possibility that the new test is flawed and will always pass. The new test should fail for the expected reason. This step increases the developer's confidence in the new test. 3. Write the code. The next step is to write some code that causes the test to pass.
The new code written at this stage is not perfect and may, for example, pass the test in an inelegant way. That is acceptable because it will be improved and honed in step 5. At this point, the only purpose of the written code is to pass the test; the programmer must not write code beyond the functionality that the test checks. 4. Run tests. If all test cases now pass, the programmer can be confident that the new code meets the test requirements and does not break or degrade any existing features. If they do not, the new code must be adjusted. 5. Refactor code. The growing code base must be cleaned up during test-driven development. New code can be moved from where it was convenient for passing a test to where it more logically belongs. Duplication must be removed. Object, module, and method names should represent their current purpose and use, since extra functionality is added over time. As features are added, method bodies can get longer and other objects larger; they benefit from being split and their parts named to improve readability and maintainability, which will be valuable later in the software lifecycle.
Inheritance hierarchies may be rearranged to be more logical and helpful, and perhaps to benefit from recognized design patterns. There are general guidelines for refactoring and for creating clean code. By continually re-running the test cases throughout each refactoring phase, the developer can be confident that the process is not altering any existing functionality. The concept of removing duplication is an important aspect of any software design. In this case, however, it also applies to the removal of any duplication between the test code and the production code, for example magic numbers or strings repeated in both in order to make the test pass in step 3. Repeat. Starting with another new test, the cycle is repeated to push forward the functionality. The size of the steps should always be small, with as few as 1 to 10 edits between each test run. If new code does not satisfy a new test, or other tests fail unexpectedly, the programmer should undo or revert in preference to excessive debugging. Continuous integration helps by providing revertible checkpoints.
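The cycle above can be made concrete with a small example using Python's unittest framework. The function `is_leap_year` and its rules are hypothetical, chosen only to illustrate the red-green-refactor rhythm; the comments map each part of the file to the numbered steps.

```python
import unittest

# Step 1 (add a test): the test is written first, defining the desired
# behavior of an is_leap_year function that does not exist yet.
class TestLeapYear(unittest.TestCase):
    def test_century_years(self):
        self.assertTrue(is_leap_year(2000))   # divisible by 400
        self.assertFalse(is_leap_year(1900))  # divisible by 100 but not 400

    def test_ordinary_years(self):
        self.assertTrue(is_leap_year(2024))
        self.assertFalse(is_leap_year(2023))

# Step 2 (run and see it fail): at this point the suite fails with a
# NameError, proving the new test does not pass trivially.

# Step 3 (write the code): just enough implementation to make the tests pass.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Steps 4 and 5: re-run the tests (they now pass), then refactor freely,
# re-running the suite after every change to confirm behavior is preserved.
```

Run with `python -m unittest` after each step; the failing run in step 2 is as important as the passing run in step 4, because it validates the test itself.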
When using external libraries, it is important not to make increments that are so small as to be merely testing the library itself, unless there is some reason to believe that the library is buggy or not sufficiently feature-complete to serve all the needs of the software under development. There are various aspects to using test-driven development, for example the principles of "keep it simple, stupid" and "you aren't gonna need it". By focusing on writing only the code necessary to pass tests, designs can be cleaner and clearer than is often achieved by other methods. In Test-Driven Development by Example, Kent Beck also suggests the principle "fake it till you make it". To achieve some advanced design concept such as a design pattern, tests are written that generate that design; the code may remain simpler than the target pattern, but still pass all required tests. This can be unsettling at first, but it allows the developer to focus only on what is important. Writing the tests first: the tests should be written before the functionality to be tested.
This has been claimed to have many benefits. It helps ensure that the application is written for testability, as the developers must consider how to test the application from the outset rather than adding it later, and it ensures that tests for every feature get written. Additionally, writing the tests first leads to a deeper and earlier understanding of the product requirements, ensures the effectiveness of the
Agile software development
Agile software development is an approach to software development under which requirements and solutions evolve through the collaborative effort of self-organizing and cross-functional teams and their customers/end users. It advocates adaptive planning, evolutionary development, early delivery, and continual improvement, and it encourages rapid and flexible response to change. The term agile was popularized by the Manifesto for Agile Software Development. The values and principles espoused in this manifesto were derived from, and underpin, a broad range of software development frameworks, including Scrum and Kanban. There is significant anecdotal evidence that adopting agile practices and values improves the agility of software professionals and organizations. Iterative and incremental development methods can be traced back as early as 1957, with evolutionary project management and adaptive software development emerging in the early 1970s. During the 1990s, a number of lightweight software development methods evolved in reaction to the prevailing heavyweight methods that critics described as overly regulated and micro-managed.
These included, among others, rapid application development, from 1991; Scrum, from 1995; and extreme programming, from 1996. Although these all originated before the publication of the Agile Manifesto, they are now collectively referred to as agile software development methods. At the same time, similar changes were underway in aerospace. In 2001, seventeen software developers met at a resort in Snowbird, Utah to discuss these lightweight development methods; they included Kent Beck, Ward Cunningham, Dave Thomas, Jeff Sutherland, Ken Schwaber, Jim Highsmith, Alistair Cockburn, and Robert C. Martin. Together they published the Manifesto for Agile Software Development. In 2005, a group headed by Cockburn and Highsmith wrote an addendum of project management principles, the PM Declaration of Interdependence, to guide software project management according to agile software development methods. In 2009, a group working with Martin wrote an extension of software development principles, the Software Craftsmanship Manifesto, to guide agile software development according to professional conduct and mastery.
In 2011, the Agile Alliance created the Guide to Agile Practices, an evolving open-source compendium of the working definitions of agile practices and elements, along with interpretations and experience guidelines from the worldwide community of agile practitioners. Based on their combined experience of developing software and helping others do that, the seventeen signatories to the manifesto proclaimed that they value: individuals and interactions over processes and tools; working software over comprehensive documentation; customer collaboration over contract negotiation; and responding to change over following a plan. That is to say, the items on the left are valued more than the items on the right. As Scott Ambler elucidated: tools and processes are important, but it is more important to have competent people working together effectively. Good documentation is useful in helping people to understand how the software is built and how to use it, but the main point of development is to create software, not documentation.
A contract is important, but it is no substitute for working with customers to discover what they need. A project plan is important, but it must not be too rigid to accommodate changes in technology or the environment, stakeholders' priorities, and people's understanding of the problem and its solution. Some of the authors formed the Agile Alliance, a non-profit organization that promotes software development according to the manifesto's values and principles. Introducing the manifesto on behalf of the Agile Alliance, Jim Highsmith said: The Agile movement is not anti-methodology; in fact, many of us want to restore credibility to the word methodology. We want to restore a balance. We embrace modeling, but not in order to file some diagram in a dusty corporate repository. We embrace documentation, but not hundreds of pages of rarely-used tomes. We recognize the limits of planning in a turbulent environment. Those who would brand proponents of XP or SCRUM or any of the other Agile Methodologies as "hackers" are ignorant of both the methodologies and the original definition of the term hacker.
The Manifesto for Agile Software Development is based on twelve principles: customer satisfaction through early and continuous delivery of valuable software; welcoming changing requirements, even in late development; delivering working software frequently; close, daily cooperation between business people and developers; building projects around motivated individuals, who should be trusted; face-to-face conversation as the best form of communication; working software as the primary measure of progress; sustainable development, able to maintain a constant pace; continuous attention to technical excellence and good design; simplicity (the art of maximizing the amount of work not done) as essential; the best architectures and designs emerging from self-organizing teams; and having the team regularly reflect on how to become more effective and adjust accordingly. Most agile development methods break product development work into small increments that minimize the amount of up-front planning and design. Iterations, or sprints, are short time frames that last from one to four weeks.
Each iteration involves a cross-functional team working in all functions: pl
Scrum (software development)
Scrum is an agile framework for managing knowledge work, with an emphasis on software development, although it has wide application in other fields and is starting to be explored by traditional project teams more generally. It is designed for teams of three to nine members who break their work into actions that can be completed within timeboxed iterations, called sprints, no longer than one month and most commonly two weeks, and who track progress and re-plan in 15-minute time-boxed stand-up meetings, called daily scrums. Approaches to coordinating the work of multiple scrum teams in larger organizations include large-scale scrum, the scaled agile framework, scrum of scrums, Scrum@Scale, and the Nexus, among others. Scrum is a lightweight and incremental framework for managing product development. It defines "a flexible, holistic product development strategy where a development team works as a unit to reach a common goal", challenges assumptions of the "traditional, sequential approach" to product development, and enables teams to self-organize by encouraging physical co-location or close online collaboration of all team members, as well as daily face-to-face communication among all team members and disciplines involved.
A key principle of Scrum is the dual recognition that customers will change their minds about what they want or need and that there will be unpredictable challenges, for which a predictive or planned approach is not suited. As such, Scrum adopts an evidence-based, empirical approach: accepting that the problem cannot be fully understood or defined up front, and instead focusing on how to maximize the team's ability to deliver, to respond to emerging requirements, and to adapt to evolving technologies and changes in market conditions. Scrum is sometimes seen written in all capitals, as SCRUM; the word is not an acronym, so this styling is not correct. While the trademark on the term Scrum itself has been allowed to lapse, it is deemed to be owned by the wider community rather than an individual, so the leading capital for Scrum is retained in this article. Many of the terms used in Scrum are typically written with leading capitals. However, to maintain an encyclopedic tone, this article uses normal sentence case for these terms, unless they are recognized marks.
Hirotaka Takeuchi and Ikujiro Nonaka introduced the term scrum in the context of product development in their 1986 Harvard Business Review article, "The New New Product Development Game". Takeuchi and Nonaka later argued in The Knowledge-Creating Company that it is a form of "organizational knowledge creation good at bringing about innovation continuously and spirally". The authors described a new approach to commercial product development that would increase speed and flexibility, based on case studies from manufacturing firms in the automotive and printer industries. They called this the holistic or rugby approach, as the whole process is performed by one cross-functional team across multiple overlapping phases, in which the team "tries to go the distance as a unit, passing the ball back and forth". In the early 1990s, Ken Schwaber used what would become Scrum at his company, Advanced Development Methods. In 1995, Sutherland and Schwaber jointly presented a paper describing the scrum framework at the Business Object Design and Implementation Workshop, held as part of OOPSLA '95 in Austin, Texas.
Over the following years, Schwaber and Sutherland collaborated to combine this material, with their experience and evolving good practice, to develop what became known as Scrum. In 2001, Schwaber worked with Mike Beedle to describe the method in the book Agile Software Development with Scrum. Scrum's approach to planning and managing product development involves bringing decision-making authority to the level of operation properties and certainties. In 2002, Schwaber with others founded the Scrum Alliance and set up the Certified Scrum accreditation series. Schwaber left the Scrum Alliance in late 2009 and founded Scrum.org, which oversees the parallel Professional Scrum accreditation series. Since 2009, a public document called The Scrum Guide has defined Scrum; it has been revised five times, with the current version dating from November 2017. In 2018, Schwaber and the Scrum.org community, along with leaders of the Kanban community, published The Kanban Guide for Scrum Teams. There are three roles in the scrum framework.
These are ideally co-located to ensure optimal communication among team members. Together these three roles form the scrum team. While many organizations have other roles involved with defining and delivering the product, Scrum defines only these three. The product owner represents the product's stakeholders and the voice of the customer, is responsible for the product backlog, and is accountable for maximizing the value that the team delivers. The product owner defines product items in customer-centric terms, adds them to the product backlog, and prioritizes them based on importance and dependencies. A scrum team should have only one product owner, and this role should not be combined with that of the scrum master. The product owner should focus on the business
A debugger or debugging tool is a computer program used to test and debug other programs. The code to be examined might alternatively be running on an instruction set simulator, a technique that allows great power in its ability to halt when specific conditions are encountered, but which will typically be somewhat slower than executing the code directly on the appropriate processor. Some debuggers offer two modes of operation, full or partial simulation, to limit this impact. A "trap" occurs when the program cannot continue normally because of a programming bug or invalid data. For example, the program might have tried to use an instruction not available on the current version of the CPU or attempted to access unavailable or protected memory. When the program "traps" or reaches a preset condition, the debugger shows the location in the original code if it is a source-level debugger or symbolic debugger, commonly now seen in integrated development environments. If it is a low-level debugger or a machine-language debugger, it shows the line in the disassembly.
Typically, debuggers offer a query processor, a symbol resolver, an expression interpreter, and a debug support interface at the top level. Debuggers also offer more sophisticated functions such as running a program step by step (single-stepping), stopping at some event or specified instruction by means of a breakpoint, and tracking the values of variables. Some debuggers have the ability to modify program state; it may also be possible to continue execution at a different location in the program to bypass a crash or logical error. The same functionality which makes a debugger useful for eliminating bugs allows it to be used as a software cracking tool to evade copy protection, digital rights management, and other software protection features. It also makes it useful as a general verification tool, fault coverage tool, and performance analyzer, especially if instruction path lengths are shown. Most mainstream debugging engines, such as gdb and dbx, provide console-based command-line interfaces. Debugger front-ends are popular extensions to debugger engines that provide IDE integration, program animation, and visualization features.
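The single-stepping and variable-tracking machinery described above can be sketched in miniature with Python's tracing hook, which is the same interpreter mechanism the standard pdb debugger is built on. The function `accumulate` and the variable names here are illustrative only.

```python
import sys

executed = []   # (function name, line number) for every line the target runs
watch = {}      # most recently observed local values, like a watch window

def tracer(frame, event, arg):
    # The interpreter calls this hook on function calls and, once installed
    # as the local trace function, on every executed line of the target.
    if event == "line" and frame.f_code.co_name == "accumulate":
        executed.append((frame.f_code.co_name, frame.f_lineno))
        watch.update(frame.f_locals)   # snapshot the local variables
    return tracer

def accumulate(n):
    total = 0
    for i in range(n):
        total += i
    return total

sys.settrace(tracer)       # "attach" the debugger
result = accumulate(4)
sys.settrace(None)         # "detach"
# After the run: result == 6, and watch["total"] == 6, its final observed value.
```

A real debugger adds breakpoints, a command interpreter, and symbol resolution on top of exactly this kind of per-line callback; here the hook merely records each executed line and the evolving variable values.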
Some debuggers include a feature called "reverse debugging", also known as "historical debugging" or "backwards debugging". These debuggers make it possible to step a program's execution backwards in time. Various debuggers include this feature. Microsoft Visual Studio offers IntelliTrace reverse debugging for Visual Basic .NET and some other languages, but not C++. Reverse debuggers also exist for C, C++, Python, and other languages; some are open source. Some reverse debuggers slow down the target by orders of magnitude, but the best reverse debuggers cause a slowdown of 2× or less. Reverse debugging is useful for certain types of problems, but is still not widely used. Some debuggers operate on a single specific language, while others can handle multiple languages transparently. For example, if the main target program is written in COBOL but calls assembly-language subroutines and PL/1 subroutines, the debugger may have to dynamically switch modes to accommodate the changes in language as they occur. Some debuggers also incorporate memory protection to avoid storage violations such as buffer overflow.
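The record-and-replay idea behind reverse debugging can be sketched by snapshotting program state at every step and then navigating the recorded history backwards. This is a toy illustration of the concept, not how production reverse debuggers are implemented; the function and variable names are made up.

```python
history = []   # a snapshot of the variables after every step, newest last

def sum_recorded(values):
    total = 0
    for step, v in enumerate(values):
        total += v
        # Record state at each step, the way a record-and-replay debugger
        # logs execution so it can later be inspected backwards in time.
        history.append({"step": step, "v": v, "total": total})
    return total

final = sum_recorded([1, 2, 3])

def step_back(n):
    """Return the program state as it was n steps before the end."""
    return history[-1 - n]

# step_back(1)["total"] is 3: the state one step before the end,
# inspected without re-running the program.
```

The cost is visible even in this sketch: every step is recorded, which is why naive recording can slow a target by orders of magnitude, and why efficient reverse debuggers work hard to record less and reconstruct more.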
This may be important in transaction processing environments where memory is dynamically allocated from memory 'pools' on a task-by-task basis. Most modern microprocessors have at least one of these features in their CPU design to make debugging easier: hardware support for single-stepping a program, such as the trap flag; an instruction set that meets the Popek and Goldberg virtualization requirements, which makes it easier to write debugger software that runs on the same CPU as the software being debugged; in-system programming (ISP), which allows an external hardware debugger to reprogram a system under test (many systems with such ISP support also have other hardware debug support); hardware support for code and data breakpoints, such as address comparators and data value comparators or, with more work involved, page-fault hardware; and JTAG access to hardware debug interfaces such as those on ARM architecture processors or using the Nexus command set. Processors used in embedded systems typically have extensive JTAG debug support.
Debugging is the process of finding and resolving defects or problems within a computer program that prevent correct operation of computer software or a system. Debugging tactics can involve interactive debugging, control flow analysis, unit testing, integration testing, log file analysis, monitoring at the application or system level, memory dumps, and profiling. The terms "bug" and "debugging" are popularly attributed to Admiral Grace Hopper in the 1940s. While she was working on a Mark II computer at Harvard University, her associates discovered a moth stuck in a relay, thereby impeding operation, whereupon she remarked that they were "debugging" the system. However, the term "bug", in the sense of "technical error", dates back at least to 1878 and Thomas Edison, and "debugging" seems to have been used as a term in aeronautics before entering the world of computers. Indeed, in an interview Grace Hopper remarked that she was not coining the term; the moth fit the existing terminology, so it was saved. J. Robert Oppenheimer used the term in a letter to Dr. Ernest Lawrence at UC Berkeley, dated October 27, 1944, regarding the recruitment of additional technical staff.
The Oxford English Dictionary entry for "debug" quotes the term "debugging" used in reference to airplane engine testing in a 1945 article in the Journal of the Royal Aeronautical Society. An article in "Airforce" also refers to debugging, this time of aircraft cameras. Hopper's bug was found on September 9, 1947, but the term was not adopted by computer programmers until the early 1950s. The seminal article by Gill in 1951 is the earliest in-depth discussion of programming errors, but it does not use the term "bug" or "debugging". In the ACM's digital library, the term "debugging" is first used in three papers from the 1952 ACM National Meetings; two of the three use the term in quotation marks. By 1963 "debugging" was a common enough term to be mentioned in passing, without explanation, on page 1 of the CTSS manual. Kidwell's article Stalking the Elusive Computer Bug discusses the etymology of "bug" and "debug" in greater detail. As software and electronic systems have become more complex, the various common debugging techniques have expanded with more methods to detect anomalies, assess impact, and schedule software patches or full updates to a system.
The words "anomaly" and "discrepancy" can be used as more neutral terms, to avoid the words "error", "defect", and "bug" where there might be an implication that all so-called errors, defects or bugs must be fixed. Instead, an impact assessment can be made to determine whether changes to remove an anomaly would be cost-effective for the system, or whether a scheduled new release might render the change unnecessary. Not all issues are mission-critical in a system, and it is important to avoid the situation where a change might be more upsetting to users, in the long term, than living with the known problem. Basing decisions on the acceptability of some anomalies can avoid a culture of a "zero-defects" mandate, where people might be tempted to deny the existence of problems so that the result would appear as zero defects. Considering the collateral issues, such as the cost-versus-benefit impact assessment, broader debugging techniques will expand to determine the frequency of anomalies to help assess their impact on the overall system.
Debugging ranges in complexity from fixing simple errors to performing lengthy and tiresome tasks of data collection and scheduling updates. The debugging skill of the programmer can be a major factor in the ability to debug a problem, but the difficulty of software debugging varies with the complexity of the system, and also depends, to some extent, on the programming language used and the available tools, such as debuggers. Debuggers are software tools which enable the programmer to monitor the execution of a program, stop it, restart it, set breakpoints, and change values in memory. The term debugger can also refer to the person doing the debugging. High-level programming languages, such as Java, make debugging easier, because they have features such as exception handling and type checking that make real sources of erratic behaviour easier to spot. In programming languages such as C or assembly, bugs may cause silent problems such as memory corruption, and it is often difficult to see where the initial problem happened.
In those cases, memory debugger tools may be needed. In certain situations, general-purpose software tools that are language-specific in nature can be useful; these take the form of static code analysis tools. These tools look for a specific set of known problems, some common and some rare, within the source code, concentrating more on the semantics than on the syntax that compilers and interpreters primarily check. Some tools claim to be able to detect over 300 different problems. Both commercial and free tools exist for various languages; these tools can be useful when checking large source trees, where it is impractical to do code walkthroughs. A typical example of a problem detected would be a variable dereference that occurs before the variable is assigned a value. As another example, some such tools perform strong type checking when the language does not require it, and are thus better at locating likely errors in code that is syntactically correct; however, these tools have a reputation for false positives. The old Unix lint program is an early example.
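The use-before-assignment check mentioned above can be sketched with Python's ast module. This is a deliberately naive, hypothetical checker, far simpler than real tools such as lint or pyflakes; it ignores control flow, nested scopes, and globals, and its traversal only approximates source order.

```python
import ast
import builtins

def use_before_assignment(source):
    """Flag names read inside a function that are not a parameter, not a
    builtin, and not seen as an assignment target -- a toy static check."""
    problems = []
    for func in [n for n in ast.walk(ast.parse(source))
                 if isinstance(n, ast.FunctionDef)]:
        assigned = {a.arg for a in func.args.args}   # parameters count
        for node in ast.walk(func):
            if isinstance(node, ast.Name):
                if isinstance(node.ctx, ast.Store):
                    assigned.add(node.id)            # assignment target
                elif (isinstance(node.ctx, ast.Load)
                      and node.id not in assigned
                      and node.id not in vars(builtins)):
                    problems.append((func.name, node.id, node.lineno))
    return problems

sample = """
def average(xs):
    count = len(xs)
    return total / count   # bug: 'total' is never assigned
"""
print(use_before_assignment(sample))   # flags 'total' in 'average'
```

Note that the sample is never executed: the bug is found purely by inspecting the parsed source, which is what distinguishes static analysis from testing or interactive debugging. Real tools also track data flow across branches, which is where the false positives mentioned above tend to arise.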
For debugging electronic hardware (e.g