Software testing is an investigation conducted to provide stakeholders with information about the quality of the software product or service under test. It can provide an objective, independent view of the software, allowing the business to appreciate and understand the risks of software implementation. Test techniques include executing a program or application with the intent of finding software bugs and verifying that the software product is fit for use. Software testing involves the execution of a software or system component to evaluate one or more properties of interest. In general, these properties indicate the extent to which the component or system under test meets the requirements that guided its design and development, responds correctly to all kinds of inputs, performs its functions within an acceptable time, is sufficiently usable, can be installed and run in its intended environments, and achieves the general result its stakeholders desire. Because the number of possible tests for even simple software components is practically infinite, all software testing uses some strategy to select tests that are feasible for the available time and resources.
As a result, software testing attempts to execute a program or application with the intent of finding software bugs. Testing is an iterative process: when one bug is fixed, it can illuminate other, deeper bugs or even create new ones. Software testing can provide objective, independent information about the quality of software and the risk of its failure to users or sponsors. Testing can be conducted as soon as executable software (even if partially complete) exists; the overall approach to software development determines when and how testing is conducted. For example, in a phased process, most testing occurs after system requirements have been defined and implemented in testable programs. In contrast, under an agile approach, requirements and testing are developed concurrently. Although testing can determine the correctness of software under the assumption of some specific hypotheses, it cannot identify all the defects within the software. Instead, it furnishes a criticism or comparison that compares the state and behavior of the product against test oracles: principles or mechanisms by which someone might recognize a problem.
These oracles may include specifications, comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, applicable laws, or other criteria. A primary purpose of testing is to detect software failures so that defects may be discovered and corrected. Testing cannot establish that a product functions properly under all conditions, but only that it does not function properly under specific conditions. The scope of software testing includes the examination of code, the execution of that code in various environments and conditions, and the examination of what the code does: does it do what it is supposed to do, and does it do what it needs to do? In the current culture of software development, a testing organization may be separate from the development team, and there are various roles for testing team members. Information derived from software testing may be used to correct the process by which software is developed.
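The oracle idea can be sketched in code. In this illustrative example (the function names are invented, not from any testing library), a slow but transparently correct reference implementation acts as the oracle against which a faster function under test is compared:

```python
import random

def fast_sort(items):
    """Function under test (here simply Python's built-in sort)."""
    return sorted(items)

def reference_sort(items):
    """Oracle: an insertion sort simple enough to trust by inspection."""
    result = []
    for x in items:
        i = 0
        while i < len(result) and result[i] <= x:
            i += 1
        result.insert(i, x)
    return result

def test_against_oracle(trials=100):
    """Compare the function under test against the oracle on random inputs."""
    for _ in range(trials):
        data = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
        assert fast_sort(data) == reference_sort(data), f"mismatch on {data}"
    return True
```

Past product versions or comparable products can serve the same role as `reference_sort` here: any independent mechanism that lets a tester recognize a wrong answer.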
Every software product has a target audience; for example, the audience for video game software is different from that for banking software. Therefore, when an organization develops or otherwise invests in a software product, it can assess whether the product will be acceptable to its end users, its target audience, its purchasers, and other stakeholders. Software testing aids the process of making this assessment. Not all software defects are caused by coding errors. One common source of expensive defects is requirement gaps, i.e. unrecognized requirements that result in errors of omission by the program designer. Requirement gaps often concern non-functional requirements such as testability, maintainability, usability and security. Software faults arise through the following process: a programmer makes an error, which results in a defect (fault, bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong results, causing a failure. Not all defects result in failures; for example, defects in dead code will never result in failures.
A defect can turn into a failure when the environment is changed. Examples of such changes include the software being run on a new computer hardware platform, alterations in source data, or interaction with different software. A single defect may result in a wide range of failure symptoms. A fundamental problem with software testing is that testing under all combinations of inputs and preconditions is not feasible, even with a simple product; this means that the number of defects in a software product can be large, and defects that occur infrequently are difficult to find in testing. Moreover, non-functional dimensions of quality (usability, performance, reliability) can be subjective. Software developers cannot test everything, but they can use combinatorial test design to identify the minimum number of tests needed to get the coverage they want. Combinatorial test design enables users to get greater test coverage with fewer tests by building structured variation into their test cases.
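As an illustrative sketch of combinatorial (pairwise) test design, the greedy selection below, over an invented configuration space of browsers, operating systems, and locales, covers every pair of parameter values with fewer cases than the full cartesian product. Production tools (such as pairwise test generators) use more sophisticated algorithms; this is only a minimal demonstration of the idea:

```python
from itertools import combinations, product

def pairwise_tests(parameters):
    """Greedy 2-way combinatorial test design: repeatedly pick the test case
    that covers the most not-yet-covered parameter-value pairs."""
    names = list(parameters)
    # Every (parameter, value) pair combination that must appear in some test.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in parameters[a]
        for vb in parameters[b]
    }
    all_cases = [dict(zip(names, vals))
                 for vals in product(*(parameters[n] for n in names))]
    suite = []
    while uncovered:
        def gain(case):
            return sum(1 for a, b in combinations(names, 2)
                       if ((a, case[a]), (b, case[b])) in uncovered)
        best = max(all_cases, key=gain)  # any uncovered pair guarantees gain >= 1
        suite.append(best)
        for a, b in combinations(names, 2):
            uncovered.discard(((a, best[a]), (b, best[b])))
    return suite

params = {"browser": ["firefox", "chrome"],
          "os": ["linux", "windows", "macos"],
          "locale": ["en", "de"]}
suite = pairwise_tests(params)
# Exhaustive testing needs 2 * 3 * 2 = 12 cases; pairwise coverage needs fewer.
```

Here the suite covers all 16 parameter-value pairs while executing only about half of the 12 exhaustive combinations, which is the trade-off combinatorial design offers.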
Rapid application development
Rapid-application development (RAD), also called rapid-application building, is both a general term for adaptive software development approaches and the name of James Martin's approach to rapid development. In general, RAD approaches to software development put less emphasis on planning and more emphasis on an adaptive process. Prototypes are used in addition to, or sometimes in place of, design specifications. RAD is well suited for developing software driven by user interface requirements; graphical user interface builders are often called rapid application development tools. Other approaches to rapid development include the adaptive, agile and unified models. Rapid application development was a response to plan-driven waterfall processes developed in the 1970s and 1980s, such as the Structured Systems Analysis and Design Method. One of the problems with these methods is that they were based on a traditional engineering model used to design and build things like bridges and buildings; software is an inherently different kind of artifact.
Software can radically change the entire process used to solve a problem; as a result, knowledge gained from the development process itself can feed back into the requirements and design of the solution. Plan-driven approaches attempt to rigidly define the requirements, the solution, and the plan to implement it, and to follow a process that discourages changes. RAD approaches, on the other hand, recognize that software development is a knowledge-intensive process and provide flexible processes that help take advantage of knowledge gained during the project to improve or adapt the solution. The first such RAD alternative, developed by Barry Boehm, was known as the spiral model. Boehm and other subsequent RAD approaches emphasized developing prototypes as well as, or instead of, rigorous design specifications. Prototypes had several advantages over traditional specifications. Risk reduction: a prototype could test some of the most difficult potential parts of the system early in the life-cycle; this can provide valuable information about the feasibility of a design and can prevent the team from pursuing solutions that turn out to be too complex or time-consuming to implement.
This benefit of finding problems earlier in the life-cycle rather than later was a key benefit of the RAD approach: the earlier a problem is found, the cheaper it is to address. Users are better at reacting to a system than at creating specifications. In the waterfall model it was common for a user to sign off on a set of requirements but then, when presented with an implemented system, to realize that a given design lacked some critical features or was too complex. In general, most users give much more useful feedback when they can experience a prototype of the running system than when they must abstractly define what that system should be. Prototypes can evolve into the completed product. One approach used in some RAD methods was to build the system as a series of prototypes that evolve from minimal functionality to moderately useful to the final completed system; the advantage of this, besides the two advantages above, was that users could get useful business functionality much earlier in the process. Starting with the ideas of Barry Boehm and others, James Martin developed the rapid application development approach during the 1980s at IBM and formalized it by publishing a book in 1991, Rapid Application Development.
This has resulted in some confusion over the term RAD among IT professionals: it is important to distinguish between RAD as a general alternative to the waterfall model and RAD as the specific method created by Martin. The Martin method was tailored toward knowledge-intensive and UI-intensive business systems. These ideas were further developed and improved upon by RAD pioneers like James Kerr and Richard Hunter, who together wrote the seminal book on the subject, Inside RAD, which followed the journey of a RAD project manager as he drove and refined the RAD methodology in real time on an actual RAD project. These practitioners, and those like them, helped RAD gain popularity as an alternative to traditional systems project life cycle approaches. The RAD approach matured during the period of peak interest in business re-engineering; the idea of business process re-engineering was to radically rethink core business processes, such as sales and customer support, with the new capabilities of information technology in mind.
RAD was often an essential part of larger business re-engineering programs. The rapid prototyping approach of RAD was a key tool to help users and analysts "think out of the box" about innovative ways that technology might radically reinvent a core business process. The James Martin approach to RAD divides the process into four distinct phases. Requirements planning phase – combines elements of the system planning and systems analysis phases of the Systems Development Life Cycle (SDLC). Users and IT staff members discuss and agree on business needs, project scope and system requirements; the phase ends when the team agrees on the key issues and obtains management authorization to continue. User design phase – during this phase, users interact with systems analysts and develop models and prototypes that represent all system processes and outputs. The RAD groups or subgroups use a combination of Joint Application Development techniques and CASE tools to translate user needs into working models. User design is a continuous, interactive process that allows users to understand and approve a working model of the system that meets their needs.
Construction phase – focuses on program and application development tasks similar to those of the SDLC. In RAD, users c
Kanban is a lean method for managing and improving work across human systems. This approach aims to manage work by balancing demands with available capacity and by improving the handling of system-level bottlenecks. Work items are visualized to give participants a view of progress and process, from start to finish, via a Kanban board. Work is pulled as capacity permits, rather than being pushed into the process. In knowledge work and in software development, the aim is to provide a visual process-management system which aids decision-making about what and how much to produce. The underlying Kanban method originated in lean manufacturing; it is now used in software development and technology-related work and has been combined with other methods or frameworks such as Scrum. David Anderson's 2010 book describes the method's evolution from a 2004 project at Microsoft, which used a theory-of-constraints approach and incorporated a drum-buffer-rope, to a 2006-2007 project at Corbis in which the kanban method was identified.
In 2009, Don Reinertsen published a book on second-generation lean product development which describes the adoption of the kanban system and the use of data collection and an economic model for management decision-making. Another early contribution came from Corey Ladas, whose 2009 book Scrumban suggested that kanban could improve Scrum for software development; Ladas saw Scrumban as the transition from Scrum to Kanban. Jim Benson and Tonianne DeMaria Barry published Personal Kanban, applying Kanban to individuals and small teams, in 2011. In Kanban from the Inside, Mike Burrows explained kanban's principles and underlying values and related them to earlier theories and models. Kanban Change Leadership, by Klaus Leopold and Siegfried Kaltenecker, explained the method from the perspective of change management and provided guidance for change initiatives. A condensed guide to the method was published in 2016, incorporating improvements and extensions from the early kanban projects. Although Kanban does not require that the team or organization use a Kanban board, boards are commonly used to visualise the flow of work.
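The core mechanics of a board, columns representing workflow stages, a pull discipline, and per-column work-in-progress (WIP) limits, can be sketched as follows. The column names and limits are invented for illustration; this is a minimal model, not any tool's actual API:

```python
class KanbanBoard:
    def __init__(self, columns):
        # columns: list of (name, wip_limit); wip_limit=None means unlimited.
        self.limits = dict(columns)
        self.order = [name for name, _ in columns]
        self.items = {name: [] for name in self.order}

    def add(self, item):
        """New work enters the leftmost column."""
        self._check(self.order[0])
        self.items[self.order[0]].append(item)

    def pull(self, item, into):
        """Pull an item into a column as capacity permits: work is pulled,
        not pushed, so the receiving column's WIP limit is enforced."""
        self._check(into)
        src = next(c for c in self.order if item in self.items[c])
        self.items[src].remove(item)
        self.items[into].append(item)

    def _check(self, column):
        limit = self.limits[column]
        if limit is not None and len(self.items[column]) >= limit:
            raise RuntimeError(f"WIP limit reached in '{column}'")

board = KanbanBoard([("To do", None), ("In progress", 2), ("Done", None)])
for task in ["A", "B", "C"]:
    board.add(task)
board.pull("A", "In progress")
board.pull("B", "In progress")
# board.pull("C", "In progress") would raise: the WIP limit of 2 is reached.
```

The raised error is the mechanism that makes the system constraint evident: "C" must wait in "To do" until "A" or "B" is pulled onward.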
A Kanban board shows how work moves from left to right; each column represents a stage within the value stream. In some cases each column has a work-in-progress limit, meaning that the column can only hold a fixed number of work items; the aim is to encourage focus and make system constraints evident. Boards are designed for the context in which they are used and may show work item types, columns delineating workflow activities, explicit policies, and swimlanes. The aim is to make the general workflow and the progress of individual items clear to participants and stakeholders. Although it is most associated with software development and software teams, the kanban method has been applied to other aspects of knowledge work. Business functions which have used kanban include human resources and recruitment, marketing, and organizational strategy and executive leadership. See also: Lean software development; List of software development philosophies. Further reading: Kanban: Successful Evolutionary Change for Your Technology Business, David J. Anderson.
United States: Blue Hole Press, 2010. ISBN 978-0984521401. Scrumban: Essays on Kanban Systems for Lean Software Development, Corey Ladas. United States: Modus Cooperandi Press, 2009. ISBN 9780578002149. Agile Project Management with Kanban, Eric Brechner. ISBN 978-0735698956. Kanban in Action, Marcus Hammarberg and Joakim Sunden. ISBN 978-1-617291-05-0. Lean from the Trenches: Managing Large-Scale Projects with Kanban, Henrik Kniberg. ISBN 978-1-93435-685-2. Stop Starting, Start Finishing!, Arne Roock and Claudia Leschik. ISBN 978-0985305161. Real-World Kanban: Do Less, Accomplish More with Lean Thinking, Mattias Skarin. ISBN 978-1680500776.
The Unified Software Development Process, or Unified Process (UP), is an iterative and incremental software development process framework. The best-known and most extensively documented refinement of the Unified Process is the Rational Unified Process (RUP); another example is the Agile Unified Process. The Unified Process is not a single prescriptive process, but rather an extensible framework which should be customized for specific organizations or projects. The Rational Unified Process is likewise a customizable framework; as a result, it is often impossible to say whether a refinement of the process was derived from UP or from RUP, so the names tend to be used interchangeably. The name Unified Process, as opposed to Rational Unified Process, is generally used to describe the generic process, including those elements which are common to most refinements. The Unified Process name is also used to avoid potential issues of trademark infringement, since Rational Unified Process and RUP are trademarks of IBM. The first book to describe the process, titled The Unified Software Development Process, was published in 1999 by Ivar Jacobson, Grady Booch and James Rumbaugh.
Since then, various authors unaffiliated with Rational Software have published books and articles using the name Unified Process, whereas authors affiliated with Rational Software have favored the name Rational Unified Process. In 2012 the Disciplined Agile Delivery framework was released, a hybrid framework that adopts and extends strategies from the Unified Process, Scrum, XP and other methods. The Unified Process is an iterative and incremental development process: the Elaboration, Construction and Transition phases are divided into a series of timeboxed iterations, and each iteration results in an increment, a release of the system that contains added or improved functionality compared with the previous release. Although most iterations will include work in most of the process disciplines, the relative effort and emphasis will change over the course of the project. The Unified Process insists that architecture sits at the heart of the project team's efforts to shape the system; since no single model is sufficient to cover all aspects of a system, the Unified Process supports multiple architectural models and views.
One of the most important deliverables of the process is the executable architecture baseline, created during the Elaboration phase. This partial implementation of the system serves to validate the architecture and act as a foundation for remaining development. The Unified Process also requires the project team to focus on addressing the most critical risks early in the project life cycle; the deliverables of each iteration, especially in the Elaboration phase, must be selected to ensure that the greatest risks are addressed first. The Unified Process divides the project into four phases: Inception, Elaboration, Construction and Transition. Inception is the smallest phase in the project, and ideally it should be quite short; if the Inception phase is long, it may be an indication of excessive up-front specification, contrary to the spirit of the Unified Process. Typical goals for the Inception phase include establishing the project scope and business case, preparing a preliminary project schedule and cost estimate, assessing feasibility, and deciding whether to buy or develop the system. The Lifecycle Objective Milestone marks the end of the Inception phase.
In short, the Inception phase develops an approximate vision of the system, makes the business case, defines the scope, and produces a rough estimate for cost and schedule. During the Elaboration phase, the project team is expected to capture a healthy majority of the system requirements. However, the primary goals of Elaboration are to address known risk factors and to establish and validate the system architecture. Common processes undertaken in this phase include the creation of use case diagrams, conceptual diagrams and package diagrams. The architecture is validated through the implementation of an executable architecture baseline: a partial implementation of the system which includes the core, most architecturally significant components, built in a series of small time-boxed iterations. By the end of the Elaboration phase, the system architecture must have stabilized, and the executable architecture baseline must demonstrate that the architecture will support the key system functionality and exhibit the right behavior in terms of performance and cost.
The final Elaboration phase deliverable is a plan for the Construction phase. At this point the plan should be accurate and credible, since it is based on Elaboration phase experience and since significant risk factors should already have been addressed. Construction is the largest phase of the project. In this phase, the remainder of the system is built on the foundation laid in Elaboration. System features are implemented in a series of time-boxed iterations, each of which results in an executable release of the software. It is customary to write full-text use cases during the Construction phase, and each one becomes the start of a new iteration. Common Unified Modeling Language diagrams used during this phase include activity diagrams, sequence diagrams, collaboration diagrams, state transition diagrams and interaction overview diagrams. Lower-risk and easier elements are implemented iteratively in this phase. The final Construction phase deliverable is software ready to be deployed in the Transition phase.
The final project phase is Transition. In this phase the system is deployed to the target users. Fe
Dynamic systems development method
Dynamic systems development method (DSDM) is an agile project delivery framework, initially used as a software development method. First released in 1994, DSDM sought to provide some discipline to the rapid application development (RAD) method. In later versions, the DSDM Agile Project Framework was revised and became a generic approach to project management and solution delivery, rather than being focused on software development and code creation, and it can be used for non-IT projects. The DSDM Agile Project Framework covers a wide range of activities across the whole project lifecycle and includes strong foundations and governance, which set it apart from some other Agile methods. It is an iterative and incremental approach that embraces principles of Agile development, including continuous user/customer involvement. DSDM fixes cost and time at the outset and uses the MoSCoW prioritisation of scope into musts, shoulds, coulds and won't-haves to adjust the project deliverable to meet the stated time constraint.
DSDM is one of a number of Agile methods for developing software and non-IT solutions, and it forms a part of the Agile Alliance. In 2014, DSDM released the latest version of the method, the 'DSDM Agile Project Framework'. At the same time the new DSDM manual recognised the need to operate alongside other frameworks for service delivery and project management such as PRINCE2, Managing Successful Programmes and PMI; the previous version had contained guidance only on using DSDM with extreme programming. In the early 1990s, rapid application development was spreading across the IT industry. User interfaces for software applications were moving from the old green screens to the graphical user interfaces that are used today. New application development tools were coming on the market, such as PowerBuilder; these enabled developers to share their proposed solutions much more easily with their customers. Prototyping became a reality, and the frustrations of the classical, sequential development methods could be put to one side. However, the RAD movement was unstructured: there was no agreed definition of a suitable process, and many organisations came up with their own definition and approach.
Many major corporations were interested in the possibilities, but they were concerned about losing the level of quality in end deliverables that free-flow development could give rise to. The DSDM Consortium was founded in 1994 by an association of vendors and experts in the field of software engineering, created with the objective of "jointly developing and promoting an independent RAD framework" by combining their best-practice experiences. Its origins were an event organised by the Butler Group in London; the people at that meeting all worked for blue-chip organisations such as British Airways, American Express and Logica. In July 2006, DSDM Public Version 4.2 was made available for individuals to use. In 2014, the DSDM handbook was made publicly available, and templates for DSDM can be downloaded. In October 2016 the DSDM Consortium rebranded as the Agile Business Consortium, a not-for-profit, vendor-independent organisation which owns and administers the DSDM framework.
Atern is a vendor-independent approach that recognises that more projects fail because of people problems than because of technology. Its focus is on helping people to work together to achieve the business goals. It is also independent of tools and techniques, enabling it to be used in any business and technical environment without tying the business to a particular vendor. Eight principles underpin DSDM Atern; these principles direct the team in the attitude they must take and the mindset they must adopt to deliver consistently: focus on the business need; deliver on time; collaborate; never compromise quality; build incrementally from firm foundations; develop iteratively; communicate continuously and clearly; and demonstrate control. Timeboxing is the approach for completing the project incrementally by breaking it down into portions, each with a fixed budget and a delivery date. For each portion a number of requirements are selected; because time and budget are fixed, the only remaining variable is the requirements.
So, if a project is running out of time or money, the requirements with the lowest priority are omitted. This does not necessarily mean that an unfinished product is delivered: per the Pareto principle, 80% of the project's value comes from 20% of the system requirements, so as long as the most important 20% of requirements are implemented, the system meets the business needs; moreover, no system is built perfectly on the first try. MoSCoW is a technique for prioritising requirements; it is an acronym that stands for: Must have, Should have, Could have, Won't have. Prototyping refers to the creation of prototypes of the system under development at an early stage of the project. It enables the early discovery of shortcomings in the system and allows future users to 'test-drive' the system; in this way good user involvement is realised, one of the key success factors of DSDM, or of any system development project for that matter. Testing helps ensure a solution of good quality; DSDM advocates testing throughout each iteration.
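The interplay of timeboxing and MoSCoW can be sketched in code: with cost and time fixed, scope is the variable that flexes. The requirement names, effort figures and capacity below are invented for illustration only; this is a minimal model of the prioritisation logic, not part of DSDM itself:

```python
# MoSCoW rank order: lower number = higher priority.
PRIORITY = {"must": 0, "should": 1, "could": 2, "wont": 3}

def plan_timebox(requirements, capacity_days):
    """Select requirements in MoSCoW order until the fixed capacity of the
    timebox runs out. 'Won't have (this time)' items are excluded outright."""
    selected, used = [], 0
    ordered = sorted(requirements, key=lambda r: PRIORITY[r["moscow"]])
    for req in ordered:
        if req["moscow"] == "wont":
            continue
        if used + req["effort_days"] <= capacity_days:
            selected.append(req["name"])
            used += req["effort_days"]
    return selected

reqs = [
    {"name": "login",   "moscow": "must",   "effort_days": 5},
    {"name": "search",  "moscow": "must",   "effort_days": 8},
    {"name": "export",  "moscow": "should", "effort_days": 6},
    {"name": "theming", "moscow": "could",  "effort_days": 4},
    {"name": "ai-chat", "moscow": "wont",   "effort_days": 20},
]
print(plan_timebox(reqs, capacity_days=20))  # -> ['login', 'search', 'export']
```

Shrinking the capacity drops the lowest-priority items first, which is exactly the DSDM behaviour described above: the deliverable shrinks to meet the fixed time constraint rather than the deadline slipping.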
Since DSDM is a tool- and technique-independent method, the project team is free to choose its own test management method. Workshops bring project stakeholders together to discuss requirements and develop mutual understanding. Model
A debugger or debugging tool is a computer program used to test and debug other programs. The code to be examined might alternatively be running on an instruction set simulator, a technique that allows great power in its ability to halt when specific conditions are encountered, but which will typically be somewhat slower than executing the code directly on the appropriate processor. Some debuggers offer two modes of operation, full or partial simulation, to limit this impact. A "trap" occurs when the program cannot continue because of a programming bug or invalid data. For example, the program might have tried to use an instruction not available on the current version of the CPU or attempted to access unavailable or protected memory. When the program "traps" or reaches a preset condition, the debugger shows the location in the original code if it is a source-level debugger or symbolic debugger, now commonly seen in integrated development environments. If it is a low-level debugger or a machine-language debugger, it shows the line in the disassembly.
Typically, debuggers offer a query processor, a symbol resolver, an expression interpreter, and a debug support interface at the top level. Debuggers also offer more sophisticated functions such as running a program step by step (single-stepping), stopping at some event or specified instruction by means of a breakpoint, and tracking the values of variables. Some debuggers have the ability to modify program state while it is running; it may even be possible to continue execution at a different location in the program to bypass a crash or logical error. The same functionality which makes a debugger useful for eliminating bugs allows it to be used as a software cracking tool to evade copy protection, digital rights management and other software protection features. It also makes it useful as a general verification tool, fault coverage tool, and performance analyzer, especially if instruction path lengths are shown. Most mainstream debugging engines, such as gdb and dbx, provide console-based command-line interfaces. Debugger front-ends are popular extensions to debugger engines that provide IDE integration, program animation and visualization features.
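The stepping and variable-tracking functions described above can be sketched in pure Python with sys.settrace, the hook CPython exposes for debuggers such as pdb. This is a deliberately minimal tracer, not a real debugger: it "breaks" at every line of one chosen function and records the watched locals (the function and variable names are illustrative):

```python
import sys

def factorial(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

snapshots = []

def make_tracer(func_name, watch):
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code.co_name == func_name:
            # "Breakpoint" action: capture the watched locals at this line.
            snapshots.append({v: frame.f_locals.get(v) for v in watch})
        return tracer  # keep tracing subsequent events
    return tracer

sys.settrace(make_tracer("factorial", ["i", "result"]))
factorial(4)
sys.settrace(None)
# snapshots now holds the evolution of i and result, line by line,
# ending with i == 4 and result == 24.
```

A full source-level debugger builds on the same mechanism, adding conditional breakpoints (only record or halt when a predicate on the locals holds), user interaction, and symbol resolution.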
Some debuggers include a feature called "reverse debugging", also known as "historical debugging" or "backwards debugging". These debuggers make it possible to step a program's execution backwards in time. Various debuggers include this feature: Microsoft Visual Studio offers IntelliTrace reverse debugging for Visual Basic .NET and some other languages, but not C++, and reverse debuggers also exist for C, C++, Python and other languages, some of them open source. Some reverse debuggers slow down the target by orders of magnitude, but the best reverse debuggers cause a slowdown of 2× or less. Reverse debugging is useful for certain types of problems, but is still not routinely used. Some debuggers operate on a single specific language, while others can handle multiple languages transparently. For example, if the main target program is written in COBOL but calls assembly-language subroutines and PL/1 subroutines, the debugger may have to dynamically switch modes to accommodate the changes in language as they occur. Some debuggers also incorporate memory protection to avoid storage violations such as buffer overflow.
This may be important in transaction processing environments where memory is dynamically allocated from memory 'pools' on a task-by-task basis. Most modern microprocessors have at least one of these features in their CPU design to make debugging easier: hardware support for single-stepping a program, such as the trap flag; an instruction set that meets the Popek and Goldberg virtualization requirements, which makes it easier to write debugger software that runs on the same CPU as the software being debugged; in-system programming (ISP), which allows an external hardware debugger to reprogram a system under test (many systems with such ISP support also have other hardware debug support); hardware support for code and data breakpoints, such as address comparators and data value comparators or, with more work involved, page fault hardware; and JTAG access to hardware debug interfaces such as those on ARM architecture processors or those using the Nexus command set. Processors used in embedded systems typically have extensive JTAG debug support.
In systems engineering and software engineering, requirements analysis encompasses the tasks that go into determining the needs or conditions to be met by a new or altered product or project, taking account of the possibly conflicting requirements of the various stakeholders, and documenting and managing software or system requirements. Requirements analysis is critical to the success or failure of a systems or software project. Requirements should be documented, measurable, traceable, related to identified business needs or opportunities, and defined to a level of detail sufficient for system design. Conceptually, requirements analysis includes three types of activities. Eliciting requirements: gathering requirements from sources such as business process documentation and stakeholder interviews; this is sometimes called requirements gathering or requirements discovery. Analyzing requirements: determining whether the stated requirements are clear, complete and unambiguous, and resolving any apparent conflicts. Recording requirements: requirements may be documented in various forms, including a summary list, natural-language documents, use cases, user stories, process specifications and a variety of models including data models.
Requirements analysis can be a long and tiring process during which many delicate psychological skills are involved. Large systems may confront analysts with thousands of system requirements. New systems change the environment and relationships between people, so it is important to identify all the stakeholders, take into account all their needs and ensure they understand the implications of the new systems. Analysts can employ several techniques to elicit the requirements from the customer; these may include the development of scenarios, the identification of use cases, the use of workplace observation or ethnography, holding interviews or focus groups, and creating requirements lists. Prototyping may be used to develop an example system. Where necessary, the analyst will employ a combination of these methods to establish the exact requirements of the stakeholders, so that a system that meets the business needs is produced. Requirements quality can be improved through these and other methods. Visualization: using tools that promote better understanding of the desired end-product, such as visualization and simulation. Consistent use of templates: producing a consistent set of models and templates to document the requirements. Documenting dependencies: documenting dependencies and interrelationships among requirements, as well as any assumptions. See Stakeholder analysis for a discussion of people or organizations that have a valid interest in the system; they may be affected by it either directly or indirectly. A major new emphasis in the 1990s was a focus on the identification of stakeholders, and it is now recognized that stakeholders are not limited to the organization employing the analyst. Other stakeholders will include: anyone who operates the system; anyone who benefits from the system; anyone involved in purchasing or procuring the system (in a mass-market product organization, product management and sometimes sales act as surrogate consumers to guide development of the product); organizations which regulate aspects of the system; people or organizations opposed to the system; and organizations responsible for systems which interface with the system under design.
Also included are those organizations that integrate horizontally with the organization for whom the analyst is designing the system. Requirements often have cross-functional implications that are unknown to individual stakeholders and are missed or incompletely defined during stakeholder interviews. These cross-functional implications can be elicited by conducting JRD (Joint Requirements Development) sessions in a controlled environment, facilitated by a trained facilitator, wherein stakeholders participate in discussions to elicit requirements, analyze their details and uncover cross-functional implications. A dedicated scribe should be present to document the discussion, freeing the business analyst to lead the discussion in a direction that generates appropriate requirements which meet the session objective. JRD sessions are analogous to Joint Application Design sessions: the former elicit requirements that guide design, whereas the latter elicit the specific design features to be implemented in satisfaction of the elicited requirements.
One traditional way of documenting requirements has been contract-style requirement lists. In a complex system, such requirements lists can run to hundreds of pages; an appropriate metaphor would be an extremely long shopping list. Such lists are much out of favour in modern analysis, although they do have strengths: they provide a checklist of requirements; they provide a contract between the project sponsor and the developers; and, for a large system, they can provide a high-level description from which lower-level requirements can be derived. They also have weaknesses: lists running to hundreds of pages are not intended to serve as a reader-friendly description of the desired application, and such lists abstract all the requirements, so there is little context. The business analyst may include context for requirements in accompanying design documentation. Thi