A Petri net, also known as a place/transition net, is one of several mathematical modeling languages for the description of distributed systems. It is a class of discrete event dynamic system. A Petri net is a directed bipartite graph in which the nodes represent transitions (events that may occur) and places (conditions), and the directed arcs describe which places are pre- and/or postconditions for which transitions. Some sources state that Petri nets were invented in August 1939 by Carl Adam Petri, at the age of 13, for the purpose of describing chemical processes. Like industry standards such as UML activity diagrams, Business Process Model and Notation and EPCs, Petri nets offer a graphical notation for stepwise processes that include choice and concurrent execution. Unlike these standards, Petri nets have an exact mathematical definition of their execution semantics, with a well-developed mathematical theory for process analysis. A Petri net consists of places, transitions and arcs. Arcs run from a place to a transition or vice versa, never between places or between transitions; the places from which an arc runs to a transition are called the input places of the transition.
Graphically, places in a Petri net may contain a discrete number of marks called tokens. Any distribution of tokens over the places represents a configuration of the net called a marking. In an abstract sense relating to a Petri net diagram, a transition of a Petri net may fire if it is enabled, i.e. there are sufficient tokens in all of its input places. A firing is atomic, i.e. a single non-interruptible step. Unless an execution policy is defined, the execution of Petri nets is nondeterministic: when multiple transitions are enabled at the same time, any one of them may fire, in any order. Since firing is nondeterministic, and multiple tokens may be present anywhere in the net, Petri nets are well suited for modeling the concurrent behavior of distributed systems. Petri nets are state-transition systems. Definition 1. A net is a triple N = (P, T, F) where: P and T are disjoint finite sets of places and transitions, respectively; F ⊆ (P × T) ∪ (T × P) is a set of arcs. Definition 2. Given a net N = (P, T, F), a configuration is a set C so that C ⊆ P. Definition 3. An elementary net is a net of the form EN = (N, C) where: N = (P, T, F) is a net;
C is a configuration of N, i.e. C ⊆ P. Definition 4. A Petri net is a net of the form PN = (N, M, W), which extends the elementary net so that: N = (P, T, F) is a net; M: P → Z is a place multiset, where Z is a countable set. M extends the concept of configuration and is described with reference to Petri net diagrams as a marking. W: F → Z is an arc multiset, so that the count for each arc is a measure of the arc multiplicity. If a Petri net is equivalent to an elementary net, then Z can be the countable set {0, 1}, and those elements in P that map to 1 under M form a configuration. If a Petri net is not an elementary net, the multiset M can be interpreted as representing a non-singleton set of configurations. In this respect, M extends the concept of configuration for elementary nets to Petri nets. In the diagram of a Petri net, places are conventionally depicted with circles, transitions with long narrow rectangles, and arcs as one-way arrows that show connections of places to transitions or transitions to places. If the diagram were of an elementary net, the places in a configuration would be conventionally depicted as circles, where each circle encompasses a single dot called a token.
In the given diagram of a Petri net, the place circles may encompass more than one token to show the number of times a place appears in a configuration. The configuration of tokens distributed over an entire Petri net diagram is called a marking. In the top figure, the place p1 is an input place of transition t. Let PN0 be a Petri net with a marking configured M0 and PN1 be a Petri net with a marking configured M1. The configuration of PN0 enables transition t through the property that all input places have a number of tokens "equal to or greater" than the multiplicities on their respective arcs to t. Only when a transition is enabled may it fire. In this example, the firing of transition t generates a map that has the marking configured M1 in the image of M0 and results in Petri net PN1, seen in the bottom figure. In the diagram, the firing rule for a transition can be characterised by subtracting a number of tokens from its input places equal to the multiplicity of the respective input arcs and accumulating a new number of tokens at the output places equal to the multiplicity of the respective output arcs.
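The firing rule described above can be sketched in a few lines of code. The following Python sketch is illustrative only and not part of any Petri net library; the class and method names are invented for the example.

```python
# Minimal place/transition net sketch (illustrative, not a library API).
# The marking M maps each place to a token count; each arc carries a
# multiplicity, as in the weight function W above.

class PetriNet:
    def __init__(self, marking, transitions):
        # marking: dict place -> token count
        # transitions: dict name -> (inputs, outputs), each a dict
        #   mapping place -> arc multiplicity
        self.marking = dict(marking)
        self.transitions = transitions

    def enabled(self, t):
        """A transition is enabled if every input place holds at least
        as many tokens as the multiplicity of its arc to t."""
        inputs, _ = self.transitions[t]
        return all(self.marking.get(p, 0) >= w for p, w in inputs.items())

    def fire(self, t):
        """Atomically subtract tokens from input places and add tokens
        to output places, per the arc multiplicities."""
        if not self.enabled(t):
            raise ValueError(f"transition {t!r} is not enabled")
        inputs, outputs = self.transitions[t]
        for p, w in inputs.items():
            self.marking[p] -= w
        for p, w in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + w

# p1 --2--> t --1--> p2: firing t consumes two tokens from p1, adds one to p2
net = PetriNet({"p1": 2, "p2": 0}, {"t": ({"p1": 2}, {"p2": 1})})
net.fire("t")  # marking becomes {"p1": 0, "p2": 1}; t is no longer enabled
```

Nondeterminism enters when several transitions are enabled at once: any of them may be passed to `fire` first, and a scheduler or execution policy outside the net decides the order.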
Remark 1. The precise meaning of "equal to or greater" will depend on the precise algebraic properties of addition being applied on Z in the firing rule, where subtle variations on the algebraic properties can lead to other classes of Petri nets.
The term web service is either a service offered by an electronic device to another electronic device, communicating with each other via the World Wide Web, or a web service implemented in a particular technology or brand, e.g. W3C Web Services. In a web service, a Web technology such as HTTP—originally designed for human-to-machine communication—is utilized for machine-to-machine communication, more specifically for transferring machine-readable file formats such as XML and JSON. In practice, a web service commonly provides an object-oriented web-based interface to a database server, utilized for example by another web server, or by a mobile app, that provides a user interface to the end user. Many organizations that provide data in formatted HTML pages will also provide that data on their server as XML or JSON through a web service to allow syndication, for example Wikipedia's Export. Another application offered to the end user may be a mashup, where a web server consumes several web services at different machines and compiles the content into one user interface.
RESTful APIs do not require XML-based web service protocols to support their interfaces. In relation to W3C Web Services, the W3C defined a web service as: "A web service is a software system designed to support interoperable machine-to-machine interaction over a network. It has an interface described in a machine-processable format. Other systems interact with the web service in a manner prescribed by its description using SOAP messages, typically conveyed using HTTP with an XML serialization in conjunction with other web-related standards." W3C Web Services may use SOAP over HTTP, allowing less costly interactions over the Internet than via proprietary solutions like EDI/B2B. Besides SOAP over HTTP, web services can also be implemented on other reliable transport mechanisms like FTP. In a 2002 document, the Web Services Architecture Working Group defined a web services architecture, requiring a standardized implementation of a "web service." The term "web service" describes a standardized way of integrating web-based applications using the XML, SOAP, WSDL and UDDI open standards over an Internet Protocol backbone.
XML is the data format used to contain the data and provide metadata around it, SOAP is used to transfer the data, WSDL is used to describe the services available, and UDDI lists which services are available. A web service is a method of communication between two electronic devices over a network. It is a software function provided at a network address over the web, with the service always on, as in the concept of utility computing. Many organizations use multiple software systems for management, and these different software systems often need to exchange data with each other. A web service is a method of communication that allows two software systems to exchange this data over the internet; the software system that requests data is called a service requester, whereas the software system that processes the request and provides the data is called a service provider. Different software may use different programming languages, hence there is a need for a method of data exchange that doesn't depend upon a particular programming language.
Most types of software can interpret XML tags, so web services can use XML files for data exchange. Rules for communication between different systems need to be defined, such as: how one system can request data from another system; which specific parameters are needed in the data request; what the structure of the data produced will be; and what error messages to display when a certain rule for communication is not observed, to make troubleshooting easier. All of these rules for communication are defined in a file called WSDL (Web Services Description Language). A directory called UDDI (Universal Description, Discovery, and Integration) defines which software system should be contacted for which type of data. So when one software system needs a particular report or piece of data, it would go to the UDDI and find out which other system it can contact to receive that data. Once the software system finds out which other system it should contact, it would then contact that system to request the data, typically using a protocol such as SOAP.
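The request/response contract described above (required parameters, an agreed data structure, and defined error messages) can be illustrated without any network machinery. The following Python sketch is purely hypothetical: the catalog, field names, and error strings are invented, and in a real web service the JSON payloads would travel over HTTP (e.g. in SOAP envelopes or REST calls) rather than as a direct function call.

```python
import json

# Hypothetical service provider: a tiny data catalog and a handler that
# enforces the agreed communication rules on each incoming request.
CATALOG = {"DE": 83_200_000, "FR": 68_000_000}

def handle_request(raw_request: str) -> str:
    """Provider side: parse the machine-readable request, check the
    required parameter, and answer with data or a structured error."""
    try:
        request = json.loads(raw_request)
    except json.JSONDecodeError:
        return json.dumps({"error": "request is not valid JSON"})
    country = request.get("country")
    if country is None:
        return json.dumps({"error": "missing required parameter 'country'"})
    if country not in CATALOG:
        return json.dumps({"error": f"unknown country {country!r}"})
    return json.dumps({"country": country, "population": CATALOG[country]})

# Requester side: build a machine-readable request, parse the reply.
reply = json.loads(handle_request(json.dumps({"country": "DE"})))
```

Because both sides agree only on the JSON payload shape, the requester and provider can be written in different programming languages, which is the point of language-independent data exchange.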
Software testing is an investigation conducted to provide stakeholders with information about the quality of the software product or service under test. Software testing can provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include the process of executing a program or application with the intent of finding software bugs, and verifying that the software product is fit for use. Software testing involves the execution of a software component or system component to evaluate one or more properties of interest. In general, these properties indicate the extent to which the component or system under test: meets the requirements that guided its design and development, responds correctly to all kinds of inputs, performs its functions within an acceptable time, is sufficiently usable, can be installed and run in its intended environments, and achieves the general result its stakeholders desire. As the number of possible tests for even simple software components is practically infinite, all software testing uses some strategy to select tests that are feasible for the available time and resources.
As a result, software testing attempts to execute a program or application with the intent of finding software bugs. The job of testing is an iterative process: when one bug is fixed, it can illuminate other, deeper bugs, or can create new ones. Software testing can provide objective, independent information about the quality of software and the risk of its failure to users or sponsors. Software testing can be conducted as soon as executable software (even if partially complete) exists; the overall approach to software development determines when and how testing is conducted. For example, in a phased process, most testing occurs after system requirements have been defined and implemented in testable programs. In contrast, under an agile approach, requirements and testing are done concurrently. Although testing can determine the correctness of software under the assumption of some specific hypotheses, testing cannot identify all the defects within the software. Instead, it furnishes a criticism or comparison that compares the state and behavior of the product against test oracles—principles or mechanisms by which someone might recognize a problem.
These oracles may include specifications, comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, applicable laws, or other criteria. A primary purpose of testing is to detect software failures so that defects may be discovered and corrected. Testing cannot establish that a product functions properly under all conditions, but only that it does not function properly under specific conditions. The scope of software testing includes the examination of code, the execution of that code in various environments and conditions, and the examination of the aspects of code: does it do what it is supposed to do and do what it needs to do? In the current culture of software development, a testing organization may be separate from the development team. There are various roles for testing team members. Information derived from software testing may be used to correct the process by which software is developed.
Every software product has a target audience. For example, the audience for video game software is different from that for banking software. Therefore, when an organization develops or otherwise invests in a software product, it can assess whether the software product will be acceptable to its end users, its target audience, its purchasers and other stakeholders. Software testing aids the process of attempting to make this assessment. Not all software defects are caused by coding errors. One common source of expensive defects is requirement gaps, i.e. unrecognized requirements that result in errors of omission by the program designer. Requirement gaps can often be non-functional requirements such as testability, maintainability, usability and security. Software faults occur through the following process: a programmer makes an error (mistake), which results in a defect (fault, bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong results, causing a failure. Not all defects will result in failures. For example, defects in dead code will never result in failures.
A defect can turn into a failure when the environment is changed. Examples of these changes in environment include the software being run on a new computer hardware platform, alterations in source data, or interacting with different software. A single defect may result in a wide range of failure symptoms. A fundamental problem with software testing is that testing under all combinations of inputs and preconditions is not feasible, even with a simple product; this means that the number of defects in a software product can be very large, and defects that occur infrequently are difficult to find in testing. Moreover, non-functional dimensions of quality—usability, performance, reliability—can be subjective. Software developers can't test everything, but they can use combinatorial test design to identify the minimum number of tests needed to get the coverage they want. Combinatorial test design enables users to get greater test coverage with fewer tests. Whether they are looking for speed or test depth, they can use combinatorial test design methods to build structured variation into their test cases.
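The coverage-versus-cost trade-off of combinatorial test design can be sketched concretely. The following Python example is an illustrative greedy pairwise generator, not a production tool; the parameter names and values are made up. It selects test cases so that every pair of parameter values appears together in at least one case, using far fewer cases than the exhaustive cross product.

```python
from itertools import combinations, product

# Hypothetical test parameters: 3 x 3 x 3 = 27 exhaustive combinations.
params = {
    "browser": ["firefox", "chrome", "safari"],
    "os": ["linux", "windows", "macos"],
    "locale": ["en", "de", "ja"],
}

names = list(params)
all_cases = [dict(zip(names, vs)) for vs in product(*params.values())]

def pairs(case):
    """All (parameter, value) pairs that co-occur in one test case."""
    return {((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)}

# Greedy covering: repeatedly pick the case covering the most
# still-uncovered value pairs until every pair is covered.
uncovered = set().union(*(pairs(c) for c in all_cases))
suite = []
while uncovered:
    best = max(all_cases, key=lambda c: len(pairs(c) & uncovered))
    suite.append(best)
    uncovered -= pairs(best)

# Every value pair is now covered by a suite much smaller than 27 cases.
```

Greedy selection is a common heuristic for covering arrays; dedicated pairwise tools use more sophisticated constructions, but the coverage guarantee (every pair appears at least once) is the same.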
Agile software development
Agile software development is an approach to software development under which requirements and solutions evolve through the collaborative effort of self-organizing and cross-functional teams and their customer/end user. It advocates adaptive planning, evolutionary development, early delivery and continual improvement, and it encourages rapid and flexible response to change. The term agile was popularized by the Manifesto for Agile Software Development. The values and principles espoused in this manifesto were derived from and underpin a broad range of software development frameworks, including Scrum and Kanban. There is significant anecdotal evidence that adopting agile practices and values improves the agility of software professionals and organizations. Iterative and incremental development methods can be traced back as early as 1957, with evolutionary project management and adaptive software development emerging in the early 1970s. During the 1990s, a number of lightweight software development methods evolved in reaction to the prevailing heavyweight methods that critics described as overly regulated and micro-managed.
These included, among others, rapid application development, from 1991. Although these all originated before the publication of the Agile Manifesto, they are now collectively referred to as agile software development methods. At the same time, similar changes were underway in aerospace. In 2001, seventeen software developers met at a resort in Snowbird, Utah to discuss these lightweight development methods, including among others Kent Beck, Ward Cunningham, Dave Thomas, Jeff Sutherland, Ken Schwaber, Jim Highsmith, Alistair Cockburn and Robert C. Martin. Together they published the Manifesto for Agile Software Development. In 2005, a group headed by Cockburn and Highsmith wrote an addendum of project management principles, the PM Declaration of Interdependence, to guide software project management according to agile software development methods. In 2009, a group working with Martin wrote an extension of software development principles, the Software Craftsmanship Manifesto, to guide agile software development according to professional conduct and mastery.
In 2011, the Agile Alliance created the Guide to Agile Practices, an evolving open-source compendium of the working definitions of agile practices and elements, along with interpretations and experience guidelines from the worldwide community of agile practitioners. Based on their combined experience of developing software and helping others do so, the seventeen signatories to the manifesto proclaimed that they value:
- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan
That is to say, the items on the left are valued more than the items on the right. As Scott Ambler elucidated: tools and processes are important, but it is more important to have competent people working together effectively. Good documentation is useful in helping people to understand how the software is built and how to use it, but the main point of development is to create software, not documentation.
A contract is important but is no substitute for working with customers to discover what they need. A project plan is important, but it must be flexible enough to accommodate changes in technology or the environment, stakeholders' priorities, and people's understanding of the problem and its solution. Some of the authors formed the Agile Alliance, a non-profit organization that promotes software development according to the manifesto's values and principles. Introducing the manifesto on behalf of the Agile Alliance, Jim Highsmith said: The Agile movement is not anti-methodology, in fact many of us want to restore credibility to the word methodology. We want to restore a balance. We embrace modeling, but not in order to file some diagram in a dusty corporate repository. We embrace documentation, but not hundreds of pages of rarely-used tomes. We recognize the limits of planning in a turbulent environment. Those who would brand proponents of XP or SCRUM or any of the other Agile Methodologies as "hackers" are ignorant of both the methodologies and the original definition of the term hacker.
The Manifesto for Agile Software Development is based on twelve principles:
1. Customer satisfaction by early and continuous delivery of valuable software.
2. Welcome changing requirements, even in late development.
3. Deliver working software frequently (weeks rather than months).
4. Close, daily cooperation between business people and developers.
5. Projects are built around motivated individuals, who should be trusted.
6. Face-to-face conversation is the best form of communication.
7. Working software is the primary measure of progress.
8. Sustainable development, able to maintain a constant pace.
9. Continuous attention to technical excellence and good design.
10. Simplicity—the art of maximizing the amount of work not done—is essential.
11. The best architectures and designs emerge from self-organizing teams.
12. Regularly, the team reflects on how to become more effective, and adjusts accordingly.
Most agile development methods break product development work into small increments that minimize the amount of up-front planning and design. Iterations, or sprints, are short time frames that typically last from one to four weeks.
Each iteration involves a cross-functional team working in all functions: planning, analysis, design, coding, unit testing and acceptance testing.
TLA+ is a formal specification language developed by Leslie Lamport. It is used to design, model and verify concurrent systems. TLA+ has been described as exhaustively-testable pseudocode, and its use likened to drawing blueprints for software systems. For design and documentation, TLA+ fulfills the same purpose as informal technical specifications. However, TLA+ specifications are written in a formal language of logic and mathematics, and the precision of specifications written in this language is intended to uncover design flaws before system implementation is underway. Since TLA+ specifications are written in a formal language, they are amenable to finite model checking; the model checker finds all possible system behaviours up to some number of execution steps and examines them for violations of desired invariance properties such as safety and liveness. TLA+ specifications use basic set theory to define safety (bad things won't happen) and temporal logic to define liveness (good things eventually happen). TLA+ is also used to write machine-checked proofs of correctness, both for algorithms and mathematical theorems.
The proofs are written in a declarative, hierarchical style independent of any single theorem prover backend. Both formal and informal structured mathematical proofs can be written in TLA+. TLA+ was introduced in 1999, following several decades of research into a verification method for concurrent systems. A toolchain has since developed, including a distributed model checker; the pseudocode-like language PlusCal was created in 2009, and TLA+2 was announced in 2014. The current TLA+ reference is The TLA+ Hyperbook by Leslie Lamport. Modern temporal logic was developed by Arthur Prior in 1957, who called it tense logic. Although Amir Pnueli was the first to study the applications of temporal logic to computer science, Prior speculated on its use a decade earlier, in 1967: "The usefulness of systems of this sort does not depend on any serious metaphysical assumption that time is discrete." LTL became an important tool for the analysis of concurrent programs, expressing properties such as mutual exclusion and freedom from deadlock.
Concurrent with Pnueli's work on LTL, academics were working to generalize Hoare logic for verification of multiprocess programs. Leslie Lamport became interested in the problem after peer review found an error in a paper he submitted on mutual exclusion. Ed Ashcroft introduced invariance in his 1975 paper "Proving Assertions About Parallel Programs", which Lamport used to generalize Floyd's method in his 1977 paper "Proving the Correctness of Multiprocess Programs". Lamport's paper introduced safety and liveness as generalizations of partial correctness and termination, respectively; this method was used to verify the first concurrent garbage collection algorithm in a 1978 paper with Edsger Dijkstra. Lamport first encountered Pnueli's LTL during a 1978 seminar at Stanford organized by Susan Owicki. According to Lamport, "I was sure that temporal logic was some kind of abstract nonsense that would never have any practical application, but it seemed like fun, so I attended." In 1980 he published "'Sometime' is Sometimes 'Not Never'", which became one of the most frequently-cited papers in the temporal logic literature.
Lamport worked on writing temporal logic specifications during his time at SRI, but found the approach to be impractical: "However, I became disillusioned with temporal logic when I saw how Schwartz, Melliar-Smith, and Fritz Vogt were spending days trying to specify a simple FIFO queue, arguing over whether the properties they listed were sufficient. I realized that, despite its aesthetic appeal, writing a specification as a conjunction of temporal properties just didn't work in practice." His search for a practical method of specification resulted in the 1983 paper "Specifying Concurrent Programming Modules", which introduced the idea of describing state transitions as boolean-valued functions of primed and unprimed variables. Work continued throughout the 1980s, and Lamport began publishing papers on the temporal logic of actions in 1990. TLA enabled the use of actions in temporal formulas, which according to Lamport "provides an elegant way to formalize and systematize all the reasoning used in concurrent system verification." TLA specifications mostly consisted of ordinary non-temporal mathematics, which Lamport found less cumbersome than a purely temporal specification.
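The idea of describing state transitions as boolean-valued functions of primed and unprimed variables, and of exhaustively checking the resulting behaviours as the TLC model checker later would, can be sketched in ordinary code. The following Python example is a toy illustration only; the clock system and all names are invented, and real TLA+ specifications are written in its own mathematical language, not Python.

```python
# A step of the system is an "action": a boolean-valued function of an
# unprimed (current) state s and a primed (next) state s_prime.
# The toy system is a clock hand cycling through hours 0..11.

STATES = range(12)

def init(s):
    return s == 0

def next_action(s, s_prime):          # s unprimed, s_prime primed
    return s_prime == (s + 1) % 12

def type_invariant(s):
    return 0 <= s < 12

def check(init, action, invariant, states):
    """Breadth-first reachability over a finite state space: every
    reachable state must satisfy the invariant (a safety property)."""
    frontier = {s for s in states if init(s)}
    reachable = set()
    while frontier:
        reachable |= frontier
        for s in frontier:
            assert invariant(s), f"invariant violated in state {s}"
        frontier = {t for s in frontier for t in states
                    if action(s, t)} - reachable
    return reachable

reachable = check(init, next_action, type_invariant, STATES)
# reachable is all twelve hours; the invariant holds in every one.
```

The action here is deterministic, but nothing in `check` requires that: an action relating one state to several successors models nondeterminism, and the checker explores every branch, which is what makes exhaustive model checking useful for concurrent systems.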
TLA provided a mathematical foundation to the specification language TLA+, introduced with the paper "Specifying Concurrent Systems with TLA+" in 1999. That same year, Yuan Yu wrote the TLC model checker for TLA+ specifications. Lamport published a full textbook on TLA+ in 2002, titled "Specifying Systems: The TLA+ Language and Tools for Software Engineers". PlusCal was introduced in 2009, and the TLA+ proof system in 2012. TLA+2 was announced in 2014, adding some additional language constructs as well as increasing in-language support for the proof system. Lamport remains engaged in creating updated TLA+ reference material.
In systems engineering and software engineering, requirements analysis encompasses those tasks that go into determining the needs or conditions to meet for a new or altered product or project, taking account of the possibly conflicting requirements of the various stakeholders, and documenting and managing software or system requirements. Requirements analysis is critical to the success or failure of a systems or software project. The requirements should be documented, measurable, traceable, related to identified business needs or opportunities, and defined to a level of detail sufficient for system design. Conceptually, requirements analysis includes three types of activities. Eliciting requirements: e.g. reviewing business process documentation and holding stakeholder interviews; this is sometimes called requirements gathering or requirements discovery. Analyzing requirements: determining whether the stated requirements are clear, complete and unambiguous, and resolving any apparent conflicts. Recording requirements: requirements may be documented in various forms, including a summary list, and may include natural-language documents, use cases, user stories, process specifications and a variety of models including data models.
Requirements analysis can be a long and tiring process during which many delicate psychological skills are involved. Large systems may confront analysts with thousands of system requirements. New systems change the environment and relationships between people, so it is important to identify all the stakeholders, take into account all their needs and ensure they understand the implications of the new systems. Analysts can employ several techniques to elicit the requirements from the customer; these may include the development of scenarios, the identification of use cases, the use of workplace observation or ethnography, holding interviews or focus groups, and creating requirements lists. Prototyping may be used to develop an example system. Where necessary, the analyst will employ a combination of these methods to establish the exact requirements of the stakeholders, so that a system that meets the business needs is produced. Requirements quality can be improved through these and other methods, such as visualization.
Visualization: using tools that promote better understanding of the desired end-product, such as visualization and simulation. Consistent use of templates: producing a consistent set of models and templates to document the requirements. Documenting dependencies: documenting dependencies and interrelationships among requirements, as well as any assumptions and constraints. See Stakeholder analysis for a discussion of the people or organizations that have a valid interest in the system; they may be affected by it either directly or indirectly. A major new emphasis in the 1990s was a focus on the identification of stakeholders, and it is now recognized that stakeholders are not limited to the organization employing the analyst. Other stakeholders include anyone who operates the system, anyone who benefits from the system, and anyone involved in purchasing or procuring the system. In a mass-market product organization, product management and sometimes sales act as surrogate consumers to guide development of the product. Further stakeholders include organizations which regulate aspects of the system, people or organizations opposed to the system, and organizations responsible for systems which interface with the system under design.
Still other stakeholders are the organizations that integrate horizontally with the organization for whom the analyst is designing the system. Requirements often have cross-functional implications that are unknown to individual stakeholders and are missed or incompletely defined during stakeholder interviews; these cross-functional implications can be elicited by conducting JRD (Joint Requirements Development) sessions in a controlled environment, facilitated by a trained facilitator, wherein stakeholders participate in discussions to elicit requirements, analyze their details and uncover cross-functional implications. A dedicated scribe should be present to document the discussion, freeing up the Business Analyst to lead the discussion in a direction that generates appropriate requirements which meet the session objective. JRD sessions are analogous to Joint Application Design sessions. In the former, the sessions elicit requirements that guide design, whereas the latter elicit the specific design features to be implemented in satisfaction of elicited requirements.
One traditional way of documenting requirements has been contract-style requirement lists. In a complex system such requirements lists can run to hundreds of pages; an appropriate metaphor would be an extremely long shopping list. Such lists are very much out of favour in modern analysis. Requirement lists do have strengths: they provide a checklist of requirements; they provide a contract between the project sponsor and developers; and for a large system they can provide a high-level description from which lower-level requirements can be derived. They are not, however, intended to serve as a reader-friendly description of the desired application; such requirements lists abstract all the requirements, so there is little context. The Business Analyst may include context for requirements in accompanying design documentation.
Computer science is the study of processes that interact with data and that can be represented as data in the form of programs. It enables the use of algorithms to manipulate, store and communicate digital information. A computer scientist studies the theory of computation and the practice of designing software systems. Its fields can be divided into theoretical and practical disciplines. Computational complexity theory is highly abstract, while computer graphics emphasizes real-world applications. Programming language theory considers approaches to the description of computational processes, while computer programming itself involves the use of programming languages and complex systems. Human–computer interaction considers the challenges in making computers useful and accessible. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division.
Algorithms for performing computations have existed since antiquity, before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner. Leibniz may be considered the first computer scientist and information theorist because, among other reasons, he documented the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry when he released his simplified arithmometer, the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his Difference Engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine. He started developing this machine in 1834, and "in less than two years, he had sketched out many of the salient features of the modern computer".
"A crucial step was the adoption of a punched card system derived from the Jacquard loom" making it infinitely programmable. In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, considered to be the first computer program. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, making all kinds of punched card equipment and was in the calculator business to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit; when the machine was finished, some hailed it as "Babbage's dream come true". During the 1940s, as new and more powerful computing machines were developed, the term computer came to refer to the machines rather than their human predecessors.
As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. In 1945, IBM founded the Watson Scientific Computing Laboratory at Columbia University in New York City; the renovated fraternity house on Manhattan's West Side was IBM's first laboratory devoted to pure science. The lab is the forerunner of IBM's Research Division, which today operates research facilities around the world; the close relationship between IBM and the university was instrumental in the emergence of a new scientific discipline, with Columbia offering one of the first academic-credit courses in computer science in 1946. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s; the world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science degree program in the United States was formed at Purdue University in 1962.
Since practical computers became available, many applications of computing have become distinct areas of study in their own rights. Although many initially believed it was impossible that computers themselves could be a scientific field of study, in the late fifties it gradually became accepted among the greater academic population. It is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and later the IBM 709 computers, which were widely used during the exploration period of such devices. "Still, working with the IBM was frustrating: if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again." During the late 1950s, the computer science discipline was very much in its developmental stages, and such issues were commonplace. Time has since seen significant improvements in the usability and effectiveness of computing technology. Modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base.
Initially, computers were quite costly, and some degree of human aid was needed for efficient use—in part from professional computer operators. As computer adoption became more widespread and affordable, less human assistance was needed for common usage. Despite its relatively short history as a formal academic discipline, computer science has made a number of fundamental contributions to science and society; in fact, along with electronics, it is a founding science of the Information Age.