The Unified Software Development Process, or Unified Process, is an iterative and incremental software development process framework. The best-known and most extensively documented refinement of the Unified Process is the Rational Unified Process (RUP); another example is the Agile Unified Process. The Unified Process is not a single prescriptive process, but rather an extensible framework which should be customized for specific organizations or projects. The Rational Unified Process is likewise a customizable framework; as a result, it is often impossible to say whether a given refinement was derived from UP or from RUP, so the names tend to be used interchangeably. The name Unified Process, as opposed to Rational Unified Process, is generally used to describe the generic process, including those elements which are common to most refinements. The Unified Process name is also used to avoid potential trademark infringement, since Rational Unified Process and RUP are trademarks of IBM. The first book to describe the process, titled The Unified Software Development Process, was published in 1999 by Ivar Jacobson, Grady Booch and James Rumbaugh.
Various authors unaffiliated with Rational Software have published books and articles using the name Unified Process, whereas authors affiliated with Rational Software have favored the name Rational Unified Process. In 2012 the Disciplined Agile Delivery framework was released, a hybrid framework that adopts and extends strategies from the Unified Process, Scrum, XP and other methods. The Unified Process is an iterative and incremental development process. The Elaboration, Construction and Transition phases are divided into a series of timeboxed iterations. Each iteration results in an increment, a release of the system that contains added or improved functionality compared with the previous release. Although most iterations will include work in most of the process disciplines, the relative effort and emphasis will change over the course of the project. The Unified Process insists that architecture sits at the heart of the project team's efforts to shape the system. Since no single model is sufficient to cover all aspects of a system, the Unified Process supports multiple architectural models and views.
One of the most important deliverables of the process is the executable architecture baseline, created during the Elaboration phase. This partial implementation of the system serves to validate the architecture and act as a foundation for remaining development. The Unified Process also requires the project team to focus on addressing the most critical risks early in the project life cycle. The deliverables of each iteration, especially in the Elaboration phase, must be selected to ensure that the greatest risks are addressed first. The Unified Process divides the project into four phases: Inception, Elaboration, Construction and Transition. Inception is the smallest phase in the project, and ideally it should be quite short. If the Inception phase is long, it may be an indication of excessive up-front specification, contrary to the spirit of the Unified Process. Typical goals for the Inception phase include establishing the business case, preparing a preliminary project schedule and cost estimate, assessing feasibility, and deciding whether to buy or develop the system. The Lifecycle Objective Milestone marks the end of the Inception phase.
The goals of Inception are to develop an approximate vision of the system, make the business case, define the scope, and produce a rough estimate for cost and schedule. During the Elaboration phase, the project team is expected to capture a healthy majority of the system requirements. However, the primary goals of Elaboration are to address known risk factors and to establish and validate the system architecture. Common processes undertaken in this phase include the creation of use case diagrams, conceptual diagrams and package diagrams. The architecture is validated through the implementation of an executable architecture baseline: a partial implementation of the system which includes the core, most architecturally significant components. It is built in a series of small, time-boxed iterations. By the end of the Elaboration phase, the system architecture must have stabilized, and the executable architecture baseline must demonstrate that the architecture will support the key system functionality and exhibit the right behavior in terms of performance and cost.
The final Elaboration phase deliverable is a plan for the Construction phase. At this point the plan should be accurate and credible, since it is based on the Elaboration phase experience and since significant risk factors should have been addressed during Elaboration. Construction is the largest phase of the project. In this phase, the remainder of the system is built on the foundation laid in Elaboration. System features are implemented in a series of short, time-boxed iterations, and each iteration results in an executable release of the software. It is customary to write full-text use cases during the Construction phase, each one becoming the start of a new iteration. Common Unified Modeling Language diagrams used during this phase include activity diagrams, sequence diagrams, collaboration diagrams, state transition diagrams and interaction overview diagrams. Lower-risk and easier elements are implemented iteratively. The final Construction phase deliverable is software ready to be deployed in the Transition phase.
The final project phase is Transition. In this phase the system is deployed to the target users. Fe
Profiling (computer programming)
In software engineering, profiling is a form of dynamic program analysis that measures, for example, the space or time complexity of a program, the usage of particular instructions, or the frequency and duration of function calls. Most profiling information serves to aid program optimization. Profiling is achieved by instrumenting either the program source code or its binary executable form using a tool called a profiler. Profilers may use a number of different techniques, such as event-based, statistical and simulation methods, and collect data through hardware interrupts, code instrumentation, instruction set simulation, operating system hooks and performance counters. Profilers are used in the performance engineering process. Program analysis tools are important for understanding program behavior: computer architects need such tools to evaluate how well programs will perform on new architectures, software writers need them to identify critical sections of code, and compiler writers use them to find out how well their instruction scheduling or branch prediction algorithms are performing.
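As a concrete illustration of an event-based profiler (an example added here, not part of the original text), Python's built-in cProfile records every function call and reports per-function call counts and cumulative times:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately naive loop so the profiler has something to measure.
    total = 0
    for i in range(n):
        total += i
    return total

def run():
    return [slow_sum(10_000) for _ in range(50)]

profiler = cProfile.Profile()
profiler.enable()
run()
profiler.disable()

# Produce a statistical summary of the observed call events,
# sorted by cumulative time, one row per function.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
print(report)  # shows, among other rows, that slow_sum was called 50 times
```

Because cProfile instruments every call event it is deterministic, at the cost of measurable overhead; statistical profilers trade that exactness for lower overhead.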
The output of a profiler may be:

A statistical summary of the events observed. Summary profile information is often shown annotated against the source code statements where the events occur, so the size of the measurement data is proportional to the code size of the program:

    /* ------------ source ------------------------ count */
    0001    IF X = "A"                              0055
    0002       THEN DO
    0003          ADD 1 to XCOUNT                   0032
    0004       ELSE
    0005    IF X = "B"                              0055

A stream of recorded events (a trace). For sequential programs, a summary profile is usually sufficient, but performance problems in parallel programs depend on the time relationship of events, thus requiring a full trace to understand what is happening. The size of a trace is proportional to the program's instruction path length, making it somewhat impractical; a trace may therefore be initiated at one point in a program and terminated at another point to limit the output.

An ongoing interaction with the hypervisor. This provides the opportunity to switch a trace on or off at any desired point during execution, in addition to viewing ongoing metrics about the program.
It also provides the opportunity to suspend asynchronous processes at critical points to examine interactions with other parallel processes in more detail. A profiler can be applied to an individual method or at the scale of a module or program, to identify performance bottlenecks by making long-running code obvious. A profiler can be used to understand code from a timing point of view, with the objective of optimizing it to handle various runtime conditions or various loads. Profiling results can be ingested by a compiler that supports profile-guided optimization, and can be used to guide the optimization of an individual algorithm. Profilers are built into some application performance management systems that aggregate profiling data to provide insight into transaction workloads in distributed applications. Performance-analysis tools existed on IBM/360 and IBM/370 platforms from the early 1970s, based on timer interrupts which recorded the program status word at set timer intervals to detect "hot spots" in executing code; this was an early example of sampling.
In early 1974 instruction-set simulators permitted full trace and other performance-monitoring features. Profiler-driven program analysis on Unix dates back to 1973, when Unix systems included a basic tool, prof, which listed each function and how much of program execution time it used. In 1982 gprof extended the concept to a complete call graph analysis. In 1994, Amitabh Srivastava and Alan Eustace of Digital Equipment Corporation published a paper describing ATOM. The ATOM platform converts a program into its own profiler: at compile time, it inserts code into the program to be analyzed, and that inserted code outputs analysis data. This technique, modifying a program to analyze itself, is known as "instrumentation". In 2004 both the gprof and ATOM papers appeared on the list of the 50 most influential PLDI papers for the 20-year period ending in 1999. Flat profilers compute the average call times from the calls, and do not break down the call times based on the callee or the context. Call graph profilers show the call times and frequencies of the functions, as well as the call chains involved, based on the callee.
In some tools full context is not preserved. Input-sensitive profilers add a further dimension to flat or call-graph profilers by relating performance measures to features of the input workloads, such as input size or input values; they generate charts that characterize how an application's performance scales as a function of its input. Profilers, which are programs themselves, analyze target programs by collecting information on their execution. Based on their data granularity and on how they collect information, profilers are classified as event-based or statistical. Statistical profilers interrupt program execution at intervals to collect samples, which results in limited resolution in the time measurements; their output should therefore be treated as a statistical approximation. Basic block profilers report the number of machine clock cycles devoted to executing each line of code, or a timing based on adding these together.
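To make the event-based category concrete, the following sketch (illustrative, with made-up function names) builds a minimal event-based profiler in Python: sys.setprofile installs a hook that fires on every function-call event, so the resulting counts are exact rather than sampled:

```python
import sys
from collections import Counter

call_counts = Counter()

def hook(frame, event, arg):
    # The interpreter fires a 'call' event each time a Python function is entered.
    if event == "call":
        call_counts[frame.f_code.co_name] += 1

def helper():
    return 1

def workload():
    return sum(helper() for _ in range(100))

sys.setprofile(hook)   # install the event hook
workload()
sys.setprofile(None)   # uninstall before inspecting the counts

print(call_counts["helper"])  # helper was entered exactly 100 times
```

A statistical profiler would instead wake on a timer and sample the call stack, giving approximate proportions at far lower overhead.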
Scrum (software development)
Scrum is an agile framework for managing knowledge work, with an emphasis on software development, although it has wide application in other fields and is starting to be explored by traditional project teams more generally. It is designed for teams of three to nine members, who break their work into actions that can be completed within timeboxed iterations, called sprints, no longer than one month and most commonly two weeks, and who track progress and re-plan in 15-minute time-boxed stand-up meetings, called daily scrums. Approaches to coordinating the work of multiple scrum teams in larger organizations include large-scale scrum, the Scaled Agile Framework, scrum of scrums, Scrum@Scale and Nexus, among others. Scrum is a lightweight and incremental framework for managing product development. It defines "a flexible, holistic product development strategy where a development team works as a unit to reach a common goal", challenges assumptions of the "traditional, sequential approach" to product development, and enables teams to self-organize by encouraging physical co-location or close online collaboration of all team members, as well as daily face-to-face communication among all team members and disciplines involved.
A key principle of Scrum is the dual recognition that customers will change their minds about what they want or need and that there will be unpredictable challenges, for which a predictive or planned approach is not suited. As such, Scrum adopts an evidence-based empirical approach: accepting that the problem cannot be fully understood or defined up front, and instead focusing on maximizing the team's ability to deliver quickly, to respond to emerging requirements, and to adapt to evolving technologies and changes in market conditions. Scrum is sometimes seen written in all capitals, as SCRUM; since the word is not an acronym, this is incorrect. While the trademark on the term Scrum itself has been allowed to lapse, it is deemed to be owned by the wider community rather than an individual, so the leading capital for Scrum is retained in this article. Many of the terms used in Scrum are typically written with leading capitals. However, to maintain an encyclopedic tone, this article uses normal sentence case for these terms, unless they are recognized marks.
Hirotaka Takeuchi and Ikujiro Nonaka introduced the term scrum in the context of product development in their 1986 Harvard Business Review article, "The New New Product Development Game". Takeuchi and Nonaka later argued in The Knowledge Creating Company that it is a form of "organizational knowledge creation good at bringing about innovation continuously and spirally". The authors described a new approach to commercial product development that would increase speed and flexibility, based on case studies from manufacturing firms in the automotive and printer industries. They called this the holistic or rugby approach, as the whole process is performed by one cross-functional team across multiple overlapping phases, in which the team "tries to go the distance as a unit, passing the ball back and forth". In the early 1990s, Ken Schwaber used what would become Scrum at his company, Advanced Development Methods. In 1995, Jeff Sutherland and Schwaber jointly presented a paper describing the scrum framework at the Business Object Design and Implementation Workshop, held as part of Object-Oriented Programming, Systems, Languages & Applications (OOPSLA) '95 in Austin, Texas.
Over the following years, Schwaber and Sutherland collaborated to combine this material, with their experience and evolving good practice, into what became known as Scrum. In 2001, Schwaber worked with Mike Beedle to describe the method in the book Agile Software Development with Scrum. Scrum's approach to planning and managing product development involves bringing decision-making authority to the level of operation properties and certainties. In 2002, Schwaber and others founded the Scrum Alliance and set up the Certified Scrum accreditation series. Schwaber left the Scrum Alliance in late 2009 and founded Scrum.org, which oversees the parallel Professional Scrum accreditation series. Since 2009, a public document called The Scrum Guide has defined Scrum; it has been revised five times, with the current version being November 2017. In 2018, Schwaber and the Scrum.org community, along with leaders of the Kanban community, published The Kanban Guide for Scrum Teams. There are three roles in the scrum framework.
These are ideally co-located to ensure optimal communication among team members. Together these three roles form the scrum team. While many organizations have other roles involved with defining and delivering the product, Scrum defines only these three. The product owner represents the product's stakeholders and the voice of the customer, is responsible for the product backlog, and is accountable for maximizing the value that the team delivers. The product owner defines product items in customer-centric terms, adds them to the product backlog, and prioritizes them based on importance and dependencies. A scrum team should have only one product owner, and this role should not be combined with that of the scrum master. The product owner should focus on the business
Rational Unified Process
The Rational Unified Process (RUP) is an iterative software development process framework created by the Rational Software Corporation, a division of IBM since 2003. RUP is not a single concrete prescriptive process, but rather an adaptable process framework, intended to be tailored by the development organizations and software project teams that will select the elements of the process that are appropriate for their needs. RUP is a specific implementation of the Unified Process. Rational Software developed the Rational Unified Process as a software process product; the product includes a hyperlinked knowledge base with sample artifacts and detailed descriptions for many different types of activities. RUP is included in the IBM Rational Method Composer product, which allows customization of the process. Philippe Kruchten, an experienced Rational technical representative, was tasked with heading up the original RUP team. The journey began with the creation of the Rational Objectory Process in 1996, when Rational acquired the Objectory Process written by Ivar Jacobson and company.
This was renamed the Rational Unified Process in subsequent releases, in part to align the name with that of the Unified Modeling Language. These initial versions combined the Rational Software organisation's extensive field experience building object-oriented systems with Objectory's guidance on practices such as use cases, and incorporated extensive content from Jim Rumbaugh's Object Modeling Technique (OMT), Grady Booch's Booch method, and the newly released UML 0.8. To help make this growing knowledge base more accessible, Philippe Kruchten was tasked with assembling an explicit process framework for modern software engineering; this effort employed the HTML-based process delivery mechanism developed by Objectory. The resulting "Rational Unified Process" completed a strategic tripod for Rational: a tailorable process that guided development; tools that automated the application of that process; and services that accelerated adoption of both the process and the tools. This guidance was augmented in subsequent versions with knowledge based on the experience of companies that Rational had acquired.
In 1997, requirements and test disciplines were added to the approach, with much of the additional material sourced from the Requirements College method developed by Dean Leffingwell et al. at Requisite, Inc. and the SQA Process method developed at SQA Inc., both companies having been acquired by Rational Software. In 1998 Rational Software added two new disciplines: business modeling (much of this content had already been in the Objectory Process) and a configuration and change management discipline, sourced through the acquisition of Pure Atria Corporation. These additions led to an overarching set of principles that were defined by Rational and articulated within RUP as the six best practices for modern software engineering: develop iteratively, with risk as the primary iteration driver; manage requirements; employ a component-based architecture; model software visually; continuously verify quality; and control changes. These best practices were aligned with Rational's product line, drove the ongoing development of Rational's products, and were used by Rational's field teams to help customers improve the quality and predictability of their software development efforts.
Additional techniques, including performance testing, UI design and data engineering, were included, along with an update to reflect changes in UML 1.1. In 1999, a project management discipline was introduced, as well as techniques to support real-time software development and updates to reflect UML 1.3. In addition, the first book to describe the process, The Unified Software Development Process, was published in the same year. Between 2000 and 2003, a number of changes introduced guidance from ongoing Rational field experience with iterative development, in addition to tool support for enacting RUP instances and for customization of the RUP framework. These changes included: the introduction of concepts and techniques from approaches such as eXtreme Programming (XP), which would come to be known collectively as agile methods, among them pair programming and test-first design, together with papers that explained how RUP enabled XP to scale for use on larger projects; and a complete overhaul of the testing discipline to better reflect how testing work was conducted in different iterative development contexts.
The changes also included the introduction of supporting guidance, known as "tool mentors", for enacting the RUP practices in various tools; these provided step-by-step method support to Rational tool users. Customization of RUP was also automated, in a way that allowed customers to select parts from the RUP process framework, customize their selection with their own additions, and still incorporate improvements in subsequent releases from Rational. IBM acquired Rational Software in February 2003. In 2006, IBM created a subset of RUP tailored for the delivery of agile projects, released as an open-source method called OpenUP through the Eclipse website. RUP is based on a set of building blocks and content elements, describing what is to be produced, the necessary skills required, and the step-by-step explanation describing how specific development goals are to be achieved. The main building blocks, or content elements, are the following: Roles – A role defines a set of related skills and responsibilities. Work products – A work product represents something resulting from a task, including all the documents and models produced while working through the process.
Tasks – A task describes a unit of work assigned to a Role that provides a meaningful result. Within each iteration, the task
Rapid application development
Rapid-application development (RAD), also called rapid-application building, is both a general term for adaptive software development approaches and the name for James Martin's approach to rapid development. In general, RAD approaches to software development put less emphasis on planning and more emphasis on an adaptive process. Prototypes are used in addition to, or sometimes in place of, design specifications. RAD is well suited for developing software that is driven by user interface requirements, and graphical user interface builders are often called rapid application development tools. Other approaches to rapid development include the adaptive, agile and unified models. Rapid application development was a response to plan-driven waterfall processes developed in the 1970s and 1980s, such as the Structured Systems Analysis and Design Method. One of the problems with these methods is that they were based on a traditional engineering model used to design and build things like bridges and buildings. Software is an inherently different kind of artifact.
Software can radically change the entire process used to solve a problem. As a result, knowledge gained from the development process itself can feed back to the requirements and design of the solution. Plan-driven approaches attempt to rigidly define the requirements, the solution, and the plan to implement it, and have a process that discourages changes. RAD approaches, on the other hand, recognize that software development is a knowledge-intensive process and provide flexible processes that help take advantage of knowledge gained during the project to improve or adapt the solution. The first such RAD alternative, developed by Barry Boehm, was known as the spiral model. Boehm and other subsequent RAD approaches emphasized developing prototypes as well as, or instead of, rigorous design specifications. Prototypes had several advantages over traditional specifications: Risk reduction. A prototype could test some of the most difficult potential parts of the system early in the life cycle; this can provide valuable information as to the feasibility of a design and can prevent the team from pursuing solutions that turn out to be too complex or time-consuming to implement.
This benefit of finding problems earlier in the life cycle, rather than later, was a key benefit of the RAD approach: the earlier a problem is found, the cheaper it is to address. Users are better at reacting than at creating specifications. In the waterfall model it was common for a user to sign off on a set of requirements but then, when presented with an implemented system, to realize that a given design lacked some critical features or was too complex. In general, most users give much more useful feedback when they can experience a prototype of the running system than when they must abstractly define what that system should be. Prototypes can evolve into the completed product. One approach used in some RAD methods was to build the system as a series of prototypes that evolve from minimal functionality to moderately useful to the final completed system. The advantage of this, besides the two advantages above, was that the users could get useful business functionality much earlier in the process. Starting with the ideas of Barry Boehm and others, James Martin developed the rapid application development approach during the 1980s at IBM and formalized it by publishing a book in 1991, Rapid Application Development.
This has resulted in some confusion over the term RAD among IT professionals: it is important to distinguish between RAD as a general alternative to the waterfall model and RAD as the specific method created by Martin. The Martin method was tailored toward knowledge-intensive and UI-intensive business systems. These ideas were further developed and improved upon by RAD pioneers like James Kerr and Richard Hunter, who together wrote the seminal book on the subject, Inside RAD, which followed the journey of a RAD project manager as he drove and refined the RAD methodology in real time on an actual RAD project. These practitioners, and those like them, helped RAD gain popularity as an alternative to traditional systems project life cycle approaches. The RAD approach matured during the period of peak interest in business re-engineering. The idea of business process re-engineering was to radically rethink core business processes such as sales and customer support with the new capabilities of information technology in mind.
RAD was often an essential part of larger business re-engineering programs. The rapid prototyping approach of RAD was a key tool to help users and analysts "think out of the box" about innovative ways that technology might radically reinvent a core business process. The James Martin approach to RAD divides the process into four distinct phases: Requirements planning phase – combines elements of the system planning and systems analysis phases of the Systems Development Life Cycle (SDLC). Users and IT staff members discuss and agree on business needs, project scope and system requirements; the phase ends when the team agrees on the key issues and obtains management authorization to continue. User design phase – during this phase, users interact with systems analysts and develop models and prototypes that represent all system processes and outputs. The RAD groups or subgroups use a combination of joint application development techniques and CASE tools to translate user needs into working models. User design is a continuous interactive process that allows users to understand and approve a working model of the system that meets their needs.
Construction phase – focuses on program and application development tasks similar to the SDLC. In RAD, users c
Dynamic systems development method
Dynamic systems development method (DSDM) is an agile project delivery framework used as a software development method. First released in 1994, DSDM sought to provide some discipline to the rapid application development method. In later versions the DSDM Agile Project Framework was revised and became a generic approach to project management and solution delivery, rather than being focused on software development and code creation, and could be used for non-IT projects. The DSDM Agile Project Framework covers a wide range of activities across the whole project lifecycle and includes strong foundations and governance, which set it apart from some other agile methods. The DSDM Agile Project Framework is an iterative and incremental approach that embraces principles of agile development, including continuous user/customer involvement. DSDM fixes cost and time at the outset and uses the MoSCoW prioritisation of scope into musts, shoulds, coulds and won't haves to adjust the project deliverable to meet the stated time constraint.
DSDM is one of a number of agile methods for developing software and non-IT solutions, and it forms a part of the Agile Alliance. In 2014, DSDM released the latest version of the method, the 'DSDM Agile Project Framework'. At the same time the new DSDM manual recognised the need to operate alongside other frameworks for service delivery, such as PRINCE2, Managing Successful Programmes and PMI; the previous version had only contained guidance on. In the early 1990s, rapid application development was spreading across the IT industry. The user interfaces for software applications were moving from the old green screens to the graphical user interfaces that are used today. New application development tools were coming on the market, such as PowerBuilder; these enabled developers to share their proposed solutions much more easily with their customers. Prototyping became a reality, and the frustrations of the classical, sequential development methods could be put to one side. However, the RAD movement was unstructured: there was no agreed definition of a suitable process, and many organisations came up with their own definition and approach.
Many major corporations were interested in the possibilities, but they were concerned about losing the level of quality in the end deliverables that free-flow development could give rise to. The DSDM Consortium was founded in 1994 by an association of vendors and experts in the field of software engineering, created with the objective of "jointly developing and promoting an independent RAD framework" by combining their best-practice experiences. The origins were an event organised by the Butler Group in London; people at that meeting all worked for blue-chip organisations such as British Airways, American Express and Logica. In July 2006, DSDM Public Version 4.2 was made available for individuals to use. In 2014, the DSDM handbook was made publicly available, and templates for DSDM can be downloaded. In October 2016 the DSDM Consortium rebranded as the Agile Business Consortium, a not-for-profit, vendor-independent organisation which owns and administers the DSDM framework.
DSDM Atern is a vendor-independent approach that recognises that more projects fail because of people problems than because of technology. Atern's focus is on helping people to work together to achieve the business goals. Atern is independent of tools and techniques, enabling it to be used in any business and technical environment without tying the business to a particular vendor. There are eight principles underpinning DSDM Atern; these principles direct the team in the attitude they must take and the mindset they must adopt to deliver consistently: focus on the business need; deliver on time; collaborate; never compromise quality; build incrementally from firm foundations; develop iteratively; communicate continuously; and demonstrate control. Timeboxing is the approach for completing the project incrementally by splitting it into portions, each with a fixed budget and a delivery date. For each portion a number of requirements are selected; because time and budget are fixed, the only remaining variable is the requirements.
So if a project is running out of time or money, the requirements with the lowest priority are omitted. This does not mean that an unfinished product is delivered: by the Pareto principle, 80% of the project's value comes from 20% of the system requirements, so as long as the most important 20% of requirements are implemented, the system meets the business needs. No system is built perfectly in the first try. MoSCoW is a technique for prioritising work requirements; it is an acronym that stands for: MUST have, SHOULD have, COULD have, WON'T have. Prototyping refers to the creation of prototypes of the system under development at an early stage of the project. It enables the early discovery of shortcomings in the system and allows future users to 'test-drive' the system; this way good user involvement is realised, one of the key success factors of DSDM, or of any system development project for that matter. Testing helps ensure a solution of good quality; DSDM advocates testing throughout each iteration.
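The interaction between a fixed timebox and MoSCoW priorities can be sketched in code. The following Python fragment is a hypothetical illustration (the backlog items and effort numbers are invented, not part of DSDM itself): requirements are admitted in priority order until the fixed capacity is spent, so the lowest-priority items are the ones that drop out:

```python
# Hypothetical sketch: each requirement is (name, MoSCoW priority, effort).
PRIORITY_ORDER = {"MUST": 0, "SHOULD": 1, "COULD": 2, "WONT": 3}

def plan_timebox(requirements, capacity):
    """Fill a fixed-capacity timebox, dropping the lowest priorities first."""
    selected = []
    remaining = capacity
    # Stable sort keeps the original order within each priority band.
    for name, priority, effort in sorted(requirements,
                                         key=lambda r: PRIORITY_ORDER[r[1]]):
        if priority == "WONT":      # explicitly out of scope for this timebox
            continue
        if effort <= remaining:
            selected.append(name)
            remaining -= effort
    return selected

backlog = [
    ("login", "MUST", 3),
    ("reporting", "COULD", 4),
    ("search", "SHOULD", 2),
    ("themes", "WONT", 1),
    ("audit", "MUST", 2),
]
print(plan_timebox(backlog, capacity=7))  # ['login', 'audit', 'search']
```

With a capacity of 7, both musts (total effort 5) and the should (effort 2) fit; the could is omitted, mirroring how a DSDM timebox flexes scope rather than time or cost.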
Since DSDM is a tool- and technique-independent method, the project team is free to choose its own test management method. A workshop brings project stakeholders together to discuss requirements and build mutual understanding.
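The interplay of timeboxing and MoSCoW described above can be sketched in a few lines of Python. This is a minimal illustration, not part of DSDM itself: the `Requirement` class and `select_scope` function are hypothetical names, and `cost` stands in for whatever effort measure a team actually uses.

```python
from dataclasses import dataclass
from enum import IntEnum

class Priority(IntEnum):
    """MoSCoW levels, ordered so that sorting puts Must have first."""
    MUST = 0
    SHOULD = 1
    COULD = 2
    WONT = 3  # explicitly out of scope for this timebox

@dataclass(frozen=True)
class Requirement:
    name: str
    priority: Priority
    cost: int  # effort, e.g. in person-days

def select_scope(requirements, budget):
    """Time and budget are fixed, so scope varies: fill the timebox
    with the highest-priority requirements that still fit."""
    selected = []
    remaining = budget
    for req in sorted(requirements, key=lambda r: r.priority):
        if req.priority is Priority.WONT:
            continue
        if req.cost <= remaining:
            selected.append(req)
            remaining -= req.cost
    return selected
```

Running out of budget then simply drops the lowest-priority items, which is exactly the trade the text describes: the delivery date holds and the scope flexes.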
Domain-driven design is an approach to software development for complex needs that connects the implementation to an evolving model. The premise of domain-driven design is to place the project's primary focus on the core domain and domain logic; the term was coined by Eric Evans in his book of the same title. Concepts of the model include: Context, the setting in which a word or statement appears that determines its meaning; the subject area to which the user applies a program is the domain of the software. Ideally, it would be preferable to have a single, unified model. While this is a noble goal, in reality the model fragments into multiple models, and it is useful to recognize this fact and work with it. Strategic Design is a set of principles for maintaining model integrity, distilling the Domain Model, and working with multiple models. Multiple models are in play on any large project, yet when code based on distinct models is combined, software becomes buggy and difficult to understand, and communication among team members becomes confusing.
It becomes unclear in what context a model should not be applied. Therefore: explicitly define the context within which a model applies. Explicitly set boundaries in terms of team organization, usage within specific parts of the application, and physical manifestations such as code bases and database schemas. Keep the model strictly consistent within these bounds, but don't be distracted or confused by issues outside them. When a number of people are working in the same bounded context, there is a strong tendency for the model to fragment. The bigger the team, the bigger the problem, but as few as three or four people can encounter serious problems; yet breaking down the system into ever-smaller contexts loses a valuable level of integration and coherency. Therefore: institute a process of merging all code and other implementation artifacts frequently, with automated tests to flag fragmentation quickly. Relentlessly exercise the ubiquitous language to hammer out a shared view of the model as the concepts evolve in different people's heads. An individual bounded context still leaves some problems in the absence of a global view.
The context of other models may still be vague and in flux. People on other teams won't be aware of the context bounds and will unknowingly make changes that blur the edges or complicate the interconnections; when connections must be made between different contexts, the models tend to bleed into each other. Therefore: identify each model in play on the project and define its bounded context; this includes the implicit models of non-object-oriented subsystems. Name each bounded context, and make the names part of the ubiquitous language. Describe the points of contact between the models, outlining explicit translation for any communication and highlighting any sharing. Map the existing terrain. In the book Domain-Driven Design, a number of high-level concepts and practices are articulated, such as the ubiquitous language, meaning that the domain model should form a common language, given by domain experts, for describing system requirements that works equally well for the business users or sponsors and for the software developers. The book focuses on the domain layer as one of the common layers in an object-oriented system with a multilayered architecture.
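The advice above, naming each bounded context and making translation between models explicit, can be illustrated with a small sketch. The Sales/Shipping split and all class names here are hypothetical, chosen only to show two contexts modelling the same term ("product") differently, with one explicit translation function at their point of contact.

```python
from dataclasses import dataclass

# Sales context: a product is something priced and sold.
@dataclass(frozen=True)
class SalesProduct:
    sku: str
    price: float

# Shipping context: the "same" product is something with a weight.
@dataclass(frozen=True)
class ShippingProduct:
    sku: str
    weight_kg: float

def to_shipping(product: SalesProduct, weight_kg: float) -> ShippingProduct:
    """Explicit translation at the point of contact between the two
    contexts; only the shared identifier (sku) crosses the boundary."""
    return ShippingProduct(sku=product.sku, weight_kg=weight_kg)
```

Keeping the two models separate and routing every crossing through one named translation point is what stops the models from bleeding into each other.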
In DDD, there are artifacts to express, create, and retrieve domain models: Entity, an object defined not by its attributes, but by a thread of continuity and its identity. Example: most airlines distinguish each seat uniquely on every flight; each seat is an entity in this context. However, Southwest Airlines, EasyJet and Ryanair do not distinguish between seats; in this context, a seat is a value object. Value object, an object that contains attributes but has no conceptual identity; value objects should be treated as immutable. Example: when people exchange business cards, they do not distinguish between each unique card; in this context, business cards are value objects. Aggregate, a collection of objects that are bound together by a root entity, otherwise known as the aggregate root; the aggregate root guarantees the consistency of changes being made within the aggregate by forbidding external objects from holding references to its members. Example: when you drive a car, you do not have to worry about moving the wheels forward, making the engine combust with spark and fuel, and so on.
In this context, the car is an aggregate of several other objects and serves as the aggregate root to all of the other systems. Domain Event, a domain object that defines an event: something that happened and that domain experts care about. Service: when an operation does not conceptually belong to any object, you can, following the natural contours of the problem, implement such operations in services. Repository: methods for retrieving domain objects should delegate to a specialized Repository object, so that alternative storage implementations may be interchanged. Factory: methods for creating domain objects should delegate to a specialized Factory object, so that alternative implementations may be interchanged.
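Several of the artifacts above can be sketched together in Python. This is an illustrative toy, not Evans's own code: `SeatValue`, `SeatEntity`, `Car`, and `InMemoryCarRepository` are hypothetical names, and the repository is a plain in-memory dictionary standing in for whichever storage implementation would be interchanged in practice.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SeatValue:
    """Value object: immutable, compared by attribute values alone."""
    row: int
    letter: str

class SeatEntity:
    """Entity: identity (seat_id) defines sameness, not attributes."""
    def __init__(self, seat_id, row, letter):
        self.seat_id = seat_id
        self.row = row
        self.letter = letter
    def __eq__(self, other):
        return isinstance(other, SeatEntity) and self.seat_id == other.seat_id
    def __hash__(self):
        return hash(self.seat_id)

class _Wheel:
    """Aggregate member; the leading underscore hints it is internal."""
    def __init__(self):
        self.rotations = 0

class Car:
    """Aggregate root: outside code changes the aggregate only through
    the root, which keeps its members consistent."""
    def __init__(self, vin):
        self.vin = vin
        self._wheels = [_Wheel() for _ in range(4)]
    def drive(self):
        for wheel in self._wheels:  # the root coordinates its members
            wheel.rotations += 1
    def total_rotations(self):
        return sum(w.rotations for w in self._wheels)

class InMemoryCarRepository:
    """Repository: retrieval is delegated here, so a database-backed
    implementation could be swapped in behind the same interface."""
    def __init__(self):
        self._cars = {}
    def add(self, car):
        self._cars[car.vin] = car
    def get(self, vin):
        return self._cars.get(vin)
```

Note that `SeatValue(12, "A") == SeatValue(12, "A")` holds, while two `SeatEntity` objects with identical attributes but different ids do not compare equal, which is precisely the entity/value-object distinction drawn in the airline example.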