A functional specification in systems engineering and software development is a document that specifies the functions that a system or component must perform. The documentation typically describes what is needed by the user as well as requested properties of inputs and outputs. A functional specification is the technical response to a matching requirements document; thus it picks up the results of the analysis stage. On more complex systems, multiple levels of functional specifications will typically nest within each other, e.g. on the system level and on the module level. A functional specification does not define the inner workings of the proposed system; instead, it focuses on what various outside agents might observe when interacting with the system. Such a requirement describes an interaction between an agent and the software system, for example what the system must do when the user provides input by clicking the OK button. Functional specifications serve many purposes: to let the developers know what to build, to let the testers know what tests to run, and to let stakeholders know what they are getting.
In the ordered industrial software engineering life-cycle, the functional specification describes what has to be implemented; the next document, the systems architecture document, describes how the functions will be realized using a chosen software environment. In non-industrial, prototypical systems development, functional specifications are written after or as part of requirements analysis. When the team agrees that consensus on the functional specification has been reached, the spec is typically declared complete or signed off. After this, the software development and testing teams typically write source code and test cases; while testing is performed, the behavior of the program is compared against the expected behavior as defined in the functional specification. One popular method of writing a specification document involves drawing or rendering either simple wireframes or accurate, graphically designed screen mock-ups. The benefit of this method is that additional details can be attached to the screen examples.
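As an illustrative sketch (the requirement, function and expected values below are invented, not taken from any real specification), a functional requirement can be turned directly into an automated check that compares the program's observed behavior with the behavior the specification defines:

```python
# Hypothetical functional requirement: "When the user submits the order form
# with an empty cart, the system shall reject the order and report the reason."

def submit_order(cart):
    # Placeholder implementation under test (invented for this sketch).
    if not cart:
        return {"accepted": False, "reason": "cart is empty"}
    return {"accepted": True, "reason": None}

def test_empty_cart_is_rejected():
    # The test encodes the expected behavior stated in the specification.
    result = submit_order([])
    assert result == {"accepted": False, "reason": "cart is empty"}

test_empty_cart_is_rejected()
print("behavior matches the specified expectation")
```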
In computing, memory refers to the computer hardware devices used to store information for immediate use in a computer; it is synonymous with the term primary storage. Computer memory operates at high speed, for example random-access memory (RAM), as a distinction from storage that provides slow-to-access program and data storage. If needed, contents of the memory can be transferred to secondary storage. An archaic synonym for memory is store. There are two main kinds of semiconductor memory: volatile and non-volatile. Examples of non-volatile memory are flash memory and ROM, PROM, EPROM and EEPROM memory. Most semiconductor memory is organized into memory cells or bistable flip-flops, each storing one bit. Flash memory organization includes both one bit per cell and multiple bits per cell. The memory cells are grouped into words of fixed word length; each word can be accessed by a binary address of N bits, making it possible to store 2^N words in the memory. This implies that processor registers normally are not considered as memory, since they only store one word. Typical secondary storage devices are hard disk drives and solid-state drives.
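A small worked sketch of the addressing arithmetic (the address width and word length below are arbitrary example values):

```python
address_bits = 16          # N-bit binary address (example value)
word_length_bits = 32      # fixed word length (example value)

addressable_words = 2 ** address_bits             # 2^N words can be addressed
capacity_bits = addressable_words * word_length_bits

print(addressable_words)                  # 65536 words
print(capacity_bits // 8, "bytes")        # 262144 bytes = 256 KiB
```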
In the early 1940s, memory technology often permitted a capacity of only a few bytes. The next significant advance in computer memory came with acoustic delay line memory, developed by J. Presper Eckert in the early 1940s. Delay line memory would be limited to a capacity of up to a few hundred thousand bits to remain efficient. Two alternatives to the delay line, the Williams tube and the Selectron tube, originated in 1946, both using electron beams in glass tubes as means of storage. Using cathode ray tubes, Fred Williams invented the Williams tube, which would prove more capacious than the Selectron tube and less expensive, but also frustratingly sensitive to environmental disturbances. Efforts began in the late 1940s to find non-volatile memory. Jay Forrester, Jan A. Rajchman and An Wang developed magnetic core memory, which allowed for recall of memory after power loss. Magnetic core memory would become the dominant form of memory until the development of transistor-based memory in the late 1960s. Developments in technology and economies of scale have since made possible so-called Very Large Memory computers.
The term memory, when used with reference to computers, generally refers to random-access memory (RAM). Volatile memory is computer memory that requires power to maintain the stored information. Most modern semiconductor volatile memory is either static RAM (SRAM) or dynamic RAM (DRAM). SRAM retains its contents as long as the power is connected and is easy to interface with, but uses six transistors per bit. SRAM is not worthwhile for desktop system memory, where DRAM dominates, but it is commonplace in small embedded systems, which might only need tens of kilobytes or less. Forthcoming volatile memory technologies that aim to replace or compete with SRAM and DRAM include Z-RAM and A-RAM. Non-volatile memory is computer memory that can retain the stored information even when not powered.
System dynamics is an approach to understanding the nonlinear behaviour of complex systems over time using stocks, internal feedback loops, table functions and time delays. System dynamics is a methodology and mathematical modeling technique used to frame, understand and discuss complex problems; convenient graphical user interface (GUI) system dynamics software developed into user-friendly versions by the 1990s and has been applied to diverse systems. SD models solve the problem of simultaneity by updating all variables in small time increments, with positive and negative feedbacks and time delays structuring the interactions. The best known SD model is probably the 1972 study The Limits to Growth. System dynamics is an aspect of systems theory, used as a method to understand the behavior of complex systems; related approaches include chaos theory and social dynamics. System dynamics was created during the mid-1950s by Professor Jay Forrester of the Massachusetts Institute of Technology. In 1956, Forrester accepted a professorship in the newly formed MIT Sloan School of Management.
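A minimal sketch of how an SD model avoids simultaneity by stepping all variables forward in small time increments (the stock, the two feedback loops and the parameter values are invented for this example):

```python
# Simple stock-and-flow model: a population stock with a reinforcing birth
# loop and a balancing death loop, integrated in small time steps (Euler).
population = 1000.0        # stock (example initial value)
birth_rate = 0.04          # per year (example parameter)
death_rate = 0.02          # per year (example parameter)
dt = 0.25                  # small time increment, in years

for step in range(int(20 / dt)):        # simulate 20 years
    births = birth_rate * population    # positive (reinforcing) feedback
    deaths = death_rate * population    # negative (balancing) feedback
    population += (births - deaths) * dt

print(round(population, 1))             # value of the stock after 20 years
```

Graphical SD tools essentially automate this bookkeeping while letting the modeler work with stock-and-flow diagrams instead of code.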
His initial goal was to determine how his background in science and engineering could be brought to bear, in some useful way, on the problems of management. At that time, the managers at GE were perplexed because employment at their appliance plants in Kentucky exhibited a significant three-year cycle. The business cycle was judged to be an insufficient explanation for the employment instability; by hand-simulating the stock-flow-feedback structure of the plants, Forrester showed that the instability arose from the internal structure of the firm, and these hand simulations were the start of the field of system dynamics. During the late 1950s and early 1960s, Forrester and a team of students moved the emerging field of system dynamics from the hand-simulation stage to the formal computer modeling stage. Richard Bennett created the first system dynamics computer modeling language, called SIMPLE, in the spring of 1958. In 1959, Phyllis Fox and Alexander Pugh wrote the first version of DYNAMO, an improved version of SIMPLE. Forrester published the first, and still classic, book in the field, titled Industrial Dynamics, in 1961. From the late 1950s to the late 1960s, system dynamics was applied almost exclusively to corporate/managerial problems.
In 1968, however, an unexpected occurrence caused the field to broaden beyond corporate modeling: John F. Collins, the former mayor of Boston, was appointed a visiting professor of Urban Affairs at MIT. The result of the Collins-Forrester collaboration was a book titled Urban Dynamics, and the Urban Dynamics model presented in the book was the first major non-corporate application of system dynamics. The second major application of system dynamics came shortly after the first. In 1970, Jay Forrester was invited by the Club of Rome to a meeting in Bern, where he was asked if system dynamics could be used to address the predicament of mankind. His answer, of course, was that it could. On the plane back from the Bern meeting, Forrester created the first draft of a system dynamics model of the world's socioeconomic system, which he called WORLD1. Upon his return to the United States, Forrester refined WORLD1 in preparation for a visit to MIT by members of the Club of Rome; Forrester called the refined version of the model WORLD2.
IT service management
As a discipline, ITSM has ties and common interests with other IT and general management approaches, e.g. quality management, information security management and software engineering. Consequently, IT service management frameworks have been influenced by other standards and have adopted concepts from them, e.g. CMMI. There is an international, chapter-based professional association, the IT Service Management Forum (itSMF). The main goal of the itSMF is to foster the exchange of experiences; to this end, national itSMF chapters organize conferences and workshops. Some of them contribute to the translation of ITSM framework documents into their respective languages or publish their own ITSM guides. There are several certifications for service management, such as the ITIL 2011 Foundation. IT service management is often equated with the Information Technology Infrastructure Library (ITIL), even though there are a variety of standards and frameworks contributing to the overall ITSM discipline. ITIL originated as a publication of United Kingdom government agencies.
In January 2014, ownership of ITIL was transferred to Axelos, a joint venture of the UK government and Capita. The current version of the ITIL framework is the 2011 edition. The 2011 edition, published in July 2011, is a revision of the edition known as ITIL version 3, which was itself an upgrade from version 2. Whereas version 2 was process-oriented, version 3 is service-oriented. Other frameworks for ITSM and overlapping disciplines include the Business Process Framework, a process framework for telecommunications service providers; COBIT, an IT governance framework that specifies control objectives, with recent versions aligning the naming of select control objectives to established ITSM process names; FitSM, a standard for service management that contains several parts, including e.g. auditable requirements and document templates, and whose basic process framework is in large part aligned to that of ISO/IEC 20000; and ISO/IEC 20000, a standard for managing and delivering IT services. The ISO/IEC 20000 process model bears many similarities to that of ITIL version 2, since it descends from BS 15000. ISO/IEC 20000 defines minimum requirements for an effective service management system (SMS).
Conformance of the SMS to ISO/IEC 20000 can be audited, and organizations can achieve an ISO/IEC 20000 certification of their SMS for a defined scope. MOF (Microsoft Operations Framework) includes, in addition to a general framework of service management functions, guidance on managing services based on Microsoft technologies. Execution of ITSM processes in an organization, especially those processes that are more workflow-driven, can benefit from support by specialized software tools. ITSM tools are often marketed as ITSM suites, which support not one but a whole set of ITSM processes. At their core is usually a workflow management system for handling incidents and service requests, and they usually include a tool for a configuration management database (CMDB).
Rational Unified Process
The Rational Unified Process (RUP) is an iterative software development process framework created by the Rational Software Corporation, a division of IBM since 2003. RUP is an implementation of the Unified Process. Rational Software originally developed the Rational Unified Process as a software process product. The product includes a hyperlinked knowledge base with sample artifacts and detailed descriptions for different types of activities. RUP is included in the IBM Rational Method Composer product, which allows customization of the process. Philippe Kruchten, an experienced Rational technical representative, was tasked with heading up the original RUP team. This journey began with the creation of the Rational Objectory Process in 1996, which was renamed Rational Unified Process in subsequent releases, in part to align the name with that of the Unified Modeling Language. To help make this knowledge base more accessible, Philippe Kruchten was tasked with the assembly of an explicit process framework for modern software engineering.
This effort employed the HTML-based process delivery mechanism developed by Objectory, and the guidance was augmented in subsequent versions with knowledge based on the experience of companies that Rational had acquired. Additional techniques, including performance testing, UI design and data engineering, were included. In 1999, a project management discipline was introduced, as well as techniques to support real-time software development and updates to reflect UML 1. Later changes included the introduction of concepts and techniques from approaches such as eXtreme Programming, including techniques such as pair programming and test-first design, and a complete overhaul of the testing discipline to better reflect how testing work was conducted in different iterative development contexts. Another addition was supporting guidance, known as tool mentors, for enacting the RUP practices in various tools; these essentially provided step-by-step method support to Rational tool users. IBM acquired Rational Software in February 2003. In 2006, IBM created a subset of RUP tailored for the delivery of Agile projects, released as an open-source method called OpenUP through the Eclipse web site.
The main building blocks, or content elements, are the following: roles, where a role defines a set of related skills, competencies and responsibilities; work products, where a work product represents something resulting from a task, including all the documents and models produced while working through the process; and tasks, where a task describes a unit of work assigned to a role that provides a meaningful result. Each phase has one key objective and a milestone at the end that denotes the objective being accomplished. The visualization of RUP phases and disciplines over time is referred to as the RUP hump chart. In the inception phase, the primary objective is to scope the system adequately as a basis for validating initial costing and budgets. In this phase the business case, which includes business context and success factors, is established; to complement the business case, a basic use case model, project plan, initial risk assessment and project description are generated.
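A hedged sketch of how the three content elements might be related in code (the class names and the sample role, task and work product are invented for this illustration, not taken from the RUP knowledge base):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Role:
    name: str
    skills: List[str]                 # a role defines a set of related skills

@dataclass
class WorkProduct:
    name: str                         # something resulting from a task

@dataclass
class Task:
    name: str
    performed_by: Role                # a unit of work assigned to a role
    produces: List[WorkProduct] = field(default_factory=list)

analyst = Role("System Analyst", ["requirements elicitation", "use-case modeling"])
use_case_model = WorkProduct("Use-Case Model")
task = Task("Find Actors and Use Cases", analyst, [use_case_model])
print(task.performed_by.name, "->", [wp.name for wp in task.produces])
```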
Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. Software testing can provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include the process of executing a program or application with the intent of finding software bugs. Software testing involves the execution of a software component or system component to evaluate one or more properties of interest. As a result, software testing typically attempts to execute a program or application with the intent of finding software bugs. The job of testing is an iterative process: when one bug is fixed, it can illuminate other, deeper bugs, or can even create new ones. Software testing can provide objective, independent information about the quality of software. Software testing can be conducted as soon as executable software exists. The overall approach to software development often determines when and how testing is conducted; for example, in a phased process, most testing occurs after system requirements have been defined and implemented in testable programs.
In contrast, under an Agile approach, requirements, programming and testing are often done concurrently. Although testing can determine the correctness of software under the assumption of some specific hypotheses, testing cannot identify all the defects within software. Instead, it furnishes a criticism or comparison that compares the state and behavior of the product against a test oracle. A primary purpose of testing is to detect software failures so that defects may be discovered and corrected. Testing cannot establish that a product functions properly under all conditions, but only that it does not function properly under specific conditions. In the current culture of software development, a testing organization may be separate from the development team, and there are various roles for testing team members. Information derived from software testing may be used to correct the process by which software is developed. Every software product has a target audience; for example, the audience for video game software is completely different from that for banking software. Software testing helps assess whether the product will be acceptable to its audience. Not all software defects are caused by coding errors.
One common source of defects is requirement gaps, e.g. unrecognized requirements which result in errors of omission by the program designer. Requirement gaps can often be non-functional requirements such as testability, maintainability and performance. Software faults occur through the following process: a programmer makes an error, which results in a defect in the source code; if this defect is executed, in certain situations the system will produce wrong results, causing a failure. Not all defects will necessarily result in failures. For example, defects in dead code will never result in failures. A defect can turn into a failure when the environment is changed; examples of such changes in environment include the software being run on a new hardware platform or alterations in source data.
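As a hedged illustration (the function and the inputs are invented for this sketch), the following snippet shows how a defect can lie dormant: the faulty expression only produces a wrong result for certain inputs, while a defect inside the dead branch can never cause a failure at all.

```python
def average(values):
    """Return the arithmetic mean of a list of numbers."""
    if not values:
        return 0.0
    total = sum(values)
    if False:                 # dead code: a defect here can never cause a failure
        total = total - 1
    # Defect: floor division truncates the result; the error only surfaces
    # (becomes a failure) when the true mean is not a whole number.
    return total // len(values)

print(average([2, 4, 6]))   # 4  -- defect present but not triggered
print(average([2, 3]))      # 2  -- wrong result: the defect becomes a failure
```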
There are several radically different forms of the V-model, and this creates considerable confusion. The V-model falls into three broad categories. First, there is the German V-Model ("Das V-Modell"), the official project management methodology of the German government. It is roughly equivalent to PRINCE2, but more relevant to software development. Second, there is a general testing V-model; there is no accepted definition of this model, which is covered more directly in the alternative article on the V-Model. There are therefore multiple variations of this version, and this problem must be borne in mind when discussing the V-model. Third, the US has a government standard V-model which dates back about 20 years, like its German counterpart; its scope is a narrower systems development lifecycle model, but it is far more detailed and more rigorous than what most UK practitioners and testers would understand by the V-model. The V-model is a representation of the systems development lifecycle. It summarizes the steps to be taken in conjunction with the corresponding deliverables within a computerized system validation framework.
The V represents the sequence of steps in a project life-cycle development. It describes the activities to be performed and the results that have to be produced during product development. The left side of the V represents the decomposition of requirements and the creation of system specifications; the right side of the V represents the integration of parts and their validation. Requirements need to be validated first against the higher-level requirements or user needs. Furthermore, there is such a thing as validation of system models, and this can partially be done on the left side as well; to claim that validation only occurs on the right side may not be correct. The easiest way is to say that verification is always performed against the requirements. It is sometimes said that validation can be expressed by the query "Are you building the right thing?" and verification by "Are you building it right?" In practice, the usage of these terms varies; sometimes they are used interchangeably. The PMBOK guide, a standard adopted by IEEE, defines them as follows in its 4th edition. Validation: the assurance that a product, service, or system meets the needs of the customer and other identified stakeholders.
It often involves acceptance and suitability with external customers. Verification: the evaluation of whether or not a product, service, or system complies with a regulation, specification, or imposed condition. It is often an internal process.
CM applied over the life cycle of a system provides visibility and control of its performance and physical attributes. CM verifies that a system performs as intended, and is identified and documented in sufficient detail to support its life cycle. The relatively minimal cost of implementing CM is returned many fold in cost avoidance; the lack of CM, or its ineffectual implementation, can be very expensive and can sometimes have catastrophic consequences such as failure of equipment or loss of life. CM emphasizes the relation between parts and systems for effectively controlling system change. It helps to verify that proposed changes are considered so as to minimize adverse effects. CM verifies that changes are carried out as prescribed and that documentation of items and systems reflects their true configuration. A complete CM program includes provisions for the storing and updating of all system information on a component and system basis. A structured CM program ensures that documentation for items is accurate and consistent with the items themselves; in many cases, without CM, the documentation exists but is not consistent with the item itself.
For this reason, engineers and management are frequently forced to develop documentation reflecting the actual status of the item before they can proceed with a change. This reverse-engineering process is wasteful in terms of human and other resources. The CM process became its own technical discipline sometime in the late 1960s, when the DoD developed a series of military standards called the 480 series that were subsequently issued in the 1970s. This marked the beginning of what has now evolved into the most widely distributed and accepted standard on CM. Many of these functions and models have redefined CM from its traditional holistic approach to technical management; some treat CM as being similar to a librarian activity. CM is the practice of handling changes systematically so that a system maintains its integrity over time. During system development, CM allows program management to track requirements throughout the life cycle, through acceptance and into operations and maintenance. As changes inevitably occur in the requirements and design, they must be approved and documented. Ideally, the CM process is applied throughout the system life cycle.
The CM process for both hardware and software configuration items comprises five distinct disciplines, as established in MIL–HDBK–61A and in ANSI/EIA-649. These disciplines are carried out as policies and procedures for establishing baselines and for performing a standard change-management process. The IEEE 12207 process (IEEE 12207.2) also includes these activities and adds release management. Configuration identification (CI) is the basis by which changes to any part of a system are identified and tracked through design, development and final delivery; CI incrementally establishes and maintains the current basis for configuration status accounting of a system. Configuration control includes the evaluation of all change requests and change proposals, and covers the process of controlling modifications to the system's design, firmware and documentation.
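A minimal illustrative sketch of the flavor of configuration identification and control (the class and field names are invented here, not drawn from MIL–HDBK–61A or ANSI/EIA-649): each configuration item carries a baseline version, and a change request must be evaluated and approved before the item's recorded configuration is updated.

```python
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    name: str
    baseline_version: str
    history: list = field(default_factory=list)   # status accounting trail

@dataclass
class ChangeRequest:
    ci: ConfigurationItem
    new_version: str
    description: str
    approved: bool = False

def apply_change(cr: ChangeRequest) -> None:
    """Configuration control: only approved changes update the baseline."""
    if not cr.approved:
        raise PermissionError("Change request must be approved first")
    cr.ci.history.append((cr.ci.baseline_version, cr.description))
    cr.ci.baseline_version = cr.new_version

fcs = ConfigurationItem("flight-control-software", "1.0")
cr = ChangeRequest(fcs, "1.1", "Fix sensor filtering defect")
cr.approved = True          # outcome of the change evaluation
apply_change(cr)
print(fcs.baseline_version, fcs.history)
```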
Enterprise systems engineering
Enterprise systems engineering (ESE) is the discipline that applies systems engineering to the design of an enterprise. As a discipline, it includes a body of knowledge, principles and processes. An enterprise is a complex, socio-technical system that comprises interdependent resources of people and technology that must interact to fulfill a common mission. An enterprise must produce different kinds of analysis of its people and technology; as enterprises become more complex, with more problems and people to deal with, it is important to integrate the system in order to reach a higher standard for the business. There are four important elements for enterprise systems engineering to work: development through adaptation, strategic technical planning, enterprise governance and ESE processes. Development through adaptation is a way of coping with problems: as time goes by, the environment changes, and the enterprise needs to adapt in order to develop continuously. To develop through adaptation, the enterprise passes through different stages.
For example, the mobile phone has gone through quite a few adaptations in its evolutionary development. When first released, phones were enormous, and their size has shrunk with each generation; wireless networks have likewise developed from 1G to 4G, making everything quicker, faster and more convenient. To sum up, development through adaptation refers to the process of bringing a diversity of ideas and choices into the enterprise. Strategic technical planning (STP) gives the enterprise a picture of its aims and objectives for the future and brings a balance of assimilation and modernization to the enterprise; STP has several components. Enterprise governance includes two aspects: corporate governance and business governance. It is essential to understand the company and to know what must be done in order to succeed; this allows the right decisions to be made on the choice of CEO and executives for the company, and the risks of the company to be identified. There are four steps in the enterprise systems engineering process: technology planning, capabilities-based engineering analysis, enterprise architecture, and enterprise analysis and assessment.
Technology planning is the step of searching for key technologies for the enterprise; its aim is to gather and relate innovative ideas and to choose the technologies that are useful for the enterprise to develop in a sustainable way. One has to identify technology trends to decide what technology the company needs, and it is important to understand what each technology can achieve and whether its characteristics fit the company well. Capabilities-based engineering analysis is a method that focuses on the essential elements that the whole enterprise needs; it is a scheme that targets the innovation and evolution of capabilities, and there is a set of essential steps for the analysis. The activities are interdependent and the analysis is conducted iteratively. For enterprise architecture, there are four aspects according to Microsoft's Michael Platt: the perspectives of business, application, information and technology.
Profiling (computer programming)
Most commonly, profiling information serves to aid program optimization. Profiling is achieved by instrumenting either the program source code or its binary executable form using a tool called a profiler. Profilers may use a number of different techniques, such as event-based, statistical, instrumented and simulation methods, and they collect data using a wide variety of mechanisms, including hardware interrupts, code instrumentation, instruction set simulation, operating system hooks, and performance counters. Profilers are used in the performance engineering process. The size of a trace is linear in the program's instruction path length; a trace may therefore be initiated at one point in a program and terminated at another point to limit the output. Ongoing interaction with the hypervisor provides the opportunity to switch a trace on or off at any desired point during execution, in addition to viewing ongoing metrics about the program. It also provides the opportunity to suspend asynchronous processes at critical points to examine interactions with other processes in more detail.
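As a small illustrative sketch of code instrumentation (the decorator, the statistics table and the instrumented function are invented for this example and are not part of any particular profiler), a Python decorator can record a call count and cumulative elapsed time for each instrumented function, yielding a simple flat profile:

```python
import functools
import time
from collections import defaultdict

call_stats = defaultdict(lambda: {"calls": 0, "seconds": 0.0})

def instrument(func):
    """Wrap a function so each call is counted and timed (instrumentation)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            stats = call_stats[func.__name__]
            stats["calls"] += 1
            stats["seconds"] += time.perf_counter() - start
    return wrapper

@instrument
def busy_work(n):
    return sum(i * i for i in range(n))

for _ in range(5):
    busy_work(100_000)
print(dict(call_stats))   # flat profile: calls and cumulative time per function
```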
This was an example of sampling. In early 1974, instruction-set simulators permitted full trace and other performance-monitoring features. Profiler-driven program analysis on Unix dates back to at least 1979, when Unix systems included a basic tool, prof, which listed each function and how much of program execution time it used. In 1982, gprof extended the concept to a call graph analysis. In 1994, Amitabh Srivastava and Alan Eustace of Digital Equipment Corporation published a paper describing ATOM. The ATOM platform converts a program into its own profiler: at compile time, it inserts code into the program to be analyzed. That inserted code outputs analysis data; this technique, modifying a program to analyze itself, is known as instrumentation. In 2004, both the gprof and ATOM papers appeared on the list of the 50 most influential PLDI papers for the 20-year period ending in 1999. Flat profilers compute the average call times from the calls and do not break the times down by callee or context. Call graph profilers show the call times and frequencies of the functions, together with the call chains involved.
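For comparison, a brief sketch of how a flat profile and caller information can be obtained with Python's standard-library profiler; cProfile and pstats are real modules, while the profiled function here is just a placeholder workload:

```python
import cProfile
import io
import pstats

def work():
    return sorted(str(i) for i in range(50_000))

profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

out = io.StringIO()
stats = pstats.Stats(profiler, stream=out)
stats.sort_stats("cumulative").print_stats(5)   # flat view: time per function
stats.print_callers(5)                          # call-graph style: who called whom
print(out.getvalue())
```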
In some tools full context is not preserved. Input-sensitive profilers add a further dimension to flat or call-graph profilers by relating performance measures to features of the input workloads, such as input size or input values; they generate charts that characterize how an application's performance scales as a function of its input. Profilers, which are programs themselves, analyze target programs by collecting information on their execution. Based on their data granularity, that is, on how profilers collect information, they can be classified as event-based or statistical profilers. In an event-based profiler for a managed runtime like Java, the runtime provides various callbacks into the profiling agent, for trapping events like method JIT / enter / leave, object creation, etc.
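A minimal sketch of the input-sensitive idea (the workload and the chosen input sizes are invented for the example): time the same operation at several input sizes and report how the cost grows with the size of the input.

```python
import time

def target(n):
    # Placeholder workload whose cost grows with the input size.
    return sorted(range(n, 0, -1))

def profile_by_input_size(sizes):
    """Relate a performance measure (wall time) to the input size."""
    results = []
    for n in sizes:
        start = time.perf_counter()
        target(n)
        results.append((n, time.perf_counter() - start))
    return results

for n, seconds in profile_by_input_size([10_000, 20_000, 40_000, 80_000]):
    print(f"n={n:>6}  time={seconds:.4f}s")
```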
This article deals with decision-making as analyzed in psychology. In psychology, decision-making is regarded as the process resulting in the selection of a belief or a course of action among several alternative possibilities. Every decision-making process produces a final choice, which may or may not prompt action. Decision-making is the process of identifying and choosing alternatives based on the values and preferences of the decision-maker. Decision-making can be regarded as a problem-solving activity terminated by a solution deemed to be satisfactory; it is therefore a process which can be more or less rational or irrational and can be based on explicit or tacit knowledge. From a cognitive perspective, the decision-making process is regarded as a continuous process integrated in the interaction with the environment; from a normative perspective, the analysis of individual decisions is concerned with the logic of decision-making, or communicative rationality, and the invariant choice it leads to. A major part of decision-making involves the analysis of a set of alternatives described in terms of evaluative criteria.
Then the task might be to rank these alternatives in terms of how attractive they are to the decision-maker when all the criteria are considered simultaneously. Another task might be to find the best alternative or to determine the relative priority of each alternative when all the criteria are considered simultaneously. Solving such problems is the focus of multiple-criteria decision analysis, and this difficulty has led to the formulation of a decision-making paradox. Logical decision-making is an important part of all science-based professions, where specialists apply their knowledge in an area to make informed decisions. For example, medical decision-making often involves a diagnosis and the selection of appropriate treatment; under time pressure, however, experts may follow a recognition-primed decision that fits their experience and arrive at a course of action without weighing alternatives. The decision-maker's environment can play a part in the decision-making process; for example, environmental complexity is a factor that influences cognitive function. A complex environment is an environment with a large number of different possible states which come and go over time.
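A minimal weighted-sum sketch of ranking alternatives when all criteria are considered simultaneously (the alternatives, criteria and weights are invented for illustration; real multiple-criteria decision analysis offers far more elaborate methods):

```python
# Score of each alternative on each criterion (higher is better), plus weights
# expressing the decision-maker's preferences; both are made up for this sketch.
criteria_weights = {"cost": 0.5, "quality": 0.3, "delivery_time": 0.2}

alternatives = {
    "supplier_a": {"cost": 7, "quality": 9, "delivery_time": 6},
    "supplier_b": {"cost": 9, "quality": 6, "delivery_time": 8},
    "supplier_c": {"cost": 6, "quality": 8, "delivery_time": 9},
}

def weighted_score(scores):
    """Combine all criteria simultaneously into a single value."""
    return sum(criteria_weights[c] * scores[c] for c in criteria_weights)

ranking = sorted(alternatives, key=lambda a: weighted_score(alternatives[a]),
                 reverse=True)
for name in ranking:
    print(name, round(weighted_score(alternatives[name]), 2))
```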
Studies done at the University of Colorado have shown that more complex environments correlate with higher cognitive function. One experiment measured complexity in a room by the number of objects and appliances present; cognitive function was greatly affected by the measure of environmental complexity, making it easier to think about the situation. Research about decision-making is also published under the label problem solving, and it is important to differentiate between problem analysis and decision-making. Traditionally, it is argued that problem analysis must be done first, so that the information gathered in that process can be used in decision-making. Information overload is a gap between the volume of information and the tools we have to assimilate it.
Systems development life cycle
The systems development life-cycle concept applies to a range of hardware and software configurations, as a system can be composed of hardware only, software only, or a combination of both. Computer systems are complex and often link multiple traditional systems potentially supplied by different software vendors. SDLC models can be described along a spectrum from agile to iterative to sequential. Agile methodologies, such as XP and Scrum, focus on lightweight processes which allow for rapid changes along the development cycle. Iterative methodologies, such as the Rational Unified Process and the dynamic systems development method, focus on limited project scope and on expanding or improving products through multiple iterations. Sequential or big-design-up-front models, such as waterfall, focus on complete and correct planning to guide large projects and risks to successful and predictable results. Other models tend to focus on a form of development that is guided by project scope. In project management, a project can be defined both with a life cycle and with an SDLC, during which slightly different activities occur.
According to Taylor, the project life cycle encompasses all the activities of the project, while the SDLC focuses on realizing the product requirements. The SDLC is used during the development of an IT project; it describes the different stages involved in the project, from the drawing board through the completion of the project. The product life cycle describes the process for building systems in a very deliberate and methodical way. Historically, information systems activities revolved around heavy data processing and number-crunching routines. The systems development life cycle framework provides a sequence of activities for system designers and developers to follow. It consists of a set of steps or phases in which each phase of the SDLC uses the results of the previous one. The SDLC adheres to important phases that are essential for developers, such as planning, analysis and implementation, and it includes evaluation of the present system, information gathering, feasibility study and request approval. A number of SDLC models have been created: waterfall, fountain, spiral, build and fix, rapid prototyping, incremental, and synchronize and stabilize.
The oldest of these, and the best known, is the waterfall model. The first phase is to conduct the preliminary analysis: in this step, you need to find out the organization's objectives and the nature and scope of the problem under study. Even if a problem refers only to a small segment of the organization itself, you need to find out what the objectives of the organization are; then you need to see how the problem being studied fits in with them. The next step is to propose alternative solutions: in digging into the organization's objectives and specific problems, you may have already covered some solutions. Alternative proposals may come from interviewing employees and clients, and you can also study what competitors are doing. With this data, you will have three choices: leave the system as is, improve it, or develop a new system. The next phase, systems analysis and requirements definition, defines project goals into specific functions and operations of the intended application.