Sperry Corporation was a major American equipment and electronics company whose existence spanned more than seven decades of the 20th century. Through a series of mergers it exists today as part of Unisys, while some of its other former divisions became part of Honeywell, Lockheed Martin, United Technologies, and Northrop Grumman. The company is best known as the developer of the artificial horizon and a wide variety of other gyroscope-based aviation instruments, such as autopilots, analog ballistics computers, and gyro gunsights. In the post-WWII era it branched out into electronics, both aviation-related and in computers. The company was founded in 1910 as the Sperry Gyroscope Company by Elmer Ambrose Sperry, at 40 Flatbush Avenue Extension in Downtown Brooklyn, to manufacture navigation equipment, chiefly his own inventions, the marine gyrostabilizer and the gyrocompass. During World War I the company diversified into aircraft components, including bomb sights and fire control systems. In their early decades, Sperry Gyroscope and related companies were concentrated on Long Island, New York, in Nassau County.
Over the years, it diversified to other locations. In 1918, Lawrence Sperry split from his father to compete in aero-instruments with his own Lawrence Sperry Aircraft Company, whose products included the new automatic pilot. Following Lawrence's death on December 13, 1923, the two firms were brought together in 1924, and the company became Sperry Corporation in 1933. The new corporation was a holding company for a number of smaller entities such as the original Sperry Gyroscope, Ford Instrument Company, Intercontinental Aviation, Inc., and others. The company made advanced aircraft navigation equipment for the market, including the Sperry Gyroscope and the Sperry Radio Direction Finder. Sperry supported the work of a group of Stanford University inventors, led by Russell and Sigurd Varian, who had invented the klystron, and incorporated this technology and related inventions into its products. The company prospered during World War II as military demand skyrocketed, ranking 19th among US corporations in the value of wartime production contracts.
It specialized in high-technology devices such as analog computer–controlled bomb sights, airborne radar systems, and automated take-off and landing systems. Sperry was the creator of the ball turret gun mounted under the Boeing B-17 Flying Fortress and the Consolidated B-24 Liberator, as commemorated by the film Memphis Belle and the poem "The Death of the Ball Turret Gunner". Postwar, the company expanded its interests in electronics and computing, producing its first digital computer, SPEEDAC, in 1953. During the 1950s, a large part of Sperry Gyroscope moved to Phoenix, Arizona, and soon became the Sperry Flight Systems Company; this was done to preserve parts of this defense company in the event of a nuclear war. The Gyroscope division remained headquartered in New York, in its massive Lake Success, Long Island, plant, into the 1980s. In 1955, after acquiring Remington Rand, Sperry renamed itself Sperry Rand. With Remington Rand came the Eckert-Mauchly Computer Corporation and Engineering Research Associates, and the company went on to develop the successful UNIVAC computer series and to sign a valuable cross-licensing deal with IBM.
The company remained a major military contractor. From 1967 to 1973 the corporation was involved in an acrimonious antitrust lawsuit with Honeywell, Inc. In 1961, Sperry Rand was ranked 34th on the Fortune 500 list of largest companies in the United States. In 1978, Sperry Rand decided to concentrate on its computing interests and sold a number of divisions, including Remington Rand Systems, Remington Rand Machines, Ford Instrument Company, and Sperry Vickers; the company then reverted to the name Sperry Corporation. At about the same time as the Rand acquisition, Sperry Gyroscope had decided to open a facility that would exclusively produce its marine instruments. After considerable searching and evaluation, a plant was built in Charlottesville, Virginia; in 1956 the Sperry Piedmont Division began producing marine navigation products there, and it was later renamed Sperry Marine. In the 1970s, Sperry Corporation was a traditional conglomerate headquartered in the Sperry Rand Building at 1290 Avenue of the Americas in Manhattan, selling typewriters, office equipment, electronic digital computers for business and the military, farm equipment, and consumer products. In addition, Sperry Systems Management did a fair amount of government defense contracting.
From 1961 to 1975, Sperry managed the operation of the large Louisiana Army Ammunition Plant near Minden. Sperry also bought out and continued the RCA line of electronic digital computers, architectural cousins to the IBM System/360. In 1983, Sperry sold Vickers to Libbey Owens Ford. In 1986, after the success of a second hostile takeover bid engineered by Burroughs' CEO and former U.S. Secretary of the Treasury Michael Blumenthal, Sperry Corporation merged with Burroughs Corporation to become Unisys. The takeover came about after Sperry used a "poison pill", in the form of a major share price hike, to dissuade the hostile bid, as a result of which Burroughs had to borrow much more from the banks than anticipated in order to complete the bid. Certain internal divisions of Sperry were sold off after the merger, such as Sperry New Holland.
PHP: Hypertext Preprocessor is a general-purpose programming language designed for web development. It was created by Rasmus Lerdorf in 1994. PHP originally stood for Personal Home Page, but it now stands for the recursive initialism PHP: Hypertext Preprocessor. PHP code may be executed with a command-line interface, embedded into HTML code, or used in combination with various web template systems, web content management systems, and web frameworks. PHP code is usually processed by a PHP interpreter implemented as a module in a web server or as a Common Gateway Interface executable; the web server combines the results of the interpreted and executed PHP code, which may be any type of data, including images, with the generated web page. PHP can also be used for many programming tasks outside of the web context, such as standalone graphical applications and robotic drone control. The standard PHP interpreter, powered by the Zend Engine, is free software released under the PHP License. PHP has been ported to, and can be deployed on, most web servers on every major operating system and platform, free of charge.
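A minimal sketch of this embedding model: the interpreter passes everything outside the `<?php ... ?>` tags through unchanged and replaces each PHP block with its printed output before the server sends the page.

```php
<!DOCTYPE html>
<html>
  <body>
    <!-- Static HTML passes through unchanged; the PHP block below
         is replaced by whatever it prints. -->
    <p>The server time is <?php echo date('H:i:s'); ?>.</p>
  </body>
</html>
```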
The PHP language evolved without a written formal specification or standard until 2014, with the original implementation acting as the de facto standard which other implementations aimed to follow. Since 2014, work has gone on to create a formal PHP specification. PHP development began in 1994 when Rasmus Lerdorf wrote several Common Gateway Interface programs in C, which he used to maintain his personal homepage. He extended them to work with web forms and to communicate with databases, and called this implementation "Personal Home Page/Forms Interpreter" or PHP/FI. PHP/FI could be used to build simple, dynamic web applications. To accelerate bug reporting and improve the code, Lerdorf announced the release of PHP/FI as "Personal Home Page Tools version 1.0" on the Usenet discussion group comp.infosystems.www.authoring.cgi on June 8, 1995. This release already had the basic functionality that PHP has today, including Perl-like variables, form handling, and the ability to embed HTML. The syntax, however, was simpler, more limited, and less consistent.
Early PHP was not intended to be a new programming language and grew organically, with Lerdorf noting in retrospect: "I don't know how to stop it, there was never any intent to write a programming language [...] I have absolutely no idea how to write a programming language, I just kept adding the next logical step on the way." A development team began to form and, after months of work and beta testing, officially released PHP/FI 2 in November 1997. The fact that PHP was not originally designed, but instead was developed organically, has led to inconsistent naming of functions and inconsistent ordering of their parameters. In some cases, the function names were chosen to match the lower-level libraries which PHP was "wrapping", while in some early versions of PHP the length of the function names was used internally as a hash function, so names were chosen to improve the distribution of hash values. Zeev Suraski and Andi Gutmans rewrote the parser in 1997; this formed the base of PHP 3, and they changed the language's name to the recursive acronym PHP: Hypertext Preprocessor.
Afterwards, public testing of PHP 3 began, and the official launch came in June 1998. Suraski and Gutmans then started a new rewrite of PHP's core, producing the Zend Engine in 1999, and founded Zend Technologies in Ramat Gan, Israel. On May 22, 2000, PHP 4, powered by the Zend Engine 1.0, was released. As of August 2008 this branch had reached version 4.4.9. PHP 4 is no longer under development, nor will any security updates be released for it. On July 14, 2004, PHP 5 was released, powered by the new Zend Engine II. PHP 5 included new features such as improved support for object-oriented programming, the PHP Data Objects extension, and numerous performance enhancements. In 2008, PHP 5 became the only stable version under development. Late static binding had been missing from PHP and was added in version 5.3. Many high-profile open-source projects ceased to support PHP 4 in new code as of February 5, 2008, because of the GoPHP5 initiative, promoted by a consortium of PHP developers encouraging the transition from PHP 4 to PHP 5. Over time, PHP interpreters became available on most existing 32-bit and 64-bit operating systems, either by building them from the PHP source code or by using pre-built binaries.
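To make the version 5.3 addition concrete, here is a minimal sketch of late static binding (class names invented for illustration): the `static` keyword is bound to the class the method is actually called on, not the class where the method is defined.

```php
<?php
class Model {
    public static function create() {
        // Late static binding: "static" is resolved at call time,
        // so User::create() constructs a User, not a Model.
        return new static();
    }
}

class User extends Model {}

echo get_class(User::create());   // prints "User"
```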
For PHP versions 5.3 and 5.4, the only available Microsoft Windows binary distributions were 32-bit x86 builds, requiring Windows 32-bit compatibility mode when used with Internet Information Services on a 64-bit Windows platform. PHP version 5.5 made 64-bit x86-64 builds available for Microsoft Windows. Official security support for PHP 5.6 ended on 31 December 2018, but Debian 8.0 Jessie will extend support until June 2020. PHP received mixed reviews due to lacking native Unicode support at the core language level. In 2005, a project headed by Andrei Zmievski was initiated to bring native Unicode support throughout PHP, by embedding the International Components for Unicode library and representing text strings as UTF-16 internally. Since this would cause major changes both to the internals of the language and to user code, it was planned to release this as version 6.0 of the language, along with other major features then in development. However, a shortage of developers who understood the necessary changes, and performance problems arising from conversion to and from UTF-16, which is rarely used in a web context, led to delays in the project.
As a result, a PHP 5.3 release was created in 2009, with many non-Unicode features back-ported from PHP 6, notably namespaces.
Statistics is a branch of mathematics dealing with data collection, analysis, and presentation. In applying statistics to, for example, a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse topics, such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments (see the glossary of probability and statistics). When census data cannot be collected, statisticians collect data by developing specific experiment designs and survey samples. Representative sampling assures that inferences and conclusions can reasonably extend from the sample to the population as a whole. An experimental study involves taking measurements of the system under study, manipulating the system, and then taking additional measurements using the same procedure to determine whether the manipulation has modified the values of the measurements.
In contrast, an observational study does not involve experimental manipulation. Two main statistical methods are used in data analysis: descriptive statistics, which summarize data from a sample using indexes such as the mean or standard deviation, and inferential statistics, which draw conclusions from data that are subject to random variation. Descriptive statistics are most often concerned with two sets of properties of a distribution: central tendency seeks to characterize the distribution's central or typical value, while dispersion characterizes the extent to which members of the distribution depart from its center and from each other. Inferences in mathematical statistics are made under the framework of probability theory, which deals with the analysis of random phenomena. A standard statistical procedure involves testing the relationship between two statistical data sets, or between a data set and synthetic data drawn from an idealized model. A hypothesis is proposed for the statistical relationship between the two data sets, and this is compared as an alternative to an idealized null hypothesis of no relationship between the two data sets.
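In symbols, for a sample of n observations x_1, ..., x_n, the most common index of central tendency and the most common measure of dispersion are the sample mean and the sample standard deviation:

```latex
\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i,
\qquad
s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i-\bar{x}\right)^{2}}
```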
Rejecting or disproving the null hypothesis is done using statistical tests that quantify the sense in which the null can be proven false, given the data used in the test. Working from a null hypothesis, two basic forms of error are recognized: Type I errors (rejecting a null hypothesis that is actually true) and Type II errors (failing to reject a null hypothesis that is actually false). Multiple problems have come to be associated with this framework, ranging from obtaining a sufficient sample size to specifying an adequate null hypothesis. Measurement processes that generate statistical data are themselves subject to error. Many of these errors are classified as random or systematic, but other types of errors can also be important; the presence of missing data or censoring may result in biased estimates, and specific techniques have been developed to address these problems. Statistics can be said to have begun in ancient civilization, going back at least to the 5th century BC, but it was not until the 18th century that it started to draw more heavily from calculus and probability theory. In more recent years, statistics has relied more on statistical software to produce tests such as descriptive analysis.
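As a concrete illustration of this testing framework, here is a minimal sketch in PHP of a one-sample, two-sided z-test; the sample data, the hypothesized mean of 5.0, the assumed known standard deviation of 0.2, and the 1.96 critical value (5% significance level) are all invented assumptions for the example.

```php
<?php
// One-sample z-test sketch: the null hypothesis H0 says the
// population mean equals $mu0; sigma is assumed known.
function zStatistic(array $sample, float $mu0, float $sigma): float {
    $n    = count($sample);
    $mean = array_sum($sample) / $n;
    return ($mean - $mu0) / ($sigma / sqrt($n));
}

$sample = [5.1, 4.9, 5.3, 5.2, 4.8, 5.0, 5.4, 5.1];  // invented data
$z = zStatistic($sample, 5.0, 0.2);

// Reject H0 at the 5% level (two-sided) when |z| > 1.96.
// A Type I error is rejecting a true H0; a Type II error is
// failing to reject a false one.
echo abs($z) > 1.96 ? "reject H0\n" : "fail to reject H0\n";
```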
Some definitions are: the Merriam-Webster dictionary defines statistics as "a branch of mathematics dealing with the collection, analysis, interpretation, and presentation of masses of numerical data", while the statistician Arthur Lyon Bowley defined statistics as "Numerical statements of facts in any department of inquiry placed in relation to each other". Statistics is a mathematical body of science that pertains to the collection, analysis, interpretation or explanation, and presentation of data, or as a branch of mathematics. Some consider statistics to be a distinct mathematical science rather than a branch of mathematics. While many scientific investigations make use of data, statistics is concerned with the use of data in the context of uncertainty and with decision making in the face of uncertainty. Mathematical statistics is the application of mathematics to statistics. Mathematical techniques used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure-theoretic probability theory.
In applying statistics to a problem, it is common practice to start with a population or process to be studied. Populations can be diverse topics, such as "all people living in a country" or "every atom composing a crystal". Ideally, statisticians compile data about the entire population; this may be organized by governmental statistical institutes. Descriptive statistics can be used to summarize the population data. Numerical descriptors include the mean and standard deviation for continuous data types, while frequency and percentage are more useful for describing categorical data. When a census is not feasible, a chosen subset of the population, called a sample, is studied. Once a sample that is representative of the population is determined, data is collected for the sample members in an observational or experimental setting. Again, descriptive statistics can be used to summarize the sample data. However, because the drawing of the sample has been subject to an element of randomness, the established numerical descriptors from the sample are also subject to uncertainty.
To still draw meaningful conclusions about the entire population, inferential statistics is needed; it uses patterns in the sample data to draw inferences about the population represented while accounting for randomness.
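As a minimal sketch of the descriptive step for a sample, assuming an invented categorical data set, the frequency and percentage descriptors mentioned above can be computed directly:

```php
<?php
// Frequency and percentage descriptors for a categorical sample.
// The sample values are invented for illustration.
$sample = ['red', 'blue', 'red', 'green', 'red', 'blue'];

$frequencies = array_count_values($sample);   // e.g. 'red' => 3
$n = count($sample);

foreach ($frequencies as $category => $count) {
    printf("%s: %d (%.1f%%)\n", $category, $count, 100 * $count / $n);
}
```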
An entity–relationship model describes interrelated things of interest in a specific domain of knowledge. A basic ER model is composed of entity types and specifies the relationships that can exist between entities (instances of those entity types). In software engineering, an ER model is commonly formed to represent things a business needs to remember in order to perform business processes. The ER model thus becomes an abstract data model that defines a data or information structure which can be implemented in a database, typically a relational database. Entity–relationship modeling was developed for database design by Peter Chen and published in a 1976 paper; however, variants of the idea existed previously. Some ER models show super and subtype entities connected by generalization-specialization relationships, and an ER model can also be used in the specification of domain-specific ontologies. An entity–relationship model is the result of systematic analysis to define and describe what is important to processes in an area of a business; it does not define the business processes themselves.
It is typically drawn in a graphical form as boxes (entities) that are connected by lines (relationships) which express the associations and dependencies between entities. An ER model can also be expressed in a verbal form, for example: one building may be divided into zero or more apartments, but one apartment can only be located in one building. Entities may be characterized not only by relationships, but also by additional properties (attributes), which include identifiers called "primary keys". Diagrams created to represent attributes as well as entities and relationships may be called entity-attribute-relationship diagrams, rather than entity–relationship models. An ER model is typically implemented as a database. In a simple relational database implementation, each row of a table represents one instance of an entity type, and each field in a table represents an attribute type. In a relational database a relationship between entities is implemented by storing the primary key of one entity as a pointer or "foreign key" in the table of another entity. There is a tradition for ER/data models to be built at two or three levels of abstraction.
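A minimal sketch of that implementation idea, using plain PHP arrays to stand in for two database tables; the layout follows the building/apartment example above, and all identifiers and values are invented for illustration:

```php
<?php
// Two "tables": each row is one entity instance, each key one attribute.
$buildings = [
    1 => ['building_id' => 1, 'address' => '10 Main St'],   // primary key 1
    2 => ['building_id' => 2, 'address' => '22 Oak Ave'],   // primary key 2
];
$apartments = [
    ['apartment_id' => 100, 'building_id' => 1, 'unit' => '2A'],  // FK -> building 1
    ['apartment_id' => 101, 'building_id' => 1, 'unit' => '2B'],
    ['apartment_id' => 102, 'building_id' => 2, 'unit' => '1A'],  // FK -> building 2
];

// Resolving the relationship: each apartment's foreign key points at
// exactly one building; a building may have zero or more apartments.
foreach ($apartments as $apt) {
    echo $apt['unit'], ' is in ', $buildings[$apt['building_id']]['address'], "\n";
}
```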
Note that the conceptual-logical-physical hierarchy below is also used in other kinds of specification and is different from the three-schema approach to software engineering. Conceptual data model: This is the highest-level ER model in that it contains the least granular detail but establishes the overall scope of what is to be included within the model set. The conceptual ER model defines the master reference data entities that are used by the organization. Developing an enterprise-wide conceptual ER model is useful to support documenting the data architecture for an organization. A conceptual ER model may be used as the foundation for one or more logical data models. The purpose of the conceptual ER model is then to establish structural metadata commonality for the master data entities between the set of logical ER models. The conceptual data model may be used to form commonality relationships between ER models as a basis for data model integration. Logical data model: A logical ER model does not require a conceptual ER model, especially if the scope of the logical ER model includes only the development of a distinct information system.
The logical ER model contains more detail than the conceptual ER model. In addition to master data entities, operational and transactional data entities are now defined. The details of each data entity are developed, and the relationships between these data entities are established. The logical ER model is, however, developed independently of the specific database management system into which it can be implemented. Physical data model: One or more physical ER models may be developed from each logical ER model. The physical ER model is developed to be instantiated as a database. Therefore, each physical ER model must contain enough detail to produce a database, and each physical ER model is technology-dependent, since each database management system is somewhat different. The physical model is instantiated in the structural metadata of a database management system as relational database objects such as database tables, database indexes such as unique key indexes, and database constraints such as a foreign key constraint or a commonality constraint. The ER model is also normally used to design modifications to the relational database objects and to maintain the structural metadata of the database.
The first stage of information system design uses these models during the requirements analysis to describe information needs, or the type of information that is to be stored in a database. The data modeling technique can be used to describe any ontology for a certain area of interest. In the case of the design of an information system that is based on a database, the conceptual data model is, at a later stage (usually called logical design), mapped to a logical data model, such as the relational model; this in turn is mapped to a physical model during physical design. Note that sometimes both of these phases are referred to as "physical design". An entity may be defined as a thing capable of an independent existence that can be uniquely identified. An entity is an abstraction from the complexities of a domain; when we speak of an entity, we speak of some aspect of the real world that can be distinguished from other aspects of the real world. An entity is a thing that exists either physically or logically. An entity may be a physical object such as a house or a car, an event such as a house sale or a car service, or a concept such as a customer order.
A programmer, coder, or software engineer is a person who creates computer software. The term computer programmer can refer to a specialist in one area of computers, or to a generalist who writes code for many kinds of software. One who practices or professes a formal approach to programming may be known as a programmer analyst. On the other hand, "code monkey" is a derogatory term for a programmer who writes code without any involvement in the design or specifications. A programmer's primary computer language is often prefixed to these titles, and those who work in a web environment often prefix their titles with web. A range of occupations that involve programming, including software developer, web developer, mobile applications developer, embedded firmware developer, software engineer, computer scientist, game programmer, game developer, and software analyst, also require a range of other skills; the use of the term programmer for these positions is sometimes considered an insulting or derogatory simplification.
British countess and mathematician Ada Lovelace is considered the first computer programmer: in October 1842 she was the first to publish an algorithm intended for implementation on Charles Babbage's analytical engine, for the calculation of Bernoulli numbers. Because Babbage's machine was never completed to a functioning standard in her time, she never saw this algorithm run. The first person to run a program on a functioning modern, electronically based computer was computer scientist Konrad Zuse, in 1941. The members of the ENIAC programming team, Kay McNulty, Betty Jennings, Betty Snyder, Marlyn Wescoff, Fran Bilas, and Ruth Lichterman, were the first working programmers. International Programmers' Day is celebrated annually on 7 January. In 2009, the government of Russia decreed a professional annual holiday known as Programmers' Day, to be celebrated on 13 September; it had been an unofficial international holiday before that. The word "software" did not appear in print until the 1960s.
Before this time, computers were programmed either by customers or by the few commercial computer vendors of the time, such as UNIVAC and IBM. The first company founded to provide software products and services was Computer Usage Company, in 1955. The software industry expanded in the early 1960s, almost immediately after computers were first sold in mass-produced quantities. Universities and business customers created a demand for software. Many of these programs were written in-house by full-time staff programmers; some were distributed between users of a particular machine for no charge, while others were sold on a commercial basis, and other firms such as Computer Sciences Corporation started to grow. The computer/hardware makers soon started bundling operating systems, system software, and programming environments with their machines. The industry expanded greatly with the rise of the personal computer in the mid-1970s, which brought computing to the desktop of the office worker. In the following years, it also created a growing market for games and utilities.
DOS, Microsoft's first operating system product, was the dominant operating system at the time. In the early years of the 21st century, another successful business model arose for hosted software, called software-as-a-service, or SaaS. From the point of view of producers of some proprietary software, SaaS reduces concerns about unauthorized copying, since it can only be accessed through the Web and, by definition, no client software is loaded onto the end user's PC. By 2014, the role of cloud developer had been defined. Computer programmers write, test, and maintain the detailed instructions, called computer programs, that computers must follow to perform their functions. Programmers also conceive and test logical structures for solving problems by computer. Many technical innovations in programming, such as advanced computing technologies and sophisticated new languages and programming tools, have redefined the role of a programmer and elevated much of the programming work done today. Job titles and descriptions may vary, depending on the organization.
Programmers work in many settings, including corporate information technology departments, big software companies, small service firms, and government entities of all sizes. Many professional programmers also work for consulting companies at client sites as contractors. Licensing is not required to work as a programmer, although professional certifications are commonly held by programmers. Programming is widely considered a profession. Programmers' work varies depending on the type of business for which they are writing programs. For example, the instructions involved in updating financial records are very different from those required to duplicate conditions on an aircraft for pilots training in a flight simulator. Simple programs can be written in a few hours, while more complex ones may require more than a year of work; others are never considered 'complete' but rather are continuously improved as long as they stay in use. In most cases, several programmers work together as a team under a senior programmer's supervision.
Programmers write programs according to the specifications determined primarily by more senior programmers and by systems analysts.
Control tables are tables that control the control flow of a program or play a major part in program control. There are no rigid rules about the structure or content of a control table; its qualifying attribute is its ability to direct control flow in some way through "execution" by a processor or interpreter. The design of such tables is sometimes referred to as table-driven design. In some cases, control tables can be specific implementations of finite-state-machine-based automata-based programming. If there are several hierarchical levels of control table, they may behave in a manner equivalent to UML state machines. Control tables often have the equivalent of conditional expressions or function references embedded in them, usually implied by their relative column position in the association list. Control tables reduce the need for programming similar structures or program statements over and over again; the two-dimensional nature of most tables makes them easier to view and update than the one-dimensional nature of program code.
In some cases, non-programmers can be assigned to maintain the control tables. Typical uses include:

- transformation of input values to an index (for branching or pointer lookup), a program name, a relative subroutine number, a program label, or a program offset, in order to alter control flow;
- controlling a main loop in event-driven programming, using a control variable for state transitions;
- controlling the program cycle for online transaction processing applications;
- acting as virtual instructions for a virtual machine, processed by an interpreter, similar to bytecode but with the operations implied by the table structure itself.

The tables can have multiple dimensions, of fixed or variable lengths, and are portable between computer platforms, requiring only a change to the interpreter, not to the algorithm itself, the logic of which is essentially embodied within the table structure and content. The structure of the table may be similar to a multimap associative array, where a data value may be mapped to one or more functions to be performed (see the dispatch sketch below). In its simplest implementation, a control table may sometimes be a one-dimensional table for directly translating a raw data value to a corresponding subroutine offset, index, or pointer, using the raw data value either directly as the index to the array or by performing some basic arithmetic on the data beforehand.
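A minimal sketch of the dispatch idea in PHP, with invented event codes and handlers: the associative array is the control table, and a generic loop "executes" whatever entry matches the input.

```php
<?php
// A simple control table: input event codes mapped to handler
// closures, "executed" by a generic interpreter loop.
$controlTable = [
    'OPEN'  => function () { echo "opening file\n"; },
    'READ'  => function () { echo "reading record\n"; },
    'CLOSE' => function () { echo "closing file\n"; },
];

foreach (['OPEN', 'READ', 'CLOSE'] as $event) {
    if (isset($controlTable[$event])) {
        $controlTable[$event]();   // control flow is directed by the table
    }
}
```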
This direct translation can be achieved in constant time; in most architectures, it can be accomplished in two or three machine instructions, without any comparisons or loops. The technique is known as a "trivial hash function" or, when used for branch tables, "double dispatch". For this to be feasible, the range of all possible values of the data needs to be small. (Example: a table to translate raw ASCII values to a new subroutine index in constant time using a one-dimensional array; see the sketch below.) In automata-based programming and pseudoconversational transaction processing, if the number of distinct program states is small, a "dense sequence" control variable can be used to efficiently dictate the entire flow of the main program loop. A two-byte raw data value would require a minimum table size of 65,536 bytes to handle all input possibilities, whilst allowing just 256 different output values. However, this direct translation technique provides fast validation and conversion to a subroutine pointer if the heuristics, together with sufficiently fast access memory, permit its use.
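A minimal PHP sketch of such a translation table, with invented subroutine numbers: the raw byte value is used directly as the array index, so classification needs no comparisons or loops.

```php
<?php
// Direct "trivial hash" translation: one table slot per possible raw
// byte value; 0 marks invalid input, other numbers select subroutines.
$translate = array_fill(0, 256, 0);          // 256 possible raw values
foreach (str_split('0123456789') as $c) {
    $translate[ord($c)] = 1;                 // digits -> subroutine 1
}
foreach (str_split('ABCDEFabcdef') as $c) {
    $translate[ord($c)] = 2;                 // hex letters -> subroutine 2
}

$index = $translate[ord('7')];               // single indexed lookup: 1
```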
A branch table is a one-dimensional 'array' of contiguous machine-code branch/jump instructions used to effect a multiway branch to a program label when branched into by a preceding, indexed branch. It is sometimes generated by an optimizing compiler to execute a switch statement, provided that the input range is small and dense, with few gaps. Although quite compact compared with the multiple equivalent If statements, the branch instructions still carry some redundancy, since the branch opcode and condition code mask are repeated alongside the branch offsets. Control tables containing only the offsets to the program labels can be constructed to overcome this redundancy, and yet they require only minor execution-time overhead compared to a conventional branch table. More generally, a control table can be thought of as a truth table or as an executable implementation of a printed decision table; the tables contain propositions, together with one or more associated 'actions'. These actions are performed by generic or custom-built subroutines that are called by an "interpreter" program.
The interpreter in this instance functions as a virtual machine that 'executes' the control table entries and thus provides a higher level of abstraction than the underlying code of the interpreter. A control table can be constructed along similar lines to a language-dependent switch statement, but with the added possibility of testing for combinations of input values and calling multiple subroutines. (The switch statement construct in any case may not be available, or may have confusingly different implementations in each programming language.)
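A minimal PHP sketch of such an interpreter, with invented conditions and actions: each table row pairs a proposition with a list of actions, and the interpreter runs every action of the first row whose proposition holds, which is how a control table can test value combinations and call multiple subroutines where a switch statement could not.

```php
<?php
// Each row pairs a proposition (a condition closure) with the names of
// one or more actions, like a row of a printed decision table.
$table = [
    [fn($x) => $x < 0,  ['logNegative', 'reject']],
    [fn($x) => $x > 99, ['clamp', 'warn']],
    [fn($x) => true,    ['accept']],                 // default row
];
$actions = [
    'logNegative' => fn($x) => print("negative value: $x\n"),
    'reject'      => fn($x) => print("rejected\n"),
    'clamp'       => fn($x) => print("clamped to 99\n"),
    'warn'        => fn($x) => print("out of range: $x\n"),
    'accept'      => fn($x) => print("accepted: $x\n"),
];

// The generic interpreter: evaluate each proposition against the input
// and perform every action of the first row that matches.
function interpret(array $table, array $actions, $x): void
{
    foreach ($table as [$condition, $names]) {
        if ($condition($x)) {
            foreach ($names as $name) {
                $actions[$name]($x);
            }
            return;   // first matching row wins
        }
    }
}

interpret($table, $actions, 150);   // "clamped to 99", then "out of range: 150"
```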
Computer-aided software engineering
Computer-aided software engineering (CASE) is the domain of software tools used to design and implement applications. CASE tools are similar to, and were partly inspired by, computer-aided design (CAD) tools used for designing hardware products. CASE tools are used for developing high-quality, defect-free, and maintainable software. CASE software is often associated with methods for the development of information systems together with automated tools that can be used in the software development process. The Information System Design and Optimization System (ISDOS) project, started in 1968 at the University of Michigan, initiated a great deal of interest in the whole concept of using computer systems to help analysts in the very difficult process of analysing requirements and developing systems. Several papers by Daniel Teichroew fired a whole generation of enthusiasts with the potential of automated systems development; his Problem Statement Language / Problem Statement Analyzer (PSL/PSA) tool was a CASE tool, although it predated the term. Another major thread emerged as a logical extension to the data dictionary of a database.
By extending the range of metadata held, the attributes of an application could be held within a dictionary and used at runtime. This "active dictionary" became the precursor to the more modern model-driven engineering capability. However, the active dictionary did not provide a graphical representation of any of the metadata. It was the linking of the concept of a dictionary holding analysts' metadata, as derived from the use of an integrated set of techniques, together with the graphical representation of such data, that gave rise to the earlier versions of CASE. The term was coined by the software company Nastec Corporation of Southfield, Michigan, in 1982 with their original integrated graphics and text editor GraphiText, which was also the first microcomputer-based system to use hyperlinks to cross-reference text strings in documents, an early forerunner of today's web page link. GraphiText's successor product, DesignAid, was the first microprocessor-based tool to logically and semantically evaluate software and system design diagrams and build a data dictionary.
Under the direction of Albert F. Case, Jr., vice president for product management and consulting, and Vaughn Frick, director of product management, the DesignAid product suite was expanded to support analysis of a wide range of structured analysis and design methodologies, including those of Ed Yourdon and Tom DeMarco, Chris Gane and Trish Sarson, Ward-Mellor SA/SD, and Warnier-Orr. The next entrant into the market was Excelerator from Index Technology in Massachusetts. While DesignAid ran on Convergent Technologies and Burroughs Ngen networked microcomputers, Index launched Excelerator on the IBM PC/AT platform. While, at the time of launch and for several years afterwards, the IBM platform did not support networking or a centralized database as the Convergent Technologies or Burroughs machines did, the allure of IBM was strong, and Excelerator came to prominence. Hot on the heels of Excelerator came a rash of offerings from companies such as Knowledgeware, Texas Instruments' CA Gen, and Andersen Consulting's FOUNDATION toolset.
CASE tools were at their peak in the early 1990s. At the time, IBM had proposed AD/Cycle, an alliance of software vendors centered on IBM's software repository using IBM DB2 on the mainframe and OS/2: "The application development tools can be from several sources: from IBM, from vendors, and from the customers themselves. IBM has entered into relationships with Bachman Information Systems, Index Technology Corporation, and Knowledgeware wherein selected products from these vendors will be marketed through an IBM complementary marketing program to provide offerings that will help to achieve complete life-cycle coverage." With the decline of the mainframe, AD/Cycle and the Big CASE tools died off, opening the market for the mainstream CASE tools of today. Many of the leaders of the CASE market of the early 1990s ended up being purchased by Computer Associates, including IEW, IEF, ADW, and Learmonth & Burchett Management Systems. The other trend that led to the evolution of CASE tools was the rise of object-oriented methods and tools.
Most of the various tool vendors added some support for object-oriented methods and tools. In addition, new products arose that were designed from the bottom up to support the object-oriented approach. Andersen developed its project Eagle as an alternative to Foundation. Several of the thought leaders in object-oriented development each developed their own methodology and CASE tool set: Jacobson, Booch, and others. These diverse tool sets and methods were eventually consolidated via standards led by the Object Management Group (OMG). The OMG's Unified Modeling Language (UML) is now widely accepted as the industry standard for object-oriented modeling. A. Fuggetta classified CASE software into three categories: tools, which support specific tasks in the software life-cycle; workbenches, which combine two or more tools focused on a specific part of the software life-cycle; and environments, which support the complete software life-cycle. CASE tools support specific tasks in the software development life-cycle. They can be divided into the following categories:

- Business and analysis modeling: graphical modeling tools, e.g. E/R modeling, object modeling, etc.
- Development: the design and construction phases of the life-cycle, including debugging environments, e.g. the GNU Debugger.
- Verification and validation: analyze code and specifications for correctness, etc.
- Configuration management: control the check-in and check-out of repository objects and files, e.g. SCCS, CMS.
- Metrics and measurement: analyze code for complexity, performance, etc.