A regular expression, regex or regexp is a sequence of characters that defines a search pattern. This pattern is used by string-searching algorithms for "find" or "find and replace" operations on strings, or for input validation. It is a technique developed in formal language theory. The concept arose in the 1950s when the American mathematician Stephen Cole Kleene formalized the description of a regular language; the concept came into common use with Unix text-processing utilities. Since the 1980s, different syntaxes for writing regular expressions have existed, one being the POSIX standard and another, widely used, being the Perl syntax. Regular expressions are used in search engines, in search-and-replace dialogs of word processors and text editors, in text-processing utilities such as sed and AWK, and in lexical analysis. Many programming languages provide regex capabilities, built in or via libraries; the phrase regular expressions (or regexes) is often used to mean the specific, standard textual syntax for representing patterns for matching text.
Each character in a regular expression is either a metacharacter, having a special meaning, or a regular character that has a literal meaning. For example, in the regex a., a is a literal character that matches just 'a', while '.' is a metacharacter that matches every character except a newline. Therefore, this regex matches, for example, 'a ' (that is, 'a' followed by a space), or 'ax', or 'a0'. Together, metacharacters and literal characters can be used to identify text of a given pattern or to process a number of instances of it. Pattern matches may vary from a precise equality to a very general similarity, as controlled by the metacharacters. For example, . is a very general pattern, [a-z] (matching any lowercase letter) is less general, and a is a precise pattern; the metacharacter syntax is designed to represent prescribed targets in a concise and flexible way to direct the automation of text processing of a variety of input data, in a form easy to type using a standard ASCII keyboard. A simple case of a regular expression in this syntax is to locate a word spelled two different ways in a text editor: the regular expression seriali[sz]e matches both "serialise" and "serialize".
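The following is a minimal sketch, using C++'s <regex> library (not mentioned in the text above, but a convenient way to try the examples), of the literal-versus-metacharacter distinction and the seriali[sz]e example; the test strings are invented for illustration.

```cpp
// Minimal sketch of the two examples above using std::regex (ECMAScript grammar).
#include <iostream>
#include <regex>

int main() {
    std::regex dot_pattern("a.");   // 'a' is literal; '.' matches any character except newline
    std::cout << std::regex_match("ax", dot_pattern) << '\n';   // 1 (match)
    std::cout << std::regex_match("a0", dot_pattern) << '\n';   // 1 (match)
    std::cout << std::regex_match("bx", dot_pattern) << '\n';   // 0 (no match)

    std::regex spelling("seriali[sz]e");   // character class accepts either spelling
    std::cout << std::regex_search("we serialise data", spelling) << '\n';   // 1
    std::cout << std::regex_search("we serialize data", spelling) << '\n';   // 1
    return 0;
}
```

Here regex_match requires the whole string to match the pattern, while regex_search looks for the pattern anywhere in the string.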
Wildcards achieve this too, but are more limited in what they can pattern, as they have fewer metacharacters and a simple language base. The usual context of wildcard characters is in globbing similar names in a list of files, whereas regexes are employed in applications that pattern-match text strings in general. For example, the regex ^[ \t]+|[ \t]+$ matches excess whitespace at the beginning or end of a line. An advanced regular expression that matches any numeral is [+-]?(\d+(\.\d+)?|\.\d+)([eE][+-]?\d+)?. A regex processor translates a regular expression in the above syntax into an internal representation that can be executed and matched against a string representing the text being searched. One possible approach is Thompson's construction algorithm, which builds a nondeterministic finite automaton (NFA) that is then made deterministic; the resulting deterministic finite automaton (DFA) is run on the target text string to recognize substrings that match the regular expression. The picture shows the NFA scheme N obtained from the regular expression s*, where s denotes a simpler regular expression that has, in turn, already been recursively translated to the NFA N.
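As a hedged illustration of the two patterns just described, the sketch below writes them out with std::regex; treat the exact character classes and the numeral pattern as one plausible rendering rather than the only correct one.

```cpp
// Minimal sketch of the whitespace-trimming and numeral-matching patterns above.
#include <iostream>
#include <regex>
#include <string>

int main() {
    // Remove excess whitespace at the beginning or end of a line.
    std::regex edges(R"(^[ \t]+|[ \t]+$)");
    std::string line = "   padded text  ";
    std::cout << '[' << std::regex_replace(line, edges, "") << "]\n";   // [padded text]

    // Match a numeral with optional sign, fractional part and exponent.
    std::regex numeral(R"([+-]?(\d+(\.\d+)?|\.\d+)([eE][+-]?\d+)?)");
    std::cout << std::regex_match("-3.5e10", numeral) << '\n';   // 1
    std::cout << std::regex_match(".25", numeral) << '\n';       // 1
    std::cout << std::regex_match("abc", numeral) << '\n';       // 0
    return 0;
}
```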
Regular expressions originated in 1951, when mathematician Stephen Cole Kleene described regular languages using his mathematical notation called regular sets. These arose in theoretical computer science, in the subfields of automata theory and the description and classification of formal languages. Other early implementations of pattern matching include the SNOBOL language, which did not use regular expressions, but instead its own pattern matching constructs. Regular expressions entered popular use from 1968 in two applications: pattern matching in a text editor and lexical analysis in a compiler. Among the first appearances of regular expressions in program form was when Ken Thompson built Kleene's notation into the editor QED as a means to match patterns in text files. For speed, Thompson implemented regular expression matching by just-in-time compilation to IBM 7094 code on the Compatible Time-Sharing System, an important early example of JIT compilation. He later added this capability to the Unix editor ed, which led to the popular search tool grep's use of regular expressions.
Around the same time that Thompson developed QED, a group of researchers including Douglas T. Ross implemented a tool based on regular expressions that was used for lexical analysis in compiler design. Many variations of these original forms of regular expressions were used in Unix programs at Bell Labs in the 1970s, including vi, sed, AWK and expr, and in other programs such as Emacs. Regexes were subsequently adopted by a wide range of programs, with these early forms standardized in the POSIX.2 standard in 1992. In the 1980s, more complicated regexes arose in Perl, which derived from a regex library written by Henry Spencer, who later wrote an implementation of Advanced Regular Expressions for Tcl; the Tcl library is a hybrid NFA/DFA implementation with improved performance characteristics. Software projects that have adopted Spencer's Tcl regular expression implementation include PostgreSQL. Perl expanded on Spencer's original library.
Free software movement
The free software movement or free/open-source software movement or free/libre open-source software movement is a social movement with the goal of obtaining and guaranteeing certain freedoms for software users, namely the freedom to run the software, to study and change the software, to redistribute copies with or without changes. Although drawing on traditions and philosophies among members of the 1970s hacker culture and academia, Richard Stallman formally founded the movement in 1983 by launching the GNU Project. Stallman established the Free Software Foundation in 1985 to support the movement; the philosophy of the movement is that the use of computers should not lead to people being prevented from cooperating with each other. In practice, this means rejecting "proprietary software", which imposes such restrictions, promoting free software, with the ultimate goal of liberating everyone in cyberspace – that is, every computer user. Stallman notes that this action will promote rather than hinder the progression of technology, since "it means that much wasteful duplication of system programming effort will be avoided.
This effort can go instead into advancing the state of the art". Members of the free software movement believe that all users of software should have the freedoms listed in the Free Software Definition. Many of them hold that it is immoral to prohibit or prevent people from exercising these freedoms and that these freedoms are required to create a decent society where software users can help each other and have control over their computers; some free software users and programmers do not believe that proprietary software is immoral, citing an increased profitability in the business models available for proprietary software, or technical features and convenience, as their reasons. "While social change may occur as an unintended by-product of technological change, advocates of new technologies have promoted them as instruments of positive social change." This quote by San Jose State professor Joel West explains much of the philosophy, or the reason that the free software movement is alive. If it is assumed that social change is not only affected but, in some points of view, directed by the advancement of technology, is it ethical to withhold these technologies from certain people?
If not to make a direct change, this movement is in place to raise awareness about the effects that take place because of the physical things around us. A computer, for instance, allows us many more freedoms than we have without one, but should these technological mediums be implied freedoms or selective privileges? The debate over the morality of the two sides of the free software movement is a difficult topic on which to reach a compromise. The Free Software Foundation believes all software needs free documentation, in particular because conscientious programmers should be able to update manuals to reflect modifications that they have made to the software, but it deems the freedom to modify less important for other types of written works. Within the free software movement, the FLOSS Manuals foundation specialises in providing such documentation. Members of the free software movement advocate that works which serve a practical purpose should be free; the core work of the free software movement has focused on software development.
The free software movement rejects proprietary software, refusing to install software that does not give its users the freedoms of free software. According to Stallman, "The only thing in the software field, worse than an unauthorised copy of a proprietary program, is an authorised copy of the proprietary program because this does the same harm to its whole community of users, in addition the developer, the perpetrator of this evil, profits from it." Some supporters of the free software movement take up public speaking, or host a stall at software-related conferences, to raise awareness of software freedom. This is seen as important since people who receive free software, but who are not aware that it is free software, will accept a non-free replacement or will add software that is not free. Margaret S. Elliot, a researcher in the Institute for Software at the University of California, Irvine, not only outlines many benefits that could come from a free software movement but also claims that it is inherently necessary to give every person equal opportunity to utilize the Internet, assuming that the computer is globally accessible.
Since the world has become more based in the framework of technology and its advancement, creating a selective internet that allows only some to surf the web is nonsensical, according to Elliot. If there is a desire to live in a more coexistent world, benefited by communication and global assistance, globally free software should be a position to strive for, according to many scholars who promote awareness of the free software movement; the ideas sparked by the GNU associates are an attempt to promote a "cooperative environment" that understands the benefits of having both a local community and a global community. A lot of lobbying work has been done against expansions of copyright law as applied to software. Other lobbying focuses directly on the use of free software by government agencies and government-funded projects; the Venezuelan government implemented a free software law in January 2006: Decree No. 3,390 mandated all government agencies to migrate to free software over a two-year period. Congressmen Edgar David Villanueva and Jacques Rodrich Ackerman have been instrumental in introducing free software in Peru, with bill 1609 on "Free Software in Public Administration".
The incident attracted the attention of Microsoft, whose general manager wrote a letter to Villanueva. His response received worldwide attention.
GNU General Public License
The GNU General Public License (GPL) is a widely used free software license which guarantees end users the freedom to run, study and modify the software. The license was written by Richard Stallman of the Free Software Foundation for the GNU Project, and it grants the recipients of a computer program the rights of the Free Software Definition; the GPL is a copyleft license, which means that derivative works can only be distributed under the same license terms. This is in distinction to permissive free software licenses, of which the BSD licenses and the MIT License are widely used examples. GPL was the first copyleft license for general use; the GPL license family has been one of the most popular software licenses in the free and open-source software domain. Prominent free-software programs licensed under the GPL include the Linux kernel and the GNU Compiler Collection. David A. Wheeler argues that the copyleft provided by the GPL was crucial to the success of Linux-based systems, giving the programmers who contributed to the kernel the assurance that their work would benefit the whole world and remain free, rather than being exploited by software companies that would not have to give anything back to the community.
In 2007, the third version of the license (GPLv3) was released to address some perceived problems with the second version (GPLv2) that were discovered during its long period of use. To keep the license up to date, the GPL includes an optional "any later version" clause, allowing users to choose between the original terms or the terms in new versions as updated by the FSF. Developers can omit it; the GPL was written by Richard Stallman in 1989, for use with programs released as part of the GNU Project. The original GPL was based on a unification of similar licenses used for early versions of GNU Emacs, the GNU Debugger and the GNU C Compiler; these licenses contained similar provisions to the modern GPL, but were specific to each program, rendering them incompatible, despite being the same license. Stallman's goal was to produce one license that could be used for any project, thus making it possible for many projects to share code; the second version of the license, version 2, was released in 1991. Over the following 15 years, members of the free software community became concerned over problems in the GPLv2 license that could let someone exploit GPL-licensed software in ways contrary to the license's intent.
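As an illustration of the optional "any later version" clause, the comment block below sketches how such a notice typically appears at the top of a source file; the wording paraphrases the notice suggested in the GPL's "How to Apply These Terms" appendix, and the program name, year and author are placeholders.

```cpp
// Example program -- a placeholder name for illustration.
// Copyright (C) 2024  Example Author
//
// This program is free software: you can redistribute it and/or modify it under
// the terms of the GNU General Public License as published by the Free Software
// Foundation, either version 3 of the License, or (at your option) any later
// version.
//
// This program is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
// FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
```

Omitting the "or (at your option) any later version" wording pins the work to a single version of the license, which is the choice the text above describes developers making.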
These problems included tivoization, compatibility issues similar to those of the Affero General Public License, and patent deals between Microsoft and distributors of free and open-source software, which some viewed as an attempt to use patents as a weapon against the free software community. Version 3 was developed to attempt to address these concerns and was released on 29 June 2007. Version 1 of the GNU GPL, released on 25 February 1989, prevented what were the two main ways that software distributors restricted the freedoms that define free software; the first problem was that distributors may publish binary files only—executable, but not readable or modifiable by humans. To prevent this, GPLv1 stated that copying and distributing copies or any portion of the program must make the human-readable source code available under the same licensing terms; the second problem was that distributors might add restrictions, either to the license, or by combining the software with other software that had other restrictions on distribution.
The union of two sets of restrictions would apply to the combined work, thus adding unacceptable restrictions. To prevent this, GPLv1 stated that modified versions, as a whole, had to be distributed under the terms in GPLv1. Therefore, software distributed under the terms of GPLv1 could be combined with software under more permissive terms, as this would not change the terms under which the whole could be distributed. However, software distributed under GPLv1 could not be combined with software distributed under a more restrictive license, as this would conflict with the requirement that the whole be distributable under the terms of GPLv1. According to Richard Stallman, the major change in GPLv2 was the "Liberty or Death" clause, as he calls it – Section 7; the section says that licensees may distribute a GPL-covered work only if they can satisfy all of the license's obligations, despite any other legal obligations they might have. In other words, the obligations of the license may not be severed due to conflicting obligations.
This provision is intended to discourage any party from using a patent infringement claim or other litigation to impair users' freedom under the license. By 1990, it was becoming apparent that a less restrictive license would be strategically useful for the C library and for software libraries that did the job of existing proprietary ones; this led to the Library General Public License (LGPL), which was first published as version 2. The version numbers diverged in 1999 when version 2.1 of the LGPL was released, which renamed it the GNU Lesser General Public License to reflect its place in the philosophy. "GPLv2 or any later version" is stated by most users of the license, to allow upgrading to GPLv3. In late 2005, the Free Software Foundation announced work on version 3 of the GPL. On 16 January 2006, the first "discussion draft" of GPLv3 was published and the public consultation began.
In computer science, artificial intelligence, sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and animals. Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of achieving its goals. Colloquially, the term "artificial intelligence" is used to describe machines that mimic "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving"; as machines become capable, tasks considered to require "intelligence" are removed from the definition of AI, a phenomenon known as the AI effect. A quip in Tesler's Theorem says "AI is whatever hasn't been done yet." For instance, optical character recognition is excluded from things considered to be AI, having become a routine technology. Modern machine capabilities classified as AI include understanding human speech, competing at the highest level in strategic game systems, autonomously operating cars, intelligent routing in content delivery networks and military simulations.
Artificial intelligence can be classified into three different types of systems: analytical, human-inspired, and humanized artificial intelligence. Analytical AI has only characteristics consistent with cognitive intelligence. Human-inspired AI has elements from emotional intelligence. Humanized AI shows characteristics of all types of competencies, is able to be self-conscious and is self-aware in interactions with others. Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding, followed by new approaches and renewed funding. For most of its history, AI research has been divided into subfields that fail to communicate with each other; these subfields are based on technical considerations, such as particular goals, the use of particular tools, or deep philosophical differences. Subfields have also been based on social factors; the traditional problems of AI research include reasoning, knowledge representation, learning, natural language processing and the ability to move and manipulate objects.
General intelligence is among the field's long-term goals. Approaches include statistical methods, computational intelligence, traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, methods based on statistics and economics; the AI field draws upon computer science, information engineering, psychology, linguistics and many other fields. The field was founded on the claim that human intelligence "can be so described that a machine can be made to simulate it"; this raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence which are issues that have been explored by myth and philosophy since antiquity. Some people consider AI to be a danger to humanity if it progresses unabated. Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment. In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, theoretical understanding.
Thought-capable artificial beings appeared as storytelling devices in antiquity and have been common in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. These characters and their fates raised many of the same issues now discussed in the ethics of artificial intelligence; the study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction; this insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis. Along with concurrent discoveries in neurobiology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain. Turing proposed that if a human could not distinguish between responses from a machine and a human, the machine could be considered "intelligent".
The first work now recognized as AI was McCulloch and Pitts' 1943 formal design for Turing-complete "artificial neurons". The field of AI research was born at a workshop at Dartmouth College in 1956. Attendees Allen Newell, Herbert Simon, John McCarthy, Marvin Minsky and Arthur Samuel became the founders and leaders of AI research. They and their students produced programs that the press described as "astonishing": computers were learning checkers strategies (and by 1959 were playing better than the average human).
C++ is a general-purpose programming language developed by Bjarne Stroustrup as an extension of the C language, or "C with Classes". It has imperative, object-oriented and generic programming features, while also providing facilities for low-level memory manipulation. It is almost always implemented as a compiled language, and many vendors provide C++ compilers, including the Free Software Foundation, Intel and IBM, so it is available on many platforms. C++ was designed with a bias toward system programming and embedded, resource-constrained software and large systems, with performance and flexibility of use as its design highlights. C++ has also been found useful in many other contexts, with key strengths being software infrastructure and resource-constrained applications, including desktop applications and performance-critical applications. C++ is standardized by the International Organization for Standardization (ISO), with the latest standard version ratified and published by ISO in December 2017 as ISO/IEC 14882:2017, informally known as C++17.
The C++ programming language was initially standardized in 1998 as ISO/IEC 14882:1998, which was then amended by the C++03, C++11 and C++14 standards. The current C++17 standard supersedes these with an enlarged standard library. Before the initial standardization in 1998, C++ had been developed by Danish computer scientist Bjarne Stroustrup at Bell Labs since 1979 as an extension of the C language. C++20 is the next planned standard, in keeping with the current trend of a new version every three years. In 1979, Bjarne Stroustrup, a Danish computer scientist, began work on "C with Classes", the predecessor to C++; the motivation for creating a new language originated from Stroustrup's experience in programming for his Ph.D. thesis. Stroustrup found that Simula had features that were helpful for large software development, but the language was too slow for practical use, while BCPL was fast but too low-level to be suitable for large software development; when Stroustrup started working in AT&T Bell Labs, he had the problem of analyzing the UNIX kernel with respect to distributed computing.
Remembering his Ph.D. experience, Stroustrup set out to enhance the C language with Simula-like features. C was chosen because it was general-purpose, fast and widely used; as well as C and Simula's influences, other languages influenced C++, including ALGOL 68, Ada, CLU and ML. Stroustrup's "C with Classes" added features to the C compiler, including classes, derived classes, strong typing and default arguments. In 1983, "C with Classes" was renamed to "C++", adding new features that included virtual functions, function name and operator overloading, constants, type-safe free-store memory allocation, improved type checking, and BCPL-style single-line comments with two forward slashes. Furthermore, it included the development of a standalone compiler for C++, Cfront. In 1985, the first edition of The C++ Programming Language was released, which became the definitive reference for the language, as there was not yet an official standard; the first commercial implementation of C++ was released in October of the same year.
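The sketch below, written in present-day C++ rather than the 1980s dialects described above, touches several of the listed features: classes and derived classes, virtual functions, default arguments, operator overloading, type-safe free-store allocation with new and delete, and single-line comments; the class names are invented for illustration.

```cpp
#include <iostream>

class Shape {
public:
    virtual double area() const { return 0.0; }   // virtual function, overridable in derived classes
    virtual ~Shape() {}
};

class Rectangle : public Shape {                   // derived class
public:
    Rectangle(double w, double h = 1.0)            // default argument for h
        : width(w), height(h) {}
    double area() const override { return width * height; }
    Rectangle operator+(const Rectangle& other) const {   // operator overloading
        return Rectangle(width + other.width, height + other.height);
    }
private:
    double width, height;
};

int main() {
    Shape* s = new Rectangle(3.0);                 // type-safe free-store allocation
    std::cout << s->area() << '\n';                // prints 3 (height defaulted to 1.0)
    delete s;
    return 0;
}
```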
In 1989, C++ 2.0 was released, followed by the updated second edition of The C++ Programming Language in 1991. New features in 2.0 included multiple inheritance, abstract classes, static member functions, const member functions and protected members. In 1990, The Annotated C++ Reference Manual was published; this work became the basis for the future standard. Feature additions included templates, namespaces, new casts and a Boolean type. After the 2.0 update, C++ evolved slowly until, in 2011, the C++11 standard was released, adding numerous new features, enlarging the standard library further and providing more facilities to C++ programmers. After a minor C++14 update released in December 2014, various new additions were introduced in C++17, with further changes planned for 2020. As of 2017, C++ remains the third most popular programming language, behind Java and C. On January 3, 2018, Stroustrup was announced as the 2018 winner of the Charles Stark Draper Prize for Engineering, "for conceptualizing and developing the C++ programming language".
According to Stroustrup, "the name signifies the evolutionary nature of the changes from C". This name is credited to Rick Mascitti and was first used in December 1983; when Mascitti was questioned informally in 1992 about the naming, he indicated that it was given in a tongue-in-cheek spirit. The name comes from C's ++ operator and a common naming convention of using "+" to indicate an enhanced computer program. During C++'s development period, the language had been referred to as "new C" and "C with Classes" before acquiring its final name. Throughout C++'s life, its development and evolution has been guided by a set of principles: It must be driven by actual problems, and its features should be useful in real-world programs; every feature should be implementable. Programmers should be free to pick their own programming style, and that style should be supported by C++. Allowing a useful feature is more important than preventing every possible misuse of C++. It should provide facilities for organising programs into separate, well-defined parts, and provide facilities for combining separately developed parts.
No implicit violations of the type system (but allow explicit violations).
All-India Muslim League
The All-India Muslim League was a political party established in 1906 in the British Indian Empire. Its strong advocacy for the establishment of a separate Muslim-majority nation-state, Pakistan, led to the partition of British India in 1947 by the British Empire; the party arose out of a literary movement begun at the Aligarh Muslim University in which Syed Ahmad Khan was a central figure. It remained an elitist organisation until 1937, when the leadership began mobilising the Muslim masses and the League became a popular organisation. In the 1930s, the idea of a separate nation-state, and influential philosopher Sir Muhammad Iqbal's vision of uniting the four provinces in North-West British India, further supported the rationale of the two-nation theory. With global events leading up to World War II and the Congress party's effective protest against the United Kingdom unilaterally involving India in the war without consulting the Indian people, the Muslim League went on to support the British war efforts.
The Muslim League played a decisive role in the 1940s, becoming a driving force behind the division of India along religious lines and the creation of Pakistan as a Muslim state in 1947. After the partition and subsequent establishment of Pakistan, the Muslim League continued as a minor party in India, where it was part of the government. In Bangladesh, the Muslim League was revived in 1976, but it was reduced in size, rendering it insignificant in the political arena. In India, the Indian Union Muslim League, and in Pakistan the Pakistan Muslim League, became the original successors of the All-India Muslim League. Despite efforts by the pioneers of the Congress to attract Muslims to their sessions, the majority of the Muslim leadership, such as Sir Syed Ahmed Khan and Syed Ameer Ali, rejected the notion that India's "two distinct communities" could be represented by the Congress. In 1886, Sir Syed founded the Muhammadan Educational Conference, but a self-imposed ban prevented it from discussing politics.
Its original goal was to advocate for British education, science and literature among India's Muslims. The conference, in addition to generating funds for Sir Syed's Aligarh Muslim University, motivated the Muslim upper class to propose an expansion of educational uplift elsewhere, known as the Aligarh Movement. In turn, this new awareness of Muslim needs helped stimulate a political consciousness among Muslim elites, who went on to form the All-India Muslim League; the formation of a Muslim political party on the national level was seen as essential by 1901. The first stage of its formation was the meeting held at Lucknow in September 1906, with the participation of representatives from all over India; the decision to reconsider forming the all-Indian Muslim political party was taken, and further proceedings were adjourned until the next meeting of the All India Muhammadan Educational Conference. The Simla Deputation reconsidered the issue in October 1906 and decided to frame the objectives of the party on the occasion of the annual meeting of the Educational Conference, scheduled to be held in Dhaka.
Meanwhile, Nawab Salimullah Khan published a detailed scheme through which he suggested that the party be named the All-India Muslim Confederacy. Pursuant to the decisions taken earlier at the Lucknow meeting and in Simla, the annual meeting of the All-India Muhammadan Educational Conference was held in Dhaka from 27 December until 30 December 1906. Three thousand delegates attended, headed by both Nawab Waqar-ul-Mulk and Nawab Muhasan-ul-Mulk, who explained its objectives and stressed the unity of Muslims under the banner of an association. It was formally proposed by Nawab Salimullah Khan and supported by Hakim Ajmal Khan, Maulana Muhammad Ali Jauhar, Zafar Ali Khan, Syed Nabiullah, a barrister from Lucknow, and Syed Zahur Ahmad, an eminent lawyer, as well as several others. The Muslim League's insistence on separate electorates and reserved seats in the Imperial Council was granted in the Indian Councils Act after the League held protests in India and lobbied London; the draft proposals for the reforms communicated on 1 October 1908 provided Muslims with reserved seats in all councils, with nomination only being maintained in Punjab.
The communication displayed how much the Government had accommodated Muslim demands and showed an increase in Muslim representation in the Imperial and provincial legislatures. But the Muslim League's demands were only met in UP and Madras. However, the Government did accept the idea of separate electorates; the idea had not been accepted by the Secretary of State, who proposed mixed electoral colleges, causing the Muslim League to agitate and the Muslim press to protest what they perceived to be a betrayal of the Viceroy's assurance to the Simla deputation. On 23 February, Morley told the House of Lords that Muslims demanded separate representation, and he accepted their demand; this was the League's first victory. But the Indian Councils Bill did not satisfy the demands of the Muslim League, as it was based on the October 1908 communiqué. The Muslim League's London branch opposed the bill and in a debate obtained the support of several parliamentarians. In 1909 the members of the Muslim League organised a Muslim protest.
The Reforms Committee of Minto's council believed that the Muslims had a point and advised Minto to discuss the matter with some Muslim leaders. The Government offered a few more seats to Muslims as a compromise but would not agree to satisfy the League's demand. Minto believed that the Muslims had been given enough, while Morley was still not certain because of the pressure Muslims could apply on the government.
A finite-state machine (FSM) or finite-state automaton, finite automaton, or simply a state machine, is a mathematical model of computation. It is an abstract machine that can be in one of a finite number of states at any given time; the FSM can change from one state to another in response to some external inputs. An FSM is defined by a list of its states, its initial state, and the conditions for each transition. Finite state machines are of two types: deterministic finite state machines and non-deterministic finite state machines. A deterministic finite-state machine can be constructed equivalent to any non-deterministic one; the behavior of state machines can be observed in many devices in modern society that perform a predetermined sequence of actions depending on a sequence of events with which they are presented. Simple examples are vending machines, which dispense products when the proper combination of coins is deposited; elevators, whose sequence of stops is determined by the floors requested by riders; traffic lights, which change sequence when cars are waiting; and combination locks, which require the input of combination numbers in the proper order.
The finite state machine has less computational power than some other models of computation such as the Turing machine. The computational power distinction means there are computational tasks that a Turing machine can do but an FSM cannot; this is because an FSM's memory is limited by the number of states it has. FSMs are studied in the more general field of automata theory. An example of a simple mechanism that can be modeled by a state machine is a turnstile. A turnstile, used to control access to subways and amusement park rides, is a gate with three rotating arms at waist height, one across the entryway. Initially the arms are locked, blocking the entry and preventing patrons from passing through. Depositing a coin or token in a slot on the turnstile unlocks the arms, allowing a single customer to push through. After the customer passes through, the arms are locked again. Considered as a state machine, the turnstile has two possible states: Locked and Unlocked. There are two possible inputs that affect its state: putting a coin in the slot (coin) and pushing the arm (push).
In the locked state, pushing on the arm has no effect. Putting a coin in – that is, giving the machine a coin input – shifts the state from Locked to Unlocked. In the unlocked state, putting additional coins in has no effect. However, a customer pushing through the arms, giving a push input, shifts the state back to Locked; the turnstile state machine can be represented by a state transition table, showing for each possible state the transitions between them and the outputs resulting from each input. The turnstile state machine can also be represented by a directed graph called a state diagram. Each state is represented by a node. Edges show the transitions from one state to another; each arrow is labeled with the input. An input that doesn't cause a change of state is represented by a circular arrow returning to the original state; the arrow into the Locked node from the black dot indicates that it is the initial state. A state is a description of the status of a system that is waiting to execute a transition. A transition is a set of actions to be executed when a condition is fulfilled or when an event is received.
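A minimal sketch of the turnstile as a table-driven state machine follows; the transition table maps a (state, input) pair to the next state, mirroring the state transition table and state diagram just described. The enum and table names are invented for illustration.

```cpp
#include <iostream>
#include <map>
#include <utility>

enum class State { Locked, Unlocked };
enum class Input { Coin, Push };

int main() {
    // Transition table: a coin unlocks, a push locks; all other pairs keep the state.
    std::map<std::pair<State, Input>, State> table = {
        {{State::Locked,   Input::Coin}, State::Unlocked},
        {{State::Locked,   Input::Push}, State::Locked},
        {{State::Unlocked, Input::Coin}, State::Unlocked},
        {{State::Unlocked, Input::Push}, State::Locked},
    };

    State state = State::Locked;   // initial state, as marked by the black dot in a state diagram
    for (Input in : {Input::Coin, Input::Push, Input::Push}) {
        state = table.at({state, in});
        std::cout << (state == State::Locked ? "Locked" : "Unlocked") << '\n';
    }
    // Prints: Unlocked, Locked, Locked
    return 0;
}
```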
For example, when using an audio system to listen to the radio (the system is in the "radio" state), receiving a "next" stimulus results in moving to the next station. When the system is in the "CD" state, the "next" stimulus results in moving to the next track. Identical stimuli trigger different actions depending on the current state. In some finite-state machine representations, it is also possible to associate actions with a state: an entry action, performed when entering the state, and an exit action, performed when exiting the state. Several state transition table types are used; in the most common representation, the combination of the current state and an input shows the next state. The complete action information is not directly described in the table and can only be added using footnotes. An FSM definition including the full action information is possible using state tables; the Unified Modeling Language has a notation for describing state machines. UML state machines overcome the limitations of traditional finite state machines while retaining their main benefits.
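The sketch below illustrates state-dependent handling of the same "next" stimulus, together with a simple entry action, using the audio-system example above; the station and track counters are invented for illustration and make no claim about how a real player is structured.

```cpp
#include <iostream>

enum class Source { Radio, CD };

struct Player {
    Source source = Source::Radio;
    int station = 1;
    int track = 1;

    void enter(Source s) {                 // entry action: runs whenever a state is entered
        source = s;
        std::cout << (s == Source::Radio ? "entering radio mode\n" : "entering CD mode\n");
    }

    void next() {                          // identical stimulus, state-dependent action
        if (source == Source::Radio)
            std::cout << "station " << ++station << '\n';
        else
            std::cout << "track " << ++track << '\n';
    }
};

int main() {
    Player p;
    p.next();                 // prints "station 2" (radio state)
    p.enter(Source::CD);      // entry action for the CD state
    p.next();                 // prints "track 2" (CD state)
    return 0;
}
```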
UML state machines introduce the new concepts of hierarchically nested states and orthogonal regions, while extending the notion of actions. UML state machines have the characteristics of both Mealy machines and Moore machines: they support actions that depend on both the state of the system and the triggering event, as in Mealy machines, as well as entry and exit actions, which are associated with states rather than transitions, as in Moore machines. The Specification and Description Language (SDL) is a standard from ITU that includes graphical symbols to describe actions in the transition: send an event, receive an event, start a timer, cancel a timer, start another concurrent state machine, and decision. SDL embeds basic data types called "Abstract Data Types", an action language and an execution semantic in order to make the finite state machine executable. There are a large number of variants for representing an FSM.