An identifier is a name that identifies either a unique object or a unique class of objects, where the "object" or class may be an idea, physical object, or physical substance. The abbreviation ID refers to identity, identification, or an identifier. An identifier may be a word, letter, symbol, or any combination of those; the words, letters, or symbols may follow an encoding system or they may be arbitrary. When an identifier follows an encoding system, it is referred to as a code or ID code. For instance, the ISO/IEC 11179 metadata registry standard defines a code as a system of valid symbols that substitute for longer values, in contrast to identifiers without symbolic meaning. Identifiers that do not follow any encoding scheme are said to be arbitrary IDs. A unique identifier is an identifier that refers to only one instance—only one particular object in the universe. A part number is an identifier, but it is not a unique identifier; for that, a serial number is needed to identify each instance of the part design.
Thus the identifier "Model T" identifies a class of automobiles. The concepts of name and identifier are denotatively equal, and the terms are thus denotatively synonymous. For example, both "Jamie Zawinski" and "Netscape employee number 20" are identifiers for the same specific human being; the distinction between them is emic rather than etic. In metadata, an identifier is a language-independent label, sign, or token that uniquely identifies an object within an identification scheme; the suffix identifier is used as a representation term when naming a data element. ID codes may inherently carry metadata along with them. For example, if you know that the food package in front of you has the identifier "2011-09-25T15:42Z-MFR5-P02-243-45", you have not only that datum but also the metadata telling you that it was packaged on September 25, 2011, at 3:42pm UTC, manufactured by Licensed Vendor Number 5, at the Peoria, IL, USA plant, in Building 2, that it was the 243rd package off the line in that shift, and that it was inspected by Inspector Number 45.
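The metadata embedded in such an ID code can be recovered mechanically. A minimal Python sketch, assuming the field layout shown in the example above (the helper name and the field interpretation are illustrative, not part of any standard):

```python
# Decode the package identifier from the example above.
# Field layout is inferred from the text and is illustrative only.
def decode_package_id(pkg_id):
    parts = pkg_id.split("-")
    return {
        "packaged_at": "-".join(parts[:3]),  # the ISO timestamp spans three dash-separated fields
        "vendor": parts[3],                  # licensed vendor code
        "plant": parts[4],                   # plant/building code
        "sequence": int(parts[5]),           # position off the line in that shift
        "inspector": int(parts[6]),          # inspector number
    }
```

An arbitrary ID such as 100054678214, by contrast, would yield no such fields: identity is all it carries.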
Arbitrary identifiers might lack metadata. For example, if a food package just says 100054678214, its ID may convey nothing except identity—no date, manufacturer name, production sequence rank, or inspector number. In some cases, arbitrary identifiers such as sequential serial numbers leak information. Opaque identifiers—identifiers designed to avoid leaking even that small amount of information—include "really opaque pointers" and Version 4 UUIDs. In computer science, identifiers are lexical tokens. Identifiers are used extensively in all information processing systems; identifying entities makes it possible to refer to them, which is essential for any kind of symbolic processing. In computer languages, identifiers are tokens; some of the kinds of entities an identifier might denote include variables, labels, and packages. Which character sequences constitute identifiers depends on the lexical grammar of the language. A common rule is alphanumeric sequences, with underscore allowed, on the condition that the identifier not begin with a digit – so foo, foo1, foo_bar, and _foo are allowed, but 1foo is not. This is the definition used in earlier versions of C and C++ and in many other languages.
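The classic C-style rule just described can be captured with a regular expression. A small Python sketch (the helper name is illustrative):

```python
import re

# Classic C-style identifier rule: letters, digits, and underscore,
# with the condition that the first character is not a digit.
IDENT = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def is_identifier(s):
    return bool(IDENT.match(s))

# foo, foo1, foo_bar, _foo are allowed, but 1foo is not.
```

Note that this rule also rejects strings containing spaces or operator characters, which is the common restriction discussed below.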
Later versions of these languages, along with many other modern languages, support all Unicode characters in an identifier. However, a common restriction is not to permit whitespace characters or language operators. For example, forbidding + in identifiers due to its use as a binary operation means that a+b and a + b can be tokenized the same, while if it were allowed, a+b would be an identifier, not an addition. Whitespace in identifiers is particularly problematic: if spaces are allowed in identifiers, a clause such as if rainy day then 1 could be legal, with rainy day as an identifier, but tokenizing this requires the phrasal context of being in the condition of an if clause. Some languages do allow spaces in identifiers, such as ALGOL 68 and some ALGOL variants – for example, the following is a valid declaration: real half pi. In ALGOL this was possible because keywords are syntactically differentiated, so there is no risk of collision or ambiguity; spaces are eliminated during the line reconstruction phase, and the source was processed via scannerless parsing, so lexing could be context-sensitive.
In mathematics and computer science, mutual recursion is a form of recursion where two mathematical or computational objects, such as functions or data types, are defined in terms of each other. Mutual recursion is common in functional programming and in some problem domains, such as recursive descent parsers, where the data types are naturally mutually recursive. The most important basic example of a data type that can be defined by mutual recursion is a tree, which can be defined mutually recursively in terms of a forest. Symbolically: f = [t], t = (v, f). A forest f consists of a list of trees, while a tree t consists of a pair of a value v and a forest f. This definition is elegant and easy to work with abstractly, as it expresses a tree in simple terms: a list of one type, and a pair of two types. Further, it matches many algorithms on trees, which consist of doing one thing with the value and another thing with the children. This mutually recursive definition can be converted to a singly recursive definition by inlining the definition of a forest: t = (v, [t]). A tree t consists of a pair of a value v and a list of trees.
This definition is more compact, but somewhat messier: a tree consists of a pair of one type and a list of another, which requires disentangling to prove results about. In Standard ML, the tree and forest data types can be mutually recursively defined along these lines, allowing empty trees. Just as algorithms on recursive data types can be given by recursive functions, algorithms on mutually recursive data structures can be given by mutually recursive functions. Common examples include algorithms on trees and recursive descent parsers. As with direct recursion, tail call optimization is necessary if the recursion depth is large or unbounded, such as when using mutual recursion for multitasking. Note that tail call optimization in general may be more difficult to implement than the special case of tail-recursive call optimization, so efficient implementation of mutual tail recursion may be absent from languages that only optimize tail-recursive calls. In languages such as Pascal that require declaration before use, mutually recursive functions require forward declaration, as a forward reference cannot be avoided when defining them.
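The Standard ML definitions referenced here are not reproduced in this text; as a stand-in, a minimal Python sketch of the mutually recursive tree/forest shape, using plain tuples and lists (Python has no ML-style datatypes, so the representation is an assumption of this sketch):

```python
# A tree is a (value, forest) pair; a forest is a list of trees.
def make_tree(value, forest=None):
    return (value, forest if forest is not None else [])

# Mutually recursive size functions: each counts by calling the other.
def tree_size(tree):
    value, forest = tree
    return 1 + forest_size(forest)  # the node itself plus all descendants

def forest_size(forest):
    return sum(tree_size(t) for t in forest)

t = make_tree("a", [make_tree("b"), make_tree("c", [make_tree("d")])])
# tree_size(t) -> 4
```

As the text notes, tree_size and forest_size mirror the shape of the data: the pair is handled in one function, the list in the other.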
As with directly recursive functions, a wrapper function may be useful, with the mutually recursive functions defined as nested functions within its scope, if this is supported. This is useful for sharing state across a set of functions without having to pass parameters between them. A standard example of mutual recursion, admittedly artificial, determines whether a non-negative number is even or odd by defining two separate functions that call each other, decrementing each time. These functions are based on the observation that the question "is 4 even?" is equivalent to "is 3 odd?", which in turn is equivalent to "is 2 even?", and so on down to 0. This example is mutual single recursion, and could be replaced by iteration. In this example, the mutually recursive calls are tail calls, and tail call optimization would be necessary to execute in constant stack space; without it, these functions take O(n) stack space. The pair could be reduced to a single recursive function: in that case, is_odd, which could be inlined, would call is_even, but is_even would only call itself.
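The C version alluded to above is not reproduced in this text; a direct Python transcription of the same pair of mutually recursive functions:

```python
def is_even(n):
    if n == 0:
        return True
    return is_odd(n - 1)   # "is n even?" reduces to "is n-1 odd?"

def is_odd(n):
    if n == 0:
        return False
    return is_even(n - 1)  # "is n odd?" reduces to "is n-1 even?"
```

Since CPython does not perform tail call optimization, these calls consume stack space proportional to n, just as the text notes for C.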
As a more general class of examples, an algorithm on a tree can be decomposed into its behavior on a value and its behavior on children, and can thus be split into two mutually recursive functions: one specifying the behavior on a tree, calling the forest function for the forest of children, and one specifying the behavior on a forest, calling the tree function for each tree in the forest. In this case the tree function calls the forest function by single recursion, but the forest function calls the tree function by multiple recursion. Using such data types, the size of a tree or the number of its leaves can be computed via mutually recursive functions. Such examples reduce to a single recursive function by inlining the forest function in the tree function, which is commonly done in practice: directly recursive functions that operate on trees sequentially process the value of the node and recurse on the children within one function, rather than dividing these into two separate functions.
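As a concrete sketch of this decomposition (in Python, with a tree represented as a (value, children) pair; the function names are illustrative), counting leaves splits into a tree function and a forest function:

```python
def leaves_in_tree(tree):
    value, children = tree
    if not children:            # a node with no children is a leaf
        return 1
    return leaves_in_forest(children)

def leaves_in_forest(forest):
    return sum(leaves_in_tree(t) for t in forest)

t = ("a", [("b", []), ("c", [("d", [])])])
# leaves_in_tree(t) -> 2  (the leaves are "b" and "d")
```

Inlining leaves_in_forest into leaves_in_tree would give the singly recursive version described in the text.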
A more complicated example is given by recursive descent parsers, which can be implemented by having one function for each production rule of a grammar, with the functions mutually recursing. This can be done without mutual recursion, for example by still having separate functions for each production rule but having them called by a single controller function, or by putting all the grammar in a single function. Mutual recursion can also implement a finite-state machine, with one function for each state and single recursion in changing state; this can be used as a simple form of cooperative multitasking. A similar approach to multitasking is to instead use coroutines which call each other: rather than terminating by calling another routine, one coroutine yields to another but does not terminate, and resumes execution when it is yielded back to. This allows individual coroutines to hold state without it needing to be passed by parameters or stored in shared variables.
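A toy recursive descent parser illustrates the one-function-per-rule scheme. This Python sketch (the grammar and the names are illustrative assumptions) evaluates arithmetic over single digits; parse_factor calls back into parse_expr for parenthesized subexpressions, making the three functions mutually recursive:

```python
# Grammar: expr -> term ('+' term)*
#          term -> factor ('*' factor)*
#          factor -> digit | '(' expr ')'
# Each function returns (value, index of the next unread character).

def parse_expr(s, i=0):
    val, i = parse_term(s, i)
    while i < len(s) and s[i] == "+":
        rhs, i = parse_term(s, i + 1)
        val += rhs
    return val, i

def parse_term(s, i):
    val, i = parse_factor(s, i)
    while i < len(s) and s[i] == "*":
        rhs, i = parse_factor(s, i + 1)
        val *= rhs
    return val, i

def parse_factor(s, i):
    if s[i] == "(":
        val, i = parse_expr(s, i + 1)  # mutual recursion back to the top rule
        return val, i + 1              # skip the closing ')'
    return int(s[i]), i + 1

# parse_expr("2*(3+4)")[0] -> 14
```

The controller-function alternative mentioned in the text would instead dispatch on the rule name from one driver, avoiding direct mutual calls.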
A definition is a statement of the meaning of a term. Definitions can be classified into two large categories: intensional definitions and extensional definitions. Another important category is the class of ostensive definitions, which convey the meaning of a term by pointing out examples. A term may have many different senses and multiple meanings, and thus require multiple definitions. In mathematics, a definition is used to give a precise meaning to a new term, instead of describing a pre-existing term; definitions and axioms form the basis on which mathematics is constructed. In modern usage, a definition is something expressed in words that attaches a meaning to a word or group of words. The word or group of words to be defined is called the definiendum, and the word, group of words, or action that defines it is called the definiens. In the definition "An elephant is a large gray animal native to Asia and Africa", the word "elephant" is the definiendum, and everything after the word "is" is the definiens. The definiens is not the meaning of the word defined, but is instead something that conveys the same meaning as that word.
There are many sub-types of definitions specific to a given field of knowledge or study. These include, among many others, lexical definitions, the common dictionary definitions of words in a language. An intensional definition, also called a connotative definition, specifies the necessary and sufficient conditions for a thing to be a member of a specific set. Any definition that attempts to set out the essence of something, such as a definition by genus and differentia, is an intensional definition. An extensional definition, also called a denotative definition, of a concept or term specifies its extension: it is a list naming every object that is a member of the specific set. Thus, the "seven deadly sins" can be defined intensionally as those sins singled out by Pope Gregory I as destructive of the life of grace and charity within a person, thus creating the threat of eternal damnation. An extensional definition would be the list: pride, greed, wrath, envy, lust, gluttony, and sloth. In contrast, while an intensional definition of "Prime Minister" might be "the most senior minister of a cabinet in the executive branch of government in a parliamentary system", an extensional definition is not possible, since it is not known who future prime ministers will be.
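The intensional/extensional contrast maps neatly onto code. A small Python sketch (the names are illustrative): an intensional definition corresponds to a predicate stating a membership condition, an extensional definition to an explicit enumeration, and over a finite domain the two can agree exactly:

```python
# Intensional: membership by a necessary-and-sufficient condition (a predicate).
def is_even_number(n):
    return n % 2 == 0

# Extensional: membership by exhaustive enumeration
# (only possible for finite sets, only practical for small ones).
SMALL_EVENS = {0, 2, 4, 6, 8}

# Over the domain range(10), the two definitions pick out the same members:
# all(is_even_number(n) == (n in SMALL_EVENS) for n in range(10)) -> True
```

The "Prime Minister" example in the text is exactly a case where only the predicate form is available: the enumeration cannot be completed.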
A genus–differentia definition is a type of intensional definition that takes a large category and narrows it down to a smaller category by a distinguishing characteristic. More formally, a genus–differentia definition consists of a genus (an existing definition that serves as a portion of the new definition) and the differentia (the portion of the new definition not provided by the genus). For example, consider the following genus–differentia definitions: a triangle is a plane figure that has three straight bounding sides; a quadrilateral is a plane figure that has four straight bounding sides. These definitions share a genus (plane figure) and differ in their differentiae. It is possible to have two different genus–differentia definitions that describe the same term when the term describes the overlap of two large categories. For instance, both of these genus–differentia definitions of "square" are acceptable: a square is a rectangle that is a rhombus; a square is a rhombus that is a rectangle. Thus, a "square" is a member of both the genus "rectangle" and the genus "rhombus".
One important form of the extensional definition is ostensive definition. This gives the meaning of a term by pointing, in the case of an individual, to the thing itself, or in the case of a class, to examples of the right kind; so one can explain what a term means by exhibiting instances of it. The process of ostensive definition itself was critically appraised by Ludwig Wittgenstein. An enumerative definition of a concept or term is an extensional definition that gives an explicit and exhaustive listing of all the objects that fall under the concept or term in question. Enumerative definitions are only possible for finite sets, and only practical for small sets. Divisio and partitio are classical terms for definitions. A partitio is simply an intensional definition. A divisio is not an extensional definition, but an exhaustive list of subsets of a set, in the sense that every member of the "divided" set is a member of one of the subsets. An extreme form of divisio lists all the singleton subsets; the difference between this and an extensional definition is that extensional definitions list members, not subsets.
In classical thought, a definition was taken to be a statement of the essence of a thing. Aristotle had it that an object's essential attributes form its "essential nature", and that a definition of the object must include these essential attributes. The idea that a definition should state the essence of a thing led to the distinction between nominal and real essence, originating with Aristotle.
Niklaus Emil Wirth is a Swiss computer scientist. He has designed several programming languages, including Pascal, and pioneered several classic topics in software engineering. In 1984 he won the Turing Award, generally recognized as the highest distinction in computer science, for developing a sequence of innovative computer languages. Wirth was born in Winterthur, Switzerland, in 1934. In 1959 he earned a degree in Electronics Engineering from the Swiss Federal Institute of Technology Zürich. In 1960 he earned an M.Sc. from Université Laval, Canada. In 1963 he was awarded a Ph.D. in Electrical Engineering and Computer Science from the University of California, Berkeley, supervised by the computer design pioneer Harry Huskey. From 1963 to 1967 he served as assistant professor of Computer Science at Stanford University and again at the University of Zurich. In 1968 he became Professor of Informatics at ETH Zürich, taking two one-year sabbaticals at Xerox PARC in California. Wirth retired in 1999. In 2004, he was made a Fellow of the Computer History Museum "for seminal work in programming languages and algorithms, including Euler, Algol-W, Pascal and Oberon."
Wirth was the chief designer of the programming languages Euler, Algol W, Pascal, Modula, Modula-2, Oberon, Oberon-2, and Oberon-07. He was a major part of the design and implementation team for the Lilith and Oberon operating systems, and for the Lola digital hardware design and simulation system. He received the Association for Computing Machinery Turing Award for the development of these languages in 1984, and in 1994 he was inducted as a Fellow of the ACM. His book The Pascal User Manual and Report, written jointly with Kathleen Jensen, served as the basis of many language implementation efforts in the 1970s and 1980s in the United States and across Europe. His article Program Development by Stepwise Refinement, about the teaching of programming, is considered a classic text in software engineering. In 1975 he wrote the book Algorithms + Data Structures = Programs. Major revisions of this book with the new title Algorithms and Data Structures were published in 1985 and 2004; the examples in the first edition were written in Pascal.
These examples were replaced in the later editions with examples written in Modula-2 and Oberon, respectively. His textbook Systematic Programming: An Introduction was considered a good source for students who wanted to do more than just code. Regarded as a challenging text to work through, it was considered essential reading for those interested in numerical mathematics. In 1992 he published the full documentation of the Oberon OS; a second book was intended as a programmer's guide. In 1995, he popularized the adage now known as Wirth's law, which states that software is getting slower more rapidly than hardware becomes faster. In his 1995 paper A Plea for Lean Software he attributes it to Martin Reiser. The asteroid 21655 Niklauswirth is named after him; related topics include Extended Backus–Naur Form, Wirth syntax notation, the bucky bit, and Wirth–Weber precedence relationships. His publications include "Program Development by Stepwise Refinement" (Communications of the ACM 14: 221–227, doi:10.1145/362575.362577), "On the Design of Programming Languages" (Proc. IFIP Congress 74: 386–393), his 1984 Turing Award lecture, the paper Pascal and its Successors, and the books Compiler Construction, Algorithms and Data Structures, and Project Oberon – The Design of an Operating System and Compiler; the last is now available as a PDF file with an additional appendix, Ten Years After: From Objects to Components. The School of Niklaus Wirth: The Art of Simplicity, by László Böszörményi, Jürg Gutknecht, and Gustav Pomberger (dpunkt.verlag / Morgan Kaufmann Publishers, 2000, ISBN 3-932588-85-1 / ISBN 1-55860-723-4), surveys his work.
C++ is a general-purpose programming language, developed by Bjarne Stroustrup as an extension of the C language, or "C with Classes". It has imperative, object-oriented, and generic programming features, while also providing facilities for low-level memory manipulation. It is almost always implemented as a compiled language, and many vendors provide C++ compilers, including the Free Software Foundation, Intel, and IBM, so it is available on many platforms. C++ was designed with a bias toward system programming and embedded, resource-constrained software and large systems, with performance and flexibility of use as its design highlights. C++ has also been found useful in many other contexts, with key strengths being software infrastructure and resource-constrained applications, including desktop applications and performance-critical applications. C++ is standardized by the International Organization for Standardization, with the latest standard version ratified and published by ISO in December 2017 as ISO/IEC 14882:2017.
The C++ programming language was initially standardized in 1998 as ISO/IEC 14882:1998, which was then amended by the C++03, C++11, and C++14 standards. The current C++17 standard supersedes these with an enlarged standard library. Before the initial standardization in 1998, C++ was developed by Danish computer scientist Bjarne Stroustrup at Bell Labs, beginning in 1979 as an extension of the C language. C++20 is the next planned standard, in keeping with the current trend of a new version every three years. In 1979, Stroustrup began work on "C with Classes", the predecessor to C++. The motivation for creating a new language originated from his experience in programming for his Ph.D. thesis. Stroustrup found that Simula had features that were helpful for large software development, but the language was too slow for practical use, while BCPL was fast but too low-level to be suitable for large software development. When Stroustrup started working at AT&T Bell Labs, he had the problem of analyzing the UNIX kernel with respect to distributed computing.
Remembering his Ph.D. experience, Stroustrup set out to enhance the C language with Simula-like features. C was chosen because it was general-purpose, fast, and widely used. As well as C and Simula's influences, other languages also influenced C++, including ALGOL 68, Ada, CLU, and ML. Stroustrup's "C with Classes" added features to the C compiler, including classes, derived classes, strong typing, and default arguments. In 1983, "C with Classes" was renamed to "C++", adding new features that included virtual functions, function name and operator overloading, constants, type-safe free-store memory allocation, improved type checking, and BCPL-style single-line comments with two forward slashes. Furthermore, it included the development of a standalone compiler, Cfront. In 1985, the first edition of The C++ Programming Language was released, which became the definitive reference for the language, as there was not yet an official standard. The first commercial implementation of C++ was released in October of the same year.
In 1989, C++ 2.0 was released, followed by the updated second edition of The C++ Programming Language in 1991. New features in 2.0 included multiple inheritance, abstract classes, static member functions, const member functions, and protected members. In 1990, The Annotated C++ Reference Manual was published; this work became the basis for the future standard. Later feature additions included templates, namespaces, new casts, and a boolean type. After the 2.0 update, C++ evolved relatively slowly until, in 2011, the C++11 standard was released, adding numerous new features, enlarging the standard library further, and providing more facilities to C++ programmers. After a minor C++14 update released in December 2014, various new additions were introduced in C++17, with further changes planned for 2020. As of 2017, C++ remains the third most popular programming language, behind Java and C. On January 3, 2018, Stroustrup was announced as the 2018 winner of the Charles Stark Draper Prize for Engineering, "for conceptualizing and developing the C++ programming language".
According to Stroustrup, "the name signifies the evolutionary nature of the changes from C". This name is credited to Rick Mascitti and was first used in December 1983; when Mascitti was questioned informally in 1992 about the naming, he indicated that it was given in a tongue-in-cheek spirit. The name comes from C's ++ increment operator and a common naming convention of using "+" to indicate an enhanced computer program. During C++'s development period, the language had been referred to as "new C" and "C with Classes" before acquiring its final name. Throughout C++'s life, its development and evolution has been guided by a set of principles: It must be driven by actual problems, and its features should be useful in real-world programs. Every feature should be implementable. Programmers should be free to pick their own programming style, and that style should be supported by C++. Allowing a useful feature is more important than preventing every possible misuse of C++. It should provide facilities for organising programs into separate, well-defined parts, and provide facilities for combining separately developed parts.
No implicit violations of the type system (but allow explicit violations).
A synonym is a word or phrase that means exactly or nearly the same as another lexeme in the same language. Words that are synonyms are said to be synonymous, and the state of being a synonym is called synonymy. For example, the words begin, start, and initiate are all synonyms of one another. Words are typically synonymous in one particular sense: for example, long and extended in the context long time or extended time are synonymous, but long cannot be used in the phrase extended family. Synonyms with exactly the same meaning share a seme or denotational sememe, whereas those with inexactly similar meanings share a broader denotational or connotational sememe and thus overlap within a semantic field; the former are sometimes called cognitive synonyms and the latter near-synonyms, plesionyms, or poecilonyms. Some lexicographers claim that no synonyms have exactly the same meaning because etymology, phonic qualities, ambiguous meanings, and so on make them unique. Different words that are similar in meaning usually differ for a reason: feline is more formal than cat.
Synonyms are often a source of euphemisms. Metonymy can sometimes be a form of synonymy: the White House is used as a synonym of the administration in referring to the U.S. executive branch under a specific president. Thus a metonym is a type of synonym, and the word metonym is a hyponym of the word synonym. The analysis of synonymy, polysemy, and hypernymy is inherent to taxonomy and ontology in the information-science senses of those terms, and has applications in pedagogy and machine learning, because they rely on word-sense disambiguation. The word synonym derives from Ancient Greek sún ("with") and ónoma ("name"). Synonyms can be any part of speech, so long as both words belong to the same part of speech. Examples: the nouns drink and beverage, the verbs buy and purchase, the adjectives big and large, the adverbs quickly and speedily, and the prepositions on and upon; the nouns glass and cup are near-synonyms. Synonyms are defined with respect to certain senses of words: pupil as the aperture in the iris of the eye is not synonymous with student. Likewise, he expired means the same as he died, yet my passport has expired cannot be replaced by my passport has died. In English, many synonyms emerged after the Norman conquest of England.
While England's new ruling class spoke Norman French, the lower classes continued to speak Old English. Thus, today we have synonyms like the Norman-derived people and archer, and the Saxon-derived folk and bowman. For more examples, see the list of Germanic and Latinate equivalents in English. A thesaurus lists similar or related words. The word poecilonym is a rare synonym of the word synonym; it is not entered in most major dictionaries and is a curiosity or piece of trivia for being an autological word because of its meta quality as a synonym of synonym. Antonyms are words with opposite or nearly opposite meanings. For example: hot ↔ cold, large ↔ small, thick ↔ thin, synonym ↔ antonym. Hypernyms and hyponyms are words that refer to, respectively, a general category and a specific instance of that category. For example, vehicle is a hypernym of car, and car is a hyponym of vehicle. Homophones are words that have the same pronunciation but different meanings; for example, witch and which are homophones in most accents. Homographs are words that share a spelling but have different pronunciations or meanings.
For example, one can record a song or keep a record of documents. Homonyms are words that have the same spelling and pronunciation but different meanings; for example, rose (the flower) and rose (the past tense of rise) are homonyms. Related topics include cognitive synonymy, elegant variation (the gratuitous use of a synonym in prose), and synonym rings.
Computer programming is the process of designing and building an executable computer program for accomplishing a specific computing task. Programming involves tasks such as analysis, generating algorithms, profiling algorithms' accuracy and resource consumption, and the implementation of algorithms in a chosen programming language. The source code of a program is written in one or more languages that are intelligible to programmers, rather than machine code, which is directly executed by the central processing unit. The purpose of programming is to find a sequence of instructions that will automate the performance of a task on a computer, for solving a given problem. The process of programming thus requires expertise in several different subjects, including knowledge of the application domain, specialized algorithms, and formal logic. Tasks accompanying and related to programming include testing, source code maintenance, implementation of build systems, and management of derived artifacts, such as the machine code of computer programs.
These might be considered part of the programming process, but often the term software development is used for this larger process, with the terms programming, implementation, or coding reserved for the actual writing of code. Software engineering combines engineering techniques with software development practices. Reverse engineering is the opposite process. A hacker is any skilled computer expert who uses their technical knowledge to overcome a problem, though in common language the word can also mean a security hacker. Programmable devices have existed at least as far back as 1206 AD, when the automata of Al-Jazari were programmable, via pegs and cams, to play various rhythms and drum patterns. However, the first computer program is dated to 1843, when mathematician Ada Lovelace published an algorithm to calculate a sequence of Bernoulli numbers, intended to be carried out by Charles Babbage's Analytical Engine. Women would continue to dominate the field of computer programming until the mid 1960s. In the 1880s Herman Hollerith invented the concept of storing data in machine-readable form.
A control panel added to his 1906 Type I Tabulator allowed it to be programmed for different jobs, and by the late 1940s, unit record equipment such as the IBM 602 and IBM 604 were programmed by control panels in a similar way. However, with the concept of the stored-program computer, introduced in 1949, both programs and data were stored and manipulated in the same way in computer memory. Machine code was the language of early programs, written in the instruction set of the particular machine, in binary notation. Assembly languages were soon developed that let the programmer specify instructions in a text format, with abbreviations for each operation code and meaningful names for specifying addresses. However, because an assembly language is little more than a different notation for a machine language, any two machines with different instruction sets have different assembly languages. Kathleen Booth created one of the first assembly languages in 1950 for various computers at Birkbeck College. High-level languages allow the programmer to write programs in terms that are syntactically richer and more capable of abstracting the code, making it targetable to varying machine instruction sets via compilation declarations and heuristics.
The first compiler for a programming language was developed by Grace Hopper. When Hopper went to work on UNIVAC in 1949, she brought the idea of using compilers with her. Compilers harness the power of computers to make programming easier by allowing programmers to specify calculations symbolically, for example by entering a formula using infix notation. FORTRAN, the first widely used high-level language to have a functional implementation, which permitted the abstraction of reusable blocks of code, came out in 1957. In 1951 Frances E. Holberton developed the first sort-merge generator, which ran on the UNIVAC I. Another woman working at UNIVAC, Adele Mildred Koss, developed a program that was a precursor to report generators. In the USSR, Kateryna Yushchenko developed the Address programming language for the MESM in 1955. The idea for the creation of COBOL started in 1959, when Mary K. Hawes, who worked for Burroughs Corporation, set up a meeting to discuss creating a common business language; she invited six people, including Grace Hopper.
Hopper was involved in developing COBOL as a business language and in creating "self-documenting" programming. Hopper's contribution to COBOL was based on her programming language FLOW-MATIC. In 1961, Jean E. Sammet developed FORMAC and later published Programming Languages: History and Fundamentals, which went on to be a standard work on programming languages. Programs were still entered using punched cards or paper tape; see computer programming in the punch card era. By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Frances Holberton created a code to allow keyboard inputs while she worked at UNIVAC. Text editors were developed that allowed changes and corrections to be made much more easily than with punched cards. Sister Mary Kenneth Keller worked on developing the programming language BASIC while she was a graduate student at Dartmouth in the 1960s. Smalltalk, one of the first object-oriented programming languages, was developed by seven programmers, including Adele Goldberg, in the 1970s.
In 1985, Radia Perlman developed the Spanning Tree Protocol, a foundational technology for bridged computer networks.