1.
C (programming language)
–
C was originally developed by Dennis Ritchie between 1969 and 1973 at Bell Labs, and used to re-implement the Unix operating system. It has been standardized by the American National Standards Institute since 1989. C is an imperative procedural language, and was therefore useful for applications that had formerly been coded in assembly language. Despite its low-level capabilities, the language was designed to encourage cross-platform programming: a standards-compliant and portably written C program can be compiled for a very wide variety of computer platforms and operating systems with few changes to its source code, and the language has become available on a wide range of platforms. In C, all executable code is contained within subroutines, which are called functions. Function parameters are passed by value; pass-by-reference is simulated in C by explicitly passing pointer values. C program source text is free-format, using the semicolon as a statement terminator and curly braces for grouping blocks of statements. The language also exhibits the following characteristics. There is a small, fixed number of keywords, including a full set of flow-of-control primitives (for, if/else, while, switch), and user-defined names are not distinguished from keywords by any kind of sigil. There is a large number of arithmetic and logical operators, such as +, +=, ++, &, and ~. More than one assignment may be performed in a single statement, and function return values can be ignored when not needed. Typing is static but weakly enforced: all data has a type. C has no "define" keyword; instead, a statement beginning with the name of a type is taken as a declaration. There is no "function" keyword; instead, a function is indicated by the parentheses of an argument list. User-defined and compound types are possible: heterogeneous aggregate data types (structs) allow related data elements to be accessed and assigned as a unit. Array indexing is a secondary notation, defined in terms of pointer arithmetic.
Unlike structs, arrays are not first-class objects: they cannot be assigned or compared using single built-in operators. There is no "array" keyword, in use or definition; instead, square brackets indicate arrays syntactically, for example month[11]. Enumerated types are possible with the enum keyword; they are not tagged and are freely interconvertible with integers. Strings are not a distinct data type, but are conventionally implemented as null-terminated arrays of characters. Low-level access to memory is possible by converting machine addresses to typed pointers.
2.
Unix
–
Among these is Apple's macOS, which is the Unix version with the largest installed base as of 2014. Many Unix-like operating systems have arisen over the years, of which Linux is the most popular. Unix was originally meant to be a convenient platform for programmers developing software to be run on it and on other systems, rather than for non-programmer users. The system grew larger as it spread in academic circles and as users added their own tools to it. Unix was designed to be portable, multi-tasking and multi-user in a time-sharing configuration; these concepts are collectively known as the Unix philosophy. By the early 1980s users began seeing Unix as a potential universal operating system. Under Unix, the system consists of many utilities along with the master control program, the kernel. To mediate access to the hardware, the kernel has special rights, reflected in the division between user space and kernel space; the microkernel concept was introduced in an effort to reverse the trend towards larger kernels and return to a system in which most tasks were completed by smaller utilities. In an era when a standard computer consisted of a disk for storage and a data terminal for input and output, the Unix file model worked quite well. However, modern systems include networking and other new devices, and as graphical user interfaces developed, the file model proved inadequate to the task of handling asynchronous events such as those generated by a mouse. In the 1980s, non-blocking I/O and the set of inter-process communication mechanisms were augmented with Unix domain sockets, shared memory, message queues, and semaphores. In microkernel implementations, functions such as network protocols could be moved out of the kernel. Multics introduced many innovations, but had many problems. Frustrated by the size and complexity of Multics, but not by its aims, the last researchers to leave the Multics project, Ken Thompson, Dennis Ritchie, M. D. McIlroy, and J. F. Ossanna, decided to redo the work on a much smaller scale.
The name Unics, a pun on Multics, was suggested for the project in 1970; Peter H. Salus credits Peter Neumann with the pun, while Brian Kernighan claims the coining for himself. In 1972, Unix was rewritten in the C programming language. Bell Labs produced several versions of Unix that are collectively referred to as Research Unix. In 1975, the first source license for UNIX was sold to faculty at the University of Illinois Department of Computer Science; UIUC graduate student Greg Chesson was instrumental in negotiating the terms of this license. During the late 1970s and early 1980s, the influence of Unix in academic circles led to its adoption by commercial startups such as Sequent and by established vendors, whose systems included HP-UX, Solaris, and AIX. In the late 1980s, AT&T Unix System Laboratories and Sun Microsystems developed System V Release 4. In the 1990s, Unix-like systems grew in popularity as Linux and BSD distributions were developed through collaboration by a worldwide network of programmers.
3.
Command-line interface
–
A program which handles the interface is called a command language interpreter or shell. The interface is implemented with a command-line shell, a program that accepts commands as text input. Command-line interfaces to computer operating systems are less widely used by casual computer users, who favor graphical user interfaces. Alternatives to the command line include, but are not limited to, text user interface menus and keyboard shortcuts; examples of such alternatives include the Windows versions 1, 2, 3, 3.1, and 3.11, DosShell, and Mouse Systems PowerPanel. Command-line interfaces are often preferred by more advanced computer users, as they often provide a more concise and powerful means to control a program or operating system, and programs with command-line interfaces are generally easier to automate via scripting. A program that implements such a text interface is often called a command-line interpreter, command processor or shell. Under most operating systems, it is possible to replace the default shell program with alternatives; examples include 4DOS for DOS and 4OS2 for OS/2. For example, the default Windows GUI is a program named EXPLORER.EXE. These programs are shells, but not CLIs. Application programs may also have command-line interfaces, used in several ways: a program may be given a command line when launched from an OS command-line shell; in interactive command-line sessions, after launch, a program may provide an operator with an independent means to enter commands in the form of text; and through OS inter-process communication, which most operating systems support, command lines from client processes may be redirected to a CLI program. Some applications support only a CLI, presenting a CLI prompt to the user; some examples of CLI-only applications are DEBUG, Diskpart, Ed, Edlin, Fdisk, and Ping. Some computer programs support both a CLI and a GUI. In some cases, a GUI is simply a wrapper around a separate CLI executable file; in other cases, a program may provide a CLI as an optional alternative to its GUI.
CLIs and GUIs often support different functionality. For example, all features of MATLAB, a numerical analysis computer program, are available via the CLI, whereas the MATLAB GUI exposes only a subset of features. The early Sierra games, like the first three King's Quest games, used commands from a command line to move the character around in the graphic window. Early computer systems often used teleprinter machines as the means of interaction with a human operator: the computer became one end of the human-to-human teleprinter model, so instead of a human communicating with another human over a teleprinter, a human communicated with a computer. In time, the actual mechanical teleprinter was replaced by a glass tty, and then by a smart terminal.
4.
Version 6 Unix
–
Sixth Edition Unix, also called Version 6 Unix or just V6, was the first version of the Unix operating system to see wide release outside Bell Labs. It was released in May 1975 and, like its direct predecessor, targeted the DEC PDP-11 family of minicomputers. It was superseded by Version 7 Unix in 1978/1979, although V6 systems remained in regular operation until at least 1985. AT&T Corporation licensed Version 5 Unix to educational institutions only, but licensed Version 6 also to commercial users for $20,000; an enhanced V6 was the basis of the first ever commercially sold Unix version, INTERACTIVE's IS/1. Bell's own PWB/UNIX 1.0 was also based on V6, and Whitesmiths produced and marketed a V6 clone under the name Idris. The code for the original V6 Unix has since been made available under a BSD License under agreement from the SCO Group. UC Berkeley distributed a set of programs for V6 called the First Berkeley Software Distribution, or 1BSD. John Lions's commentary on the V6 source code became a classic; due to license restrictions on later Unix versions, the book was distributed by samizdat photo-copying. The University of Wollongong's Interdata UNIX, Level 6 also included utilities developed at Wollongong; in 1980, this version was licensed to The Wollongong Group in Palo Alto, which published it as Edition 7. Around the same time, a Bell Labs port to the Interdata 8/32 was completed; the goal of this port was to improve the portability of Unix more generally, as well as to produce a portable version of the C compiler. The resulting Portable C Compiler was distributed with V7 and many later versions of Unix. A third Unix portability project was completed at Princeton, N.J. in 1976-1977, and this version became the nucleus of Amdahl's first internal UNIX offering. After AT&T decided that the distribution by Bell Labs of a number of bug fixes would constitute support, a tape with the patchset was instead slipped to Lou Katz of USENIX.
The University of Sydney released the Australian Unix Share Accounting Method in November 1979. In the Eastern Bloc, clones of V6 Unix appeared for locally built PDP-11 clones and for the Elektronika BK personal computer. V6 was used for teaching at MIT from 2002 through 2006.
5.
Bell Labs
–
Nokia Bell Labs is an American research and scientific development company, owned by the Finnish company Nokia. Its headquarters are located in Murray Hill, New Jersey, in addition to laboratories around the rest of the United States. The historic laboratory originated in the late 19th century as the Volta Laboratory; Bell Labs was also at one time a division of the American Telephone & Telegraph Company, half-owned through its Western Electric manufacturing subsidiary. Eight Nobel Prizes have been awarded for work completed at Bell Laboratories. In 1880, the French government awarded Alexander Graham Bell the Volta Prize of 50,000 francs, approximately US$10,000 at that time, for the invention of the telephone. Bell used the award to fund the Volta Laboratory in Washington, D.C. in collaboration with Sumner Tainter; the laboratory is also variously known as the Volta Bureau, the Bell Carriage House, the Bell Laboratory and the Volta Laboratory. The laboratory focused on the analysis, recording, and transmission of sound. Bell used his considerable profits from the laboratory for further research and education to permit the diffusion of knowledge relating to the deaf. This resulted in the founding of the Volta Bureau c. 1887, located at Bell's father's house at 1527 35th Street in Washington, D.C., where its carriage house became its headquarters in 1889. In 1893, Bell constructed a new building close by at 1537 35th St., specifically to house the lab; the building was declared a National Historic Landmark in 1972. In 1884, the American Bell Telephone Company created the Mechanical Department from the Electrical and Patent Department. The first president of research was Frank B. Jewett, who stayed there until 1940; ownership of Bell Laboratories was evenly split between AT&T and the Western Electric Company.
Its principal work was to plan, design, and support the equipment that Western Electric built for Bell System operating companies; this included everything from telephones and telephone exchange switches to transmission equipment. Bell Labs also carried out consulting work for the Bell Telephone Company. A few workers were assigned to basic research, and this attracted much attention, especially since they produced several Nobel Prize winners. Until the 1940s, the company's principal locations were in and around the Bell Labs Building in New York City. Of its later locations, Murray Hill and Crawford Hill remain in existence. The largest grouping of people in the company was in Illinois, at Naperville-Lisle in the Chicago area, which had the largest concentration of employees prior to 2001. Since 2001, many of the locations have been scaled down or closed. The Holmdel site, a 1.9 million square foot structure set on 473 acres, was closed in 2007; the mirrored-glass building was designed by Eero Saarinen. In August 2013, Somerset Development bought the building, intending to redevelop it into a commercial and residential project, though the prospects of success are clouded by the difficulty of readapting Saarinen's design and by the current glut of aging office parks.
6.
Reverse Polish notation
–
Reverse Polish notation (RPN) is a mathematical notation in which every operator follows all of its operands, in contrast to Polish notation, which puts the operator before its operands. It is also known as postfix notation, and it does not need any parentheses as long as each operator has a fixed number of operands. The description "Polish" refers to the nationality of logician Jan Łukasiewicz; the algorithms and notation for this scheme were extended by the Australian philosopher and computer scientist Charles Hamblin in the mid-1950s. In computer science, postfix notation is used in stack-based programming languages. It is also common in dataflow and pipeline-based systems, including Unix pipelines. Most of what follows is about binary operators; an example of a unary operator whose standard notation uses postfix is the factorial. In reverse Polish notation the operators follow their operands: for instance, to add 3 and 4, one would write 3 4 + rather than 3 + 4. An advantage of RPN is that it removes the need for the parentheses that are required by infix notation: while 3 − 4 × 5 can also be written 3 − (4 × 5), that means something quite different from (3 − 4) × 5. In postfix, the former could be written 3 4 5 × −, which unambiguously means 3 − (4 × 5) and reduces to 3 − 20; the latter could be written 3 4 − 5 ×, which unambiguously means (3 − 4) × 5. In comparison testing of reverse Polish notation with algebraic notation, reverse Polish has been found to lead to faster calculations: because reverse Polish calculators do not need expressions to be parenthesized, fewer operations need to be entered to perform typical calculations. Additionally, users of reverse Polish calculators made fewer mistakes than users of other types of calculator. However, anecdotal evidence suggests that reverse Polish notation is more difficult for users to learn than algebraic notation.
The algorithm for evaluating any postfix expression is fairly straightforward. While there are input tokens left, read the next token from input; if the token is a value, push it onto the stack. Otherwise, the token is an operator that is already known to take n arguments. If there are fewer than n values on the stack, the user has not input sufficient values in the expression; else, pop the top n values from the stack, evaluate the operator with those values as arguments, and push the returned results, if any, back onto the stack. When the input is exhausted, if there is exactly one value on the stack, that value is the result of the calculation; otherwise, if there are more values on the stack, the user input has too many values. For example, the postfix expression 1 2 + 4 × 5 + 3 − evaluates to 14. Edsger Dijkstra invented the shunting-yard algorithm to convert infix expressions to postfix, so named because its operation resembles that of a railroad shunting yard. There are other ways of producing postfix expressions from infix notation; Robert S. Barton, one of the designers of the Burroughs B5000, based its hardware design on stack evaluation of expressions. Designed by Robert "Bob" Appleby Ragen, Friden's EC-130 introduced RPN to the calculator market, supporting a four-level stack, in June 1963.
7.
Compiler
–
A compiler is a computer program that transforms source code written in a programming language into another computer language, the latter often having a binary form known as object code. The most common reason for converting source code is to create an executable program. The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a lower-level language. If the compiled program can run on a computer whose CPU or operating system is different from the one on which the compiler runs, the compiler is known as a cross-compiler. More generally, compilers are a specific type of translator. A program that translates from a low-level language to a higher-level one is a decompiler, and a program that translates between high-level languages is called a source-to-source compiler or transpiler. A language rewriter is usually a program that translates the form of expressions without a change of language. The term compiler-compiler is sometimes used to refer to a parser generator, a tool often used to help create the lexer and parser. A compiler is likely to perform many or all of the following operations: lexical analysis, preprocessing, parsing, semantic analysis, and code generation. Program faults caused by incorrect compiler behavior can be difficult to track down and work around; therefore, compiler implementers invest significant effort to ensure compiler correctness. Software for early computers was written in assembly language. The notion of a high-level programming language dates back to 1943, though no actual implementation occurred until the 1970s. The first actual compilers date from the 1950s; identifying the very first is hard, because there is subjectivity in deciding when programs become advanced enough to count as the full concept rather than a precursor. 1952 saw two important advances. Grace Hopper wrote the compiler for the A-0 programming language, though the A-0 functioned more as a loader or linker than as a full compiler in the modern sense.
Also in 1952, the first autocode compiler was developed by Alick Glennie for the Mark 1 computer at the University of Manchester; this is considered by some to be the first compiled programming language. The FORTRAN team led by John Backus at IBM is generally credited as having introduced the first unambiguously complete compiler, and COBOL was an early language to be compiled on multiple architectures, in 1960. In many application domains the idea of using a higher-level language quickly caught on. Because of the expanding functionality supported by newer programming languages and the increasing complexity of computer architectures, compilers have become more complex. Early compilers were written in assembly language; the first self-hosting compiler, capable of compiling its own source code in a high-level language, was created in 1962 for the Lisp programming language by Tim Hart and Mike Levin at MIT. Since the 1970s, it has become common practice to implement a compiler in the language it compiles.
8.
Plan 9 from Bell Labs
–
Plan 9 from Bell Labs is a distributed operating system, originally developed by the Computing Sciences Research Center at Bell Labs between the mid-1980s and 2002. It takes some of the principles of Unix, developed in the same research group, further. In Plan 9, virtually all computing resources, including files and network connections, are represented as files; a unified network protocol called 9P ties a network of computers running Plan 9 together, allowing them to share all resources so represented. The name Plan 9 from Bell Labs is a reference to Ed Wood's 1959 cult science-fiction Z-movie Plan 9 from Outer Space; also, Glenda, the Plan 9 Bunny, is presumably a reference to Wood's film Glen or Glenda. The system continues to be used and developed by operating system researchers. Plan 9 was originally developed, starting in the mid-1980s, by members of the Computing Science Research Center at Bell Labs, the same group that originally developed Unix and C. The Plan 9 team was led by Rob Pike, Ken Thompson, Dave Presotto and Phil Winterbottom. Over the years, many developers have contributed to the project, including Brian Kernighan, Tom Duff, Doug McIlroy and Bjarne Stroustrup. Plan 9 replaced Unix as Bell Labs's primary platform for operating systems research, and it explored several changes to the original Unix model that facilitate the use and programming of the system, notably in distributed multi-user environments. After several years of development and internal use, Bell Labs shipped the system to universities in 1992. Three years later, in 1995, Plan 9 was made available to outside parties by AT&T via the book publisher Harcourt Brace. By early 1996, the Plan 9 project had been put on the back burner by AT&T in favor of Inferno. In the late 1990s, Bell Labs' new owner Lucent Technologies dropped commercial support for the project, and a fourth release under a new free-software license occurred in 2002.
A user and development community, including current and former Bell Labs personnel, continues to work on the system. The development source tree is accessible over the 9P and HTTP protocols and is used to update existing installations. In addition to the components of the OS included in the ISOs, Bell Labs also hosts a repository of externally developed applications. Plan 9 is a distributed operating system, designed to make a network of heterogeneous and geographically separated computers function as a single system. In a typical Plan 9 installation, users work at terminals running the window system rio. Permanent data storage is provided by additional network hosts acting as file servers. Its designers state that "the foundations of the system are built on two ideas: a per-process name space and a simple message-oriented file system protocol." The potential complexity of this setup is controlled by a set of conventional locations for common resources.
9.
Free software
–
The right to study and modify software entails availability of the software's source code to its users. This right is conditional on the person actually having a copy of the software. Richard Stallman used the existing term free software when he launched the GNU Project, a collaborative effort to create a freedom-respecting operating system, and the Free Software Foundation. The FSF's Free Software Definition states that users of free software are free because they do not need to ask for permission to use the software. Free software thus differs from proprietary software, such as Microsoft Office, Google Docs, Sheets, and Slides, or iWork from Apple, which users cannot study or change, and from freeware, a category of freedom-restricting proprietary software that does not require payment for use. For computer programs that are covered by copyright law, software freedom is achieved with a software license. Software that is not covered by copyright law, such as software in the public domain, is free if the source code is also in the public domain. Proprietary software, including freeware, uses restrictive software licenses or EULAs; users are thus prevented from changing the software, and this results in the user relying on the publisher to provide updates, help, and support. This situation is called vendor lock-in. Users often may not reverse engineer, modify, or redistribute proprietary software. Other legal and technical aspects, such as patents and digital rights management, may further restrict users in exercising their rights. Free software may be developed collaboratively by volunteer computer programmers or by corporations as part of a commercial activity. From the 1950s up until the early 1970s, it was normal for computer users to have the software freedoms associated with free software, which was typically public-domain software. Software was commonly shared by individuals who used computers and by manufacturers, who welcomed the fact that people were making software that made their hardware useful.
Organizations of users and suppliers, for example SHARE, were formed to facilitate the exchange of software. Because much software was written in an interpreted language such as BASIC, it was distributed in source form. Software was also shared and distributed as printed source code in computer magazines and books. In United States vs. IBM, filed January 17, 1969, the government charged that bundled software was anti-competitive. While some software might always be free, there would henceforth be a growing amount of software produced primarily for sale. In the 1970s and early 1980s, the industry began using technical measures to prevent computer users from being able to study or adapt the software as they saw fit. In 1980, copyright law was extended to computer programs. Software development for the GNU operating system began in January 1984, and the Free Software Foundation was founded in October 1985.
10.
GNU
–
GNU /ɡnuː/ is an operating system and an extensive collection of computer software. GNU is composed wholly of free software, most of which is licensed under the GNU Project's own GPL. GNU is a recursive acronym for "GNU's Not Unix", chosen because GNU's design is Unix-like but differs from Unix by being free software. The GNU project includes an operating system kernel, GNU Hurd, which was the original focus of the Free Software Foundation. However, non-GNU kernels, most famously Linux, can also be used with GNU software, and since the Hurd kernel is the least mature part of GNU, the combination of GNU software and the Linux kernel is commonly known as Linux. Richard Stallman, the founder of the project, views GNU as a means to a social end; he announced the project in 1983 on the net.unix-wizards newsgroup. Software development began on January 5, 1984, when Stallman quit his job at the MIT Artificial Intelligence Laboratory so that it could not claim ownership or interfere with distributing GNU components as free software. Richard Stallman chose the name by using various plays on words, including the song The Gnu. The goal was to bring a wholly free software operating system into existence; this philosophy was published as the GNU Manifesto in March 1985. It was decided that development would be started using C and Lisp as system programming languages. At the time, Unix was already a popular proprietary operating system, and its design was modular, so it could be reimplemented piece by piece. In October 1985, Stallman set up the Free Software Foundation. In the late 1980s and 1990s, the FSF hired software developers to write the software needed for GNU. As GNU gained prominence, interested businesses began contributing to development or selling GNU software and technical support; the most prominent and successful of these was Cygnus Solutions, now part of Red Hat. GNU developers have contributed to Linux ports of GNU applications and utilities, which are now also widely used on other operating systems such as BSD variants, Solaris and macOS.
Many GNU programs have been ported to other operating systems, including proprietary platforms such as Microsoft Windows, and GNU programs have been shown in testing to be more reliable than their proprietary Unix counterparts. As of November 2015, there were a total of 466 GNU packages hosted on the official GNU development site. With the April 30, 2015 release of the Debian GNU/Hurd 2015 distro, the GNU OS now provides the components to assemble a system that users can install. This includes the GNU Hurd kernel, which is currently in a pre-production state; the Hurd status page states that it may not be ready for production use, as there are still some bugs and missing features, but that it should be a good base for further development. Due to Hurd not being ready for production use, in practice these operating systems are Linux distributions.
11.
Interpreter (computing)
–
An interpreter generally uses one of the following strategies for program execution: (1) parse the source code and perform its behavior directly; (2) translate source code into some efficient intermediate representation and immediately execute this; or (3) explicitly execute stored precompiled code made by a compiler which is part of the interpreter system. Early versions of the Lisp programming language and Dartmouth BASIC are examples of the first type; Perl, Python, MATLAB, and Ruby are examples of the second; and UCSD Pascal is an example of the third type, in which source programs are compiled ahead of time and stored as machine-independent code. Some systems, such as Smalltalk and contemporary versions of BASIC and Java, may also combine strategies two and three. Interpreters of various types have also been constructed for many languages traditionally associated with compilation, such as Algol, Fortran and Cobol. The terms "interpreted language" and "compiled language" signify that the canonical implementation of that language is an interpreter or a compiler; a high-level language is ideally an abstraction independent of particular implementations. The first interpreted high-level language was Lisp. Lisp was first implemented in 1958 by Steve Russell on an IBM 704 computer; Russell had read John McCarthy's paper and realized that the Lisp eval function could be implemented in machine code. The result was a working Lisp interpreter which could be used to run Lisp programs, or more properly, to evaluate Lisp expressions. Programs written in a high-level language are either directly executed by some kind of interpreter or converted into machine code by a compiler for the CPU to execute. Compilers generally produce machine code directly executable by computer hardware, but they may also produce object code; this is basically the same machine-specific code but augmented with a symbol table with names and tags to make executable blocks identifiable and relocatable. Compiled programs will typically use building blocks kept in a library of such object-code modules.
A linker is used to combine library files with the object file of the application to form a single executable file; the object files that are used to generate an executable file are thus often produced at different times. Both compilers and interpreters generally turn source code into tokens, both may generate a parse tree, and both may generate immediate instructions. However, a compiled program still runs much faster under most circumstances, in part because compilers are designed to optimize code. This is especially true for simpler high-level languages without many dynamic data structures or run-time checks. Historically, most interpreter systems have had an editor built in; this is becoming more common also for compilers, although some programmers prefer to use an editor of their choice. During the software development cycle, programmers make frequent changes to source code; with a compiler, each change means waiting for a recompile, and the larger the program, the longer the wait.
12.
Array data structure
–
In computer science, an array data structure, or simply an array, is a data structure consisting of a collection of elements, each identified by at least one array index or key. An array is stored so that the position of each element can be computed from its index tuple by a mathematical formula. The simplest type of data structure is a linear array, also called a one-dimensional array. For example, an array of 10 32-bit integer variables, with indices 0 through 9, may be stored at memory addresses 2000, 2004, 2008, ..., 2036, so that the element with index i has the address 2000 + 4 × i. The memory address of the first element of an array is called the first address or foundation address. Because the mathematical concept of a matrix can be represented as a two-dimensional grid, two-dimensional arrays are also sometimes called matrices. In some cases the term "vector" is used in computing to refer to an array. Arrays are often used to implement tables, especially lookup tables; the word "table" is sometimes used as a synonym of array. Arrays are among the oldest and most important data structures, and are used by almost every program. They are also used to implement many other data structures, such as lists and strings. They effectively exploit the addressing logic of computers: in most modern computers and many external storage devices, the memory is a one-dimensional array of words, whose indices are their addresses. Processors, especially vector processors, are often optimized for array operations. Arrays are useful mostly because the element indices can be computed at run time; among other things, this feature allows a single iterative statement to process arbitrarily many elements of an array. For that reason, the elements of an array data structure are required to have the same size. The set of valid index tuples and the addresses of the elements are usually, but not always, fixed while the array is in use. Array types are often implemented by array structures; however, in some languages they may be implemented by hash tables, linked lists, search trees, or other data structures.
The first digital computers used machine-language programming to set up and access array structures for data tables, and for vector and matrix computations. John von Neumann wrote the first array-sorting program in 1945, during the building of the first stored-program computer. Array indexing was originally done by self-modifying code, and later using index registers; some mainframes designed in the 1960s, such as the Burroughs B5000 and its successors, used memory segmentation to perform index-bounds checking in hardware. Assembly languages generally have no support for arrays, other than what the machine itself provides. The earliest high-level programming languages, including FORTRAN, Lisp, COBOL, and ALGOL 60, had support for multi-dimensional arrays. In C++, class templates exist for multi-dimensional arrays whose dimension is fixed at runtime as well as for runtime-flexible arrays. Arrays are used to implement mathematical vectors and matrices, as well as other kinds of rectangular tables; many databases, small and large, consist of one-dimensional arrays whose elements are records. Arrays are used to implement other data structures, such as lists, heaps, hash tables, deques, queues, stacks, and strings. One or more large arrays are sometimes used to emulate in-program dynamic memory allocation, particularly memory pool allocation; historically, this has sometimes been the only way to allocate dynamic memory portably.
13.
Base (exponentiation)
–
In exponentiation, the base is the number b in an expression of the form bⁿ. The number n is called the exponent, and the expression is known formally as exponentiation of b by n or the exponential of n with base b; it is more commonly expressed as the nth power of b, b to the nth power, or b to the power n. For example, the fourth power of 10 is 10,000 because 10⁴ = 10 × 10 × 10 × 10 = 10,000. The term power strictly refers to the whole expression, but is sometimes used to refer to the exponent. When the nth power of b equals a number a, that is, a = bⁿ, then b is called an nth root of a; for example, 10 is a fourth root of 10,000. The inverse function to exponentiation with base b is called the logarithm to base b; for example, log₁₀ 10,000 = 4.
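The relationship between powers, roots, and logarithms described above can be sketched numerically; the base 10 and exponent 4 are simply the example's figures.

```python
import math

b, n = 10, 4
power = b ** n                 # the nth power of b
print(power)                   # 10000
print(math.log(power, b))      # 4.0, up to floating-point rounding
print(power ** (1 / n))        # an nth root recovers b, approximately
```

The logarithm undoes exponentiation exactly in principle; in floating point, tiny rounding errors can appear, which is why the comments hedge the results.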
14.
Modulus operator
–
In computing, the modulo operation finds the remainder after division of one number by another. Given two positive numbers, a and n, a mod n is the remainder of the Euclidean division of a by n. Although typically performed with a and n both being integers, many computing systems allow other types of numeric operands. The range of numbers for an integer modulo of n is 0 to n − 1. (See modular arithmetic for an older and related convention applied in number theory.) When either a or n is negative, the naive definition breaks down, and programming languages differ in how these values are defined. In mathematics, the result of the modulo operation is the remainder of the Euclidean division. Computers and calculators have various ways of storing and representing numbers. In number theory, the positive remainder is always chosen, but programming languages choose depending on the language and the signs of a or n. Standard Pascal and ALGOL 68 give a positive remainder even for negative divisors; a modulo 0 is undefined in most systems, although some do define it as a. Despite its widespread use, truncated division is shown to be inferior to the other definitions: when the result of a modulo operation has the sign of the dividend, it can lead to surprising mistakes. For special cases, on some hardware, faster alternatives exist: optimizing compilers may recognize expressions of the form expression % constant where constant is a power of two and automatically implement them as expression & (constant − 1). This can allow writing clearer code without compromising performance. This optimization is not possible for languages in which the result of the modulo operation has the sign of the dividend, unless the dividend is of an unsigned integer type. This is because, if the dividend is negative, the modulo will be negative, whereas expression & (constant − 1) is always positive. Some modulo operations can be factored or expanded similarly to other mathematical operations.
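The sign conventions discussed above can be illustrated in Python, whose % operator implements the floored (sign-of-divisor) definition, while math.fmod follows the truncated (sign-of-dividend) definition that C's % uses; the power-of-two mask equivalence is shown for a nonnegative dividend.

```python
import math

print(-7 % 3)            # 2: floored division, result has the divisor's sign
print(math.fmod(-7, 3))  # -1.0: truncated division, result has the dividend's sign

# For a power-of-two modulus and a nonnegative dividend,
# x % 2**k equals x & (2**k - 1):
x, k = 77, 4
print(x % 2**k, x & (2**k - 1))  # 13 13
```

This is exactly why the mask optimization only applies when the dividend cannot be negative: for -7, the two definitions above already disagree.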
This may be useful in cryptography proofs, such as the Diffie–Hellman key exchange. Identity: (a mod n) mod n = a mod n. Also, nˣ mod n = 0 for all positive integer values of x. If p is a prime number which is not a divisor of b, then a·b^(p−1) mod p = a mod p, due to Fermat's little theorem. b⁻¹ mod n denotes the modular multiplicative inverse, which is defined if and only if b and n are relatively prime. Distributive: (a + b) mod n = [(a mod n) + (b mod n)] mod n, and a·b mod n = [(a mod n) · (b mod n)] mod n. Division: (a/b) mod n = [(a mod n) · (b⁻¹ mod n)] mod n, when the right-hand side is defined. Inverse multiplication: [(a·b mod n) · (b⁻¹ mod n)] mod n = a mod n. See modulo for the many uses of the word, all of which grew out of Carl F. Gauss's introduction of modular arithmetic in 1801; see also modular exponentiation. Perl usually uses an arithmetic modulo operator that is machine-independent; for examples and exceptions, see the Perl documentation on multiplicative operators. Mathematically, these two choices are but two of the many choices available for the inequality satisfied by a remainder.
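The identities above can be spot-checked numerically; the values a, b, n below are arbitrary choices (with n prime so that b and n are coprime), and pow(b, -1, n), available in Python 3.8+, supplies the modular multiplicative inverse.

```python
# Arbitrary sample values; n is prime, so b and n are coprime.
a, b, n = 123, 456, 37

# Distributive identities:
assert (a + b) % n == ((a % n) + (b % n)) % n
assert (a * b) % n == ((a % n) * (b % n)) % n

# Inverse multiplication: multiplying by b and then by b's modular
# inverse recovers a mod n.  pow(b, -1, n) requires Python 3.8+.
inv_b = pow(b, -1, n)
assert (((a * b) % n) * inv_b) % n == a % n
print("identities hold")
```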
15.
Exclusive or
–
Exclusive or, or exclusive disjunction, is a logical operation that outputs true only when its inputs differ. It is symbolized by the prefix operator J and by the infix operators XOR, EOR, EXOR, ⊻, ⊕, and ↮. The negation of XOR is the logical biconditional, which outputs true only when both inputs are the same. The operation gains the name exclusive or because the meaning of or is ambiguous when both operands are true; the exclusive or operator excludes that case. This is sometimes thought of as "one or the other but not both", and could be written as "A or B, but not both A and B". More generally, XOR is true only when an odd number of inputs are true: a chain of XORs (a XOR b XOR c XOR d) is true whenever an odd number of the inputs are true and is false whenever an even number of inputs are true. The truth table of A XOR B shows that it outputs true whenever the inputs differ (0 = false, 1 = true). Exclusive disjunction essentially means "either one, but not both"; in other words, the statement is true if and only if one operand is true and the other is false. For example, if two horses are racing, then one of the two will win the race, but not both of them. The exclusive or is equivalent to the negation of a logical biconditional, by the rules of material implication. This unfortunately prevents the combination of these two systems into larger structures, such as a mathematical ring. However, the system using exclusive or is an abelian group; the combination of the operators ∧ and ⊕ over the elements {0, 1} produces the well-known field F₂. This field can represent any logic obtainable with the system and has the benefit of the arsenal of algebraic analysis tools for fields. The Oxford English Dictionary explains "either … or" as follows: the primary function of either, etc., is to emphasize the perfect indifference of the two things or courses, but a secondary function is to emphasize the mutual exclusiveness (= either of the two, but not both). The exclusive or explicitly states "one or the other, but not neither nor both."
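The truth table and the odd-parity behavior of a chain of XORs can be demonstrated directly with Python's ^ operator:

```python
# XOR outputs 1 (true) only when the inputs differ:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, a ^ b)

# A chain of XORs is 1 exactly when an odd number of inputs are 1,
# which is why XOR is used as a parity check.
inputs = [1, 1, 1, 0]   # three true inputs: odd, so the chain yields 1
parity = 0
for bit in inputs:
    parity ^= bit
print(parity)  # 1
```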
Following this kind of common-sense intuition about "or", it is argued that in many natural languages, English included, the exclusive disjunction of a pair of propositions is supposed to mean that p is true or q is true, but not both. For example, it might be argued that the intention of a statement like "You may have coffee, or you may have tea" is to offer exactly one of the options. Certainly under some circumstances a sentence like this example should be taken as forbidding the possibility of accepting both options. Even so, there is reason to suppose that this sort of sentence is not disjunctive at all.
16.
Bitwise operation
–
In digital computer programming, a bitwise operation operates on one or more bit patterns or binary numerals at the level of their individual bits. It is a fast, simple action directly supported by the processor. On simple low-cost processors, typically, bitwise operations are substantially faster than division, several times faster than multiplication, and sometimes significantly faster than addition. In the explanations below, any indication of a bit's position is counted from the right (least significant) side. For example, the binary value 0001 has zeroes at every position but the first. The bitwise NOT, or complement, is a unary operation that performs logical negation on each bit, forming the ones' complement of the given binary value: bits that are 0 become 1, and those that are 1 become 0. For example: NOT 0111 = 1000, and NOT 10101011 = 01010100. The bitwise complement is equal to the two's complement of the value minus one. If two's complement arithmetic is used, then NOT x = −x − 1. For unsigned integers, the bitwise complement of a number is the mirror reflection of the number across the half-way point of the unsigned integer's range. A simple but illustrative use is to invert a grayscale image where each pixel is stored as an unsigned integer. A bitwise AND takes two equal-length binary representations and performs the logical AND operation on each pair of corresponding bits: if both bits in the compared position are 1, the bit in the resulting binary representation is 1; otherwise, the result is 0. For example: 0101 AND 0011 = 0001. The operation may be used to determine whether a particular bit is set or clear; this is often called bit masking. The bitwise AND may also be used to clear selected bits of a register in which each bit represents an individual Boolean state. This technique is an efficient way to store a number of Boolean values using as little memory as possible; for example, 0110 can be considered a set of four flags, where the first and fourth flags are clear, and the second and third flags are set.
Using the example above, 0110 AND 0001 = 0000; because 6 AND 1 is zero, 6 is divisible by two and therefore even. A bitwise OR takes two bit patterns of equal length and performs the logical inclusive OR operation on each pair of corresponding bits. The result in each position is 0 if both bits are 0, while otherwise the result is 1. For example: 0101 OR 0011 = 0111. The bitwise OR may be used to set to 1 the selected bits of the register described above. A bitwise XOR takes two bit patterns of equal length and performs the logical exclusive OR operation on each pair of corresponding bits. The result in each position is 1 if only the first bit is 1 or only the second bit is 1; in other words, the result is 1 if the two bits are different, and 0 if they are the same. For example: 0101 XOR 0011 = 0110. The bitwise XOR may be used to invert selected bits in a register: any bit may be toggled by XORing it with 1. Assembly language programmers sometimes use XOR as a short-cut to setting the value of a register to zero, because performing XOR on a value against itself always yields zero, and on many architectures this operation requires fewer clock cycles and less memory than loading a zero value. In the shift operations, the digits are moved, or shifted, to the left or right. In an arithmetic shift, the bits that are shifted out of either end are discarded.
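The worked examples above (AND, OR, XOR, the evenness test, and flag clearing) can be reproduced directly; the 4-bit formatting is only for display.

```python
# Bit masking with the values from the text, shown in 4-bit binary:
a, b = 0b0101, 0b0011
print(format(a & b, "04b"))   # 0001: AND keeps only bits set in both
print(format(a | b, "04b"))   # 0111: OR sets bits set in either
print(format(a ^ b, "04b"))   # 0110: XOR keeps bits that differ

# Evenness test via AND with 1, and clearing a flag in a register:
print(6 & 1)                  # 0, so 6 is even
flags = 0b0110
flags &= ~0b0010 & 0b1111     # clear the second flag (mask to 4 bits)
print(format(flags, "04b"))   # 0100
```

Python integers are unbounded, so the mask with 0b1111 stands in for the fixed register width that the text assumes.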
17.
Boolean algebra
–
In mathematics and mathematical logic, Boolean algebra is the branch of algebra in which the values of the variables are the truth values true and false, usually denoted 1 and 0 respectively. It is thus a formalism for describing logical relations in the way that ordinary algebra describes numeric relations. Boolean algebra was introduced by George Boole in his first book, The Mathematical Analysis of Logic; according to Huntington, the term Boolean algebra was first suggested by Sheffer in 1913. Boolean algebra has been fundamental in the development of digital electronics, and it is also used in set theory and statistics. Boole's algebra predated the modern developments in abstract algebra and mathematical logic. In an abstract setting, Boolean algebra was perfected in the late 19th century by Jevons, Schröder, and Huntington; in fact, M. H. Stone proved in 1936 that every Boolean algebra is isomorphic to a field of sets. Shannon already had at his disposal the abstract mathematical apparatus, and thus he cast his switching algebra as the two-element Boolean algebra. In circuit engineering settings today, there is little need to consider other Boolean algebras, so switching algebra and Boolean algebra are often used interchangeably. Efficient implementation of Boolean functions is a fundamental problem in the design of combinational logic circuits. Logic sentences that can be expressed in classical propositional calculus have an equivalent expression in Boolean algebra; thus, Boolean logic is sometimes used to denote propositional calculus performed in this way. Boolean algebra is not sufficient to capture logic formulas using quantifiers, like those from first-order logic. The closely related model of computation known as a Boolean circuit relates time complexity to circuit complexity. Whereas in elementary algebra expressions denote mainly numbers, in Boolean algebra they denote the truth values false and true; these values are represented with the bits 0 and 1.
Addition and multiplication then play the Boolean roles of XOR and AND respectively. Boolean algebra also deals with functions which have their values in the set {0, 1}; a sequence of bits is a commonly used such function. Another common example is the subsets of a set E: to a subset F of E is associated the indicator function that takes the value 1 on F and 0 outside F. The most general example is the elements of a Boolean algebra. As with elementary algebra, the purely equational part of the theory may be developed without considering explicit values for the variables. The basic operations of Boolean calculus are as follows: AND, denoted x∧y, satisfies x∧y = 1 if x = y = 1 and x∧y = 0 otherwise; OR, denoted x∨y, satisfies x∨y = 0 if x = y = 0 and x∨y = 1 otherwise; NOT, denoted ¬x, satisfies ¬x = 0 if x = 1 and ¬x = 1 if x = 0. Alternatively, the values of x∧y, x∨y, and ¬x can be expressed by tabulating their values with truth tables. The first derived operation, x → y, or Cxy, is called material implication: if x is true, then the value of x → y is taken to be that of y; x → y is false only when x = 1 and y = 0.
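As an informal sketch of the operations just defined (with 1 for true and 0 for false), Python's integer operators can play the roles of ∧, ∨, ¬, ⊕, and material implication:

```python
# Truth tables for the basic Boolean operations, with 1 = true, 0 = false.
for x in (0, 1):
    for y in (0, 1):
        conj = x & y                # x AND y: 1 only if x = y = 1
        disj = x | y                # x OR y: 0 only if x = y = 0
        neg = 1 - x                 # NOT x
        xor = (x + y) % 2           # x XOR y: addition in the field F2
        implies = (1 - x) | y       # x -> y: false only when x = 1, y = 0
        print(x, y, conj, disj, neg, xor, implies)
```

Note how XOR is literally addition modulo 2 and AND is multiplication modulo 2, which is the field F₂ mentioned in the Exclusive or section.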
18.
Conditional (computer programming)
–
Apart from the case of branch predication, this is always achieved by selectively altering the control flow based on some condition. Although dynamic dispatch is not usually classified as a conditional construct, the if–then construct is common across many programming languages. If the condition is true, the statements following the then are executed; otherwise, the execution continues in the following branch, either in the else block or, if there is no else branch, after the end If. After either branch has been executed, control returns to the point after the end If. In early programming languages, especially some dialects of BASIC on 1980s home computers, an if–then statement could only contain GOTO statements; this led to a hard-to-read style known as spaghetti programming. A subtlety is that the optional else clause found in many languages means that the grammar is ambiguous: specifically, if a then if b then s else s2 can be parsed as if a then (if b then s) else s2 or as if a then (if b then s else s2), depending on whether the else is associated with the first if or the second if. This is known as the dangling else problem, and is resolved in various ways. By using else if, it is possible to combine several conditions; only the statements following the first condition that is found to be true will be executed, and all other statements will be skipped. The elsif construct in Ada is syntactic sugar for else followed by if; in Ada, the difference is that only one end if is needed. Similarly, the earlier UNIX shells use elif too, delimiting with spaces or line breaks. This works because in these languages, any single statement can follow a conditional without being enclosed in a block. If all terms in the sequence of conditionals are testing the value of a single expression, then an alternative is the switch statement; conversely, in languages that do not have a switch statement, a sequence of conditionals can be used instead. Many languages support if expressions, which are similar to if statements, but return a value as a result.
Thus, they are true expressions, not statements. In functional languages, logic that would be expressed with conditionals in other languages is usually expressed with pattern matching in recursive functions. C and C-like languages have a special ternary operator (?:) for conditional expressions, with a function that may be described by a template like condition ? evaluated-when-true : evaluated-when-false. In Visual Basic and some other languages, a function called IIf is provided instead.
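A conditional expression in the spirit of C's ternary operator can be sketched in Python, whose inline `x if cond else y` form plays the same role; the helper name `parity_name` is ours, for illustration.

```python
def parity_name(n):
    # Equivalent to the C expression:  n % 2 == 0 ? "even" : "odd"
    return "even" if n % 2 == 0 else "odd"

print(parity_name(6))   # even
print(parity_name(7))   # odd
```

Because it is an expression, not a statement, the whole construct yields a value and can appear anywhere a value is expected, which is exactly the distinction the text draws.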
19.
Square root
–
In mathematics, a square root of a number a is a number y such that y² = a; in other words, a number y whose square is a. For example, 4 and −4 are square roots of 16 because 4² = (−4)² = 16. Every nonnegative real number a has a unique nonnegative square root, called the principal square root, which is denoted by √a, where √ is called the radical sign or radix. For example, the principal square root of 9 is 3, denoted √9 = 3. The term whose root is being considered is known as the radicand; the radicand is the number or expression underneath the radical sign, in this example 9. Every positive number a has two square roots: √a, which is positive, and −√a, which is negative. Together, these two roots are denoted ±√a. Although the principal square root of a positive number is only one of its two square roots, the designation "the square root" is often used to refer to the principal square root. For positive a, the principal square root can also be written in exponent notation, as a^(1/2). Square roots of negative numbers can be discussed within the framework of complex numbers. In Ancient India, the knowledge of theoretical and applied aspects of the square and the square root was at least as old as the Sulba Sutras; a method for finding very good approximations to the square roots of 2 and 3 is given in the Baudhayana Sulba Sutra. Aryabhata, in the Aryabhatiya, has given a method for finding the square root of numbers having many digits. It was known to the ancient Greeks that square roots of positive numbers that are not perfect squares are always irrational numbers: numbers not expressible as a ratio of two integers. This is the theorem Euclid X, 9, almost certainly due to Theaetetus, dating back to circa 380 BC; the particular case √2 is assumed to date back earlier to the Pythagoreans and is traditionally attributed to Hippasus. Mahāvīra, a 9th-century Indian mathematician, was the first to state that square roots of negative numbers do not exist. A symbol for square roots, written as an elaborate R, was invented by Regiomontanus.
An R was also used for radix to indicate square roots in Gerolamo Cardano's Ars Magna. According to the historian of mathematics D. E. Smith, Aryabhata's method for finding the square root was first introduced in Europe by Cataneo in 1546. According to Jeffrey A. Oaks, Arabs used the letter jīm/ĝīm to indicate square roots; the letter jīm resembles the present square root shape. Its usage goes as far as the end of the twelfth century, in the works of the Moroccan mathematician Ibn al-Yasamin. The symbol √ for the square root was first used in print in 1525, in Christoph Rudolff's Coss.
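A simple iterative scheme for approximating square roots of the kind mentioned above is Heron's (Babylonian) method, offered here as an illustrative sketch rather than the historical procedures themselves; the function name and the tolerance of 1e-12 are our choices.

```python
def babylonian_sqrt(a, tolerance=1e-12):
    """Approximate the principal square root of a nonnegative number by
    Heron's method: repeatedly replace x with the average of x and a/x."""
    if a == 0:
        return 0.0
    x = a  # any positive starting guess converges
    while abs(x * x - a) > tolerance * a:
        x = (x + a / x) / 2
    return x

print(babylonian_sqrt(2))   # close to 1.41421356...
print(babylonian_sqrt(16))  # close to 4.0
```

Each iteration roughly doubles the number of correct digits, which is why a handful of steps already gives the "very good approximations" the Sulba Sutras aimed for.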
20.
Sine
–
In mathematics, the sine is a trigonometric function of an angle. More generally, the definition of sine can be extended to any real value in terms of the length of a certain line segment in a unit circle. The function sine can be traced to the jyā and koṭi-jyā functions used in Gupta-period Indian astronomy, via translation from Sanskrit to Arabic and then from Arabic to Latin; the word sine comes from a Latin mistranslation of the Arabic jiba. To define the trigonometric functions for an acute angle α, start with any right triangle that contains an angle of measure α; in the accompanying figure, angle A in triangle ABC has measure α. The three sides of the triangle are named as follows. The opposite side is the side opposite to the angle of interest. The hypotenuse is the side opposite the right angle, in this case side h; the hypotenuse is always the longest side of a right-angled triangle. The adjacent side is the remaining side, in this case side b; it forms a side of both the angle of interest and the right angle. Once such a triangle is chosen, the sine of the angle is equal to the length of the opposite side divided by the length of the hypotenuse. As stated, the value of sin α appears to depend on the choice of right triangle containing an angle of measure α; however, this is not the case: all such triangles are similar, and so the ratio is the same for each of them. The trigonometric functions can also be defined in terms of the rise, run, and slope of a line segment. When the length of the line segment is 1, sine takes an angle and tells the rise; in general, sine takes an angle and tells the rise per unit length of the line segment, so the rise is equal to sin θ multiplied by the length of the line segment. In contrast, cosine is used for telling the run from the angle, and arctan is used for telling the angle from the slope. The line segment is the equivalent of the hypotenuse in the right triangle. In trigonometry, a unit circle is the circle of radius one centered at the origin in the Cartesian coordinate system.
Let a line through the origin, making an angle of θ with the positive half of the x-axis, intersect the unit circle. The x- and y-coordinates of this point of intersection are equal to cos θ and sin θ, respectively; the point's distance from the origin is always 1. Unlike the definitions with the right triangle or slope, the angle can be extended to the full set of real arguments by using the unit circle. This can also be achieved by requiring certain symmetries and that sine be a periodic function. Exact identities (these apply for all values of θ): sin θ = cos(π/2 − θ) = 1/csc θ. The reciprocal of sine is cosecant, i.e. the reciprocal of sin θ is csc θ, or cosec θ; cosecant gives the ratio of the length of the hypotenuse to the length of the opposite side. The inverse function of sine is arcsine or inverse sine.
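The unit-circle picture and the identities just listed can be checked numerically; θ = π/6 (30°) is an arbitrary sample angle.

```python
import math

# A point on the unit circle at angle theta has coordinates (cos θ, sin θ)
# and lies at distance 1 from the origin.
theta = math.pi / 6
x, y = math.cos(theta), math.sin(theta)
print(y)                      # sin 30° = 1/2, up to rounding
print(math.hypot(x, y))       # distance from the origin: 1.0

# sin θ = cos(π/2 − θ), and cosecant is the reciprocal of sine (csc 30° = 2):
print(math.isclose(y, math.cos(math.pi / 2 - theta)))  # True
print(math.isclose(1 / y, 2.0))                        # True
```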
21.
Trigonometric functions
–
In planar geometry, an angle is the figure formed by two rays, called the sides of the angle, sharing a common endpoint, called the vertex of the angle. Angles formed by two rays lie in a plane, but this plane does not have to be a Euclidean plane; angles are also formed by the intersection of two planes in Euclidean and other spaces. Angles formed by the intersection of two curves in a plane are defined as the angle determined by the tangent rays at the point of intersection. Similar statements hold in space; for example, the angle formed by two great circles on a sphere is the dihedral angle between the planes determined by the great circles. Angle is also used to designate the measure of an angle or of a rotation; this measure is the ratio of the length of a circular arc to its radius. In the case of a geometric angle, the arc is centered at the vertex; in the case of a rotation, the arc is centered at the center of the rotation and delimited by any other point and its image by the rotation. The word angle comes from the Latin word angulus, meaning corner; cognate words are the Greek ἀγκύλος, meaning crooked or curved. Both are connected with the Proto-Indo-European root *ank-, meaning to bend or bow. Euclid defines a plane angle as the inclination to each other, in a plane, of two lines which meet each other and do not lie straight with respect to each other; according to Proclus, an angle must be either a quality or a quantity, or a relationship. In mathematical expressions, it is common to use Greek letters to serve as variables standing for the size of some angle; lower-case Roman letters are also used, as are upper-case Roman letters in the context of polygons. See the figures in this article for examples. In geometric figures, angles may also be identified by the labels attached to the three points that define them: for example, the angle at vertex A enclosed by the rays AB and AC is denoted ∠BAC. Sometimes, where there is no risk of confusion, the angle may be referred to simply by its vertex.
However, in many geometrical situations it is obvious from context that the positive angle less than or equal to 180 degrees is meant; otherwise, a convention may be adopted so that ∠BAC always refers to the angle measured from B to C. Angles smaller than a right angle are called acute angles. An angle equal to 1/4 turn is called a right angle; two lines that form a right angle are said to be normal, orthogonal, or perpendicular. Angles larger than a right angle and smaller than a straight angle are called obtuse angles. An angle equal to 1/2 turn is called a straight angle, and angles larger than a straight angle but less than 1 turn are called reflex angles.
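The arc-to-radius ratio described above is the radian measure of an angle; this short sketch shows the ratio for a straight angle and the standard conversions between degrees and radians.

```python
import math

# Radian measure: the ratio of circular arc length to radius.
radius = 2.0
arc = math.pi * radius        # arc subtended by a straight angle (1/2 turn)
angle = arc / radius          # π radians: the ratio is independent of radius
print(angle)
print(math.degrees(angle))    # 180.0
print(math.radians(90))       # π/2: a right angle
```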
22.
Inverse trigonometric functions
–
In mathematics, the inverse trigonometric functions are the inverse functions of the trigonometric functions; specifically, they are the inverses of the sine, cosine, tangent, cotangent, secant, and cosecant functions. Inverse trigonometric functions are widely used in engineering, navigation, physics, and geometry. There are several notations used for the inverse trigonometric functions. The most common convention is to name them using an arc- prefix, e.g. arcsin, arccos, arctan; this convention is used throughout the article. When measuring in radians, an angle of θ radians will correspond to an arc whose length is rθ, where r is the radius of the circle. Similarly, in programming languages the inverse trigonometric functions are usually called asin, acos, atan. The notations sin⁻¹, cos⁻¹, tan⁻¹, etc., are often used as well, but they can be confused with the multiplicative inverse; the confusion is somewhat ameliorated by the fact that each of the reciprocal trigonometric functions has its own name, for example (cos x)⁻¹ = sec x. Nevertheless, certain authors advise against the notation for its ambiguity. Since none of the six trigonometric functions are one-to-one, they must be restricted in order to have inverse functions. There are multiple numbers y such that sin y = x; for example, sin 0 = 0, but also sin π = 0, sin 2π = 0, and so on. When only one value is desired, the function may be restricted to its principal branch; with this restriction, for each x in the domain, the expression arcsin x will evaluate only to a single value. These properties apply to all the inverse trigonometric functions. The principal inverses are listed in the following table; if x is allowed to be a complex number, then the range of y applies only to its real part. Trigonometric functions of inverse trigonometric functions are tabulated below. The tangent addition formula, tan(α + β) = (tan α + tan β) / (1 − tan α tan β), is used in the derivations. Like the sine and cosine functions, the inverse trigonometric functions can be calculated using power series, as follows.
For arcsine, the series can be derived by expanding its derivative, 1/√(1 − z²), as a binomial series and integrating term by term; the series for arctangent can similarly be derived by expanding its derivative 1/(1 + z²) in a geometric series and applying the integral definition above. arcsin z = ∑_{n=0}^∞ [(2n)! / (4ⁿ (n!)² (2n + 1))] z^(2n+1), and arctan z = z − z³/3 + z⁵/5 − z⁷/7 + ⋯ = ∑_{n=0}^∞ [(−1)ⁿ / (2n + 1)] z^(2n+1). Series for the other inverse trigonometric functions can be given in terms of these; for example, arccos x = π/2 − arcsin x, arccsc x = arcsin(1/x), and so on. Alternatively, the arctangent can be expressed as arctan z = ∑_{n=0}^∞ [2²ⁿ (n!)² / (2n + 1)!] · z^(2n+1) / (1 + z²)^(n+1). There are two cuts, from −i to the point at infinity, going down the imaginary axis, and from i to the point at infinity, going up the same axis. This form works best for real numbers running from −1 to 1.
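Partial sums of the arctangent series converge quickly for |z| well below 1, and the complementary identity arccos x = π/2 − arcsin x can be checked with the library functions; the helper name `arctan_series` is ours.

```python
import math

def arctan_series(z, terms=50):
    """Partial sum of arctan z = z - z^3/3 + z^5/5 - ... (for |z| <= 1)."""
    return sum((-1) ** n * z ** (2 * n + 1) / (2 * n + 1) for n in range(terms))

print(arctan_series(0.5))   # agrees with the library value below
print(math.atan(0.5))

# One of the complementary identities: arccos x = π/2 − arcsin x.
print(math.isclose(math.acos(0.3), math.pi / 2 - math.asin(0.3)))  # True
```

At z = 1 the same series still converges (to π/4), but only very slowly, which is one motivation for the alternative series given above.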
23.
Natural logarithm
–
The natural logarithm of a number is its logarithm to the base of the mathematical constant e, where e is an irrational and transcendental number approximately equal to 2.718281828459. The natural logarithm of x is written as ln x, loge x, or sometimes simply log x, if the base e is implicit. Parentheses are sometimes added for clarity, giving ln(x), loge(x), or log(x); this is done in particular when the argument to the logarithm is not a single symbol, to prevent ambiguity. The natural logarithm of x is the power to which e would have to be raised to equal x. The natural log of e itself, ln e, is 1, because e¹ = e, while the natural logarithm of 1, ln 1, is 0, since e⁰ = 1. The natural logarithm can be defined for any positive real number a as the area under the curve y = 1/x from 1 to a. The simplicity of this definition is matched in many other formulas involving the natural logarithm. Like all logarithms, the natural logarithm maps multiplication into addition: ln(xy) = ln x + ln y. Logarithms in other bases differ only by a constant multiplier from the natural logarithm; for instance, the binary logarithm is the natural logarithm divided by ln 2, the natural logarithm of 2. Logarithms are useful for solving equations in which the unknown appears as the exponent of some other quantity; for example, logarithms are used to solve for the half-life, decay constant, or unknown time in exponential decay problems. They are important in many branches of mathematics and the sciences and are used in finance to solve problems involving compound interest. By the Lindemann–Weierstrass theorem, the natural logarithm of any positive algebraic number other than 1 is a transcendental number. The concept of the natural logarithm was worked out by Gregoire de Saint-Vincent and Alphonse Antonio de Sarasa; their work involved quadrature of the hyperbola xy = 1 by determination of the area of hyperbolic sectors.
Their solution generated the requisite hyperbolic logarithm function, having properties now associated with the natural logarithm. The notations ln x and loge x both refer unambiguously to the natural logarithm of x; log x without an explicit base may also refer to the natural logarithm. This usage is common in mathematics and some scientific contexts as well as in many programming languages; in some other contexts, however, log x can be used to denote the common (base-10) logarithm. Historically, the notations l. and l were in use at least since the 1730s; finally, in the twentieth century, the notations Log and logh are attested. The graph of the natural logarithm function enables one to glean some of the basic characteristics that logarithms to any base have in common; chief among them is that the logarithm of one is zero. What makes natural logarithms unique is found at that same point, where all logarithms are zero: there, the slope of the curve of the natural logarithm is precisely one.
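The properties just described, multiplication mapped to addition, the constant-multiplier relation between bases, and the unit slope at x = 1, can all be checked numerically; the step h below is an arbitrary small number for the finite-difference slope.

```python
import math

# ln maps multiplication into addition, and other bases differ from ln
# only by a constant factor (here, ln 2 for the binary logarithm):
x, y = 3.0, 7.0
print(math.isclose(math.log(x * y), math.log(x) + math.log(y)))  # True
print(math.isclose(math.log2(x), math.log(x) / math.log(2)))     # True

# ln e = 1 and ln 1 = 0; near x = 1 the slope of ln is about 1:
print(math.log(math.e), math.log(1))
h = 1e-6
print((math.log(1 + h) - math.log(1)) / h)  # close to 1, the slope at x = 1
```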
24.
Exponential function
–
In mathematics, an exponential function is a function of the form f(x) = bˣ, in which the input variable x occurs as an exponent. A function of the form f(x) = b^(x + c), where c is a constant, is also an exponential function. As functions of a real variable, exponential functions are uniquely characterized by the fact that the growth rate of such a function is directly proportional to the value of the function; the constant of proportionality of this relationship is the natural logarithm of the base b. The argument of the exponential function can be any real or complex number, or even an entirely different kind of mathematical object. Its ubiquitous occurrence in pure and applied mathematics has led the mathematician W. Rudin to opine that the exponential function is the most important function in mathematics. In applied settings, exponential functions model a relationship in which a constant change in the independent variable gives the same proportional change in the dependent variable. The graph of y = eˣ is upward-sloping, and increases faster as x increases. The graph always lies above the x-axis but can get arbitrarily close to it for negative x; thus, the x-axis is a horizontal asymptote. The slope of the tangent to the graph at each point is equal to its y-coordinate at that point, as implied by its derivative function. Its inverse function is the natural logarithm, denoted log, ln, or loge. The exponential function exp: C → C can be characterized in a variety of equivalent ways; the constant e is then defined as e = exp(1) = ∑_{k=0}^∞ 1/k!. The exponential function arises whenever a quantity grows or decays at a rate proportional to its current value. One such situation is continuously compounded interest, and in fact it was this observation that led Jacob Bernoulli in 1683 to the number lim_{n→∞} (1 + 1/n)ⁿ, now known as e; later, in 1697, Johann Bernoulli studied the calculus of the exponential function.
If instead interest is compounded daily, this becomes (1 + x/365)³⁶⁵. Letting the number of time intervals per year grow without bound leads to the limit definition of the exponential function, exp(x) = lim_{n→∞} (1 + x/n)ⁿ, first given by Euler. This is one of a number of characterizations of the exponential function; from any of these definitions it can be shown that the exponential function obeys the basic exponentiation identity, exp(x + y) = exp(x) · exp(y), which is why it can be written as eˣ. The derivative of the exponential function is the exponential function itself. More generally, a function with a rate of change proportional to the function itself is expressible in terms of the exponential function; this property leads to exponential growth and exponential decay. The exponential function extends to a function on the complex plane. Euler's formula relates its values at purely imaginary arguments to trigonometric functions. The exponential function also has analogues for which the argument is a matrix, or even an element of a Banach algebra or a Lie algebra.
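The compound-interest limit and the basic exponentiation identity can be observed numerically; the sequence of n values below is arbitrary, chosen only to show convergence.

```python
import math

# The compound-interest expression (1 + x/n)^n approaches exp(x) as n grows:
x = 1.0
for n in (1, 10, 100, 10000, 1000000):
    print(n, (1 + x / n) ** n)
print(math.e)  # 2.718281828459045, the limiting value

# The basic exponentiation identity exp(x + y) = exp(x) * exp(y):
print(math.isclose(math.exp(2 + 3), math.exp(2) * math.exp(3)))  # True
```

Note the slow first-order convergence of the limit; for computation one uses the series or math.exp directly, not this limit.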
25.
Bessel function
–
The most important cases are for α an integer or half-integer. Bessel functions for integer α are also known as cylinder functions or the cylindrical harmonics because they appear in the solution to Laplace's equation in cylindrical coordinates. Spherical Bessel functions with half-integer α are obtained when the Helmholtz equation is solved in spherical coordinates. Bessel's equation arises when finding separable solutions to Laplace's equation and the Helmholtz equation in cylindrical or spherical coordinates. Bessel functions are therefore important for many problems of wave propagation. In solving problems in cylindrical coordinate systems, one obtains Bessel functions of integer order (α = n); in spherical problems, one obtains half-integer orders (α = n + 1/2). Because this is a second-order differential equation, there must be two linearly independent solutions. Depending upon the circumstances, however, various formulations of these solutions are convenient; different variations are summarized in the table below, and described in the following sections. Bessel functions of the second kind and the spherical Bessel functions of the second kind are sometimes denoted by Nn and nn, respectively, rather than Yn and yn. It is possible to define the function by its series expansion around x = 0: J_α(x) = ∑_{m=0}^∞ [(−1)^m / (m! Γ(m + α + 1))] (x/2)^{2m+α}, where Γ is the gamma function, a shifted generalization of the factorial function to non-integer values. The Bessel function of the first kind is an entire function if α is an integer. For non-integer α, the functions J_α and J_−α are linearly independent. On the other hand, for integer order n, the following relationship is valid: J_−n(x) = (−1)^n J_n(x). This means that the two solutions are no longer linearly independent; in this case, the second linearly independent solution is then found to be the Bessel function of the second kind, as discussed below. Another definition of the Bessel function, for integer values of n, is possible using an integral representation.
Another integral representation is J_n(x) = (1/2π) ∫_{−π}^{π} e^{i(x sin τ − nτ)} dτ. This was the approach that Bessel used, and from this definition he derived several properties of the function. The Bessel functions can be expressed in terms of the hypergeometric series as J_α(x) = [(x/2)^α / Γ(α + 1)] 0F1(α + 1; −x²/4). This expression is related to the development of Bessel functions in terms of the Bessel–Clifford function. The Bessel functions can also be expanded in terms of the Laguerre polynomials L_k and an arbitrarily chosen parameter t. The Bessel functions of the second kind, Y_α, are sometimes called Weber functions, as they were introduced by H. M. Weber, and also Neumann functions after Carl Neumann. For non-integer α, Y_α is related to J_α by Y_α(x) = [J_α(x) cos(απ) − J_−α(x)] / sin(απ).
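The excerpt stops short of the standard recurrence relations connecting Bessel functions of neighboring orders; as a standard supplement (not reproduced from this article), they read:

```latex
% Recurrence relations satisfied by the Bessel functions of the first kind
% (the same relations hold for Y_alpha):
J_{\alpha-1}(x) + J_{\alpha+1}(x) = \frac{2\alpha}{x}\, J_\alpha(x),
\qquad
J_{\alpha-1}(x) - J_{\alpha+1}(x) = 2\, J_\alpha'(x).
```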
26.
Radian
–
The radian is the standard unit of angular measure, used in many areas of mathematics. The length of an arc of a unit circle is numerically equal to the measurement in radians of the angle that it subtends. The unit was formerly an SI supplementary unit, but this category was abolished in 1995; separately, the SI unit of solid angle measurement is the steradian. The radian is represented by the symbol rad, so for example, a value of 1.2 radians could be written as 1.2 rad, 1.2 r, 1.2rad, or 1.2c. The radian describes the angle subtended by a circular arc as the length of the arc divided by the radius of the arc. One radian is the angle subtended at the center of a circle by an arc that is equal in length to the radius of the circle. Conversely, the length of the arc is equal to the radius multiplied by the magnitude of the angle in radians. As the ratio of two lengths, the radian is a pure number that needs no unit symbol, and in mathematical writing the symbol rad is almost always omitted. When quantifying an angle in the absence of any symbol, radians are assumed. It follows that the magnitude in radians of one complete revolution is the length of the entire circumference divided by the radius, or 2πr/r, or 2π. Thus 2π radians is equal to 360 degrees, meaning that one radian is equal to 180/π degrees. The concept of radian measure, as opposed to the degree of an angle, is normally credited to Roger Cotes in 1714. He described the radian in everything but name, and he recognized its naturalness as a unit of angular measure; the idea of measuring angles by the length of the arc was already in use by other mathematicians. For example, al-Kashi used so-called diameter parts as units, where one diameter part was 1/60 radian. The term radian first appeared in print on 5 June 1873, in examination questions set by James Thomson at Queen's College, Belfast.
He had used the term as early as 1871, while in 1869, Thomas Muir, then of the University of St Andrews, vacillated between the terms rad, radial, and radian. In 1874, after a consultation with James Thomson, Muir adopted radian. As stated, one radian is equal to 180/π degrees; thus, to convert from radians to degrees, multiply by 180/π. The circumference of a circle is given by 2πr. To convert from radians to gradians, multiply by 200/π, and to convert from gradians to radians, multiply by π/200. In calculus and most other branches of mathematics beyond practical geometry, angles are universally measured in radians. This is because radians have a mathematical naturalness that leads to a more elegant formulation of a number of important results; most notably, results in analysis involving trigonometric functions are simple and elegant when the functions' arguments are expressed in radians. Because of these and other properties, the trigonometric functions appear in solutions to problems that are not obviously related to the functions' geometrical meanings.
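As a worked check of the conversion rules above:

```latex
1\ \text{rad} = \frac{180^\circ}{\pi} \approx 57.2958^\circ,
\qquad
60^\circ = 60 \cdot \frac{\pi}{180} = \frac{\pi}{3}\ \text{rad},
\qquad
100\ \text{grad} = 100 \cdot \frac{\pi}{200} = \frac{\pi}{2}\ \text{rad}.
```

The last conversion confirms that a right angle (100 gradians) is π/2 radians, consistent with a full revolution being 2π radians.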
27.
Logical connective
–
The most common logical connectives are binary connectives, which join two sentences that can be thought of as the function's operands. Also commonly, negation is considered to be a unary connective. Logical connectives, along with quantifiers, are the two main types of logical constants used in formal systems such as propositional logic and predicate logic. The semantics of a logical connective is often, but not always, presented as a truth function. A logical connective is similar to, but not equivalent to, a conditional operator in programming languages. In the grammar of natural languages, two sentences may be joined by a grammatical conjunction to form a compound sentence. Some but not all such grammatical conjunctions are truth functions. For example, consider the following sentences: (A) Jack went up the hill. (B) Jill went up the hill. (C) Jack went up the hill and Jill went up the hill. (D) Jack went up the hill so Jill went up the hill. The words and and so are grammatical conjunctions joining the sentences (A) and (B) to form the compound sentences (C) and (D). The and in (C) is a logical connective, since the truth of (C) is completely determined by (A) and (B); it would make no sense to affirm (A) and (B) but deny (C). Various English words and word pairs express logical connectives, and some of them are synonymous. In formal languages, truth functions are represented by unambiguous symbols. These symbols are called logical connectives, logical operators, propositional operators, or, in classical logic, truth-functional connectives; see well-formed formula for the rules which allow new well-formed formulas to be constructed by joining other well-formed formulas using truth-functional connectives. Logical connectives can be used to join more than two statements, so one can speak about n-ary logical connectives; a statement such as it is raining can serve as an operand of such a connective. For the truth value True, the symbol 1 comes from Boole's interpretation of logic as an elementary algebra over the two-element Boolean algebra.
For the truth value False, the symbol 0 also comes from Boole's interpretation of logic as a ring. Some authors used letters for connectives at some point in history: u. for conjunction and o. for disjunction. Such a logical connective as converse implication ← is actually the same as the material conditional with swapped arguments; thus, in some logical calculi, certain essentially different compound statements are logically equivalent. A less trivial example of a redundancy is the equivalence between ¬P ∨ Q and P → Q. There are sixteen Boolean functions associating the input truth values P and Q with four-digit binary outputs; these correspond to the possible choices of binary logical connectives for classical logic. Different implementations of classical logic can choose different functionally complete subsets of connectives. One approach is to choose a minimal set, and define other connectives by some logical form, as in the example with the material conditional above.
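The redundancy between ¬P ∨ Q and P → Q can be checked exhaustively; the truth table below (a standard verification, not reproduced from this article) shows that the two agree on all four input combinations:

```latex
\begin{array}{cc|cc}
P & Q & \neg P \lor Q & P \to Q \\
\hline
\text{T} & \text{T} & \text{T} & \text{T} \\
\text{T} & \text{F} & \text{F} & \text{F} \\
\text{F} & \text{T} & \text{T} & \text{T} \\
\text{F} & \text{F} & \text{T} & \text{T}
\end{array}
```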
28.
Pi
–
The number π is a mathematical constant, the ratio of a circle's circumference to its diameter, commonly approximated as 3.14159. It has been represented by the Greek letter π since the mid-18th century. Being an irrational number, π cannot be expressed exactly as a fraction. Still, fractions such as 22/7 and other rational numbers are commonly used to approximate π. The digits appear to be randomly distributed; in particular, the digit sequence of π is conjectured to satisfy a specific kind of statistical randomness, but to date no proof of this has been discovered. Also, π is a transcendental number, i.e. a number that is not the root of any non-zero polynomial having rational coefficients. This transcendence of π implies that it is impossible to solve the ancient challenge of squaring the circle with a compass and straightedge. Ancient civilizations required fairly accurate computed values for π for practical reasons. It was calculated to seven digits, using geometrical techniques, in Chinese mathematics. The extensive calculations involved have also been used to test supercomputers. Because its definition relates to the circle, π is found in many formulae in trigonometry and geometry, especially those concerning circles, ellipses, and spheres. Because of its special role as an eigenvalue, π appears in areas of mathematics having little to do with the geometry of circles. It is also found in cosmology, thermodynamics, mechanics, and electromagnetism. Attempts to memorize the value of π with increasing precision have led to records of over 70,000 digits. In English, π is pronounced as pie. In mathematical use, the lowercase letter π is distinguished from its capitalized and enlarged counterpart ∏, which denotes a product of a sequence, analogous to how ∑ denotes summation. The choice of the symbol π is discussed in the section Adoption of the symbol π. π is commonly defined as the ratio of a circle's circumference C to its diameter d: π = C/d. The ratio C/d is constant, regardless of the circle's size.
For example, if a circle has twice the diameter of another circle it will also have twice the circumference, preserving the ratio C/d. This definition of π implicitly makes use of flat (Euclidean) geometry; although the notion of a circle can be extended to any curved (non-Euclidean) geometry, these new circles will no longer satisfy the formula π = C/d. Here, the circumference of a circle is the arc length around the perimeter of the circle, a quantity which can be defined independently of geometry using limits. An integral such as this was adopted as the definition of π by Karl Weierstrass. Definitions of π such as these, which rely on a notion of circumference and hence implicitly on concepts of the integral calculus, are no longer common in the literature. One such definition, due to Richard Baltzer and popularized by Edmund Landau, is the following: π is twice the smallest positive number at which the cosine function equals 0. The cosine can be defined independently of geometry as a power series, or as the solution of a differential equation.
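The arc-length integral alluded to above did not survive extraction; a common form of the Weierstrass-style definition (stated here as a standard supplement) expresses π as half the circumference of the unit circle:

```latex
% Arc length of the upper half of the unit circle x^2 + y^2 = 1:
\pi = \int_{-1}^{1} \frac{dx}{\sqrt{1 - x^2}}
```

Evaluating the integral via arcsin confirms the value: arcsin(1) − arcsin(−1) = π/2 − (−π/2) = π.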
29.
Normal distribution
–
In probability theory, the normal distribution is a very common continuous probability distribution. Normal distributions are important in statistics and are used in the natural and social sciences to represent real-valued random variables whose distributions are not known. The normal distribution is useful because of the central limit theorem. Physical quantities that are expected to be the sum of many independent processes often have distributions that are nearly normal. Moreover, many results and methods can be derived analytically in explicit form when the relevant variables are normally distributed. The normal distribution is sometimes informally called the bell curve; however, many other distributions are bell-shaped. The probability density of the normal distribution is f(x) = (1/√(2πσ²)) e^(−(x−μ)²/(2σ²)), where μ is the mean or expectation of the distribution, σ is the standard deviation, and σ² is the variance. A random variable with a Gaussian distribution is said to be normally distributed and is called a normal deviate. The simplest case of a normal distribution, with μ = 0 and σ = 1, is known as the standard normal distribution. The factor 1/2 in the exponent ensures that the distribution has unit variance; this function is symmetric around x = 0, where it attains its maximum value 1/√(2π), and has inflection points at x = +1 and x = −1. Authors may differ also on which normal distribution should be called the standard one; the probability density must be scaled by 1/σ so that the integral is still 1. If Z is a standard normal deviate, then X = Zσ + μ will have a normal distribution with expected value μ and standard deviation σ. Conversely, if X is a normal deviate, then Z = (X − μ)/σ will have a standard normal distribution. Every normal distribution is the exponential of a quadratic function, f(x) = e^(ax² + bx + c), where a is negative. In this form, the mean value μ is −b/(2a). For the standard normal distribution, a is −1/2, b is zero, and c is −ln(2π)/2. The standard Gaussian distribution is denoted with the Greek letter ϕ.
The alternative form of the Greek phi letter, φ, is also used quite often. The normal distribution is often denoted by N(μ, σ²); thus when a random variable X is distributed normally with mean μ and variance σ², one writes X ∼ N(μ, σ²). Some authors advocate using the precision τ as the parameter defining the width of the distribution, instead of the standard deviation σ or the variance σ².
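The claim that every normal density is the exponential of a quadratic, with mean −b/(2a), can be verified by expanding the exponent and reading off the coefficients:

```latex
% Writing the density as f(x) = e^{ax^2 + bx + c}:
f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}
       \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)
     = \exp\!\Big(
         \underbrace{-\tfrac{1}{2\sigma^2}}_{a}\, x^2
         + \underbrace{\tfrac{\mu}{\sigma^2}}_{b}\, x
         + \underbrace{\big(-\tfrac{\mu^2}{2\sigma^2}
             - \tfrac{1}{2}\ln(2\pi\sigma^2)\big)}_{c}
       \Big),
\qquad
-\frac{b}{2a} = \mu.
```

For μ = 0, σ = 1, this recovers the values stated above: a = −1/2, b = 0, c = −ln(2π)/2.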
30.
Pipeline (Unix)
–
In Unix-like computer operating systems, a pipeline is a sequence of processes chained together by their standard streams, so that the output of each process feeds directly as input to the next one. The concept of pipelines was championed by Douglas McIlroy at Unix's ancestral home of Bell Labs during the development of Unix, and it is named by analogy to a physical pipeline. The standard shell syntax for pipelines is to list multiple commands, separated by vertical bars; each process takes input from the previous process and produces output for the next process via standard streams. Pipes are unidirectional: data flows through the pipeline from left to right. All widely used Unix shells have a special syntax construct for the creation of pipelines. In all of them one writes the commands in sequence, separated by the ASCII vertical bar character |, and the shell starts the processes and arranges for the necessary connections between their standard streams. By default, the standard error streams of the processes in a pipeline are not passed on through the pipe; instead, they are merged and directed to the console. However, many shells have additional syntax for changing this behavior. In the csh shell, for instance, using |& instead of | signifies that the standard error stream should also be merged with the standard output and fed to the next process. The Bourne shell can also merge standard error into a pipe, using 2>&1. In the most commonly used simple pipelines the shell connects a series of sub-processes via pipes, and executes external commands within each sub-process. Thus the shell itself is doing no direct processing of the data flowing through the pipeline. However, it is possible for the shell to perform processing directly, using a so-called mill, or pipemill. A pipemill can misbehave if a command in the loop body itself reads from standard input, consuming data intended for the loop; there are a couple of ways to avoid this behavior. First, some commands support an option to disable reading from stdin. Alternatively, if the command does not need to read any input from stdin to do something useful, it can be given /dev/null as its input. Pipelines can also be created under program control.
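The pipeline and redirection syntax described above can be tried directly; the commands below are ordinary POSIX utilities, and the sample data is arbitrary:

```shell
# Three commands chained with |: each process's stdout feeds the next's stdin.
printf 'banana\napple\ncherry\n' | sort | head -n 1

# Bourne-style merging of standard error into the pipe with 2>&1,
# so the error message from ls is what wc ends up counting:
ls /nonexistent/path 2>&1 | wc -l
```

The first pipeline prints the alphabetically first word; in the second, only the 2>&1 redirection makes the error text visible to wc, since stderr is otherwise not passed through the pipe.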
The Unix pipe system call asks the operating system to construct a new anonymous pipe object. This results in two new, opened file descriptors in the process: the read-only end of the pipe, and the write-only end. The pipe ends appear to be normal, anonymous file descriptors. To avoid deadlock and exploit parallelism, the Unix process with one or more new pipes will then, generally, call fork to create new processes. Each process will then close the end of the pipe that it will not be using before producing or consuming any data. Alternatively, a process might create a new thread and use the pipe to communicate between them. Named pipes may also be created using mkfifo or mknod and then presented as the input or output file to programs as they are invoked. They allow multi-path pipes to be created, and are especially effective when combined with standard error redirection. Note also that output written to a pipe is typically fully buffered: instead of appearing immediately, the output of the program is held in the buffer until it is flushed.
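A named pipe created with mkfifo, as described above, can be demonstrated in a few lines; the /tmp paths here are arbitrary scratch locations, not anything mandated by mkfifo:

```shell
fifo=/tmp/demo_fifo.$$                 # arbitrary path; $$ avoids collisions
mkfifo "$fifo"                         # create the named pipe (FIFO)
sort < "$fifo" > "$fifo.out" &         # background reader blocks until a writer opens
printf 'pear\nfig\n' > "$fifo"         # opening the FIFO for writing connects the two
wait                                   # let the background sort finish
result=$(cat "$fifo.out")
echo "$result"                         # the two words, sorted: fig then pear
rm -f "$fifo" "$fifo.out"              # a FIFO persists in the filesystem until removed
```

Unlike an anonymous pipe, the FIFO is a filesystem object, which is what lets two unrelated commands (here sort and printf) be connected by name rather than by inherited file descriptors.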
31.
Shell script
–
A shell script is a computer program designed to be run by the Unix shell, a command-line interpreter. The various dialects of shell scripts are considered to be scripting languages. Typical operations performed by shell scripts include file manipulation, program execution, and printing text. A script which sets up the environment, runs the program, and does any necessary cleanup or logging is called a wrapper. The typical Unix/Linux/POSIX-compliant installation includes the Korn shell in several possible versions such as ksh88, Korn Shell 93 and others, along with various other shells plus tools like awk, sed and grep. Other shells available on a machine or available for download and/or purchase include ash, msh, ysh, zsh, the Tenex C Shell, a Perl-like shell and others. Related programs such as shells based on Python, Ruby, C, Java, Perl, Pascal and the like are also widely available. Another somewhat common shell is osh, whose manual page states it is an enhanced, backward-compatible port of the standard command interpreter from Sixth Edition Unix. A script can serve as a shortcut for a longer command: for example, a script named l that invokes ls with a preferred set of options lets the user simply type l for the most commonly used short listing. Another example of a script that could be used as a shortcut would be to print a list of all the files and directories within a given directory. In this case, the script would start with its normal starting line of #!/bin/sh. Following this, the script executes the command clear, which clears the terminal of all text, before going to the next line. The following line provides the main function of the script: the ls -al command lists the files and directories that are in the directory from which the script is being run. The ls command options could be changed to reflect the needs of the user. Note: if an implementation does not have the clear command, try using the clr command instead. Scripts can likewise automate repetitive jobs, such as running ./build to create the updated program and then testing it. Since the 1980s or so, however, scripts of this type have been replaced with utilities like make, which are specialized for building programs. A modern shell script is not just on the same footing as system commands; rather, many system commands are actually shell scripts.
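The short listing script described above would look like this in full; this is a sketch, with clear guarded in case the terminal database is unavailable (the article itself suggests clr as a fallback on such systems):

```shell
#!/bin/sh
# Clear the terminal of all text before producing the listing.
# clear may fail in a non-interactive environment, so ignore any error.
clear 2>/dev/null || true
# The main function of the script: list all files and directories,
# including hidden ones, in long format. The options to ls could be
# changed to reflect the needs of the user.
ls -al
```

Saved to a file, marked executable with chmod +x, and placed on the PATH, this runs like any other command.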
Like standard system commands, shell scripts classically omit any kind of filename extension unless intended to be read into a shell through a special mechanism for this purpose. With these sorts of features available, it is possible to write reasonably sophisticated applications as shell scripts. The standard Unix tools sed and awk provide extra capabilities for shell programming; Perl can also be embedded in shell scripts, as can other scripting languages like Tcl. Perl and Tcl come with graphics toolkits as well. The specifics of what separates scripting languages from high-level programming languages is a frequent source of debate, but generally speaking a scripting language is one which requires an interpreter. Files with the .sh file extension are usually a shell script of some kind. Perhaps the biggest advantage of writing a shell script is that the commands and syntax are exactly the same as those directly entered at the command line.
32.
Bash (Unix shell)
–
Bash is a Unix shell and command language written by Brian Fox for the GNU Project as a free software replacement for the Bourne shell. First released in 1989, it has been distributed widely as the default shell for Linux distributions. A version is also available for Windows 10. Bash is a command processor that typically runs in a text window, where the user types commands that cause actions. Bash can also read and execute commands from a file, called a shell script. Like all Unix shells, it supports filename globbing, piping, here documents, command substitution, variables and control structures for condition-testing and iteration. The keywords, syntax and other basic features of the language are all copied from sh. Other features, e.g. history, are copied from csh and ksh. Bash is a POSIX-compliant shell, but with a number of extensions. A security hole in Bash dating from version 1.03, dubbed Shellshock, was discovered in early September 2014. Patches to fix the bugs were made available soon after the bugs were identified, but not all computers have yet been updated. Brian Fox began coding Bash on January 10, 1988, after Richard Stallman became dissatisfied with the lack of progress being made by a prior developer. Fox released Bash as a beta version in 1989. Since then, Bash has become by far the most popular shell among users of Linux, becoming the default shell on that operating system's various distributions. Bash has also been ported to Microsoft Windows and distributed with Cygwin and MinGW, to DOS by the DJGPP project, and to Novell NetWare. In September 2014, Stéphane Chazelas, a Unix/Linux, network and telecom specialist working in the UK, discovered a security bug in the program. The bug, first disclosed on September 24, was named Shellshock and assigned the numbers CVE-2014-6271, CVE-2014-6277 and CVE-2014-7169. The bug was regarded as severe, since CGI scripts using Bash could be vulnerable. The bug was related to how Bash passes function definitions to subshells through environment variables.
The Bash command syntax is a superset of the Bourne shell command syntax. When a user presses the tab key within an interactive command-shell, Bash automatically uses command line completion to match partly typed program names, filenames and variable names. The Bash command-line completion system is flexible and customizable, and is often packaged with functions that complete arguments and filenames for specific programs. Bash's syntax has many extensions lacking in the Bourne shell. Bash can perform integer calculations without spawning external processes; it uses the (( )) command and the $(( )) variable syntax for this purpose. For example, it can also redirect standard output and standard error at the same time using the &> operator. This is simpler to type than the Bourne shell equivalent, command > file 2>&1. Bash supports process substitution using the <(command) and >(command) syntax, which substitutes the output of (or input to) a command where a filename is normally used. When run in POSIX mode, however, Bash conforms with POSIX more closely. Since version 2.05b, Bash can redirect standard input from a here string using the <<< operator.
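The Bash extensions just listed can be exercised in a few lines; this is a sketch requiring Bash itself (not a plain Bourne shell), and the /tmp scratch path is arbitrary:

```shell
#!/bin/bash
# Integer arithmetic with $(( )) -- no external process is spawned:
echo $((6 * 7))

# A here string with <<< feeds the text to tr's standard input:
tr 'a-z' 'A-Z' <<< "bash here string"

# &> redirects stdout and stderr together, shorter than > file 2>&1;
# ls is expected to fail here, so its nonzero status is ignored:
ls /nonexistent/path &> "/tmp/both.$$" || true
wc -l < "/tmp/both.$$"       # count the captured error line(s)
rm -f "/tmp/both.$$"
```

Running the same lines under a strict Bourne shell would fail on <<< and &>, which is exactly the distinction the text draws between Bash and sh.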
33.
Free Software Foundation
–
The FSF was incorporated in Massachusetts, USA, where it is also based. From its founding until the mid-1990s, FSF's funds were used to employ software developers to write free software for the GNU Project. Since the mid-1990s, the FSF's employees and volunteers have worked on legal and structural issues for the free software movement. Consistent with its goals, the FSF aims to use only free software on its own computers. The Free Software Foundation was founded in 1985 as a non-profit corporation supporting free software development. It continued existing GNU projects such as the sale of manuals and tapes, and employed developers of the free software system. Since then, it has continued these activities, as well as advocating for the free software movement. The FSF is also the steward of several free software licenses, meaning it publishes them and has the ability to make revisions as needed. The FSF holds the copyrights on many pieces of the GNU system; as holder of these copyrights, it has the authority to enforce the copyleft requirements of the GNU General Public License (GPL) when copyright infringement occurs on that software. From 1991 until 2001, GPL enforcement was done informally, usually by Stallman himself, often with assistance from FSF's lawyer. Typically, GPL violations during this time were cleared up by short email exchanges between Stallman and the violator. In the interest of promoting copyleft assertiveness by software companies to the level that the FSF was already doing, in 2004 Harald Welte launched gpl-violations.org. In late 2001, Bradley M. Kuhn, with the assistance of Moglen and David Turner, launched the FSF's GPL Compliance Labs. From 2002 to 2004, high-profile GPL enforcement cases, such as those against Linksys and OpenTV, became frequent. GPL enforcement and educational campaigns on GPL compliance were a focus of the FSF's efforts during this period. In March 2003, SCO filed suit against IBM alleging that IBM's contributions to free software, including FSF's GNU, violated SCO's rights.
While FSF was never a party to the lawsuit, FSF was subpoenaed on November 5, 2003. During 2003 and 2004, FSF put substantial advocacy effort into responding to the lawsuit and quelling its negative impact on the adoption and promotion of free software. From 2003 to 2005, FSF held legal seminars to explain the GPL, usually taught by Bradley M. Kuhn and Daniel Ravicher; these seminars offered CLE credit and were the first effort to give formal legal education on the GPL. In 2007, the FSF published the third version of the GNU General Public License (GPLv3) after significant outside input. In December 2008, FSF filed a lawsuit against Cisco for using GPL-licensed components shipped with Linksys products; Cisco had been notified of the licensing issue in 2003 but repeatedly disregarded its obligations under the GPL. The GNU project: the original purpose of the FSF was to promote the ideals of free software, and the organization developed the GNU operating system as an example of this. GNU licenses: the GNU General Public License is a widely used license for free software projects.