1.
Software developer
–
A software developer is a person concerned with facets of the software development process, including the research, design, programming, and testing of computer software. Other job titles used with similar meanings are programmer and software analyst. According to developer Eric Sink, the differences between system design, software development, and programming are more apparent; increasingly, developers become systems architects, those who design the multi-leveled architecture or component interactions of a large software system. In a large company, there may be employees whose sole responsibility consists of only one of the phases above. In smaller development environments, a few people or even a single individual might handle the complete process. The word "software" was coined as a prank as early as 1953. Before this time, computers were programmed either by customers or by the few commercial computer vendors of the time, such as UNIVAC and IBM. The first company founded to provide software products and services was Computer Usage Company, in 1955. The software industry expanded in the early 1960s, almost immediately after computers were first sold in mass-produced quantities, as universities, government, and business customers created a demand for software. Many of these programs were written in-house by full-time staff programmers; some were distributed freely between users of a particular machine for no charge. Others were produced on a contract basis, and firms such as Computer Sciences Corporation started to grow. Computer and hardware makers started bundling operating systems, systems software, and programming environments with their machines. New software was built for microcomputers, and other manufacturers, including IBM, quickly followed DEC's example, resulting in the IBM AS/400 amongst others. The industry expanded greatly with the rise of the personal computer in the mid-1970s, which in the following years created a growing market for games and applications.
DOS, Microsoft's first operating system product, was the dominant operating system at the time. By 2014 the role of cloud developer had been defined, and in this context one definition of a developer in general was published: "Developers make software for the world to use. The job of a developer is to crank out code -- fresh code for new products, code fixes for maintenance, code for business logic."
2.
Software release life cycle
–
Usage of the alpha/beta test terminology originated at IBM. As long ago as the 1950s, IBM used similar terminology for its hardware development: A test was the verification of a new product before public announcement, B test was the verification before releasing the product to be manufactured, and C test was the final test before general availability of the product. Martin Belsky, a manager on some of IBM's earlier software projects, claimed to have invented the terminology. IBM dropped the alpha/beta terminology during the 1960s, but by then it had received fairly wide notice. The usage of "beta test" to refer to testing done by customers was not done at IBM; rather, IBM used the term "field test". Pre-alpha refers to all activities performed during the project before formal testing. These activities can include requirements analysis, software design, and software development. In typical open source development, there are several types of pre-alpha versions. Milestone versions include specific sets of functions and are released as soon as the functionality is complete. The alpha phase of the release life cycle is the first phase to begin software testing. In this phase, developers generally test the software using white-box techniques; additional validation is then performed using black-box or gray-box techniques by another testing team. Moving to black-box testing inside the organization is known as alpha release. Alpha software can be unstable and could cause crashes or data loss, and it may not contain all of the features that are planned for the final version. In general, external availability of alpha software is uncommon for proprietary software, while open source software often has publicly available alpha versions. The alpha phase usually ends with a feature freeze, indicating that no more features will be added to the software.
At this time, the software is said to be feature complete. Beta, named after the second letter of the Greek alphabet, is the software development phase following alpha; software in this stage is also known as betaware. The beta phase generally begins when the software is feature complete but likely to contain a number of known or unknown bugs. Software in the beta phase will generally have many more bugs in it than completed software, as well as speed or performance issues. The focus of beta testing is reducing impacts to users, often incorporating usability testing. The process of delivering a beta version to the users is called beta release, and this is typically the first time that the software is available outside of the organization that developed it. Beta version software is also useful for demonstrations and previews within an organization.
3.
Operating system
–
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. All computer programs, excluding firmware, require an operating system to function. Operating systems are found on many devices that contain a computer, from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows, with a market share of around 83.3%. MacOS by Apple Inc. is in second place, and the varieties of Linux are in third position. Linux distributions are dominant in the server and supercomputing sectors. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can run only one program at a time, while a multi-tasking operating system allows more than one program to run concurrently. Multi-tasking may be characterized as preemptive or co-operative. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs; Unix-like operating systems, e.g. Solaris and Linux, support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking; 32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users but may allow multiple programs to run in tandem. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked and made to communicate with each other gave rise to distributed computing; distributed computations are carried out on more than one machine. When computers in a group work in cooperation, they form a distributed system. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems.
They are designed to operate on small machines like PDAs with less autonomy, and they are able to operate with a limited number of resources. They are very compact and extremely efficient by design; Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could run different programs in succession to speed up processing.
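The cooperative model described above can be sketched with Python generators, where each task hands control back to a simple round-robin scheduler only when it chooses to yield. This is an illustrative toy, not how a real kernel schedules processes; the `task` and `run` names are invented for the example.

```python
from collections import deque

def task(name, steps):
    """A cooperative task: after each unit of work it yields,
    voluntarily handing the CPU back to the scheduler."""
    for _ in range(steps):
        yield name  # report who ran, then give up control

def run(tasks):
    """Round-robin scheduler: resume each task until its next yield,
    then move it to the back of the queue; drop finished tasks."""
    trace, queue = [], deque(tasks)
    while queue:
        t = queue.popleft()
        try:
            trace.append(next(t))  # resume until the task yields
            queue.append(t)        # not finished: reschedule
        except StopIteration:
            pass                   # task finished: drop it
    return trace

print(run([task("A", 2), task("B", 3)]))  # tasks interleave fairly
```

Under this model a task that never yields would starve every other task, which is exactly the weakness that preemptive multitasking removes by letting the operating system interrupt running programs.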
4.
Software license
–
A software license is a legal instrument governing the use or redistribution of software. Under United States copyright law, all software is copyright protected, in source code as well as object code form; the only exception is software in the public domain. Most distributed software can be categorized according to its license type. Two common categories for software under copyright law, and therefore with licenses which grant the licensee specific rights, are proprietary software and free and open-source software. Unlicensed software outside copyright protection is either public domain software or software which is non-distributed, non-licensed and handled as an internal business trade secret. Contrary to popular belief, distributed unlicensed software is copyright protected; examples of this are unauthorized software leaks or software projects which are placed on public software repositories like GitHub without a specified license. As voluntarily handing software into the public domain is problematic in some international law domains, there are also licenses granting PD-like rights. Under 17 U.S.C. §117, the owner of a copy of software is legally entitled to use that copy of software. Hence, if the end-user of software is the owner of the respective copy, many proprietary licenses only enumerate the rights that the user already has under 17 U.S.C. §117, and yet proclaim to take rights away from the user. Proprietary software licenses often proclaim to give software publishers more control over the way their software is used by keeping ownership of each copy of the software with the software publisher. The form of the relationship determines whether it is a lease or a purchase, as in, for example, UMG v. Augusto or Vernor v. Autodesk. The ownership of digital goods, like software applications and video games, is challenged by licensing rather than sale. The Swiss-based company UsedSoft innovated the resale of business software. This feature of proprietary software licenses means that certain rights regarding the software are reserved by the software publisher.
It is therefore typical of EULAs to include terms which define the uses of the software. The most significant effect of this form of licensing is that, if ownership of the software remains with the software publisher, then the end-user must accept the software license; in other words, without acceptance of the license, the end-user may not use the software at all. One example of such a proprietary software license is the license for Microsoft Windows. The most common licensing models are per single user or per user in the appropriate volume discount level. Licensing per concurrent/floating user also occurs, where all users in a network have access to the program but only a specific number may use it at the same time. Another license model is licensing per dongle, which allows the owner of the dongle to use the program on any computer. Licensing per server, CPU or points, regardless of the number of users, is common practice, as are site or company licenses.
5.
Proof assistant
–
In computer science and mathematical logic, a proof assistant or interactive theorem prover is a software tool to assist with the development of formal proofs by human-machine collaboration. Notable systems include:
- ACL2 – a programming language, a logical theory, and a theorem prover.
- HOL theorem provers – a family of tools ultimately derived from the LCF theorem prover. In these systems the logical core is a library of their programming language; theorems represent new elements of the language and can only be introduced via strategies which guarantee logical correctness, and strategy composition gives users the ability to produce significant proofs with relatively few interactions with the system. Members of the family include HOL4 (the primary descendant, with support for both Moscow ML and Poly/ML), HOL Light (a thriving minimalist fork) and ProofPower (went proprietary, then returned to open source).
- Isabelle – an interactive theorem prover, successor of HOL. The main code-base is BSD-licensed, but the Isabelle distribution bundles many add-on tools with different licenses.
- LEGO
- Matita – a light system based on the Calculus of Inductive Constructions.
- MINLOG – a proof assistant based on first-order minimal logic.
- Mizar – a proof assistant based on first-order logic, in a natural deduction style, and Tarski–Grothendieck set theory.
- PhoX – a proof assistant based on higher-order logic which is eXtensible.
- Prototype Verification System (PVS) – a proof language and system based on higher-order logic.
- TPS and ETPS – interactive theorem provers also based on simply-typed lambda calculus.
- Typelab
- Yarrow
A popular front-end for proof assistants is the Emacs-based Proof General, developed at the University of Edinburgh. Coq includes CoqIDE, which is based on OCaml/Gtk. Isabelle includes Isabelle/jEdit, which is based on jEdit and the Isabelle/Scala infrastructure for document-oriented proof processing.
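As a small taste of what interaction with such a tool looks like, here is a complete machine-checked proof in Lean 4, a prover in the dependent-type-theory tradition; the theorem name is invented for the example.

```lean
-- Commutativity of conjunction, proved by a term:
-- destructure the proof of p ∧ q and swap its components.
theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
  fun ⟨hp, hq⟩ => ⟨hq, hp⟩
```

The proof assistant checks that the term on the right really has the stated type; if it does not, the proof is rejected, which is how logical correctness is guaranteed.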
6.
First-order logic
–
First-order logic – also known as first-order predicate calculus and predicate logic – is a collection of formal systems used in mathematics, philosophy, linguistics, and computer science. First-order logic uses quantified variables over non-logical objects; this distinguishes it from propositional logic, which does not use quantifiers. Sometimes "theory" is understood in a more formal sense: just a set of sentences in first-order logic. In first-order theories, predicates are associated with sets; in interpreted higher-order theories, predicates may be interpreted as sets of sets. There are many deductive systems for first-order logic which are both sound and complete. Although the logical consequence relation is only semidecidable, much progress has been made in automated theorem proving in first-order logic. First-order logic also satisfies several metalogical theorems that make it amenable to analysis in proof theory, such as the Löwenheim–Skolem theorem. First-order logic is the standard for the formalization of mathematics into axioms and is studied in the foundations of mathematics. Peano arithmetic and Zermelo–Fraenkel set theory are axiomatizations of number theory and set theory, respectively. No first-order theory, however, has the strength to uniquely describe a structure with an infinite domain, such as the natural numbers or the real line. Axiom systems that do fully describe these two structures can be obtained in stronger logics such as second-order logic. For a history of first-order logic and how it came to dominate formal logic, see José Ferreirós. While propositional logic deals with simple declarative propositions, first-order logic additionally covers predicates and quantification. A predicate takes an entity or entities in the domain of discourse as input and outputs either True or False. Consider the two sentences "Socrates is a philosopher" and "Plato is a philosopher". In propositional logic, these sentences are viewed as being unrelated and might be denoted, for example, by variables such as p and q.
The predicate "is a philosopher" occurs in both sentences, which have a common structure of "a is a philosopher". The variable a is instantiated as "Socrates" in the first sentence and as "Plato" in the second. While first-order logic allows for the use of predicates, such as "is a philosopher" in this example, propositional logic does not. Relationships between predicates can be stated using logical connectives. Consider, for example, the first-order formula "if a is a philosopher, then a is a scholar". This formula is a conditional statement with "a is a philosopher" as its hypothesis and "a is a scholar" as its conclusion. The truth of this formula depends on which object is denoted by a. Quantifiers can be applied to variables in a formula. The variable a in the previous formula can be universally quantified, for instance, with the first-order sentence "For every a, if a is a philosopher, then a is a scholar". The universal quantifier "for every" in this sentence expresses the idea that the claim "if a is a philosopher, then a is a scholar" holds for all choices of a. The negation of the sentence "For every a, if a is a philosopher, then a is a scholar" is logically equivalent to the sentence "There exists a such that a is a philosopher and a is not a scholar".
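Writing P(a) for "a is a philosopher" and S(a) for "a is a scholar", the quantified sentence and its negation above read, in symbols:

```latex
\forall a\,\bigl(P(a) \rightarrow S(a)\bigr)
\qquad\text{and}\qquad
\neg\,\forall a\,\bigl(P(a) \rightarrow S(a)\bigr)
\;\equiv\;
\exists a\,\bigl(P(a) \wedge \neg S(a)\bigr).
```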
7.
Unification (computing)
–
In logic and computer science, unification is an algorithmic process of solving equations between symbolic expressions. Depending on which expressions are allowed to occur in an equation set, several frameworks of unification are distinguished. If higher-order variables, that is, variables representing functions, are allowed in an expression, the process is called higher-order unification, otherwise first-order unification. A solution of a unification problem is denoted as a substitution, that is, a mapping assigning a symbolic value to each variable of the problem's expressions. A unification algorithm should compute for a given problem a complete and minimal substitution set, that is, a set covering all its solutions without redundancy. Depending on the framework, a complete and minimal substitution set may have at most one, at most finitely many, or possibly infinitely many members; in some frameworks it is generally impossible to decide whether any solution exists. For example, using x, y, z as variables, a syntactic first-order unification problem may have no solution over the set of finite terms yet have a single solution over the set of infinite trees; a syntactic second-order unification problem, in which y is a function variable, may have several distinct solutions. Today, automated reasoning is still the main application area of unification. Syntactic first-order unification is used in logic programming and programming language type system implementation. Semantic unification is used in SMT solvers and term rewriting algorithms. Formally, a unification approach presupposes: an infinite set V of variables (for higher-order unification, it is convenient to choose V disjoint from the set of lambda-term bound variables); a set T of terms such that V ⊆ T (for first-order unification and higher-order unification, T is usually the set of first-order terms and lambda terms, respectively); a mapping vars: T → ℙ(V), assigning to each term t the set vars(t) ⊆ V of free variables occurring in t; and an equivalence relation ≡ on T, indicating which terms are considered equal. For higher-order unification, usually t ≡ u if t and u are alpha equivalent.
Terms are closed under function application: for terms t1, …, tn and every n-ary function symbol f ∈ Fn, the application f(t1, …, tn) is again a term. Such a term may be written as x+1, using infix notation and the more common operator symbol + for convenience. A substitution is a mapping σ: V → T from variables to terms; the notation { x1 ↦ t1, …, xk ↦ tk } refers to a substitution mapping each variable xi to the term ti, for i = 1, …, k, and every other variable to itself. Applying a substitution σ to a term t is written in postfix notation as tσ; the result tσ is called an instance of the term t. A term is more general than another if some substitution applied to the former yields the latter; for example, x ⊕ a is more general than a ⊕ b if ⊕ is commutative. Two terms are variants of each other if each is an instance of the other; if no substitution can transform the latter term into the former one, the latter term is properly more special than the former one.
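A syntactic first-order unification algorithm in the Robinson style can be sketched in a few lines of Python. The encoding here is invented for this illustration: compound terms are tuples ("f", arg1, ...), constants are 0-argument tuples, and variables are bare strings.

```python
def is_var(t):
    return isinstance(t, str)          # variables are plain strings

def subst(t, s):
    """Apply substitution s (a dict: variable -> term) to term t."""
    if is_var(t):
        return subst(s[t], s) if t in s else t
    head, *args = t
    return (head, *[subst(a, s) for a in args])

def occurs(v, t, s):
    """Occurs check: does variable v occur in t under substitution s?"""
    t = subst(t, s)
    if is_var(t):
        return v == t
    return any(occurs(v, a, s) for a in t[1:])

def unify(a, b, s=None):
    """Syntactic first-order unification; returns a most general
    unifier as a dict, or None if the terms do not unify."""
    s = {} if s is None else s
    a, b = subst(a, s), subst(b, s)
    if is_var(a):
        if a == b:
            return s
        if occurs(a, b, s):            # forbids infinite (cyclic) terms
            return None
        return {**s, a: b}
    if is_var(b):
        return unify(b, a, s)
    if a[0] != b[0] or len(a) != len(b):  # function symbols must match
        return None
    for x, y in zip(a[1:], b[1:]):
        s = unify(x, y, s)
        if s is None:
            return None
    return s

# unify cons(x, cons(x, nil)) with cons(2, y):
mgu = unify(("cons", "x", ("cons", "x", ("nil",))),
            ("cons", ("2",), "y"))
print(mgu)  # maps x to 2 and y to cons(2, nil)
```

The occurs check is what makes the problem with a solution only over infinite trees fail over finite terms: unifying x with f(x) returns None here.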
8.
Term rewriting
–
In mathematics, computer science, and logic, rewriting covers a wide range of methods of replacing subterms of a formula with other terms. What are considered are rewriting systems: in their most basic form, they consist of a set of objects, plus relations on how to transform those objects. One rule to rewrite a term could be applied in different ways to that term; rewriting systems thus do not provide an algorithm for changing one term to another. When combined with an appropriate algorithm, however, rewrite systems can be viewed as computer programs, and several declarative programming languages are based on term rewriting. In logic, the procedure for obtaining the conjunctive normal form of a formula can be conveniently written as a rewriting system; in such a system, we can perform a rewrite from left to right only when the logical interpretation of the left-hand side is equivalent to that of the right. In linguistics, rewrite rules, also called phrase structure rules, are used in systems of generative grammar. From the above examples, it is clear that we can think of rewriting systems in an abstract manner: we need to specify a set of objects and the rules that can be applied to transform them. The most general setting of this notion is called an abstract reduction system (ARS). Suppose the set of objects is T = {a, b, c} and the relation is given by the rules a → b, b → a, and a → c. Observe that these rules can be applied to both a and b in any fashion to get the term c; such a property is clearly an important one. Note also that c is, in a sense, a "simplest" term in the system. This example leads us to define some important notions in the general setting of an ARS. First we need some basic notions and notation. →∗ is the transitive closure of → ∪ =, where = is the identity relation; it is also known as the reflexive transitive closure of →. An object x in A is called reducible if there exists some other y in A such that x → y; otherwise it is called irreducible. An object y is called a normal form of x if x →∗ y and y is irreducible. If x has a normal form, then this is usually denoted with x↓.
In the example above, c is a normal form, and c = a↓ = b↓.
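The three-rule system above is small enough to explore exhaustively. The following Python sketch encodes the rules as a dictionary and searches for normal forms; the helper names are invented for this illustration.

```python
# Abstract reduction system from the example:
# objects {a, b, c}, rules a -> b, b -> a, a -> c.
rules = {"a": ["b", "c"], "b": ["a"]}

def is_reducible(x):
    """An object is reducible if some rule applies to it."""
    return x in rules

def normal_forms(x, seen=None):
    """All normal forms reachable from x, i.e. irreducible
    objects y with x ->* y (cycles are cut via `seen`)."""
    seen = set() if seen is None else seen
    if x in seen:
        return set()
    seen.add(x)
    if not is_reducible(x):
        return {x}
    out = set()
    for y in rules[x]:
        out |= normal_forms(y, seen)
    return out

print(normal_forms("a"), normal_forms("b"))  # both reach only c
```

Despite the cycle between a and b, the only normal form reachable from either object is c, matching the claim c = a↓ = b↓.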
9.
Method of analytic tableaux
–
In proof theory, the semantic tableau, also called a truth tree, is a decision procedure for sentential and related logics, and a proof procedure for formulas of first-order logic. The tableau method can determine the satisfiability of finite sets of formulas of various logics. It is the most popular proof procedure for modal logics. The method of semantic tableaux was invented by the Dutch logician Evert Willem Beth and simplified, for classical logic, by Raymond Smullyan. It is Smullyan's simplification, "one-sided" tableaux, that is described below. Smullyan's method has been generalized to arbitrary many-valued propositional and first-order logics by Walter Carnielli. Tableaux can be seen as sequent systems upside-down; this symmetrical relation between tableaux and sequent systems was formally established. An analytic tableau has, for each node, a subformula of the formula at the origin; in other words, it is a tableau satisfying the subformula property. For refutation tableaux, the objective is to show that the negation of a formula cannot be satisfied. There are rules for handling each of the connectives, starting with the main connective. In many cases, applying these rules causes the subtableau to divide into two. If any branch of a tableau leads to an evident contradiction, the branch closes. If all branches close, the proof is complete and the original formula is a logical truth. More specifically, a tableau calculus consists of a collection of rules, with each rule specifying how to break down one logical connective into its constituent parts, until only sets of literals remain. Such a set is easily recognizable as satisfiable or unsatisfiable with respect to the semantics of the logic in question. To keep track of this process, the nodes of a tableau are set out in the form of a tree, and a systematic method for searching this tree gives rise to an algorithm for performing deduction and automated reasoning.
Note that this tree is present regardless of whether the nodes contain sets, multisets, lists or trees. This section presents the tableau calculus for classical propositional logic. A tableau checks whether a given set of formulae is satisfiable or not; it can be used to check either validity or entailment, since a formula is valid if its negation is unsatisfiable. The main principle of propositional tableaux is to attempt to break complex formulae into smaller ones until complementary pairs of literals are produced or no further expansion is possible. The method works on a tree whose nodes are labeled with formulae. At each step, this tree is modified; in the propositional case, the only allowed changes are additions of a node as a descendant of a leaf. The procedure starts by generating the tree made of a chain of all formulae in the set, to prove unsatisfiability.
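The branch-expansion procedure just described can be sketched for classical propositional logic in Python. The tuple encoding of formulae and the function name are choices made for this illustration, and branches are kept as flat lists (the implicit tree arises from the recursion on "or"):

```python
def satisfiable(branch):
    """Propositional tableau: expand formulas on a branch, splitting
    on disjunctions. Formula encoding (invented for this sketch):
    ('atom', p), ('not', f), ('and', f, g), ('or', f, g)."""
    for i, f in enumerate(branch):
        rest = branch[:i] + branch[i + 1:]
        if f[0] == "and":                  # alpha rule: keep both conjuncts
            return satisfiable(rest + [f[1], f[2]])
        if f[0] == "or":                   # beta rule: the branch splits
            return satisfiable(rest + [f[1]]) or satisfiable(rest + [f[2]])
        if f[0] == "not":
            g = f[1]
            if g[0] == "not":              # double negation elimination
                return satisfiable(rest + [g[1]])
            if g[0] == "and":              # de Morgan: not(A and B)
                return satisfiable(rest + [("or", ("not", g[1]), ("not", g[2]))])
            if g[0] == "or":               # de Morgan: not(A or B)
                return satisfiable(rest + [("and", ("not", g[1]), ("not", g[2]))])
    # only literals remain: the branch closes iff it holds p and not-p
    atoms = {f[1] for f in branch if f[0] == "atom"}
    negs = {f[1][1] for f in branch if f[0] == "not"}
    return not (atoms & negs)

p, q = ("atom", "p"), ("atom", "q")
print(satisfiable([("and", p, ("not", p))]))             # closes: False
print(satisfiable([("and", ("or", p, q), ("not", p))]))  # open branch: True
```

Validity checking works exactly as the text states: a formula is valid when `satisfiable` returns False for its negation, e.g. for not(p or not p).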
10.
Axiom of choice
–
In mathematics, the axiom of choice, or AC, is an axiom of set theory equivalent to the statement that the Cartesian product of a collection of non-empty sets is non-empty. It states that for every indexed family (Si), i ∈ I, of nonempty sets there exists an indexed family (xi), i ∈ I, of elements such that xi ∈ Si for every i ∈ I. The axiom of choice was formulated in 1904 by Ernst Zermelo in order to formalize his proof of the well-ordering theorem. Informally put, the axiom of choice says that given any collection of bins, each containing at least one object, it is possible to select exactly one object from each bin, even if the collection is infinite. One motivation for its use is that a number of generally accepted mathematical results, such as Tychonoff's theorem, require the axiom of choice for their proofs. Contemporary set theorists also study axioms that are not compatible with the axiom of choice. The axiom of choice is avoided in some varieties of constructive mathematics, although there are varieties of constructive mathematics in which the axiom of choice is embraced. A choice function is a function f, defined on a collection X of nonempty sets, such that for every set A in X, f(A) is an element of A. Each choice function on a collection X of nonempty sets is an element of the Cartesian product of the sets in X. The axiom of choice asserts the existence of such elements; it is therefore equivalent to: given any family of nonempty sets, their Cartesian product is nonempty. In this article and other discussions of the Axiom of Choice the following abbreviations are common: ZF – Zermelo–Fraenkel set theory omitting the Axiom of Choice; ZFC – Zermelo–Fraenkel set theory, extended to include the Axiom of Choice. There are many other equivalent statements of the axiom of choice. These are equivalent in the sense that, in the presence of the other basic axioms of set theory, each of them implies the axiom of choice and is implied by it. One variation avoids the use of choice functions by, in effect, replacing each choice function with its range: given any set X of pairwise disjoint non-empty sets, there exists at least one set C that contains exactly one element in common with each of the sets in X. This guarantees for any partition of a set X the existence of a subset C of X containing exactly one element from each part of the partition.
Another equivalent axiom only considers collections X that are essentially powersets of other sets: for any set A, the powerset of A (with the empty set removed) has a choice function. Authors who use this formulation often speak of the choice function on A, but be advised that this is a slightly different notion of choice function. With this alternate notion of choice function, the axiom of choice can be compactly stated as "Every set has a choice function", which is equivalent to "For any set A there is a function f such that for any non-empty subset B of A, f(B) lies in B". The negation of the axiom can thus be expressed as: there is a set A such that for all functions f on the non-empty subsets of A, there is some B such that f(B) does not lie in B. For finite collections, however, the statement is a theorem of Zermelo–Fraenkel set theory without the axiom of choice; it is easily proved by mathematical induction.
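The compact formulation just quoted can be written symbolically as:

```latex
\forall A\;\exists f\colon \mathcal{P}(A)\setminus\{\varnothing\}\to A
\quad\text{such that}\quad
\forall B\in\mathcal{P}(A)\setminus\{\varnothing\},\; f(B)\in B.
```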
11.
Prime number theorem
–
In number theory, the prime number theorem describes the asymptotic distribution of the prime numbers among the positive integers. It formalizes the idea that primes become less common as they become larger by precisely quantifying the rate at which this occurs. The theorem was proved independently by Jacques Hadamard and Charles Jean de la Vallée-Poussin in 1896 using ideas introduced by Bernhard Riemann. The first such distribution found is π(N) ~ N/log(N), where π(N) is the prime-counting function and log(N) is the natural logarithm of N. This means that for large enough N, the probability that a random integer not greater than N is prime is very close to 1/log(N). Consequently, a random integer with at most 2n digits is about half as likely to be prime as a random integer with at most n digits. For example, among the positive integers of at most 1000 digits, about one in 2300 is prime, whereas among positive integers of at most 2000 digits, about one in 4600 is prime. In other words, the average gap between consecutive prime numbers among the first N integers is roughly log(N). Let π(x) be the function that gives the number of primes less than or equal to x. For example, π(10) = 4 because there are four prime numbers less than or equal to 10. Using asymptotic notation this result can be restated as π(x) ∼ x/log x. This notation does not say anything about the limit of the difference of the two functions as x increases without bound; instead, the theorem states that x/log x approximates π(x) in the sense that the relative error of this approximation approaches 0 as x increases without bound. For example, the 2×10^17th prime number is 8512677386048191063, and (2×10^17)·log(2×10^17) rounds to 7967418752291744388, a relative error of about 6.4%. The prime number theorem is equivalent to lim_{x→∞} ϑ(x)/x = lim_{x→∞} ψ(x)/x = 1, where ϑ and ψ are the first and the second Chebyshev functions respectively.
Based on the tables by Anton Felkel and Jurij Vega, Adrien-Marie Legendre conjectured in 1797 or 1798 that π(a) is approximated by the function a/(A log a + B), where A and B are unspecified constants. In the second edition of his book on number theory he made a more precise conjecture. Carl Friedrich Gauss considered the same question at age 15 or 16, in the year 1792 or 1793. In 1838 Peter Gustav Lejeune Dirichlet came up with his own approximating function. In two papers from 1848 and 1850, the Russian mathematician Pafnuty Chebyshev attempted to prove the asymptotic law of distribution of prime numbers; he was able to prove unconditionally that the ratio of π(x) to x/log x is bounded above. An important paper concerning the distribution of prime numbers was Riemann's 1859 memoir On the Number of Primes Less Than a Given Magnitude, the only paper he ever wrote on the subject. In particular, it is in this paper of Riemann's that the idea to apply methods of complex analysis to the study of the real function π(x) originates.
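The shrinking relative error of the x/log x approximation is easy to observe numerically. The following Python sketch uses a simple Sieve of Eratosthenes (adequate for small bounds; the function name is invented here) to tabulate it:

```python
from math import log

def prime_pi(n):
    """pi(n): the number of primes <= n, via the Sieve of Eratosthenes."""
    if n < 2:
        return 0
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            # strike out all multiples of i starting at i*i
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return sum(sieve)

for n in (10**3, 10**4, 10**5, 10**6):
    approx = n / log(n)
    err = (prime_pi(n) - approx) / prime_pi(n)
    print(f"pi({n}) = {prime_pi(n)},  n/log n = {approx:.0f},  rel. error = {err:.1%}")
```

Running this shows the relative error falling steadily (roughly 13.8% at 10^3 down to about 7.8% at 10^6), illustrating, at small scale, the convergence the theorem asserts.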
12.
Information security
–
Information security, sometimes shortened to InfoSec, is the practice of preventing unauthorized access, use, disclosure, disruption, modification, inspection, recording or destruction of information. It is a general term that can be used regardless of the form the data may take. Sometimes referred to as computer security, information technology (IT) security is information security applied to technology. It is worthwhile to note that a computer does not necessarily mean a home desktop; a computer is any device with a processor and some memory. Such devices can range from non-networked standalone devices as simple as calculators to networked mobile computing devices such as smartphones. IT security specialists are almost always found in any major enterprise or establishment due to the nature and value of the data within larger businesses. These issues include, but are not limited to, natural disasters and computer or server malfunction. Since most information is stored on computers in our modern era, information assurance is typically dealt with by IT security specialists. A common method of providing information assurance is to have a backup of the data in case one of the mentioned issues arises. Information security threats come in many different forms. Some of the most common threats today are software attacks, theft of intellectual property, identity theft, theft of equipment or information, and sabotage. Most people have experienced software attacks of some sort: viruses, worms, phishing attacks, and Trojan horses are a few common examples. The theft of intellectual property has also been an extensive issue for many businesses in the IT field. Identity theft is the attempt to act as someone else, usually to obtain that person's personal information or to take advantage of their access to vital information. Theft of equipment or information is becoming more prevalent today due to the fact that most devices today are mobile.
Cell phones are prone to theft, and have become far more desirable as the amount of data they can hold increases. Sabotage usually consists of the destruction of an organization's website in an attempt to cause loss of confidence on the part of its customers. There are many ways to protect yourself from some of these attacks. Most information is now collected, processed and stored on electronic computers. From a business perspective, information security must be balanced against cost; the Gordon-Loeb Model provides a mathematical economic approach for addressing this concern. For the individual, information security has a significant effect on privacy. The field of information security has grown and evolved significantly in recent years. Julius Caesar is credited with the invention of the Caesar cipher c. 50 B.C. Sensitive information was marked up to indicate that it should be protected and transported by trusted persons, guarded and stored in a secure environment or strong box.
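The Caesar cipher mentioned above is simple enough to state in a few lines of Python: each letter is shifted a fixed distance through the alphabet, and decryption is just the opposite shift. This is a historical illustration only; the cipher offers no real security.

```python
def caesar(text, shift):
    """Caesar cipher: shift each letter by `shift` positions,
    wrapping around the alphabet; non-letters pass through."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

msg = caesar("ATTACK AT DAWN", 3)
print(msg)              # encrypted with Caesar's traditional shift of 3
print(caesar(msg, -3))  # decryption is the opposite shift
```

A shift of 3 is the one traditionally attributed to Caesar himself; any of the 25 non-trivial shifts works the same way, which is exactly why the scheme falls to trivial brute force.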
13.
Free software
–
The right to study and modify software entails availability of the software's source code to its users. This right is conditional on the person actually having a copy of the software. Richard Stallman used the existing term free software when he launched the GNU Project—a collaborative effort to create a freedom-respecting operating system—and the Free Software Foundation. The FSF's Free Software Definition states that users of free software are free because they do not need to ask for permission to use the software. Free software thus differs from proprietary software, such as Microsoft Office, Google Docs, Sheets, and Slides, or iWork from Apple, which users cannot study or change, and from freeware, which is a category of freedom-restricting proprietary software that does not require payment for use. For computer programs that are covered by copyright law, software freedom is achieved with a software license. Software that is not covered by copyright law, such as software in the public domain, is free if the source code is also in the public domain. Proprietary software, including freeware, uses restrictive software licences or EULAs; users are thus prevented from changing the software, which leaves them relying on the publisher to provide updates, help, and support. This situation is called vendor lock-in. Users often may not reverse engineer, modify, or redistribute proprietary software. Other legal and technical aspects, such as patents and digital rights management, may restrict users in exercising their rights. Free software may be developed collaboratively by volunteer computer programmers or by corporations as part of a commercial activity. From the 1950s up until the early 1970s, it was normal for computer users to have the software freedoms associated with free software, which was typically public domain software. Software was commonly shared by individuals who used computers and by manufacturers who welcomed the fact that people were making software that made their hardware useful.
Organizations of users and suppliers, for example SHARE, were formed to exchange software. Since software was often written in an interpreted language such as BASIC, it could also be shared and distributed as printed source code in computer magazines and books. In United States v. IBM, filed January 17, 1969, the government charged that bundled software was anti-competitive. While some software might always be free, there would henceforth be a growing amount of software produced primarily for sale. In the 1970s and early 1980s, the industry began using technical measures to prevent computer users from being able to study or adapt the software as they saw fit. In 1980, copyright law was extended to computer programs. Software development for the GNU operating system began in January 1984, and the Free Software Foundation was founded in October 1985.
14.
Square root of 2
–
The square root of 2, or the one-half power of 2, written in mathematics as √2 or 2^(1/2), is the positive algebraic number that, when multiplied by itself, gives the number 2. Technically, it is called the principal square root of 2. Geometrically, the square root of 2 is the length of a diagonal across a square with sides of one unit of length. It was probably the first number known to be irrational. The rational approximation of the square root of two, 665,857/470,832, derived from the fourth step in the Babylonian algorithm starting with a0 = 1, is too large by approximately 1.6×10^−12: its square is 2.0000000000045…. The rational approximation 99/70 is frequently used; despite having a denominator of only 70, it differs from the correct value by less than 1/10,000. The numerical value of the square root of two, truncated to 65 decimal places, is 1.41421356237309504880168872420969807856967187537694807317667973799…. The Babylonian clay tablet YBC 7289 records a sexagesimal approximation that works out to 1.41421296296…. Another ancient approximation, from Indian mathematical texts, is 1 + 1/3 + 1/(3×4) − 1/(3×4×34) = 577/408 = 1.4142156862745098039…. This approximation is the seventh in a sequence of increasingly accurate approximations based on the sequence of Pell numbers; despite having a smaller denominator, it is only slightly less accurate than the Babylonian approximation.
The most common algorithm for this, used as a basis in many computers and calculators, is the Babylonian method of computing square roots, one of many methods of computing square roots. It goes as follows: first, pick a guess a0 > 0; then, using that guess, iterate through the recursive computation a_{n+1} = (a_n + 2/a_n) / 2 = a_n/2 + 1/a_n. The more iterations through the algorithm, the better the approximation of the square root of 2. Each iteration approximately doubles the number of correct digits; starting with a0 = 1, the next approximations are 3/2 = 1.5 and 17/12 ≈ 1.4167. The value of √2 was calculated to 137,438,953,444 decimal places by Yasumasa Kanada's team in 1997; in February 2006 the record for the calculation of √2 was eclipsed with the use of a home computer. Shigeru Kondo calculated 1 trillion decimal places in 2010; for a development of this record, see the table below. Among mathematical constants with computationally challenging decimal expansions, only π has been calculated more precisely; such computations aim to check empirically whether such numbers are normal.
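The recurrence above is short enough to implement directly. A minimal sketch in Python (the function name and argument names are illustrative):

```python
def sqrt2_babylonian(iterations, a0=1.0):
    """Approximate sqrt(2) with the Babylonian recurrence
    a_{n+1} = (a_n + 2/a_n) / 2; each step roughly doubles
    the number of correct digits."""
    a = a0
    for _ in range(iterations):
        a = (a + 2.0 / a) / 2.0
    return a

# From a0 = 1: one step gives 3/2 = 1.5, two give 17/12,
# and five or six steps exhaust double-precision accuracy.
```

The quadratic convergence is visible in practice: each iteration roughly squares the error, which is why only a handful of steps are needed at machine precision.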
15.
Hewlett-Packard
–
The Hewlett-Packard Company, often shortened to HP, was an American multinational information technology company headquartered in Palo Alto, California. The company was founded in a garage in Palo Alto by William "Bill" Redington Hewlett and David "Dave" Packard. HP was the world's leading PC manufacturer from 2007 to Q2 2013, and it specialized in developing and manufacturing computing, data storage, and networking hardware, designing software, and delivering services. HP also had services and consulting businesses around its products and partner products. In November 2009, HP announced the acquisition of 3Com, with the deal closing on April 12, 2010. On April 28, 2010, HP announced the buyout of Palm; on September 2, 2010, HP won its bidding war for 3PAR with a $33-a-share offer, which Dell declined to match. On October 6, 2014, Hewlett-Packard announced plans to split the PC and printers business from its enterprise products; the split closed on November 1, 2015, and resulted in two publicly traded companies, HP Inc. and Hewlett Packard Enterprise. William Redington Hewlett and David Packard graduated with degrees in engineering from Stanford University in 1935. The company originated in a garage in nearby Palo Alto during a fellowship they had with a past professor, Frederick Terman; Terman was considered a mentor to them in forming Hewlett-Packard. In 1939, Packard and Hewlett established Hewlett-Packard in Packard's garage with a capital investment of US$538. Hewlett and Packard tossed a coin to decide whether the company they founded would be called Hewlett-Packard or Packard-Hewlett. HP incorporated on August 18, 1947, and went public on November 6, 1957. Of the many projects they worked on, their very first financially successful product was an audio oscillator.
This allowed them to sell the Model 200A for $54.40 when competitors were selling less stable oscillators for over $200. The Model 200 series of generators continued until at least 1972 as the 200AB, still tube-based but improved in design through the years. During World War II they worked on defense technology, including artillery shell fuses. Hewlett-Packard's HP Associates division, established around 1960, developed semiconductor devices primarily for internal use; instruments and calculators were some of the products using these devices. HP partnered in the 1960s with Sony and the Yokogawa Electric companies in Japan to develop several high-quality products, but the products were not a huge success, as there were high costs in building HP-looking products in Japan. HP and Yokogawa formed a joint venture in 1963 to market HP products in Japan; HP bought Yokogawa Electric's share of Hewlett-Packard Japan in 1999. HP spun off a company, Dynac, to specialize in digital equipment. The name was picked so that the HP logo "hp" could be turned upside down to be a reverse image of the new company's logo "dy".
16.
HP 9000
–
HP 9000 is a line of workstation and server computer systems produced by the Hewlett-Packard Company. The native operating system for almost all HP 9000 systems is HP-UX. The HP 9000 brand was introduced in 1984 to encompass several existing technical workstation models previously launched in the early 1980s. The HP 9000 line was discontinued in 2008, being superseded by the HP Integrity platform running HP-UX. The first HP 9000 models comprised the HP 9000 Series 200 and Series 500 ranges. These were followed by the HP 9000 Series 300 and Series 400 workstations, which also used 68k-series microprocessors. From the mid-1980s onwards, HP started to switch over to its own processors based on its proprietary PA-RISC ISA for the Series 600, 700, and 800. More recent models use either the PA-RISC or its successor, the HP/Intel IA-64 ISA. HP released the Series 400, also known as the Apollo 400, after acquiring Apollo Computer in 1989. These models had the ability to run either HP-UX or Apollo's Domain/OS. From the early 1990s onwards, HP replaced the HP 9000 Series numbers with an alphabetical Class nomenclature. In 2001, HP again changed the naming scheme for their HP 9000 servers. The A-class systems were renamed as the rp2400s and the L-class became the rp5400s; the rp prefix signified a PA-RISC architecture, while rx was used for IA-64-based systems, later rebranded HP Integrity. On 30 April 2008, HP announced end of sales for the HP 9000. The last order date for HP 9000 systems was 31 December 2008 and the last ship date was 1 April 2009. The last order date for new HP 9000 options was 31 December 2009; HP intended to support these systems through to 2013, with possible extensions. The end of life for the HP 9000 also marks the end of an era. The first model was the HP 9826A, followed by the HP 9836A; later, a further version of the 9836 was introduced. There was also a rack-mount version, the HP 9920A. These were all based on the Motorola 68000 chip.
There were S versions of the models that included bundled memory. When HP-UX was included as an OS, there was a U version of the 9836 and 9920 that used the 68012 processor; the model numbers included the letter U. Later versions of the Series 200s included the 9816, 9817, and 9837. These systems were renamed as the HP Series 200 line, before being renamed again as part of the HP 9000 family. There was also a version of the Series 200 called the Integral.
17.
NICTA
–
NICTA was Australia's Information and Communications Technology Research Centre of Excellence. The term Centre of Excellence is common marketing terminology used by some Australian government organisations for the titles of science research groups. NICTA's role was to pursue potentially economically significant ICT-related research for the Australian economy. NICTA was structured around groups focused primarily on research and the implementation of those ideas within business teams. NICTA merged with CSIRO in July 2016. The creation of the centre was intended to address a previously identified weakness in long-term strategic ICT research in Australia. NICTA was officially opened on 27 February 2003. The founding members of NICTA were the University of New South Wales, the Australian National University, the NSW Government, and the ACT Government. NICTA later acquired other university and government partners. In January 2003, The University of Sydney became a partner. In July 2004, the Victorian Government and The University of Melbourne became partners; in January 2005, the Queensland Government, the University of Queensland, Griffith University, and the Queensland University of Technology became partners. The University of Melbourne and the Victorian Government became members in May 2011. Australian Federal Government funding of NICTA was due to expire in June 2016, and there was a concerted effort to secure a merger with CSIRO. Ostensibly this merger would be with the CSIRO Digital Productivity Flagship. Duane Zitzner left NICTA at the end of May 2015 in the hope that the merger with CSIRO would be completed by the end of June 2015. As of August 2015 the merger had not been finalized, and Professor Robert Williamson was serving in an acting capacity. NICTA formally merged with CSIRO to form a new entity called Data61 on 28 August 2015.
Mr Adrian Turner was appointed to head the merged unit; he reports to a Deputy Chief Executive of CSIRO and has the positional equivalence of a CSIRO Flagship Director. NICTA is primarily funded by government and engages in additional industry partnerships to augment its base funding; these may include start-up corporations or other specific Australian government funds. In addition, NICTA collaborates with many Australian universities and research organisations. NICTA is funded by the Australian Government as represented by the Department of Communications and the Australian Research Council. NICTA also receives funding from industry and several governments, and is supported by its member and partner universities.
18.
L4 microkernel family
–
L4 is a family of second-generation microkernels, generally used to implement Unix-like operating systems, but also used in a variety of other systems. L4, like its predecessor L3, was created by German computer scientist Jochen Liedtke as a response to the poor performance of earlier microkernel-based operating systems. Liedtke felt that a system designed from the start for high performance, rather than other goals, could avoid those problems; his original implementation in hand-coded Intel i386-specific assembly language in 1993 sparked intense interest in the computer industry. Since its introduction, L4 has been developed for platform independence and for improved security and isolation. There have been various re-implementations of the original binary L4 kernel interface and its successors, including L4Ka::Pistachio, L4/MIPS, and Fiasco. For this reason, the name L4 has been generalized and no longer refers only to Liedtke's original implementation; it now applies to the whole family, including the L4 kernel interface. One variant, OKL4 from Open Kernel Labs, shipped in billions of mobile devices. In this spirit, the L4 microkernel provides only a few basic mechanisms: address spaces, threads and scheduling, and inter-process communication. An operating system based on a microkernel like L4 provides as user-space servers the services that monolithic kernels like Linux or older-generation microkernels include internally; for example, implementing a secure Unix-like system requires such services to be provided by user-space servers. The poor performance of first-generation microkernels, such as Mach, led a number of developers to re-examine the entire microkernel concept in the mid-1990s. The asynchronous in-kernel-buffering inter-process communication concept used in Mach turned out to be one of the main reasons for its poor performance.
This induced developers of Mach-based operating systems to move some time-critical components, like file systems or drivers, back inside the kernel. While this somewhat ameliorated the performance issues, it plainly violates the minimality concept of a true microkernel. This analysis gave rise to the principle that an efficient microkernel should be small enough that the majority of performance-critical code fits into the cache. Jochen Liedtke set out to prove that a well-designed, thinner IPC layer, built with careful attention to performance, could deliver good performance in practice. Instead of Mach's complex IPC system, his L3 microkernel simply passed the message without any additional overhead; defining and implementing the required security policies were considered to be duties of the user-space servers. The role of the kernel was only to provide the mechanism to enable the user-level servers to enforce the policies. L3, developed in 1988, proved itself a safe and robust operating system. After some experience using L3, Liedtke came to the conclusion that several other Mach concepts were also misplaced. By simplifying the microkernel concepts even further, he developed the first L4 kernel, which was designed with high performance in mind. In order to wring out every bit of performance, the entire kernel was written in assembly language. At IBM's Thomas J. Watson Research Center, Liedtke and his colleagues continued research on L4 and microkernel-based systems in general, especially the Sawmill OS.
19.
Microkernel
–
In computer science, a microkernel is the near-minimum amount of software that can provide the mechanisms needed to implement an operating system. These mechanisms include low-level address space management, thread management, and inter-process communication. If the hardware provides multiple rings or CPU modes, the microkernel may be the only software executing at the most privileged level, which is generally referred to as supervisor or kernel mode. Traditional operating system functions, such as device drivers, protocol stacks, and file systems, are typically moved out of the microkernel itself and run in user space instead. In terms of code size, as a general rule microkernels tend to be smaller than monolithic kernels; the MINIX 3 microkernel, for example, has approximately 12,000 lines of code. In 1967, Regnecentralen was installing an RC4000 prototype in a Polish fertilizer plant in Puławy. The computer used a small operating system tailored for the needs of the plant. Brinch Hansen and his team became concerned with the lack of generality and reusability of the RC4000 system; they feared that each installation would require a different operating system, so they started to investigate novel and more general ways of creating software for the RC4000. In 1969, their effort resulted in the completion of the RC4000 Multiprogramming System. Its nucleus provided inter-process communication based on message-passing for up to 23 unprivileged processes, out of which 8 at a time were protected from one another. Besides these elementary mechanisms, it had no built-in strategy for program execution; this strategy was to be implemented by a hierarchy of running programs in which parent processes had complete control over child processes. Microkernels were first developed in the 1980s as a response to changes in the computer world, and to several challenges adapting existing mono-kernels to these new systems. New device drivers, protocol stacks, file systems and other systems were being developed all the time. This code was located in the monolithic kernel, and thus required considerable work.
Moving these services into user space would not only allow them to be more easily worked on, but would also allow entirely new operating systems to be built up on a common core. Microkernels were a very hot topic in the 1980s, when the first usable local area networks were being introduced; the same mechanisms that allowed the kernel to be distributed into user space also allowed the system to be distributed across network links. The first microkernels, notably Mach, proved to have disappointing performance; during this time the speed of computers grew greatly in relation to networking systems, and the disadvantages in performance came to overwhelm the advantages in development terms. As of 2012, the Mach-based GNU Hurd was functional and included in testing versions of Arch Linux; although major work on microkernels had largely ended, experimenters continued development. Microkernels are closely related to exokernels. Early operating system kernels were rather small, partly because computer memory was limited.
20.
Coq
–
Coenzyme Q10, also known as ubiquinone, ubidecarenone, or coenzyme Q, and abbreviated at times to CoQ10 /ˌkoʊ ˌkjuː ˈtɛn/, CoQ, or Q10, is a coenzyme that is ubiquitous in the bodies of most animals. It is a 1,4-benzoquinone, where Q refers to the quinone chemical group and 10 refers to the number of isoprenyl chemical subunits in its tail. This fat-soluble substance, which resembles a vitamin, is present in most eukaryotic cells; it is a component of the electron transport chain and participates in aerobic cellular respiration, which generates energy in the form of ATP. Ninety-five percent of the body's energy is generated this way; therefore, the organs with the highest energy requirements—such as the heart and liver—have the highest CoQ10 concentrations. There are three redox states of CoQ10: fully oxidized, semiquinone, and fully reduced. Two factors can lead to deficiency of CoQ10 in humans: reduced biosynthesis, and increased use by the body. Biosynthesis is the main source of CoQ10; it requires at least 12 genes, and mutations in many of them cause CoQ deficiency. CoQ10 levels also may be affected by other genetic defects. The role of statins in deficiencies is controversial. Some chronic disease conditions also are thought to reduce the biosynthesis of, and increase the demand for, CoQ10 in the body, but there are no definite data to support these claims. Usually, toxicity is not observed with high doses of CoQ10; a daily dosage up to 3,600 mg was found to be tolerated by healthy as well as unhealthy persons. Some adverse effects, however, largely gastrointestinal, are reported with high intakes. The observed safe level risk assessment method indicated that the evidence of safety is strong at intakes up to 1,200 mg/day. Although CoQ10 may be measured in blood plasma, these measurements reflect dietary intake rather than tissue status.
Currently, most clinical centers measure CoQ10 levels in cultured skin fibroblasts and muscle biopsies. Cultured fibroblasts can also be used to evaluate the rate of endogenous CoQ10 biosynthesis, by measuring the uptake of 14C-labelled p-hydroxybenzoate. CoQ10 shares a biosynthetic pathway with cholesterol; the synthesis of an intermediary precursor of CoQ10, mevalonate, is inhibited by some beta blockers, blood pressure-lowering medication, and statins, a class of cholesterol-lowering drugs. Statins can reduce levels of CoQ10 by up to 40%. CoQ10 is not approved by the U.S. Food and Drug Administration, and it is sold as a dietary supplement. In the U.S., supplements are not regulated as drugs; how CoQ10 is manufactured is not regulated, and different batches and brands may vary significantly. A 2004 laboratory analysis by ConsumerLab.com of CoQ10 supplements on the market found that some did not contain the quantity identified on the product label.
21.
Metamath
–
While the large database of proved theorems follows conventional ZFC set theory, the Metamath language is a metalanguage, suitable for developing a wide variety of formal systems. The set of symbols that can be used for constructing formulas is declared using $c and $v statements; axioms and rules of inference are specified with $a statements, together with ${ and $} for block scoping. The metamath program can convert statements to more conventional TeX notation. Note the inclusion of the proof in the $p statement. Metamath's fundamental operation is substitution: the replacement of a variable with an expression. Even though Metamath is used for mathematical proof checking, its algorithm is so general that its field of usage can be extended. In fact, Metamath could be used with every sort of formal system based on formulas and inference rules; in contrast, it is largely incompatible with logical systems which use other things than formulas and inference rules. The original natural deduction system, which uses a stack, is an example of a system that cannot be implemented directly with Metamath; in the case of natural deduction, however, it is possible to append the stack to the formulas so that Metamath's requirements are met. What makes Metamath so generic is its substitution algorithm: it makes no assumption about the logic in use and only checks that the substitutions of variables are correctly done. Here is an example of how this algorithm works. Steps 1 and 2 of the theorem 2p2e4 in set.mm are depicted at left. Let's explain how Metamath uses its substitution algorithm to check that step 2 is the logical consequence of step 1 when one uses the theorem opreq2i. Step 2 is the conclusion of the theorem opreq2i. The theorem opreq2i states that if A = B, then ( C F A ) = ( C F B ). This theorem would never appear in this form in a textbook, but its literate formulation is banal. To check the proof, Metamath attempts to unify ( C F A ) = ( C F B ) with step 2. There is only one way to do so: unifying C with 2, F with +, A with 2, and B with ( 1 + 1 ).
So now Metamath uses the premise of opreq2i, which states that A = B. As a consequence of its previous computation, Metamath knows that A should be substituted by 2 and B by ( 1 + 1 ); the premise A = B becomes 2 = ( 1 + 1 ), and thus step 1 is generated. In its turn, step 1 is unified with df-2, the definition of the number 2, which states that 2 = ( 1 + 1 ). Here the unification is simply a matter of constants and is straightforward, so the verification is finished, and these two steps of the proof of 2p2e4 are correct.
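Metamath's substitution check can be illustrated with a toy unifier. The sketch below is a deliberate simplification, not Metamath's actual algorithm: real Metamath substitutes whole expressions for variables, whereas here single uppercase letters act as variables and every other token must match literally.

```python
def unify(pattern, expression, bindings=None):
    """Match `pattern` against `expression` token by token.
    Single uppercase letters are variables and must be bound
    consistently; all other tokens are constants that must
    match exactly. Returns the variable bindings, or None."""
    if bindings is None:
        bindings = {}
    if len(pattern) != len(expression):
        return None
    for p, e in zip(pattern, expression):
        if len(p) == 1 and p.isalpha() and p.isupper():
            if bindings.setdefault(p, e) != e:
                return None  # variable already bound to something else
        elif p != e:
            return None      # constant mismatch
    return bindings

# A simplified version of the opreq2i check (using the single
# token 4 where set.mm would have a longer subexpression):
conclusion = "( C F A ) = ( C F B )".split()
step2 = "( 2 + 2 ) = ( 2 + 4 )".split()
# unify(conclusion, step2) → {'C': '2', 'F': '+', 'A': '2', 'B': '4'}
```

An inconsistent match, such as unifying A = A against 2 = 3, returns None, which is how a bad substitution is rejected.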
22.
Philip Wadler
–
Philip Lee Wadler is an American computer scientist known for his contributions to programming language design and type theory. In 1984, he created the Orwell programming language. Wadler was involved in adding generic types to Java 5.0. He is also the author of the paper "Theorems for free!", which gave rise to much research on functional language optimization. Wadler received a Bachelor of Science degree in Mathematics from Stanford University in 1977, and he completed his Doctor of Philosophy in Computer Science at Carnegie Mellon University in 1984. His thesis was entitled "Listlessness is Better than Laziness" and was supervised by Nico Habermann. Wadler's research interests are in programming languages. Wadler was a Research Fellow at the Programming Research Group and St Cross College. He was progressively Lecturer, Reader, and Professor at the University of Glasgow from 1987 to 1996. Wadler was a Member of Technical Staff at Bell Labs, Lucent Technologies. Since 2003, he has been Professor of Theoretical Computer Science in the School of Informatics at the University of Edinburgh. Wadler was editor of the Journal of Functional Programming from 1990 to 2004. Wadler is currently working on a new functional language designed for writing web applications, called Links. He has supervised numerous doctoral students to completion. Wadler received the Most Influential POPL Paper Award in 2003 for the 1993 POPL Symposium paper "Imperative Functional Programming", written jointly with Simon Peyton Jones. In 2005, he was elected a Fellow of the Royal Society of Edinburgh; in 2007, he was inducted as an ACM Fellow by the Association for Computing Machinery. Media related to Philip Wadler at Wikimedia Commons.
23.
Lecture Notes in Computer Science
–
Springer Lecture Notes in Computer Science (LNCS) is a series of computer science books published by Springer Science+Business Media since 1973. LNCS reports research results in computer science, especially in the form of proceedings and post-proceedings. In addition, tutorials, state-of-the-art surveys, and hot-topic volumes are increasingly being included. As of 2013, more than 8,000 LNCS volumes have appeared. An online subscription to the series costs nearly 23,000 euros per year. LNCS is among the largest series of computer science conference proceedings, along with those of the ACM and IEEE. As an example, the post-proceedings of the bioinformatics CIBB conferences are edited and published in the associated Springer LNBI series.
24.
International Standard Serial Number
–
An International Standard Serial Number (ISSN) is an eight-digit serial number used to uniquely identify a serial publication. The ISSN is especially helpful in distinguishing between serials with the same title. ISSNs are used in ordering, cataloging, interlibrary loans, and other practices in connection with serial literature. The ISSN system was first drafted as an International Organization for Standardization international standard in 1971; ISO subcommittee TC 46/SC 9 is responsible for maintaining the standard. When a serial with the same content is published in more than one media type, a different ISSN is assigned to each media type. For example, many serials are published both in print and electronic media; the ISSN system refers to these types as print ISSN and electronic ISSN, respectively. The format of the ISSN is an eight-digit code, divided by a hyphen into two four-digit numbers. As an integer number, it can be represented by the first seven digits. The last code digit, which may be 0-9 or an X, is a check digit. Formally, the general form of the ISSN code can be expressed as NNNN-NNNC, where N is a digit character and C is a digit or the character X. The ISSN of the journal Hearing Research, for example, is 0378-5955, where the final 5 is the check digit. For calculations, an upper case X in the check digit position indicates a check digit of 10. To confirm the check digit, calculate the sum of all eight digits of the ISSN, each multiplied by its position in the number counted from the right; the modulus 11 of the sum must be 0. There is an online ISSN checker that can validate an ISSN. ISSN codes are assigned by a network of ISSN National Centres, usually located at national libraries and coordinated by the ISSN International Centre based in Paris. The International Centre is an organization created in 1974 through an agreement between UNESCO and the French government. The International Centre maintains a database of all ISSNs assigned worldwide; at the end of 2016, the ISSN Register contained records for 1,943,572 items.
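The check-digit rule just described is easy to turn into code. A minimal sketch in Python (the function name is illustrative; it accepts the usual hyphenated form and treats X as the value 10):

```python
def issn_is_valid(issn):
    """Validate an ISSN such as '0378-5955': weight each of the
    eight characters by its position counted from the right
    (8 down to 1) and require the sum to be divisible by 11."""
    chars = issn.replace("-", "").upper()
    if len(chars) != 8:
        return False
    total = 0
    for pos, ch in enumerate(chars):
        if ch == "X":
            value = 10          # X stands for a check digit of 10
        elif ch.isdigit():
            value = int(ch)
        else:
            return False
        total += value * (8 - pos)  # weights 8, 7, ..., 1
    return total % 11 == 0

# Hearing Research: issn_is_valid("0378-5955") → True
```

Because the weights 1 through 8 are all nonzero modulo 11, changing any single digit breaks the congruence, so the check digit catches every single-digit transcription error.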
ISSN and ISBN codes are similar in concept; whereas ISBNs are assigned to individual books, an ISBN might be assigned for particular issues of a serial, in addition to the ISSN code for the serial as a whole. An ISSN, unlike the ISBN code, is an identifier associated with a serial title. For this reason a new ISSN is assigned to a serial each time it undergoes a major title change. Separate ISSNs are needed for serials in different media; thus, the print and electronic versions of a serial need separate ISSNs. Also, a CD-ROM version and a web version of a serial require different ISSNs, since two different media are involved. However, the same ISSN can be used for different file formats of the same online serial.