Computer programming is the process of designing and building an executable computer program to accomplish a specific computing task. Programming involves tasks such as analysis, generating algorithms, profiling algorithms' accuracy and resource consumption, and implementing algorithms in a chosen programming language. The source code of a program is written in one or more languages that are intelligible to programmers, rather than in machine code, which is directly executed by the central processing unit. The purpose of programming is to find a sequence of instructions that will automate the performance of a task on a computer in order to solve a given problem. The process of programming thus requires expertise in several different subjects, including knowledge of the application domain, specialized algorithms, and formal logic. Tasks accompanying and related to programming include testing, source code maintenance, implementation of build systems, and management of derived artifacts such as the machine code of computer programs.
These might be considered part of the programming process, but the term software development is usually used for this larger process, with the terms programming, implementation, or coding reserved for the actual writing of code. Software engineering combines engineering techniques with software development practices, and reverse engineering is the opposite process. A hacker is any skilled computer expert who uses their technical knowledge to overcome a problem, though in common usage the word often means a security hacker.

Programmable devices have existed at least as far back as 1206 AD, when the automata of Al-Jazari were programmable, via pegs and cams, to play various rhythms and drum patterns. However, the first computer program is dated to 1843, when mathematician Ada Lovelace published an algorithm to calculate a sequence of Bernoulli numbers, intended to be carried out by Charles Babbage's Analytical Engine. Women would continue to dominate the field of computer programming until the mid-1960s. In the 1880s, Herman Hollerith invented the concept of storing data in machine-readable form.
A control panel added to his 1906 Type I Tabulator allowed it to be programmed for different jobs, and by the late 1940s unit record equipment such as the IBM 602 and IBM 604 was programmed by control panels in a similar way. However, with the concept of the stored-program computer, introduced in 1949, both programs and data were stored and manipulated in the same way in computer memory. Machine code was the language of early programs, written in the instruction set of the particular machine in binary notation. Assembly languages were soon developed that let the programmer specify instructions in a text format, with abbreviations for each operation code and meaningful names for specifying addresses. However, because an assembly language is little more than a different notation for a machine language, any two machines with different instruction sets have different assembly languages. Kathleen Booth created one of the first assembly languages in 1950 for various computers at Birkbeck College. High-level languages let the programmer write programs in terms that are syntactically richer and more abstract, making the code targetable to varying machine instruction sets through compilation.
The first compiler for a programming language was developed by Grace Hopper. When Hopper went to work on UNIVAC in 1949, she brought the idea of using compilers with her. Compilers harness the power of computers to make programming easier by allowing programmers to specify calculations naturally, for example by entering a formula in infix notation. FORTRAN, the first widely used high-level language to have a functional implementation permitting the abstraction of reusable blocks of code, came out in 1957. In 1951, Frances E. Holberton developed the first sort-merge generator, which ran on the UNIVAC I. Another woman working at UNIVAC, Adele Mildred Koss, developed a precursor to report generators. In the USSR, Kateryna Yushchenko developed the Address programming language for the MESM in 1955. The idea for the creation of COBOL started in 1959, when Mary K. Hawes, who worked for Burroughs Corporation, set up a meeting to discuss creating a common business language; she invited six people, including Grace Hopper.
Hopper was involved in developing COBOL as a business language and in creating "self-documenting" programming. Hopper's contribution to COBOL was based on her programming language FLOW-MATIC. In 1961, Jean E. Sammet developed FORMAC; she later published Programming Languages: History and Fundamentals, which went on to be a standard work on programming languages. Programs were still entered using punched cards or paper tape (see computer programming in the punch card era). By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Frances Holberton created a code to allow keyboard inputs while she worked at UNIVAC. Text editors were developed that allowed changes and corrections to be made much more easily than with punched cards. Sister Mary Kenneth Keller worked on developing the programming language BASIC while she was a graduate student at Dartmouth in the 1960s. Smalltalk, one of the first object-oriented programming languages, was developed by seven programmers, including Adele Goldberg, in the 1970s.
In 1985, Radia Perlman developed the Spanning Tree Protocol, which is fundamental to the operation of network bridges.
In computer science, control flow is the order in which individual statements, instructions or function calls of an imperative program are executed or evaluated. The emphasis on explicit control flow distinguishes an imperative programming language from a declarative programming language. Within an imperative programming language, a control flow statement is a statement whose execution results in a choice being made as to which of two or more paths to follow. In non-strict functional languages, functions and language constructs exist to achieve the same result, but they are usually not termed control flow statements. A set of statements is in turn structured as a block, which in addition to grouping defines a lexical scope. Interrupts and signals are low-level mechanisms that can alter the flow of control in a way similar to a subroutine, but occur as a response to some external stimulus or event, rather than the execution of an in-line control flow statement. At the level of machine language or assembly language, control flow instructions work by altering the program counter.
For some central processing units, the only control flow instructions available are conditional or unconditional branch instructions, termed jumps. The kinds of control flow statements supported by different languages vary, but can be categorized by their effect:

- continuation at a different statement (unconditional branch or jump)
- executing a set of statements only if some condition is met (choice)
- executing a set of statements zero or more times, until some condition is met (loop)
- executing a set of distant statements, after which the flow of control returns (subroutine call)
- stopping the program, preventing any further execution (unconditional halt)

A label is an explicit name or number assigned to a fixed position within the source code, which may be referenced by control flow statements appearing elsewhere in the source code. A label marks a position within the source code and has no other effect. Line numbers are an alternative to named labels: whole numbers placed at the start of each line of text in the source code. Languages which use these impose the constraint that line numbers must increase in value in each following line, but may not require that they be consecutive.
In BASIC, for example, a label takes the form of a line number. In other languages, such as C and Ada, a label is an identifier appearing at the start of a line and followed by a colon. The language ALGOL 60 allowed both whole numbers and identifiers as labels, but few if any other ALGOL variants allowed whole numbers. Early Fortran compilers only allowed whole numbers as labels; beginning with Fortran 90, alphanumeric labels have also been allowed.

The goto statement is the most basic form of unconditional transfer of control. Although the keyword may be written in upper or lower case depending on the language, it typically takes the form: goto label. The effect of a goto statement is to cause the next statement to be executed to be the statement appearing at the indicated label. Goto statements have been considered harmful by many computer scientists, notably Dijkstra.

The terminology for subroutines varies. In the 1950s, computer memories were small by current standards, so subroutines were used chiefly to reduce program size: a piece of code was written once and used many times from various other places in a program.
Today, subroutines are more often used to help make a program more structured, e.g. by isolating some algorithm or hiding some data access method. If many programmers are working on one program, subroutines are one kind of modularity that can help divide the work.

In structured programming, the ordered sequencing of successive commands is considered one of the basic control structures, used as a building block for programs alongside iteration and choice. In May 1966, Böhm and Jacopini published an article in Communications of the ACM which showed that any program with gotos could be transformed into a goto-free form involving only choice and loops, at the cost of duplicated code and/or the addition of Boolean variables. The authors also showed that choice can be replaced by loops; but that such minimalism is possible does not mean that it is desirable. What Böhm and Jacopini's article showed was that all programs could be made goto-free. Other research showed that control structures with one entry and one exit were much easier to understand than any other form, because they could be used anywhere as a statement without disrupting the control flow.
In other words, they were composable. Some academics took a purist approach to the Böhm-Jacopini result and argued that instructions like break and return from the middle of loops are bad practice, as they are not needed in the Böhm-Jacopini proof.
Andrew Wilson Appel is the Eugene Higgins Professor of computer science at Princeton University, New Jersey. He is well known for his compiler books, the Modern Compiler Implementation in ML series, as well as Compiling with Continuations. He is a major contributor to the Standard ML of New Jersey compiler, along with David MacQueen, John H. Reppy, Matthias Blume and others, and is one of the authors of Rog-O-Matic. Appel gained an A.B. summa cum laude at Princeton University in 1981 and a Ph.D. at Carnegie Mellon University in 1985. He became an ACM Fellow in 1998. From July 2005 to July 2006, he was a visiting researcher at the Institut national de recherche en informatique et en automatique (INRIA) in France, on sabbatical from Princeton. Appel campaigns on issues at the intersection of computer technology and public policy; he testified in the penalty phase of the Microsoft antitrust case in 2002. He is opposed to the introduction of some computerized voting machines, which he has deemed untrustworthy. In 2007, he received attention when he purchased a number of voting machines for the purpose of investigating their security.
In 1981, Appel developed an improved approach to the n-body problem, running in linearithmic rather than quadratic time. Andrew Appel is the son of mathematician Kenneth Appel, who proved the Four-Color Theorem in 1976.
Douglas W. Jones
Douglas W. Jones is an American computer scientist at the University of Iowa; his research focuses on computer security and electronic voting. Jones received a B.S. in physics from Carnegie Mellon University in 1973, and an M.S. and Ph.D. in computer science from the University of Illinois at Urbana-Champaign in 1976 and 1980, respectively. Jones' involvement with electronic voting research began in 1994, when he was appointed to the Iowa Board of Examiners for Voting Machines and Electronic Voting Systems. He chaired the board from 1999 to 2003, and has testified before the United States Commission on Civil Rights, the United States House Committee on Science and the Federal Election Commission on voting issues. In 2005, he participated as an election observer for the presidential election in Kazakhstan. Jones was the technical advisor for HBO's documentary on electronic voting machine issues, "Hacking Democracy", released in 2006, and was a member of the ACCURATE electronic voting project from 2005 to 2011.
On December 11, 2009, the Election Assistance Commission appointed Jones to the Technical Guidelines Development Committee. Together with Barbara Simons, Jones has published a book on electronic voting entitled Broken Ballots: Will Your Vote Count?. Jones's most cited work centers on the evaluation of priority queue implementations; this work has been credited with helping relaunch the empirical study of algorithm performance. In related work, Jones applied splay trees to data compression and developed algorithms for applying parallel computing to discrete event simulation. Jones's Ph.D. thesis was in the area of capability-based addressing, and he has published occasionally on other aspects of computer architecture, such as his proposal for a one-instruction set computer.
OpenWrt is an open-source project for an embedded operating system based on Linux, used on embedded devices to route network traffic. The main components are Linux, util-linux and BusyBox. All components have been optimized to be small enough to fit into the limited storage and memory available in home routers. OpenWrt can be configured using a web interface, and about 3500 optional software packages are available for installation via the opkg package management system. OpenWrt can run on various types of devices, including CPE routers, residential gateways, pocket computers and laptops; it is also possible to run OpenWrt on personal computers, which are mostly based on the x86 architecture. The OpenWrt project was started in 2004 after Linksys had built the firmware for their WRT54G series of wireless routers with code licensed under the GNU General Public License. Under the terms of that license, Linksys was required to make the source code of its modified version available under the same license, which enabled independent developers to create derivative versions.
Support was originally limited to the WRT54G series, but has since been expanded to include many other routers and devices from many different manufacturers. Using this code as a base and as a reference, developers created a Linux distribution that offers many features not found in consumer-level routers. Some features initially required proprietary software: before the introduction of OpenWrt 8.09, which used Linux 2.6.25 and the b43 kernel module, WLAN for many Broadcom-based routers was only available through the proprietary wl.o module, provided for Linux 2.4.x only. OpenWrt releases were named after cocktails, such as White Russian, Backfire, Attitude Adjustment, Barrier Breaker and Chaos Calmer, and their recipes were included in the message of the day displayed after logging in using the command-line interface. In May 2016, OpenWrt was forked by a group of core OpenWrt contributors due to disagreements on internal processes; the fork was dubbed the Linux Embedded Development Environment (LEDE). The schism was reconciled a year later.
Following the remerger, announced in January 2018, the OpenWrt branding is preserved, with many of the LEDE processes and rules used. The LEDE project name was used for v17.01, with development versions of 18.01 branded as OpenWrt, dropping the original cocktail-based naming scheme. With the Attitude Adjustment release of OpenWrt, hardware devices with 16 MB or less RAM are no longer supported, as they can run out of memory easily.

The Linux Embedded Development Environment (LEDE) project was a fork of the OpenWrt project and shared many of the same goals. It was created in May 2016 by a group of core OpenWrt contributors due to disagreements on OpenWrt internal processes. The schism was nominally reconciled a year later, in May 2017, pending approval of the LEDE developers. The remerger uses many of the LEDE processes and rules; the remerge proposal vote was passed by LEDE developers in June 2017 and formally announced in January 2018. The merging process was completed before the OpenWrt 18.06 release.

OpenWrt features a writeable root file system, enabling users to modify any file and install additional software.
This is in contrast with other firmware based on read-only file systems, which do not allow modifying installed software without rebuilding and flashing a complete firmware image. The writeable root is accomplished by overlaying a read-only compressed SquashFS file system with a writeable JFFS2 file system using overlayfs. Additional software can be installed with the opkg package manager, and the package repository contains about 3500 packages. OpenWrt can be configured through either a command-line interface or a web interface called LuCI. OpenWrt provides a set of scripts called UCI (Unified Configuration Interface) to unify and simplify configuration through the command-line interface. Additional web interfaces, such as Gargoyle, are available. OpenWrt provides regular bug fixes and security updates even for devices that are no longer supported by their manufacturers. Other features include extensible configuration of the hardware drivers, e.g. built-in network switches and their VLAN capabilities, WNICs, DSL modems, FX, available hardware buttons, etc.
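As an illustration of the UCI configuration style, a fragment of /etc/config/network defining a LAN interface might look like the following; the interface name and addresses here are examples, not defaults:

```
config interface 'lan'
        option proto   'static'
        option ipaddr  '192.168.1.1'
        option netmask '255.255.255.0'
```

The same values can also be changed from the command line with the uci tool, e.g. "uci set network.lan.ipaddr=192.168.1.2" followed by "uci commit network" to write the change back to the file.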
Exhaustive possibilities to configure network-related features, such as:

- IPv4 support.
- IPv6 native stack: prefix handling, native IPv6 configuration, IPv6 transitioning technologies, and downstream IPv6 configuration.
- Routing through iproute2, Quagga, BIRD, Babel, etc.
- Mesh networking through B.A.T.M.A.N., OLSR, the IEEE 802.11s capabilities of the WNIC drivers, and other ad hoc mesh routing protocols implemented within Linux.
- Wireless functionality, e.g. making the device act as a wireless repeater, a wireless access point, a wireless bridge, a captive portal, or a combination of these, with e.g. ChilliSpot, WiFiDog Captive Portal, etc.
- Wireless security: packet injection, e.g. Airpwn, lorcon, e.a.
- Stateful firewall, NAT and port forwarding through netfilter; port knocking via knockd and knock.
- TR-069 client.
- IPS via Snort.
- Active queue management through the network scheduler of the Linux kernel, with many available queuing disciplines (CoDel has been backported to kernel 3.3). This encompasses traffic shaping to ensure fair distribution of bandwidth among multiple users and quality of service for simultaneous use.