Computer programming is the process of designing and building an executable computer program to accomplish a specific computing task. Programming involves tasks such as analysis, generating algorithms, profiling algorithms' accuracy and resource consumption, and implementing algorithms in a chosen programming language. The source code of a program is written in one or more languages that are intelligible to programmers, rather than in machine code, which is directly executed by the central processing unit. The purpose of programming is to find a sequence of instructions that will automate the performance of a task on a computer in order to solve a given problem. The process of programming thus requires expertise in several different subjects, including knowledge of the application domain, specialized algorithms, and formal logic. Tasks accompanying and related to programming include testing, source code maintenance, implementation of build systems, and management of derived artifacts such as the machine code of computer programs.
These might be considered part of the programming process, but the term software development is used for this larger process, with the terms programming, implementation, or coding reserved for the actual writing of code. Software engineering combines engineering techniques with software development practices. Reverse engineering is the opposite process. A hacker is any skilled computer expert who uses technical knowledge to overcome a problem, though in common language the term often means a security hacker. Programmable devices have existed at least as far back as 1206 AD, when the automata of Al-Jazari could be programmed, via pegs and cams, to play various rhythms and drum patterns. However, the first computer program is dated to 1843, when mathematician Ada Lovelace published an algorithm to calculate a sequence of Bernoulli numbers, intended to be carried out by Charles Babbage's Analytical Engine. Women would continue to dominate the field of computer programming until the mid-1960s. In the 1880s, Herman Hollerith invented the concept of storing data in machine-readable form.
A control panel added to his 1906 Type I Tabulator allowed it to be programmed for different jobs, and by the late 1940s, unit record equipment such as the IBM 602 and IBM 604 was programmed by control panels in a similar way. However, with the concept of the stored-program computer, introduced in 1949, both programs and data were stored and manipulated in the same way in computer memory. Machine code was the language of early programs, written in the instruction set of the particular machine in binary notation. Assembly languages were soon developed that let the programmer specify instructions in a text format, with abbreviations for each operation code and meaningful names for specifying addresses. However, because an assembly language is little more than a different notation for a machine language, any two machines with different instruction sets have different assembly languages. Kathleen Booth created one of the first assembly languages in 1950 for various computers at Birkbeck College. High-level languages allow the programmer to write programs in terms that are syntactically richer and more capable of abstracting the code, making it targetable to varying machine instruction sets via compilation declarations and heuristics.
The first compiler for a programming language was developed by Grace Hopper. When Hopper went to work on UNIVAC in 1949, she brought the idea of using compilers with her. Compilers harness the power of computers to make programming easier, for example by allowing programmers to specify calculations by entering a formula in infix notation. FORTRAN, the first widely used high-level language to have a functional implementation, which permitted the abstraction of reusable blocks of code, came out in 1957. In 1951, Frances E. Holberton developed the first sort-merge generator, which ran on the UNIVAC I. Another woman working at UNIVAC, Adele Mildred Koss, developed a program that was a precursor to report generators. In the USSR, Kateryna Yushchenko developed the Address programming language for the MESM in 1955. The idea for the creation of COBOL started in 1959, when Mary K. Hawes, who worked for Burroughs Corporation, set up a meeting to discuss creating a common business language; she invited six people, including Grace Hopper.
Hopper was involved in developing COBOL as a business language and in creating "self-documenting" programming. Hopper's contribution to COBOL was based on her programming language FLOW-MATIC. In 1961, Jean E. Sammet developed FORMAC and published Programming Languages: History and Fundamentals, which went on to be a standard work on programming languages. Programs were still entered using punched cards or paper tape; see computer programming in the punch card era. By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Frances Holberton created a code to allow keyboard inputs while she worked at UNIVAC. Text editors were developed that allowed changes and corrections to be made much more easily than with punched cards. Sister Mary Kenneth Keller worked on developing the programming language BASIC as a graduate student at Dartmouth in the 1960s. One of the first object-oriented programming languages, Smalltalk, was developed by seven programmers, including Adele Goldberg, in the 1970s.
In 1985, Radia Perlman developed the Spanning Tree Protocol, which is fundamental to the operation of network bridges.
The space bar, blank, or space key is a key on a typewriter or alphanumeric keyboard, in the form of a horizontal bar in the lowermost row that is wider than other keys. Its main purpose is to conveniently enter a space, e.g. between words during typing. A typical space bar key is large enough that a thumb from either hand can use it, and it is always found on the bottom row of standard keyboard layouts. Originally the "bar" was a metal bar running across the full width of the keyboard that triggered the carriage advance without firing any of the typebars towards the platen. Space bars shrank and developed into their current, more ergonomic form as a wide, centrally located but otherwise normal "key" as typewriter keyboards began to incorporate additional function keys and were more deliberately "styled". Although it varies by keyboard type, the space bar lies between the Alt keys and below the letter keys C, V, B, N, and M on a standard QWERTY keyboard. Some early typewriter and computer keyboards used a different method of inserting spaces: a smaller, less distinct "space" key, often set in a less central position, e.g. on the Hansen Writing Ball, Hammond typewriters, or the Sinclair ZX Spectrum and Jupiter Ace ranges.
The earliest known example, the Sholes and Glidden typewriter, used a lever to provide space between words, placing the invention of the inset spacebar after 1843. However, these methods were usually just one part of idiosyncratic full keyboard layouts, designed more to cope with particular technical requirements or limitations than with any sense of user friendliness, and as such they met with limited success, sometimes being dropped on later models in the same line. Depending on the operating system, the space bar used with a modifier key such as the control key may have functions such as resizing or closing the current window, half-spacing, or backspacing. On web browsers, the space bar allows the user to page down, or to page up when used with the shift key. In many programs for playback of linear media, the space bar is used for pausing and resuming playback, or for manually advancing through text.
In punctuation, a word divider is a glyph that separates written words. In languages which use the Latin and Arabic alphabets, as well as other scripts of Europe and West Asia, the word divider is a blank space, or whitespace, a convention that spread, along with other aspects of European punctuation, to Asia and Africa. However, many languages of East Asia are written without word separation. In character encoding, word segmentation depends on which characters are defined as word dividers. In Ancient Egyptian, determinatives may have been used as much to demarcate word boundaries as to disambiguate the semantics of words. In the cuneiform Ugaritic alphabet, a vertical stroke was used to separate words, and in Old Persian cuneiform a diagonally sloping wedge was used. As the alphabet spread throughout the ancient world, words were often run together without division, and this practice remains, or remained until recently, in much of South and Southeast Asia. However, not infrequently a vertical line was used in inscriptions, and a single, double, or triple interpunct in manuscripts, to divide words.
This practice was found in Phoenician, Hebrew, and Latin, and continues today with Ethiopic, though there whitespace is gaining ground. The early alphabetic writing systems, such as the Phoenician alphabet, had only signs for consonants. Without some form of visible word dividers, parsing a text into its separate words would have been a puzzle. With the introduction of letters representing vowels in the Greek alphabet, the need for inter-word separation lessened. The earliest Greek inscriptions used interpuncts, as was common in the writing systems which preceded it, but soon the practice of scriptio continua, continuous writing in which all words ran together without separation, became common. The interpunct died out in Latin only after the Classic period, sometime around the year 200 CE, as the Greek style of scriptio continua became fashionable. In the 7th century, Irish monks started using blank spaces and introduced their script to France. By the 8th or 9th century, spacing was being used consistently across Europe.
Alphabetic writing without inter-word separation, known as scriptio continua, was used in Ancient Egyptian. It appeared in Post-classical Latin after several centuries of the use of the interpunct. Traditionally, scriptio continua was used for the Indic alphabets of South and Southeast Asia and for the hangul of Korea, but spacing is now used with hangul and with the Indic alphabets. Today Chinese and Japanese are the main scripts written without punctuation to separate words. In Classical Chinese, a word and a character were the same thing, so word dividers would have been superfluous. Although Modern Mandarin has numerous polysyllabic words, each syllable is written with a distinct character; the conceptual link between character and word (or at least morpheme) remains strong, and no need is felt for word separation beyond what characters provide. Space is the most common word divider in Latin script. Some ancient inscribed scripts, such as Anatolian hieroglyphs, used short vertical lines to separate words, as did Linear B.
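The practical consequence of the points above is that whitespace-based word segmentation, trivial for scripts that use a space as word divider, does not apply to scripts written without word separation. A minimal Python sketch (the sample strings are illustrative only):

```python
# Whitespace-based word segmentation: works only when the script
# uses a visible word divider such as the blank space.

def split_words(text: str) -> list[str]:
    """Split on any Unicode whitespace."""
    return text.split()

# Latin-script text separates words with spaces:
latin = "the quick brown fox"
print(split_words(latin))      # ['the', 'quick', 'brown', 'fox']

# Chinese is written without spaces, so the same approach returns the
# whole run as one "word"; real segmentation needs dictionary-based
# or statistical methods instead.
chinese = "我喜欢编程"
print(split_words(chinese))    # ['我喜欢编程']
```

This is why word segmentation for Chinese and Japanese text is a genuine computational problem rather than a simple string split.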
In manuscripts, vertical lines were used more for larger breaks, equivalent to the Latin comma and period, and this continues with many Indic scripts today. As noted above, the single and double interpunct were used in manuscripts throughout the ancient world. For example, Ethiopic inscriptions used a vertical line, whereas manuscripts used double dots resembling a colon; the latter practice continues today. Classical Latin used the interpunct in both paper manuscripts and stone inscriptions. Ancient Greek orthography used between two and five dots as word separators, as well as the hypodiastole. In the modern Hebrew and Arabic alphabets, some letters have distinct forms at the ends and/or beginnings of words; this demarcation is used in addition to spacing. The Nastaʿlīq form of Islamic calligraphy uses vertical arrangement to separate words: the beginning of each word is written higher than the end of the preceding word, so that a line of text takes on a sawtooth appearance. Nastaliq spread from Persia and today is used for Persian, Uyghur, and Urdu.
In finger spelling and in Morse code, words are separated by a pause.
A counterbore is a cylindrical flat-bottomed hole that enlarges another coaxial hole, or the tool used to create that feature. A counterbored hole is used when the head of a fastener, such as a hex head or socket head capscrew, is required to sit flush with or below the level of a workpiece's surface. Whereas a counterbore is a flat-bottomed enlargement of a smaller coaxial hole, a countersink is a conical enlargement of one. A spotface takes the form of a shallow counterbore. As mentioned above, the cutters that produce counterbores are also called counterbores. For a spotface, material is removed from a surface to make it flat and smooth for a fastener or a bearing. Spotfacing is required on workpieces that are forged or cast. A tool referred to as a counterbore is used to cut the spotface, although an endmill may also be used. Only enough material is removed to make the surface flat.
A counterbore is also used to create a perpendicular seating surface for a fastener head on a non-perpendicular surface. If this is not feasible, a self-aligning nut may be required. By comparison, a countersink is used to seat a flathead screw. Standards exist for the sizes of counterbores for fastener head seating areas; these can vary between standards organizations. For example, in Boeing Design Manual BDM-1327 section 3.5, the nominal diameter of the spotfaced surface is the same as the nominal size of the cutter and is equal to the flat seat diameter plus twice the fillet radius. This is in contrast to the ASME Y14.5-2009 definition of a spotface, whose diameter is equal to the flat seat diameter alone. Counterbores are made with standard dimensions for a certain size of screw or are produced in sizes that are not related to any particular screw size. In either case, the tip of the counterbore has a reduced-diameter section referred to as the pilot, a feature essential to ensuring concentricity between the counterbore and the hole being counterbored.
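The difference between the two dimensioning conventions above can be shown numerically. A small Python sketch; the function names and the sample dimensions are hypothetical, chosen only for illustration:

```python
def spotface_diameter_bdm(flat_seat_dia: float, fillet_radius: float) -> float:
    """Boeing BDM-1327 convention: the nominal spotface diameter is the
    flat seat diameter plus twice the fillet radius."""
    return flat_seat_dia + 2 * fillet_radius

def spotface_diameter_asme(flat_seat_dia: float, fillet_radius: float) -> float:
    """ASME Y14.5-2009 convention: the spotface diameter is the flat
    seat diameter itself; the fillet lies outside the stated dimension."""
    return flat_seat_dia

# Hypothetical dimensions, in millimetres:
seat, fillet = 14.0, 0.5
print(spotface_diameter_bdm(seat, fillet))   # 15.0
print(spotface_diameter_asme(seat, fillet))  # 14.0
```

The same physical feature thus carries a different nominal diameter depending on which standard the drawing follows, which is why the governing standard should be stated on the drawing.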
Counterbores matched to specific screw sizes have integral pilots that fit the clearance hole diameter associated with a particular screw size. Counterbores that are not related to a specific screw size are designed to accept a removable pilot, allowing any given counterbore size to be adapted to a variety of hole sizes. The pilot matters little when running the cutter in a milling setup, where rigidity is assured and hole center location is achieved via X-Y positioning. The uppermost counterbore tools shown in the image are the same device: the smaller top item is an insert, and the middle shows another three-fluted counterbore insert assembled in the holder. The shank of this holder is a Morse taper, although other machine tapers are used in the industry. The lower counterbore is designed to fit into a drill chuck and, being smaller, is economical to make as one piece.
The zero-width non-joiner (ZWNJ) is a non-printing character used in the computerization of writing systems that make use of ligatures. When placed between two characters that would otherwise be connected into a ligature, a ZWNJ causes them to be printed in their final and initial forms, respectively. This is also an effect of a space character, but a ZWNJ is used when it is desirable to keep the words closer together or to connect a word with its morpheme. The ZWNJ is encoded in Unicode as U+200C ZERO WIDTH NON-JOINER. In certain languages, the ZWNJ is necessary for unambiguously specifying the correct typographic form of a character sequence; for this purpose, the ASCII control code unit separator was formerly used. The picture shows how the code looks when it is rendered: in every row the correct and incorrect pictures should be different. If the correct display and the incorrect one look the same to you, or if either of them is different from the corresponding picture, your system is not displaying the Unicode character correctly. In the Biblical Hebrew example, the placement of the meteg to the left of the segol is correct; the word also has a shva sign, written as two vertical dots, to denote a short vowel.
If a meteg were placed to the left of the shva instead, it would cause an erroneous reading. In Modern Hebrew, there is no reason to use the meteg for the spoken language, so it is rarely used in Modern Hebrew typesetting. In German typography, ligatures may not cross the constituent boundaries within compounds. Thus, in the first German example, the prefix Auf- is separated from the rest of the word to prohibit the ligature fl. In English, ligatures should not cross morpheme boundaries; for example, in some words 'fly' and 'fish' are morphemes but in others they are not. Persian uses this character extensively for certain prefixes and compound words, and it is necessary for disambiguating compounds from non-compound words. In Indic scripts, insertion of a ZWNJ after a consonant with a halant, or before a dependent vowel, prevents the characters from being joined into a conjunct: in Devanagari, the characters क् and ष combine to form the conjunct क्ष, but when a ZWNJ is inserted between them, the characters are displayed in their separate forms instead. In Kannada, the characters ನ್ and ನ combine to form ನ್ನ, but when a ZWNJ is inserted between them, the characters are displayed separately.
That style is used to write non-Kannada words in Kannada script: "Facebook" is written with a ZWNJ as ಫೇಸ್ಬುಕ್, though it could also be written without one as ಫೇಸ್ಬುಕ್. ರಾಜ್ಕುಮಾರ್ and ರಾಮ್ಗೊಪಾಲ್ are examples of other proper nouns that need a ZWNJ. In Bengali, the characters র and অ্যা combine to form র্য, a typographic ligature of র and য, because অ্যা is not a single character but a sequence of three letters. To fix the problem, a ZWNJ is used. For example, the words র্যাব and র্যান্ডম are fixed by inserting a ZWNJ; without it they are shown as র্যাব and র্যান্ডম. Words like উদ্ঘাটন and ইক্রা also require a ZWNJ to be displayed properly. The symbol to be used on keyboards that enable direct input of the ZWNJ is standardized in Amendment 1 of ISO/IEC 9995-7:2009 "Information technology – Keyboard layouts for text and office systems – Symbols used to represent functions" as symbol number 81, and in IEC 60417 "Graphical Symbols for Use on Equipment" as symbol no. IEC 60417-6177-2.
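At the encoding level, all of the examples above amount to inserting the code point U+200C between two characters. A minimal Python sketch, using the Devanagari क् + ष pair discussed earlier:

```python
ZWNJ = "\u200c"  # ZERO WIDTH NON-JOINER

# In Devanagari, क् (ka + virama/halant) followed by ष normally renders
# as the conjunct क्ष. Inserting a ZWNJ between them requests the
# non-conjunct rendering instead.
conjunct = "\u0915\u094d\u0937"              # क + ् + ष
no_conjunct = "\u0915\u094d" + ZWNJ + "\u0937"

# The two strings differ at the code-point level even though the ZWNJ
# itself is invisible; the rendering engine uses it to suppress joining.
print(len(conjunct), len(no_conjunct))   # 3 4
print(conjunct == no_conjunct)           # False
```

Note that the visual difference appears only when a shaping engine renders the text; in plain code-point terms the ZWNJ is simply one extra invisible character.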
Newline is a control character, or sequence of control characters, in a character encoding specification used to signify the end of a line of text and the start of a new one. Text editors insert this special character when the user presses the Enter key; when displaying a text file, it causes the editor to show the following characters on a new line. In the mid-1800s, long before the advent of teleprinters and teletype machines, Morse code operators or telegraphists invented and used Morse code prosigns to encode white-space text formatting in formal written text messages. In particular, the Morse prosign represented by the concatenation of two literal textual Morse code "A" characters, sent without the normal inter-character spacing, is used in Morse code to encode and indicate a new line in a formal text message. In the age of modern teleprinters, standardized character set control codes were developed to aid in white-space text formatting. ASCII was developed simultaneously by the International Organization for Standardization and the American Standards Association, the latter being the predecessor organization to the American National Standards Institute.
During the period of 1963 to 1968, the ISO draft standards supported the use of either CR+LF or LF alone as a newline, while the ASA drafts supported only CR+LF. The sequence CR+LF was used on many early computer systems that had adopted Teletype machines—typically a Teletype Model 33 ASR—as a console device, because this sequence was required to position those printers at the start of a new line; the separation of newline into two functions concealed the fact that the print head could not return from the far right to the beginning of the next line in time to print the next character. Any character printed after a CR would print as a smudge in the middle of the page while the print head was still moving the carriage back to the first position. "The solution was to make the newline two characters: CR to move the carriage to column one, LF to move the paper up." In fact, it was necessary to send extra characters—extraneous CRs or NULs—which are ignored but give the print head time to move to the left margin.
Many early video displays also required multiple character times to scroll the display. On such systems, applications had to talk directly to the Teletype machine and follow its conventions, since the concept of device drivers hiding such hardware details from the application was not yet well developed. Therefore, text was composed to satisfy the needs of Teletype machines. Most minicomputer systems from DEC used this convention. CP/M used it as well, in order to print on the same terminals that minicomputers used. From there, MS-DOS adopted CP/M's CR+LF in order to be compatible, and this convention was inherited by Microsoft's Windows operating system. The Multics operating system used LF alone as its newline. Multics used a device driver to translate this character to whatever sequence a printer needed, and the single byte was more convenient for programming. What seems like a more obvious choice, CR, was not used, as CR provided the useful function of overprinting one line with another to create boldface and strikethrough effects.
Moreover, the use of LF alone as a line terminator had already been incorporated into drafts of the eventual ISO/IEC 646 standard. Unix followed the Multics practice, and Unix-like systems followed Unix. The concepts of line feed (LF) and carriage return (CR) are associated and can be considered either separately or together. In the physical media of typewriters and printers, two axes of motion, "down" and "across", are needed to create a new line on the page. Although the design of a machine must consider them separately, the abstract logic of software can combine them together as one event; this is why a newline in a character encoding can be defined as CR and LF combined into one. Some character sets provide a separate newline character code. EBCDIC, for example, provides an NL character code in addition to the CR and LF codes. Unicode, in addition to providing the ASCII CR and LF control codes, provides a "next line" control code, as well as control codes for "line separator" and "paragraph separator" markers. Software applications and operating systems represent a newline with one or two control characters: EBCDIC systems, mainly IBM mainframe systems including z/OS and i5/OS, use NL as the character combining the functions of line feed and carriage return.
The equivalent Unicode character is called NEL. EBCDIC also has control characters called CR and LF, but the numerical value of LF differs from the one used by ASCII. Additionally, some EBCDIC variants use NL but assign a different numeric code to the character. However, these operating systems use a record-based file system, which stores text files as one record per line; in most file formats, no line terminators are stored. Operating systems for the CDC 6000 series defined a newline as two or more zero-valued six-bit characters at the end of a 60-bit word; some configurations also defined a zero-valued character as a colon character, with the result that multiple colons could be interpreted as a newline depending on position. RSX-11 and OpenVMS also use a record-based file system, which stores text files as one record per line. In most file formats, no line terminators are actually stored, but the Record Management Services facility can transparently add a terminator to each line when it is retrieved by an application.
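The platform conventions above are a routine concern when processing text files. A short Python sketch; the `normalize` helper is an illustrative convention table and conversion, not a standard library function:

```python
# Common newline conventions, as discussed above:
NEWLINES = {
    "Unix / Multics":    "\n",    # LF alone
    "CP/M, DOS, Windows": "\r\n", # CR followed by LF
    "classic Mac OS":    "\r",    # CR alone
    "EBCDIC NEL":        "\x85",  # Unicode NEL, U+0085 "next line"
}

def normalize(text: str) -> str:
    """Collapse CR+LF and bare CR to LF, a common normalization step.
    Order matters: CR+LF must be replaced before bare CR."""
    return text.replace("\r\n", "\n").replace("\r", "\n")

sample = "one\r\ntwo\rthree\n"
print(normalize(sample).split("\n"))   # ['one', 'two', 'three', '']

# Python's str.splitlines also recognizes the Unicode "line separator"
# (U+2028) and NEL (U+0085) codes mentioned above as line boundaries.
print("a\u2028b\x85c".splitlines())    # ['a', 'b', 'c']
```

Replacing CR+LF before bare CR is what prevents a Windows line ending from being counted as two newlines.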
Typography is the art and technique of arranging type to make written language legible and appealing when displayed. The arrangement of type involves selecting typefaces, point sizes, line lengths, line-spacing, and letter-spacing, and adjusting the space between pairs of letters. The term typography is also applied to the style and appearance of the letters and symbols created by the process. Type design is a related craft, sometimes considered part of typography. Typography may also be used as a decorative device, unrelated to the communication of information. Typography is the work of typesetters, graphic designers, art directors, manga artists, comic book artists, graffiti artists, and now anyone who arranges words, letters, and symbols for publication, display, or distribution, from clerical workers and newsletter writers to anyone self-publishing materials. Until the Digital Age, typography was a specialized occupation. Digitization opened up typography to new generations of unrelated designers and lay users, and as the capability to create typography has become ubiquitous, the application of principles and best practices developed over generations of skilled workers and professionals has diminished.
So at a time when scientific techniques can support the proven traditions through understanding the limitations of human vision, typography as commonly encountered may fail to achieve its principal objective: effective communication. The word "typography" in English comes from the Greek roots τύπος typos, "form" or "impression", and -γραφία -graphia, "writing". Although applied to printed, published, and reproduced materials in contemporary times, the term may arguably be extended to all words, letters, and numbers written alongside the earliest naturalistic drawings by humans. The concept traces its origins to the first punches and dies used to make seals and currency in ancient times, which ties it to printing. The uneven spacing of the impressions on brick stamps found in the Mesopotamian cities of Uruk and Larsa, dating from the second millennium B.C., may be evidence of type, wherein the reuse of identical characters was applied to create cuneiform text.
Babylonian cylinder seals were used to create an impression on a surface by rolling the seal on wet clay. Typography was also implemented in the Phaistos Disc, an enigmatic Minoan printed item from Crete, which dates to between 1850 and 1600 B.C. It has been proposed that Roman lead pipe inscriptions were created with movable type printing, but German typographer Herbert Brekle dismissed this view. The essential criterion of type identity was met by medieval print artifacts such as the Latin Pruefening Abbey inscription of 1119, created by the same technique as the Phaistos Disc. The silver altarpiece of patriarch Pellegrinus II in the cathedral of Cividale was printed with individual letter punches. The same printing technique may be found in tenth- to twelfth-century Byzantine reliquaries. Other early examples include individual letter tiles, where the words are formed by assembling single letter tiles in the desired order, which were reasonably widespread in medieval Northern Europe. Typography with movable type was invented during the eleventh-century Song dynasty in China by Bi Sheng.
His movable type system was manufactured from ceramic materials, and clay type printing continued to be practiced in China until the Qing Dynasty. Wang Zhen was one of the pioneers of wooden movable type. Although wooden type was more durable under the mechanical rigors of handling, repeated printing wore the character faces down, and the types could be replaced only by carving new pieces. Metal movable type was first invented in Korea during the Goryeo Dynasty, around 1230. Hua Sui introduced bronze type printing to China in 1490 AD. The diffusion of both movable-type systems was, however, limited, and the technology did not spread beyond East and Central Asia. Modern lead-based movable type, along with the mechanical printing press, is most often attributed to the goldsmith Johannes Gutenberg in 1439. His type pieces, made from a lead-based alloy, suited printing purposes so well that the alloy is still used today. Gutenberg developed specialized techniques for casting and combining cheap copies of letter punches in the vast quantities required to print multiple copies of texts.
This technical breakthrough was instrumental in starting the Printing Revolution, and the first book printed with lead-based movable type was the Gutenberg Bible. Advancing technology revolutionized typography in the latter twentieth century. During the 1960s, some camera-ready typesetting could be produced in any office or workshop with stand-alone machines such as those introduced by IBM. During the mid-1980s, personal computers such as the Macintosh allowed type designers to create typefaces digitally using commercial graphic design software. Digital technology enabled designers to create more experimental typefaces as well as the practical typefaces of traditional typography. Designs for typefaces could be created faster with the new technology, and for more specific functions. The cost of developing typefaces was drastically lowered, becoming available to the masses. The change has been called the "democratization of type" and has given new designers more opportunities to enter the field; the design of typefaces has developed alongside these new technologies.