The first publicly available description of HTML, a document called "HTML Tags", describes 18 elements comprising the initial, simple design of HTML. Except for the hyperlink tag, these were influenced by SGMLguid, an in-house Standard Generalized Markup Language (SGML)-based documentation format at CERN. Eleven of these elements still exist in HTML 4. HTML is a markup language that web browsers use to interpret and compose text and other material into visual or audible web pages. Default characteristics for every item of HTML markup are defined in the browser, and these characteristics can be altered or enhanced by the web page designer's additional use of CSS. Many of the text elements are found in the 1988 ISO technical report TR 9537, Techniques for using SGML, which in turn covers the features of early text formatting languages such as that used by the RUNOFF command developed in the early 1960s for the CTSS operating system: these formatting commands were derived from the commands used by typesetters to manually format documents. However, the SGML concept of generalized markup is based on elements rather than print effects, with the separation of structure and markup.
Berners-Lee considered HTML to be an application of SGML. It was formally defined as such by the Internet Engineering Task Force (IETF) with the mid-1993 publication of the first proposal for an HTML specification, the "Hypertext Markup Language" Internet Draft by Berners-Lee and Dan Connolly, which included an SGML Document Type Definition to define the grammar. The draft expired after six months, but was notable for its acknowledgment of the NCSA Mosaic browser's custom tag for embedding in-line images, reflecting the IETF's philosophy of basing standards on successful prototypes. Dave Raggett's competing Internet Draft, "HTML+", from late 1993, suggested standardizing already-implemented features like tables and fill-out forms. After the HTML and HTML+ drafts expired in early 1994, the IETF created an HTML Working Group, which in 1995 completed "HTML 2.0", the first HTML specification intended to be treated as a standard against which future implementations would be measured. Further development under the auspices of the IETF was stalled by competing interests.
Since 1996, the HTML specifications have been maintained, with input from commercial software vendors, by the World Wide Web Consortium (W3C). In 2000, HTML also became an international standard (ISO/IEC 15445:2000). HTML 4.01 was published in late 1999, with further errata published through 2001. In 2004, development began on HTML5 in the Web Hypertext Application Technology Working Group (WHATWG), which became a joint deliverable with the W3C in 2008; it was completed and standardized on 28 October 2014. HTML 2.0 was published as RFC 1866 on November 24, 1995. Supplemental RFCs added capabilities: RFC 1867 (form-based file upload, November 25, 1995), RFC 1942 (tables, May 1996), RFC 1980 (client-side image maps, August 1996), and RFC 2070 (internationalization, January 1997). HTML 3.2 was published as a W3C Recommendation on January 14, 1997. It was the first version developed and standardized exclusively by the W3C, as the IETF had closed its HTML Working Group on September 12, 1996. Code-named "Wilbur", HTML 3.2 dropped math formulas entirely, reconciled overlap among various proprietary extensions, and adopted most of Netscape's visual markup tags.
Netscape's blink element and Microsoft's marquee element were omitted due to a mutual agreement between the two companies. A markup for mathematical formu
Newline is a control character or sequence of control characters in a character encoding specification that is used to signify the end of a line of text and the start of a new one. Text editors insert this special character when the user presses the Enter key; when displaying a text file, the control character causes the editor to show the following characters on a new line. In the mid-1800s, long before the advent of teleprinters and teletype machines, Morse code operators or telegraphists invented and used Morse code prosigns to encode white-space text formatting in formal written text messages. In particular, the Morse prosign formed by concatenating two literal Morse code "A" characters sent without the normal inter-character spacing is used to encode and indicate a new line in a formal text message. In the age of modern teleprinters, standardized character-set control codes were developed to aid in white-space text formatting. ASCII was developed simultaneously by the International Organization for Standardization (ISO) and the American Standards Association (ASA), the latter being the predecessor organization to the American National Standards Institute (ANSI).
During the period of 1963 to 1968, the ISO draft standards supported the use of either CR+LF or LF alone as a newline, while the ASA drafts supported only CR+LF. The sequence CR+LF was used on many early computer systems that had adopted Teletype machines—typically a Teletype Model 33 ASR—as a console device, because this sequence was required to position those printers at the start of a new line; the separation of newline into two functions concealed the fact that the print head could not return from the far right to the beginning of the next line in time to print the next character. Any character printed after a CR would print as a smudge in the middle of the page while the print head was still moving the carriage back to the first position. "The solution was to make the newline two characters: CR to move the carriage to column one, LF to move the paper up." In fact, it was necessary to send extra characters—extraneous CRs or NULs—which are ignored but give the print head time to move to the left margin.
Many early video displays also required multiple character times to scroll the display. On such systems, applications had to talk directly to the Teletype machine and follow its conventions, since the concept of device drivers hiding such hardware details from the application was not yet well developed. Therefore, text was composed to satisfy the needs of Teletype machines. Most minicomputer systems from DEC used this convention. CP/M used it as well, in order to print on the same terminals that minicomputers used. From there MS-DOS adopted CP/M's CR+LF in order to be compatible, and this convention was inherited by Microsoft's Windows operating system. The Multics operating system, by contrast, used LF alone as its newline. Multics used a device driver to translate this character to whatever sequence a printer needed, and the single byte was more convenient for programming. What seems like a more obvious choice—CR—was not used, as CR provided the useful function of overprinting one line with another to create boldface and strikethrough effects.
Moreover, the use of LF alone as a line terminator had already been incorporated into drafts of the eventual ISO/IEC 646 standard. Unix followed the Multics practice, and Unix-like systems followed Unix. The concepts of carriage return (CR) and line feed (LF) are closely associated and can be considered either separately or together. In the physical media of typewriters and printers, two axes of motion, "down" and "across", are needed to create a new line on the page. Although the design of a machine must consider them separately, the abstract logic of software can combine them together as one event; this is why a newline in character encoding can be defined as CR and LF combined into one. Some character sets provide a separate newline character code. EBCDIC, for example, provides an NL character code in addition to the CR and LF codes. Unicode, in addition to providing the ASCII CR and LF control codes, also provides a "next line" (NEL) control code, as well as control codes for "line separator" and "paragraph separator" markers. Software applications and operating systems represent a newline with one or two control characters: EBCDIC systems—mainly IBM mainframe systems, including z/OS and i5/OS—use NL as the character combining the functions of line-feed and carriage-return.
The equivalent Unicode character (U+0085) is called NEL. EBCDIC also has control characters called CR and LF, but the numerical value of LF (0x25) differs from the one used by ASCII (0x0A). Additionally, some EBCDIC variants use NL but assign a different numeric code to the character. However, those operating systems use a record-based file system, which stores text files as one record per line. In most file formats, no line terminators are stored. Operating systems for the CDC 6000 series defined a newline as two or more zero-valued six-bit characters at the end of a 60-bit word; some configurations defined a zero-valued character as a colon character, with the result that multiple colons could be interpreted as a newline depending on position. RSX-11 and OpenVMS also use a record-based file system, which stores text files as one record per line. In most file formats, no line terminators are actually stored, but the Record Management Services facility can transparently add a terminator to each line when it is retrieved by
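The competing conventions surveyed above (LF alone, CR+LF, CR alone, and EBCDIC's NL, which maps to Unicode NEL) can be compared side by side in a few lines of code. The sketch below is purely illustrative and uses Python only because its str.splitlines() method happens to recognize all of these sequences; the sample strings are invented.

```python
# Illustrative comparison of newline conventions; the sample text is invented.
unix_text    = "first line\nsecond line"        # LF    (Multics, Unix and derivatives)
windows_text = "first line\r\nsecond line"      # CR+LF (Teletypes, CP/M, MS-DOS, Windows)
old_mac_text = "first line\rsecond line"        # CR    (classic Mac OS, among others)
nel_text     = "first line\u0085second line"    # NEL   (U+0085, the Unicode analogue of EBCDIC NL)

for sample in (unix_text, windows_text, old_mac_text, nel_text):
    # splitlines() treats each of these sequences as a line boundary,
    # so every sample yields the same two logical lines.
    print(sample.splitlines())
```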
Application software is software designed to perform a group of coordinated functions, tasks, or activities for the benefit of the user. Examples of an application include a word processor, a spreadsheet, an accounting application, a web browser, an email client, a media player, a file viewer, an aeronautical flight simulator, a console game, or a photo editor; the collective noun "application software" refers to all applications collectively. This contrasts with system software, which is involved with running the computer. Applications may be bundled with the computer and its system software or published separately, and may be coded as proprietary, open-source, or university projects. Apps built for mobile platforms are called mobile apps. In information technology, an application, application program, or software application is a computer program designed to help people perform an activity. An application thus differs from an operating system, a utility, and a programming tool. Depending on the activity for which it was designed, an application can manipulate text, audio, graphics, or a combination of these elements.
Some application packages focus on a single task, such as word processing. User-written software tailors systems to meet the user's specific needs. User-written software includes spreadsheet templates, word processor macros, scientific simulations, and audio and animation scripts. Email filters are a kind of user software. Users create this software themselves and often overlook how important it is. The delineation between system software such as operating systems and application software is not exact, however, and is occasionally the object of controversy. For example, one of the key questions in the United States v. Microsoft Corp. antitrust trial was whether Microsoft's Internet Explorer web browser was part of its Windows operating system or a separable piece of application software. As another example, the GNU/Linux naming controversy is, in part, due to disagreement about the relationship between the Linux kernel and the operating systems built over this kernel. In some types of embedded systems, the application software and the operating system software may be indistinguishable to the user, as in the case of software used to control a VCR, DVD player, or microwave oven.
The above definitions may exclude some applications that may exist on some computers in large organizations. For an alternative definition of an app, see Application Portfolio Management. The word "application", when used as an adjective, is not restricted to the "of or pertaining to application software" meaning. For example, concepts such as application programming interface, application server, application virtualization, application lifecycle management and portable application apply to all computer programs alike, not just application software. Some applications are available in versions for several different platforms. Sometimes a new and popular application arises which only runs on one platform, increasing the desirability of that platform; this is called a killer app. For example, VisiCalc was the first modern spreadsheet software for the Apple II and helped sell the then-new personal computers into offices. For BlackBerry it was their email software. In recent years, the shortened term "app" has become popular to refer to applications for mobile devices such as smartphones and tablets, with the shortened form matching their smaller scope compared to applications on PCs.
More recently, the shortened version is used for desktop application software as well. There are many different and alternative ways to classify application software. From a legal point of view, application software is classified with a black-box approach, in relation to the rights of its final end-users or subscribers. Software applications are also classified in respect of the programming language in which the source code is written or executed, and in respect of their purpose and outputs. Application software is usually distinguished among two main classes: closed-source versus open-source software applications, and free versus proprietary software applications. Proprietary software is placed under exclusive copyright, and a software license grants limited usage rights. The open-closed principle states that software may be "open only for extension, but not for modification"; such applications can only receive add-ons from third parties. Free and open-source software may be run, sold or extended for any purpose, and, being open, may be modified or reverse-engineered in the same way.
A computer file is a computer resource for recording data discretely in a computer storage device. Just as words can be written to paper, so can information be written to a computer file. Files can be transferred through the Internet. There are different types of computer files, designed for different purposes. A file may be designed to store a picture, a written message, a video, a computer program, or a wide variety of other kinds of data; some types of files can store several types of information at once. By using computer programs, a person can open, change and close a computer file. Computer files may be reopened and copied an arbitrary number of times. Files are organised in a file system, which keeps track of where the files are located on disk and enables user access. The word "file" derives from the Latin filum ("a thread"). "File" was used in the context of computer storage as early as January 1940. In Punched Card Methods in Scientific Computation, W. J. Eckert stated, "The first extensive use of the early Hollerith Tabulator in astronomy was made by Comrie.
He used it for building a table from successive differences, and for adding large numbers of harmonic terms". "Tables of functions are constructed from their differences with great efficiency, either as printed tables or as a file of punched cards." In February 1950, in a Radio Corporation of America (RCA) advertisement in Popular Science magazine describing a new "memory" vacuum tube it had developed, RCA stated: "the results of countless computations can be kept 'on file' and taken out again. Such a 'file' now exists in a 'memory' tube developed at RCA Laboratories. Electronically it retains figures fed into calculating machines, holds them in storage while it memorizes new ones - speeds intelligent solutions through mazes of mathematics." In 1952, "file" denoted information stored on punched cards. In early use, the underlying hardware, rather than the contents stored on it, was denominated a "file". For example, the IBM 350 disk drives were denominated "disk files". The introduction, circa 1961, by the Burroughs MCP and the MIT Compatible Time-Sharing System of the concept of a "file system" that managed several virtual "files" on one storage device is the origin of the contemporary denotation of the word.
Although the contemporary "register file" demonstrates the early concept of files, its use has greatly decreased. On most modern operating systems, files are organized into one-dimensional arrays of bytes. The format of a file is defined by its content, since a file is a container for data, although on some platforms the format is usually indicated by its filename extension, specifying the rules for how the bytes must be organized and interpreted meaningfully. For example, the bytes of a plain text file are associated with either ASCII or UTF-8 characters, while the bytes of image and audio files are interpreted otherwise. Most file types also allocate a few bytes for metadata, which allows a file to carry some basic information about itself. Some file systems can store arbitrary file-specific data outside of the file format, but linked to the file, for example extended attributes or forks. On other file systems this can be done via software-specific databases. All those methods, however, are more susceptible to loss of metadata than are container and archive file formats.
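As noted above, a file's format is ultimately defined by its content, with the filename extension serving only as a hint on some platforms. A minimal sketch of that distinction, assuming a hypothetical file path and using the well-known PNG and JPEG signature bytes:

```python
import pathlib

# Well-known leading "magic" bytes for two common image formats.
MAGIC_NUMBERS = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"\xff\xd8\xff": "JPEG image",
}

def sniff_format(path: pathlib.Path) -> str:
    header = path.read_bytes()[:16]            # only the first few bytes are needed
    for magic, name in MAGIC_NUMBERS.items():
        if header.startswith(magic):
            return name
    return f"unknown content (extension suggests {path.suffix or 'nothing'})"

# Hypothetical usage; "photo.png" is an invented path.
# print(sniff_format(pathlib.Path("photo.png")))
```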
At any instant in time, a file might have a size, expressed as a number of bytes, that indicates how much storage is associated with the file. In most modern operating systems the size can be any non-negative whole number of bytes up to a system limit. Many older operating systems kept track only of the number of blocks or tracks occupied by a file on a physical storage device. In such systems, software employed other methods to track the exact byte count. The general definition of a file does not require that its size have any real meaning, unless the data within the file happens to correspond to data within a pool of persistent storage. A special case is a zero-byte file. For example, the file to which the link /bin/ls points in a typical Unix-like system has a defined size that seldom changes. Compare this with /dev/null, which is also a file, but whose size is not meaningful. Information in a computer file can consist of smaller packets of information that are individually different but share some common traits. For example, a payroll file might contain information concerning all the employees in a company and their payroll details.
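The size bookkeeping described above is held by the file system rather than inside the file's content, so it can be read without opening the file. A minimal sketch, assuming a Unix-like system where /bin/ls and /dev/null exist:

```python
import os

for path in ("/bin/ls", "/dev/null"):
    try:
        size = os.stat(path).st_size          # size in bytes, as recorded by the file system
        print(f"{path}: {size} bytes")
    except FileNotFoundError:
        print(f"{path}: not present on this system")

# /bin/ls reports its on-disk size, which seldom changes; /dev/null, a
# character special file, reports 0, a figure that says little about it.
```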
A text file may contain lines of text, corresponding to printed lines on a piece of paper. Alternatively, a file may contain an arbitrary binary image or it may contain an executable. The way information is grouped into a file is entirely up to how it is designed. This has led to a plethora of more or less standardized file structures for all imaginable purposes, from the simplest to the most complex. Most computer files are used by computer programs which create, modify or delete the files for their own use on an as-needed basis. The programmers who create the programs decide what files are needed, how they are to be used, and their names. In some cases, computer pr
Metadata is "data that provides information about other data". Many distinct types of metadata exist, among these descriptive metadata, structural metadata, administrative metadata, reference metadata and statistical metadata. Descriptive metadata describes a resource for purposes such as identification, it can include elements such as title, abstract and keywords. Structural metadata is metadata about containers of data and indicates how compound objects are put together, for example, how pages are ordered to form chapters, it describes the types, versions and other characteristics of digital materials. Administrative metadata provides information to help manage a resource, such as when and how it was created, file type and other technical information, who can access it. Reference metadata describes the contents and quality of statistical data Statistical metadata may describe processes that collect, process, or produce statistical data. Metadata was traditionally used in the card catalogs of libraries until the 1980s, when libraries converted their catalog data to digital databases.
In the 2000s, as digital formats were becoming the prevalent way of storing data and information, metadata was also used to describe digital data using metadata standards. The first description of "meta data" for computer systems is purportedly noted by MIT's Center for International Studies experts David Griffel and Stuart McIntosh in 1967: "In summary we have statements in an object language about subject descriptions of data and token codes for the data. We have statements in a meta language describing the data relationships and transformations, ought/is relations between norm and data." There are different metadata standards for each different discipline. Describing the contents and context of data or data files increases its usefulness. For example, a web page may include metadata specifying what software language the page is written in, what tools were used to create it, what subjects the page is about, and where to find more information about the subject; this metadata can automatically improve the reader's experience and make it easier for users to find the web page online.
A CD may include metadata providing information about the musicians and songwriters whose work appears on the disc. A principal purpose of metadata is to help users discover resources. Metadata helps to organize electronic resources, provide digital identification, support the archiving and preservation of resources. Metadata assists users in resource discovery by "allowing resources to be found by relevant criteria, identifying resources, bringing similar resources together, distinguishing dissimilar resources, giving location information." Metadata of telecommunication activities including Internet traffic is widely collected by various national governmental organizations. This data can be used for mass surveillance. In many countries, the metadata relating to emails, telephone calls, web pages, video traffic, IP connections and cell phone locations are stored by government organizations. Metadata means "data about data". Although the "meta" prefix means "after" or "beyond", it is used to mean "about" in epistemology.
Metadata is defined as the data providing information about one or more aspects of the data. Some examples include: the means of creation of the data, the purpose of the data, the time and date of creation, the creator or author of the data, the location on a computer network where the data was created, the standards used, the file size, the data quality, the source of the data, and the process used to create the data. For example, a digital image may include metadata that describes how large the picture is, the color depth, the image resolution, when the image was created, the shutter speed, and other data. A text document's metadata may contain information about how long the document is, who the author is, when the document was written, and a short summary of the document. Metadata within web pages can contain descriptions of page content, as well as key words linked to the content; these are called "metatags", and they were used as the primary factor in determining order for a web search until the late 1990s. The reliance on metatags in web searches decreased in the late 1990s because of "keyword stuffing".
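Metatags like those just described are ordinary elements in a page's HTML and can be read programmatically. The following sketch uses Python's standard html.parser module on an invented fragment of HTML; it is an illustration, not a description of how any particular search engine works.

```python
from html.parser import HTMLParser

class MetaCollector(HTMLParser):
    """Collect <meta name="..." content="..."> pairs from an HTML document."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if "name" in attrs and "content" in attrs:
                self.meta[attrs["name"]] = attrs["content"]

# Invented example page.
page = ('<head><meta name="description" content="An overview of metadata">'
        '<meta name="keywords" content="metadata, metatags"></head>')
collector = MetaCollector()
collector.feed(page)
print(collector.meta)
# {'description': 'An overview of metadata', 'keywords': 'metadata, metatags'}
```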
Metatags were being misused to trick search engines into thinking some websites had more relevance in the search than they really did. Metadata can be stored and managed in a database, called a metadata registry or metadata repository. However, without context and a point of reference, it might be impossible to identify metadata just by looking at it. For example: by itself, a database containing several numbers, all 13 digits long, could be the results of calculations or a list of numbers to plug into an equation. Without any other context, the numbers themselves can be perceived as the data. But if given the context that this database is a log of a book collection, those 13-digit numbers may now be identified as ISBNs (a check-digit test, sketched below, makes this concrete) - information that refers to the book but is not itself the information within the book. The term "metadata" was coined in 1968 by Philip Bagley, in his book "Extension of Programming Language Concepts", where it is clear that he uses the term in the ISO 11179 "traditional" sense, that is, "structural metadata", i.e. "data about the containers of data".
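The check-digit test referenced in the ISBN example works as follows: in an ISBN-13, the digits are weighted alternately by 1 and 3, and the weighted sum must be divisible by 10. A small sketch, where the sample numbers are a standard valid ISBN-13 and a deliberately corrupted copy:

```python
def looks_like_isbn13(number: str) -> bool:
    """Return True if a 13-digit string passes the ISBN-13 check-digit test."""
    if len(number) != 13 or not number.isdigit():
        return False
    checksum = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(number))
    return checksum % 10 == 0

print(looks_like_isbn13("9780306406157"))   # True: a valid ISBN-13
print(looks_like_isbn13("9780306406158"))   # False: the check digit no longer matches
```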
A database is an organized collection of data, stored and accessed electronically from a computer system. Where databases are more complex, they are often developed using formal design and modeling techniques. The database management system (DBMS) is the software that interacts with end users, applications, and the database itself to capture and analyze the data. The DBMS software additionally encompasses the core facilities provided to administer the database. The sum total of the database, the DBMS and the associated applications can be referred to as a "database system". The term "database" is often used to loosely refer to any of the DBMS, the database system or an application associated with the database. Computer scientists may classify database-management systems according to the database models that they support. Relational databases became dominant in the 1980s; these model data as rows and columns in a series of tables, and the vast majority use SQL for writing and querying data. In the 2000s, non-relational databases became popular, referred to as NoSQL because they use different query languages.
Formally, a "database" refers to the way it is organized. Access to this data is provided by a "database management system" consisting of an integrated set of computer software that allows users to interact with one or more databases and provides access to all of the data contained in the database; the DBMS provides various functions that allow entry and retrieval of large quantities of information and provides ways to manage how that information is organized. Because of the close relationship between them, the term "database" is used casually to refer to both a database and the DBMS used to manipulate it. Outside the world of professional information technology, the term database is used to refer to any collection of related data as size and usage requirements necessitate use of a database management system. Existing DBMSs provide various functions that allow management of a database and its data which can be classified into four main functional groups: Data definition – Creation and removal of definitions that define the organization of the data.
Update – Insertion, modification and deletion of the actual data. Retrieval – Providing information in a form directly usable or for further processing by other applications; the retrieved data may be made available in a form the same as it is stored in the database or in a new form obtained by altering or combining existing data from the database. Administration – Registering and monitoring users, enforcing data security, monitoring performance, maintaining data integrity, dealing with concurrency control, and recovering information corrupted by some event such as an unexpected system failure. Both a database and its DBMS conform to the principles of a particular database model. "Database system" refers collectively to the database model, database management system, and database. Physically, database servers are dedicated computers that hold the actual databases and run only the DBMS and related software. Database servers are usually multiprocessor computers, with generous memory and RAID disk arrays used for stable storage.
RAID is used for recovery of data. Hardware database accelerators, connected to one or more servers via a high-speed channel, are used in large-volume transaction processing environments. DBMSs are found at the heart of most database applications. DBMSs may be built around a custom multitasking kernel with built-in networking support, but modern DBMSs typically rely on a standard operating system to provide these functions. Since DBMSs comprise a significant market, computer and storage vendors often take DBMS requirements into account in their own development plans. Databases and DBMSs can be categorized according to the database model that they support, the type of computer they run on, the query language used to access the database, and their internal engineering, which affects performance, scalability and security. The sizes and performance of databases and their respective DBMSs have grown in orders of magnitude. These performance increases were enabled by the technology progress in the areas of processors, computer memory, computer storage, and computer networks.
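The four functional groups described earlier (data definition, update, retrieval and administration) map closely onto SQL statements. The sketch below uses Python's built-in sqlite3 module with an invented table purely as an illustration; a production DBMS would add the administrative machinery (users, security, recovery) that sqlite3 only hints at.

```python
import sqlite3

conn = sqlite3.connect(":memory:")     # a throwaway, in-memory database

# Data definition: create the structures that will hold the data.
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, salary REAL)")

# Update: insert, modify and delete the actual data.
conn.execute("INSERT INTO employee (name, salary) VALUES (?, ?)", ("Ada", 52000.0))
conn.execute("UPDATE employee SET salary = salary * 1.05 WHERE name = ?", ("Ada",))

# Retrieval: return stored data, possibly combined or transformed.
for row in conn.execute("SELECT name, salary FROM employee"):
    print(row)

# Administration in a full DBMS covers users, security and recovery; here,
# the closest analogue is transactional control such as rollback.
conn.rollback()
conn.close()
```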
The development of database technology can be divided into three eras based on data model or structure: navigational, SQL/relational, and post-relational. The two main early navigational data models were the hierarchical model and the CODASYL model. The relational model, first proposed in 1970 by Edgar F. Codd, departed from this tradition by insisting that applications should search for data by content, rather than by following links. The relational model employs sets of ledger-style tables, each used for a different type of entity. Only in the mid-1980s did computing hardware become powerful enough to allow the wide deployment of relational systems. By the early 1990s, relational systems dominated in all large-scale data processing applications, and as of 2018 they remain dominant: IBM DB2, Oracle, MySQL, and Microsoft SQL Server are the most searched DBMSs. The dominant database language, standardised SQL for the relational model, has influenced database languages for other data models. Object databases were developed in the 1980s to overcome the inconvenience of object-relational impedance mismatch, which led to the coining of the term "post-relational" and the development of hybrid object-relational databas
A computer program is a collection of instructions that performs a specific task when executed by a computer. A computer requires programs to function. A computer program is written by a computer programmer in a programming language. From the program in its human-readable form of source code, a compiler can derive machine code—a form consisting of instructions that the computer can directly execute. Alternatively, a computer program may be executed with the aid of an interpreter. A collection of computer programs and related data are referred to as software. Computer programs may be categorized along functional lines, such as application software and system software; the underlying method used for some calculation or manipulation is known as an algorithm. The earliest programmable machines preceded the invention of the digital computer. In 1801, Joseph-Marie Jacquard devised a loom that would weave a pattern by following a series of perforated cards. Patterns could be repeated by arranging the cards.
In 1837, Charles Babbage was inspired by Jacquard's loom to attempt to build the Analytical Engine. The names of the components of the calculating device were borrowed from the textile industry. In the textile industry, yarn was brought from the store to be milled; the device would have had a "store"—memory to hold 1,000 numbers of 40 decimal digits each. Numbers from the "store" would have been transferred to the "mill" for processing, with a "thread" being the execution of programmed instructions by the device. It was programmed using two sets of perforated cards—one to direct the operation and the other for the input variables. However, after more than 17,000 pounds of the British government's money had been spent, the thousands of cogged wheels and gears never fully worked together. During a nine-month period in 1842–43, Ada Lovelace translated the memoir of Italian mathematician Luigi Menabrea; the memoir covered the Analytical Engine. The translation contained Note G, which detailed a method for calculating Bernoulli numbers using the Analytical Engine.
This note is recognized by some historians as the world's first written computer program. In 1936, Alan Turing introduced the Universal Turing machine—a theoretical device that can model every computation that can be performed on a Turing complete computing machine; it is a finite-state machine with an infinitely long read/write tape. The machine can move the tape back and forth, changing its contents as it performs an algorithm. The machine starts in the initial state, goes through a sequence of steps, and halts when it encounters the halt state. This machine is considered by some to be the origin of the stored-program computer—used by John von Neumann for the "Electronic Computing Instrument" that now bears the von Neumann architecture name. The Z3 computer, invented by Konrad Zuse in Germany, was a digital, programmable computer; a digital computer uses electricity as the calculating component. The Z3 contained 2,400 relays to create the circuits. The circuits provided a floating-point, nine-instruction computer. Programming the Z3 was done through a specially designed keyboard and punched tape.
The Electronic Numerical Integrator And Computer (ENIAC) was a Turing complete, general-purpose computer that used 17,468 vacuum tubes to create the circuits. At its core, it was a series of Pascalines wired together. Its 40 units weighed 30 tons, occupied 1,800 square feet, and consumed $650 per hour in electricity when idle. It had 20 base-10 accumulators. Programming the ENIAC took up to two months. Three function tables needed to be rolled to fixed function panels. Function tables were connected to function panels using heavy black cables; each function table had 728 rotating knobs. Programming the ENIAC also involved setting some of the 3,000 switches. Debugging a program took a week. The programmers of the ENIAC were women who were known collectively as the "ENIAC girls". The ENIAC featured parallel operations: different sets of accumulators could work on different algorithms simultaneously. It used punched card machines for input and output, and it was controlled with a clock signal. It ran for eight years, calculating hydrogen bomb parameters, predicting weather patterns, and producing firing tables to aim artillery guns.
The Manchester Baby was an early stored-program computer; with it, programming transitioned away from setting dials. Only three bits of memory were available to store each instruction, so it was limited to eight instructions. 32 switches were available for programming. Computers manufactured in this era were still programmed via front-panel switches; the computer program was written on paper for reference. An instruction was represented by a configuration of on/off settings. After setting the configuration, an execute button was pressed, and this process was repeated. Computer programs were also manually input via paper tape or punched cards. After the medium was loaded, the starting address was set via switches and the execute button pressed. In 1961, the Burroughs B5000 was built to be programmed in the ALGOL 60 language; the hardware featured circuits to ease the compile phase. In 1964, the IBM System/360 was a line of six computers, each having the same instruction set architecture. The Model 30 was the least expensive. Customers could retain the same application software; each System/360 model featured multiprogramming.
With operating system support, multiple programs could be in memory at once. When one was waiting for input/output, another could compute; each model could emulate other computers. Customers could upgrade to the System/360 and ret