Cascading Style Sheets
This cascading priority scheme is predictable. The CSS specifications are maintained by the World Wide Web Consortium (W3C). The Internet media type text/css is registered for use with CSS by RFC 2318, and the W3C operates a free CSS validation service for CSS documents. In addition to HTML, other markup languages support the use of CSS, including XHTML, plain XML, SVG, and XUL. CSS has a simple syntax and uses a number of English keywords to specify the names of various style properties. A style sheet consists of a list of rules; each rule or rule-set consists of one or more selectors and a declaration block. In CSS, selectors declare which part of the markup a style applies to by matching tags and attributes in the markup itself. Selectors may apply to all elements of a specific type, e.g. the second-level headers (h2); to elements specified by attribute, in particular id (an identifier unique within the document) or class (an identifier that can annotate multiple elements in a document); or to elements depending on how they are placed relative to others in the document tree.
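A minimal style sheet illustrating the rule structure just described; the selector names and property values here are arbitrary examples, not from any particular document:

```css
/* Type selector: applies to every h2 element */
h2 {
  color: navy;
  margin-bottom: 0.5em;
}

/* ID selector: the single element with id="masthead" */
#masthead {
  background: silver;
}

/* Class selector: any element carrying class="note" */
.note {
  font-style: italic;
}
```

Each rule pairs a selector with a declaration block in braces; the declarations inside set individual style properties.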
Classes and IDs are case-sensitive, start with letters, and can include alphanumeric characters and underscores. A class may apply to any number of instances of any element, whereas an ID may only be applied to a single element. Pseudo-classes are used in CSS selectors to permit formatting based on information that is not contained in the document tree. One example of a widely used pseudo-class is :hover, which identifies content only when the user "points to" the visible element by holding the mouse cursor over it; it is appended to a selector, as in #elementid:hover. A pseudo-class classifies document elements, such as :link or :visited, whereas a pseudo-element makes a selection that may consist of partial elements, such as ::first-line or ::first-letter. Selectors may be combined in many ways to achieve great flexibility. Multiple selectors may be joined in a spaced list to specify elements by location, element type, id, class, or any combination thereof; the order of the selectors is important. For example, div .myClass applies to all elements of class myClass that are inside div elements, whereas .myClass div applies to all div elements that are inside elements of class myClass.
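The combinations above can be written out as follows; the class names, id, and property values are invented for illustration:

```css
/* Descendant combination: .myClass elements inside a div */
div .myClass { color: red; }

/* The reverse order: div elements inside a .myClass element */
.myClass div { color: blue; }

/* Pseudo-class: applies only while the element is hovered */
#elementid:hover { text-decoration: underline; }

/* Pseudo-element: selects a partial element, the first line */
p::first-line { font-variant: small-caps; }
```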
The following table provides a summary of selector syntax indicating usage and the version of CSS that introduced it. A declaration block consists of a list of declarations in braces; each declaration itself consists of a property, a colon, and a value. If there are multiple declarations in a block, a semicolon must be inserted to separate each declaration. Properties are specified in the CSS standard, and each property has a set of possible values. Some properties can affect any type of element; others apply only to particular groups of elements. Values may be keywords, such as "center" or "inherit", or numerical values, such as 200px, 50vw, or 80%. Color values can be specified with keywords, hexadecimal values, RGB values on a 0 to 255 scale, RGBA values that specify both color and alpha transparency, or HSL or HSLA values. Before CSS, nearly all presentational attributes of HTML documents were contained within the HTML markup. All font colors, background styles, element alignments, and sizes had to be explicitly described, often repeatedly, within the HTML.
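A single declaration block can mix the value types listed above; the class name and the particular values below are arbitrary examples:

```css
.banner {
  text-align: center;                /* keyword value */
  width: 50vw;                       /* viewport-relative length */
  padding: 200px;                    /* absolute length in pixels */
  color: #ff6600;                    /* hexadecimal color */
  background: rgb(0, 128, 255);      /* RGB on a 0-255 scale */
  border-color: rgba(0, 0, 0, 0.5);  /* RGBA with alpha transparency */
  outline-color: hsl(120, 60%, 40%); /* HSL hue, saturation, lightness */
}
```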
CSS lets authors move much of that information to another file, the style sheet, resulting in considerably simpler HTML. For example, sub-headings, sub-sub-headings, etc. are defined structurally using HTML. In print and on the screen, the choice of font, size, and emphasis for these elements is presentational. Before CSS, document authors who wanted to assign such typographic characteristics to, say, all h2 headings had to repeat HTML presentational markup for each occurrence of that heading type; this made documents more complex, more error-prone, and harder to maintain. CSS allows the separation of presentation from structure. CSS can define color, text alignment, borders, spacing, and many other typographic characteristics, and can do so independently for on-screen and printed views. CSS also defines non-visual styles, such as reading speed and emphasis for aural text readers; the W3C has now deprecated the use of all presentational HTML markup. For example, under pre-CSS HTML, a heading element with red text had to carry the presentational markup inline; using CSS, the same heading can be styled by a separate rule.
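The pre-CSS versus CSS contrast can be sketched as follows; this is a minimal illustration with invented heading text, not the article's original example:

```html
<!-- Pre-CSS: the color is repeated at every heading occurrence -->
<h2><font color="red">Chapter 1</font></h2>
<h2><font color="red">Chapter 2</font></h2>

<!-- With CSS: one rule, here inline but usually in a separate
     style sheet file, styles every h2 in the document -->
<style>
  h2 { color: red; }
</style>
<h2>Chapter 1</h2>
<h2>Chapter 2</h2>
```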
SCXML stands for State Chart XML: State Machine Notation for Control Abstraction. It is an XML-based markup language that provides a generic state-machine-based execution environment based on Harel statecharts. SCXML is able to describe complex finite state machines. For example, it is possible to describe concepts such as sub-states, parallel states, synchronization, or concurrency in SCXML; the objective of this standard is to genericize the state diagram notations that are used in other XML contexts. For example, it is expected that SCXML notation will replace the state machine notation used in the next CCXML 2.0 version. It could also be used as a multimodal control language in the Multimodal Interaction Activity. One of the goals of this language is to make sure that the language is compatible with CCXML and that there is an easy path for existing CCXML scripts to be converted to SCXML without major changes to the programming model or document structure; the current version of the specification was released by the W3C in September 2015.
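For illustration, a minimal SCXML document describing a two-state machine; the state ids and event names are invented, but the element and namespace usage follows the W3C specification:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<scxml xmlns="http://www.w3.org/2005/07/scxml"
       version="1.0" initial="off">
  <!-- Initial state: waits for a power.on event -->
  <state id="off">
    <transition event="power.on" target="on"/>
  </state>
  <!-- Second state: a power.off event returns to "off" -->
  <state id="on">
    <transition event="power.off" target="off"/>
  </state>
</scxml>
```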
According to the W3C SCXML specification, SCXML is a general-purpose event-based state machine language that can be used in many ways, including:

As a high-level dialog language controlling VoiceXML 3.0's encapsulated speech modules.
As a voice application metalanguage, where in addition to VoiceXML 3.0 functionality, it may control database access and business logic modules.
As a multimodal control language in the MultiModal Interaction framework, combining VoiceXML 3.0 dialogs with dialogs in other modalities including keyboard and mouse, vision, etc. It may control combined modalities such as lipreading speech input with keyboard as fallback, or multiple keyboards for multi-user editing.
As the state machine framework for a future version of CCXML.
As an extended call center management language, combining CCXML call control functionality with computer-telephony integration for call centers that integrate telephone calls with computer screen pops, as well as other types of message exchange such as chats, instant messaging, etc.
As a general process control language in other contexts not involving speech processing.

The draft W3C VoiceXML 3.0 specification includes State Chart and SCXML Representation to define functionality. Multimodal application designs can use different modalities for the parts of a communication each is best suited to. For example, voice input can be used to avoid having to type on the small screen of a mobile phone, but the screen may be a faster way of communicating a list or a map, compared to listening to long descriptions of available options. SCXML makes it easy to do several things in parallel, and an Interaction Manager SCXML application can maintain the synchronization between voice and visual dialogues. The W3C document Authoring Applications for the Multimodal Architecture describes a multimodal system that implements the W3C Multimodal Architecture and gives an example of a simple multimodal application authored using various W3C markup languages, including SCXML, CCXML, VoiceXML 2.1, and HTML.

Implementations include:
scxmlcc, an efficient SCXML-to-C++ compiler. It supports some additional features such as custom tag libraries and includes, but is not W3C compliant.
PySCXML, a Python implementation. Standards-compliant, it supports a wide range of technologies, including WebSockets and SOAP, and supports the ECMAScript datamodel.
The PySCXML Console, a web-based interactive SCXML console for running and interacting with SCXML documents. Supports the ECMAScript datamodel.
SCXML4Flex, an ActionScript/Flex partial port of PySCXML.
SCXMLgui, a Java visual editor for SCXML.
A homophone is a word that is pronounced the same as another word but differs in meaning. A homophone may also differ in spelling; the two words may be spelled the same, such as rose (the flower) and rose (the past tense of rise), or differently, such as carat and carrot, or to and too. The term "homophone" may apply to units longer or shorter than words, such as phrases, letters, or groups of letters which are pronounced the same as another phrase, letter, or group of letters. Any unit with this property is said to be "homophonous". Homophones that are spelled the same are both homographs and homonyms. Homophones that are spelled differently are called heterographs. "Homophone" derives from the Greek homo- ("same") and phōnḗ ("voice, utterance"). Homophones are often used to create puns and to deceive the reader or to suggest multiple meanings; the last usage is common in creative literature. An example of this is seen in Dylan Thomas's radio play Under Milk Wood: "The shops in mourning", where mourning can be heard as mourning or morning. Another vivid example is Thomas Hood's use of "birth" and "berth" and "told" and "toll'd" in his poem "Faithless Sally Brown": His death, which happen'd in his berth, At forty-odd befell: They went and told the sexton, The sexton toll'd the bell.
In some accents, various sounds have merged so that they are no longer distinctive, and thus words that differ only by those sounds in an accent that maintains the distinction are homophonous in the accent with the merger. Some examples from English are pin and pen in many southern American accents, and merry and Mary in most American accents. The pairs do, due and forward, foreword are homophonous in most American accents but not in most English accents. The pairs talk, torque and court, caught are distinguished in rhotic accents such as Scottish English and most dialects of American English, but are homophones in many non-rhotic accents such as British Received Pronunciation. Wordplay is common in English because the multiplicity of linguistic influences offers considerable complication in spelling, meaning, and pronunciation compared with other languages. Malapropisms, which create a similar comic effect, are near-homophones. See also Eggcorn. Homophones of multiple words or phrases are known as "oronyms"; this term was coined by Gyles Brandreth and first published in his book The Joy of Lex, and it was used in the BBC programme Never Mind the Full Stops, which featured Brandreth as a guest.
Examples of "oronyms" include: "ice cream" vs. "I scream"; "euthanasia" vs. "Youth in Asia"; "depend" vs. "deep end"; "Gemini" vs. "Jim and I" vs. "Jem in eye"; "the sky" vs. "this guy"; "four candles" vs. "fork handles"; "sand, there" vs. "sandwiches there"; "philanderer" vs. "Flanders"; "example" vs. "egg sample"; "some others" vs. "some mothers" vs. "smothers"; "minute" vs. "my newt"; "vodka" vs. "Ford Ka"; "foxhole" vs. "Vauxhall"; "big hand" vs. "began"; "real eyes" vs. "realize" vs. "real lies"; "a dressed male" vs. "addressed mail"; "them all" vs. "the mall"; and "Isle of Dogs" vs. "I love dogs." In his Appalachian comedy routine, American comedian Jeff Foxworthy uses oronyms which play on exaggerated "country" accents. Notable examples include: Initiate: "My wife ate two sandwiches, initiate a bag o' tater chips." Mayonnaise: "Mayonnaise a lot of people here tonight." Innuendo: "Hey dude, I saw a bird fly innuendo." Moustache: "I moustache you a question." There are websites with lists of homonyms, or rather homophones, and even "multinyms", which have as many as seven spellings.
There are differences in such lists due to dialect pronunciations and usage of old words. In English, there are 88 triples; the septet is: raise, rays, raze, rase, rehs, res, réis. Other than the three common words, there are: rase, a verb meaning "to erase". If proper names are allowed, a nonet is: Ayr, Eyre, air, ere, e'er, are. There are a large number of homophones in Japanese, due to the use of Sino-Japanese vocabulary, where borrowed words and morphemes from Chinese are widely used in Japanese, but many sound differences, such as the words' tones, are lost. These are to some extent disambiguated via Japanese pitch accent, or from context, but many of these words are primarily or exclusively used in writing, where they are easily distinguished, as they are written with different kanji. An extreme example is kikō, the pronunciation of at least 22 words, including: 機構, 紀行, 稀覯, 騎行, 貴校, 奇功, 貴公, 起稿, 奇行, 機巧, 寄港, 帰校, 気功 (breathing exercise/qigo
It describes 18 elements comprising the initial, simple design of HTML. Except for the hyperlink tag, these were influenced by SGMLguid, an in-house Standard Generalized Markup Language-based documentation format at CERN. Eleven of these elements still exist in HTML 4. HTML is a markup language that web browsers use to interpret and compose text and other material into visual or audible web pages. Default characteristics for every item of HTML markup are defined in the browser, and these characteristics can be altered or enhanced by the web page designer's additional use of CSS. Many of the text elements are found in the 1988 ISO technical report TR 9537, Techniques for using SGML, which in turn covers the features of early text formatting languages such as that used by the RUNOFF command developed in the early 1960s for the CTSS operating system: these formatting commands were derived from the commands used by typesetters to manually format documents. However, the SGML concept of generalized markup is based on elements rather than merely print effects, with the separation of structure and presentation.
Berners-Lee considered HTML to be an application of SGML. It was formally defined as such by the Internet Engineering Task Force with the mid-1993 publication of the first proposal for an HTML specification, the "Hypertext Markup Language" Internet Draft by Berners-Lee and Dan Connolly, which included an SGML Document type definition to define the grammar; the draft expired after six months, but was notable for its acknowledgment of the NCSA Mosaic browser's custom tag for embedding in-line images, reflecting the IETF's philosophy of basing standards on successful prototypes. Dave Raggett's competing Internet-Draft, "HTML+", from late 1993, suggested standardizing already-implemented features like tables and fill-out forms. After the HTML and HTML+ drafts expired in early 1994, the IETF created an HTML Working Group, which in 1995 completed "HTML 2.0", the first HTML specification intended to be treated as a standard against which future implementations should be based. Further development under the auspices of the IETF was stalled by competing interests.
Since 1996, the HTML specifications have been maintained, with input from commercial software vendors, by the World Wide Web Consortium. In 2000, HTML also became an international standard (ISO/IEC 15445:2000). HTML 4.01 was published in late 1999, with further errata published through 2001. In 2004, development began on HTML5 in the Web Hypertext Application Technology Working Group (WHATWG), which became a joint deliverable with the W3C in 2008, completed and standardized on 28 October 2014.

November 24, 1995: HTML 2.0 was published as RFC 1866. Supplemental RFCs added capabilities:
November 25, 1995: RFC 1867 (form-based file upload)
May 1996: RFC 1942 (tables)
August 1996: RFC 1980 (client-side image maps)
January 1997: RFC 2070 (internationalization)

January 14, 1997: HTML 3.2 was published as a W3C Recommendation. It was the first version developed and standardized by the W3C, as the IETF had closed its HTML Working Group on September 12, 1996. Code-named "Wilbur", HTML 3.2 dropped math formulas, reconciled overlap among various proprietary extensions, and adopted most of Netscape's visual markup tags.
Netscape's blink element and Microsoft's marquee element were omitted due to a mutual agreement between the two companies. A markup for mathematical formu
ActivityPub is an open, decentralized social networking protocol based on Pump.io's ActivityPump protocol. It provides a client-to-server API for creating and deleting content, as well as a federated server-to-server API for delivering notifications and content. ActivityPub is a standard for the Internet developed in the Social Web Working Group of the World Wide Web Consortium. At an earlier stage, the name of the protocol was "ActivityPump", but it was felt that "ActivityPub" better indicated the cross-publishing purpose of the protocol; the design also learned from experience with the older standard called OStatus. In January 2018, the World Wide Web Consortium published the ActivityPub standard as a Recommendation. Former Diaspora community manager Sean Tilley wrote an article suggesting that ActivityPub protocols may provide a way to federate Internet platforms. Mastodon, a social networking software, implemented ActivityPub in version 1.6, released on 10 September 2017. It is intended that ActivityPub offer more security for private messages than the previous OStatus protocol does.
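As a sketch of what travels over the server-to-server API, here is the kind of ActivityStreams 2.0 "Create" activity an ActivityPub server delivers to followers' inboxes; the actor and object URLs are invented for illustration:

```json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "type": "Create",
  "id": "https://example.social/activities/1",
  "actor": "https://example.social/users/alice",
  "to": ["https://www.w3.org/ns/activitystreams#Public"],
  "object": {
    "type": "Note",
    "id": "https://example.social/notes/1",
    "attributedTo": "https://example.social/users/alice",
    "content": "Hello, fediverse!"
  }
}
```

A receiving server stores the wrapped Note and makes it visible to its local users; the special "Public" collection in the "to" field marks the post as publicly addressed.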
Other implementations include:
Pleroma, a social networking software implementing ActivityPub.
Misskey, a social networking software implementing ActivityPub.
Hubzilla, a community CMS software platform that uses Zot; it added ActivityPub support from version 2.8 via a plugin.
Nextcloud, a federated service for file hosting.
PeerTube, a federated service for video streaming.
Pixelfed, a federated service for image sharing.
Friendica, a social networking software, which implemented ActivityPub in version 2019.01.
Osada, a social networking software implementing Zot and ActivityPub.

Purely client-based implementations of ActivityPub include:
dokieli, a client-side editor using Web Annotation and ActivityPub.
Go-fed, a library that implements ActivityPub in Go.

Purely server-based implementations of ActivityPub include:
microblog.pub, a self-hosted, single-user microblog implementation of a basic ActivityPub server, under development.
Distbin, a distributed pastebin service that implements ActivityPub.
Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech computer or speech synthesizer, and can be implemented in software or hardware products. A text-to-speech system converts normal language text into speech. Synthesized speech can be created by concatenating pieces of recorded speech that are stored in a database. Systems differ in the size of the stored speech units. For specific usage domains, the storage of entire words or sentences allows for high-quality output. Alternatively, a synthesizer can incorporate a model of the vocal tract and other human voice characteristics to create a "synthetic" voice output. The quality of a speech synthesizer is judged by its similarity to the human voice and by its ability to be understood clearly. An intelligible text-to-speech program allows people with visual impairments or reading disabilities to listen to written words on a home computer. Many computer operating systems have included speech synthesizers since the early 1990s.
A text-to-speech system is composed of two parts: a front-end and a back-end. The front-end has two major tasks. First, it converts raw text containing symbols like numbers and abbreviations into the equivalent of written-out words; this process is called text normalization, pre-processing, or tokenization. Second, the front-end assigns phonetic transcriptions to each word, and divides and marks the text into prosodic units, like phrases and sentences; the process of assigning phonetic transcriptions to words is called text-to-phoneme or grapheme-to-phoneme conversion. Phonetic transcriptions and prosody information together make up the symbolic linguistic representation that is output by the front-end. The back-end (often referred to as the synthesizer) then converts the symbolic linguistic representation into sound. In certain systems, this part includes the computation of the target prosody, which is then imposed on the output speech. Long before the invention of electronic signal processing, some people tried to build machines to emulate human speech.
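A toy sketch of the two front-end stages described above, text normalization followed by dictionary-based grapheme-to-phoneme lookup; the expansion tables and the mini pronunciation lexicon are invented for illustration, not taken from any real system:

```python
# Invented normalization tables: expand digits and abbreviations.
NUMBERS = {"2": "two", "4": "four"}
ABBREVIATIONS = {"dr.": "doctor", "st.": "street"}

# Invented mini-lexicon mapping words to ARPAbet-style phoneme strings.
LEXICON = {
    "doctor": "D AA K T ER",
    "two": "T UW",
    "cats": "K AE T S",
}

def normalize(text):
    """Text normalization: lowercase, expand abbreviations and digits."""
    out = []
    for tok in text.lower().split():
        bare = tok.strip(".,")
        if tok in ABBREVIATIONS:
            out.append(ABBREVIATIONS[tok])
        elif bare in NUMBERS:
            out.append(NUMBERS[bare])
        else:
            out.append(bare)
    return out

def to_phonemes(words):
    """Grapheme-to-phoneme conversion by lexicon lookup. Real systems
    fall back to a trained G2P model for out-of-vocabulary words; here
    unknown words are simply spelled out letter by letter."""
    return [LEXICON.get(w, " ".join(w.upper())) for w in words]

words = normalize("Dr. Smith has 2 cats")
print(words)               # ['doctor', 'smith', 'has', 'two', 'cats']
print(to_phonemes(words))
```

The phoneme sequence produced here would then be combined with prosody marks to form the symbolic linguistic representation handed to the back-end.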
Some early legends of the existence of "Brazen Heads" involved Pope Silvester II, Albertus Magnus, Roger Bacon. In 1779 the German-Danish scientist Christian Gottlieb Kratzenstein won the first prize in a competition announced by the Russian Imperial Academy of Sciences and Arts for models he built of the human vocal tract that could produce the five long vowel sounds. There followed the bellows-operated "acoustic-mechanical speech machine" of Wolfgang von Kempelen of Pressburg, described in a 1791 paper; this machine added models of the tongue and lips, enabling it to produce consonants as well as vowels. In 1837, Charles Wheatstone produced a "speaking machine" based on von Kempelen's design, in 1846, Joseph Faber exhibited the "Euphonia". In 1923 Paget resurrected Wheatstone's design. In the 1930s Bell Labs developed the vocoder, which automatically analyzed speech into its fundamental tones and resonances. From his work on the vocoder, Homer Dudley developed a keyboard-operated voice-synthesizer called The Voder, which he exhibited at the 1939 New York World's Fair.
Dr. Franklin S. Cooper and his colleagues at Haskins Laboratories built the Pattern Playback in the late 1940s and completed it in 1950. There were several different versions of this hardware device; the machine converted pictures of the acoustic patterns of speech, in the form of a spectrogram, back into sound. Using this device, Alvin Liberman and colleagues discovered acoustic cues for the perception of phonetic segments. In 1975, MUSA was released; one of the first speech synthesis systems, it consisted of stand-alone computer hardware and specialized software that enabled it to read Italian. A second version, released in 1978, was able to sing Italian in an "a cappella" style. Dominant systems in the 1980s and 1990s were the DECtalk system, based on the work of Dennis Klatt at MIT, and the Bell Labs system. Early electronic speech synthesizers sounded robotic and were often barely intelligible; the quality of synthesized speech has steadily improved, but as of 2016, output from contemporary speech synthesis systems remains distinguishable from actual human speech.
Kurzweil predicted in 2005 that as the cost-performance ratio caused speech synthesizers to become cheaper and more accessible, more people would benefit from the use of text-to-speech programs. The first computer-based speech-synthesis systems originated in the late 1950s. Noriko Umeda et al. developed the first general English text-to-speech system in 1968 at the Electrotechnical Laboratory, Japan. In 1961 physicist John Larry Kelly, Jr and his colleague Louis Gerstman used an IBM 704 computer to synthesize speech, an event among the most prominent in the history of Bell Labs. Kelly's voice recorder synthesizer recreated the song "Daisy Bell", with musical accompaniment from Max Mathews. Coincidentally, Arthur C. Clarke was visiting his friend and colleague John Pierce at the Bell Labs Murray Hill facility. Clarke was so impressed by the demonstration that he used it in the climactic scene of his screenplay for his novel 2001: A Space Odyssey, where the HAL 9000 computer sings the same song as astronaut Dave Bowman puts it to slee