Hypertext Transfer Protocol
The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed hypermedia information systems. HTTP is the foundation of data communication for the World Wide Web, where hypertext documents include hyperlinks to other resources that the user can access, for example by a mouse click or by tapping the screen in a web browser. Development of HTTP was initiated by Tim Berners-Lee at CERN in 1989. Development of HTTP standards was coordinated by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C), culminating in the publication of a series of Requests for Comments (RFCs). The first definition of HTTP/1.1, the version of HTTP in common use, appeared in RFC 2068 in 1997, although this was made obsolete by RFC 2616 in 1999 and again by the RFC 7230 family of RFCs in 2014. Its successor, HTTP/2, was standardized in 2015 and is now supported by major web servers and browsers over Transport Layer Security (TLS) using the Application-Layer Protocol Negotiation (ALPN) extension, where TLS 1.2 or newer is required.
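As a concrete illustration of that last step, the sketch below offers HTTP/2 alongside HTTP/1.1 during the TLS handshake using Node.js's built-in tls module; the host name is a placeholder, and which protocol is reported depends entirely on the server.

```typescript
// Minimal sketch: offering HTTP/2 ("h2") via TLS ALPN in Node.js.
// ALPN lets the client list the protocols it supports inside the TLS
// handshake; the server picks one, so no extra round trip is needed.
import * as tls from "node:tls";

const socket = tls.connect(
  {
    host: "example.com",                 // placeholder host
    port: 443,
    ALPNProtocols: ["h2", "http/1.1"],   // offered in preference order
    servername: "example.com",           // SNI, required by most servers
  },
  () => {
    // "h2" here means the server agreed to speak HTTP/2 on this connection.
    console.log("negotiated:", socket.alpnProtocol);
    socket.end();
  }
);
socket.on("error", (err) => console.error(err.message));
```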
HTTP functions as a request–response protocol in the client–server computing model. A web browser, for example, may be the client, and an application running on a computer hosting a website may be the server. The client submits an HTTP request message to the server. The server, which provides resources such as HTML files and other content or performs other functions on behalf of the client, returns a response message to the client; the response contains completion status information about the request and may contain the requested content in its message body. A web browser is an example of a user agent. Other types of user agent include the indexing software used by search providers, voice browsers, mobile apps, and other software that accesses, consumes, or displays web content. HTTP is designed to permit intermediate network elements to improve or enable communications between clients and servers. High-traffic websites often benefit from web cache servers that deliver content on behalf of upstream servers to improve response time.
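Both the request and the response are plain-text messages: a start line, header fields, a blank line, and an optional body. The following is a minimal sketch of one such exchange over a raw TCP socket in Node.js/TypeScript; example.com stands in for any HTTP server.

```typescript
// Minimal sketch of an HTTP/1.1 request-response exchange over TCP.
import * as net from "node:net";

const socket = net.connect(80, "example.com", () => {
  // Request message: request line, header fields, blank line (no body).
  socket.write(
    "GET /index.html HTTP/1.1\r\n" +
    "Host: example.com\r\n" +
    "Connection: close\r\n" +
    "\r\n"
  );
});

// Response message: status line (e.g. "HTTP/1.1 200 OK"), headers, body.
socket.on("data", (chunk) => process.stdout.write(chunk.toString()));
socket.on("end", () => console.log("\n-- server closed the connection --"));
```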
Web browsers cache accessed web resources and reuse them, when possible, to reduce network traffic. HTTP proxy servers at private network boundaries can facilitate communication for clients without a globally routable address by relaying messages with external servers. HTTP is an application layer protocol designed within the framework of the Internet protocol suite. Its definition presumes an underlying and reliable transport layer protocol, and the Transmission Control Protocol (TCP) is commonly used. However, HTTP can be adapted to use unreliable protocols such as the User Datagram Protocol (UDP), for example in HTTPU and the Simple Service Discovery Protocol (SSDP). HTTP resources are identified and located on the network by Uniform Resource Locators (URLs), using the Uniform Resource Identifier (URI) schemes http and https. URIs and hyperlinks in HTML documents form interlinked hypertext documents. HTTP/1.1 is a revision of the original HTTP. In HTTP/1.0 a separate connection to the same server is made for every resource request; HTTP/1.1 can reuse a connection multiple times, to download images, stylesheets, and other resources after the page has been delivered. HTTP/1.1 communications therefore experience less latency, as the establishment of TCP connections presents considerable overhead.
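A sketch of that reuse with Node.js's http module follows; the keep-alive agent is standard API, while the host and paths are placeholders.

```typescript
// Sketch: two sequential HTTP/1.1 requests sharing one TCP connection.
import * as http from "node:http";

// keepAlive tells the agent to hold the socket open between requests;
// maxSockets: 1 makes the reuse easy to observe.
const agent = new http.Agent({ keepAlive: true, maxSockets: 1 });

function get(path: string): Promise<void> {
  return new Promise((resolve, reject) => {
    http.get({ host: "example.com", path, agent }, (res) => {
      res.resume(); // drain the body so the socket is released for reuse
      res.on("end", () => {
        console.log(path, "->", res.statusCode);
        resolve();
      });
    }).on("error", reject);
  });
}

async function main() {
  await get("/index.html");  // opens the TCP connection
  await get("/style.css");   // expected to reuse the same connection
  agent.destroy();           // close the idle socket when done
}
main();
```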
The term hypertext was coined by Ted Nelson in 1965 in the Xanadu Project, which was in turn inspired by Vannevar Bush's 1930s vision of the microfilm-based information retrieval and management "memex" system, described in his 1945 essay "As We May Think". Tim Berners-Lee and his team at CERN are credited with inventing the original HTTP, along with HTML and the associated technology for a web server and a text-based web browser. Berners-Lee first proposed the "WorldWideWeb" project, now known as the World Wide Web, in 1989. The first version of the protocol had only one method, namely GET, which would request a page from a server; the response from the server was always an HTML page. The first documented version of HTTP was HTTP V0.9. Dave Raggett led the HTTP Working Group in 1995 and wanted to expand the protocol with extended operations, extended negotiation, and richer meta-information, tied with a security protocol and made more efficient by adding additional methods and header fields.
RFC 1945 introduced and recognized HTTP V1.0 in 1996. The HTTP WG planned to publish new standards in December 1995, and support for pre-standard HTTP/1.1 based on the developing RFC 2068 was adopted by the major browser developers in early 1996. By March of that year, pre-standard HTTP/1.1 was supported in Arena, Netscape 2.0, Netscape Navigator Gold 2.01, Mosaic 2.7, Lynx 2.5, and Internet Explorer 2.0. End-user adoption of the new browsers was rapid. In March 1996, one web hosting company reported that over 40% of browsers in use on the Internet were HTTP/1.1 compliant; the same company reported that by June 1996, 65% of all browsers accessing their servers were HTTP/1.1 compliant. The HTTP/1.1 standard as defined in RFC 2068 was released in January 1997. Improvements and updates to the HTTP/1.1 standard were released under RFC 2616 in June 1999. In 2007, the HTTPbis Working Group was formed, in part, to revise and clarify the HTTP/1.1 specification. In June 2014, the WG released an updated six-part specification obsoleting RFC 2616:
RFC 7230, HTTP/1.1: Message Syntax and Routing
RFC 7231, HTTP/1.1: Semantics and Content
RFC 7232, HTTP/1.1: Conditional Requests
RFC 7233, HTTP/1.1: Range Requests
RFC 7234, HTTP/1.1: Caching
RFC 7235, HTTP/1.1: Authentication
ActivityPub
ActivityPub is an open, decentralized social networking protocol based on Pump.io's ActivityPump protocol. It provides a client-to-server API for creating and deleting content, as well as a federated server-to-server API for delivering notifications and content. ActivityPub is an Internet standard developed in the Social Web Working Group of the World Wide Web Consortium. At an earlier stage the protocol was named "ActivityPump", but it was felt that ActivityPub better indicated the cross-publishing purpose of the protocol; it learned from the experiences with the older OStatus standard. In January 2018, the World Wide Web Consortium published the ActivityPub standard as a Recommendation. Former Diaspora community manager Sean Tilley wrote an article suggesting that ActivityPub protocols may provide a way to federate Internet platforms. Mastodon, a social networking software, implemented ActivityPub in version 1.6, released on 10 September 2017. ActivityPub is intended to offer more security for private messages than the previous OStatus protocol does.
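To make the client-to-server half of the protocol concrete, the sketch below posts a new Note to an actor's outbox as a Create activity. The actor, outbox URL, and bearer token are hypothetical, and ActivityPub itself leaves the client authentication mechanism open.

```typescript
// Sketch: ActivityPub client-to-server publishing (hypothetical server).
const activity = {
  "@context": "https://www.w3.org/ns/activitystreams",
  type: "Create",
  actor: "https://social.example/users/alice",
  object: {
    type: "Note",
    content: "Hello, fediverse!",
    to: ["https://www.w3.org/ns/activitystreams#Public"],
  },
};

const res = await fetch("https://social.example/users/alice/outbox", {
  method: "POST",
  headers: {
    "Content-Type":
      'application/ld+json; profile="https://www.w3.org/ns/activitystreams"',
    Authorization: "Bearer <token>", // hypothetical credential
  },
  body: JSON.stringify(activity),
});

// A compliant server replies 201 Created with the new activity's id in the
// Location header, then delivers it server-to-server to addressed inboxes.
console.log(res.status, res.headers.get("Location"));
```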
Server and platform implementations of ActivityPub include:
Pleroma, a social networking software.
Misskey, a social networking software.
Hubzilla, a community CMS software platform that uses Zot; ActivityPub support was added in version 2.8 with a plugin.
Nextcloud, a federated service for file hosting.
PeerTube, a federated service for video streaming.
Pixelfed, a federated service for image sharing.
Friendica, a social networking software, which implemented ActivityPub in version 2019.01.
Osada, a social networking software implementing Zot and ActivityPub.
The following are client-based implementations of ActivityPub:
dokieli, a client-side editor using Web Annotation and ActivityPub.
Go-fed, a library that implements ActivityPub in Go.
The following are server-based implementations of ActivityPub:
microblog.pub, a self-hosted, single-user microblog implementation of a basic ActivityPub server, under development.
Distbin, a distributed pastebin service implementing ActivityPub.
See also: Micropub, comparison of software and protocols for distributed social networking, comparison of microblogging services, Fediverse.
Responsive web design
Responsive web design (RWD) is an approach to web design that makes web pages render well on a variety of devices and window or screen sizes. Recent work also considers viewer proximity as part of the viewing context, as an extension of RWD. Content and performance are necessary across all devices to ensure usability and satisfaction, and responsive design has become more important as mobile traffic now accounts for more than half of total internet traffic. A site designed with RWD adapts the layout to the viewing environment by using fluid, proportion-based grids, flexible images, and CSS3 media queries (an extension of the @media rule), in the following ways:
The fluid grid concept calls for page element sizing to be in relative units like percentages, rather than absolute units like pixels or points.
Flexible images are also sized in relative units, so as to prevent them from displaying outside their containing element.
Media queries allow the page to use different CSS style rules based on characteristics of the device the site is being displayed on, e.g. the width of the rendering surface, as in the sketch below.
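The same conditions that select CSS rules can also be observed from script through the standard window.matchMedia API. Below is a minimal browser-side sketch; the 600px breakpoint and the narrow class name are arbitrary example values.

```typescript
// Sketch: reacting to a media query from script (runs in a browser).
const query: MediaQueryList = window.matchMedia("(max-width: 600px)");

function applyLayout(matches: boolean): void {
  // Toggle a class so that narrow-screen rules in the stylesheet apply.
  document.body.classList.toggle("narrow", matches);
}

applyLayout(query.matches);                          // initial state
query.addEventListener("change", (e) => applyLayout(e.matches));
```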
Google recommends responsive design for smartphone websites over other approaches. Although many publishers are starting to implement responsive designs, one ongoing challenge for RWD is that some banner advertisements and videos are not fluid. However, search advertising and display advertising support specific device platform targeting and different advertisement size formats for desktop and basic mobile devices. Different landing page URLs can be used for different platforms, or Ajax can be used to display different advertisement variants on a page. CSS tables permit hybrid fixed-plus-fluid layouts. There are now many ways of validating and testing RWD designs, ranging from mobile site validators and mobile emulators to simultaneous testing tools like Adobe Edge Inspect. The Chrome and Safari browsers and the Chrome console offer responsive design viewport resizing tools, as do third parties. Use cases of RWD will now expand further with increased mobile usage. The first site to feature a layout that adapts to browser viewport width was Audi.com, launched in late 2001 and created by a team at Razorfish consisting of Jürgen Spangl, Jim Kalbach, Ken Olling, and Jan Hoffmann.
Limited browser capabilities meant that for Internet Explorer the layout could adapt dynamically in the browser, whereas for Netscape the page had to be reloaded from the server when resized. Cameron Adams created a demonstration in 2004 that is still online. By 2008, a number of related terms such as "flexible", "liquid", "fluid", and "elastic" were being used to describe layouts. CSS3 media queries were ready for prime time in late 2008/early 2009. Ethan Marcotte coined the term responsive web design, defining it as the combination of fluid grids, flexible images, and media queries, in a May 2010 article in A List Apart. He described the theory and practice of responsive web design in his brief 2011 book titled Responsive Web Design. Responsive design was listed as #2 in Top Web Design Trends for 2012 by .net magazine, after progressive enhancement at #1.
Open standard
An open standard is a standard that is publicly available and has various rights to use associated with it; it may also have various properties of how it was designed. There is no single definition, and interpretations vary with usage; the terms open and standard have a wide range of meanings associated with their usage. There are a number of definitions of open standards which emphasize different aspects of openness, including the openness of the resulting specification, the openness of the drafting process, and the ownership of rights in the standard. The term "standard" is sometimes restricted to technologies approved by formalized committees that are open to participation by all interested parties and operate on a consensus basis. The definitions of the term open standard used by academics, the European Union, and some of its member governments or parliaments such as Denmark and Spain preclude open standards from requiring fees for use, as do the New Zealand, South African, and Venezuelan governments. On the standards organisation side, the World Wide Web Consortium ensures that its specifications can be implemented on a royalty-free basis.
Many definitions of the term standard permit patent holders to impose "reasonable and non-discriminatory" licensing royalty fees and other licensing terms on implementers or users of the standard. For example, the rules for standards published by the major internationally recognized standards bodies such as the Internet Engineering Task Force (IETF), the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and ITU-T permit their standards to contain specifications whose implementation will require payment of patent licensing fees. Among these organizations, only the IETF and ITU-T explicitly refer to their standards as "open standards", while the others refer only to producing "standards"; the IETF and ITU-T use definitions of "open standard" that allow "reasonable and non-discriminatory" patent licensing fee requirements. There are those in the open-source software community who hold that an "open standard" is only open if it can be adopted and extended. While open standards or architectures are considered non-proprietary in the sense that the standard is either unowned or owned by a collective body, they can still be publicly shared and not tightly guarded.
The typical example of "open source" that has become a standard is the personal computer originated by IBM and now referred to as Wintel, the combination of the Microsoft operating system and Intel microprocessor. Three others that are most accepted as "open" are the GSM phones, the Open Group, which promotes UNIX and the like, and the Internet Engineering Task Force, which created the first standards of SMTP and TCP/IP. Buyers tend to prefer open standards, which they believe offer them cheaper products and more choice for access, due to network effects and increased competition between vendors. Open standards which specify formats are sometimes referred to as open formats. Many specifications that are sometimes referred to as standards are proprietary, and available only under restrictive contract terms from the organization that owns the copyright on the specification; as such, these specifications are not considered to be open. Joel West has argued that "open" standards are not black and white but have many different levels of "openness".
A standard needs to be open enough that it will become adopted and accepted in the market, but still closed enough that firms can get a return on their investment in developing the technology around the standard. A more open standard tends to occur when knowledge of the technology becomes dispersed enough that competition increases and others are able to start copying the technology as they implement it; this occurred with the Wintel architecture. Less open standards exist when a particular firm has much power over the standard, which can occur when a firm's platform "wins" in standard setting or the market makes one platform most popular. On August 12, 2012, the Institute of Electrical and Electronics Engineers (IEEE), the Internet Society, the World Wide Web Consortium, the Internet Engineering Task Force, and the Internet Architecture Board jointly affirmed a set of principles which have contributed to the exponential growth of the Internet and related technologies. These "OpenStand Principles" establish the building blocks for innovation.
Standards developed using the OpenStand principles are developed through an open, participatory process, support interoperability, foster global competition, are voluntarily adopted on a global level, and serve as building blocks for products and services targeted to meet the needs of markets and consumers. This drives innovation which, in turn, contributes to the creation of new markets and the growth and expansion of existing markets. There are five key OpenStand Principles, as outlined below:
1. Cooperation: respectful cooperation between standards organizations, whereby each respects the autonomy, integrity and intellectual property rules of the others.
2. Adherence to Principles: adherence to the five fundamental principles of standards development, namely:
Due process: decisions are made with equity and fairness among participants. No one party guides standards development. Standards processes are transparent and opportunities exist to appeal decisions. Processes for periodic standards review and updating are well defined.
Broad consensus: processes allow for all views to be considered and addressed, such that agreement can be found across a range of interests.
Transparency
Platform for Privacy Preferences Project
The Platform for Privacy Preferences Project (P3P) is an obsolete protocol allowing websites to declare their intended use of information they collect about web browser users. Designed to give users more control of their personal information when browsing, P3P was developed by the World Wide Web Consortium and recommended on April 16, 2002. Development ceased shortly thereafter and there have been few implementations of P3P. Microsoft Internet Explorer and Edge were the only major browsers to support P3P, and Microsoft ended support from Windows 10 onwards. The president of TRUSTe has stated that P3P has not been implemented due to its difficulty and lack of value. As the World Wide Web became a genuine medium in which to sell products and services, electronic commerce websites tried to collect more information about the people who purchased their merchandise. Some companies used controversial practices such as tracker cookies to ascertain the users' demographic information and buying habits, using this information to provide targeted advertisements.
Users who saw this as an invasion of privacy would sometimes turn off HTTP cookies or use proxy servers to keep their personal information secure. P3P is designed to give users more precise control over the kind of information that they allow to be released. According to the W3C, the main goal of P3P "is to increase user trust and confidence in the Web through technical empowerment." P3P is a machine-readable language that helps to express a website's data management practices, and it manages information through privacy policies. When a website uses P3P, it sets up a set of policies that state its intended uses of personal information that may be gathered from site visitors. When a user decides to use P3P, the user sets his or her own set of policies stating what personal information may be seen by the sites he or she visits. When the user visits a site, P3P compares what personal information the user is willing to release with what information the server wants to collect; if the two do not match, P3P informs the user and asks whether he or she is willing to proceed to the site and risk giving up more personal information. This comparison step is sketched below.
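Reduced to its essentials, that comparison is a set check, sketched below with simplified data categories. Real P3P policies are XML documents covering purposes, recipients, and retention, so the types and names here are illustrative only.

```typescript
// Sketch: the P3P preference-versus-policy comparison, greatly simplified.
type DataCategory =
  | "physical-contact"
  | "online-contact"
  | "demographic"
  | "purchase";

// What the user's preferences allow (illustrative values).
const userAllows = new Set<DataCategory>(["online-contact", "purchase"]);

// What the site's policy declares it collects (illustrative values).
const siteRequests: DataCategory[] = ["online-contact", "demographic"];

// Anything requested but not allowed is a mismatch.
const mismatches = siteRequests.filter((c) => !userAllows.has(c));

if (mismatches.length > 0) {
  // A P3P user agent would warn here and ask whether to proceed anyway.
  console.warn("Site requests data beyond your preferences:", mismatches);
} else {
  console.log("Site policy is within your stated preferences.");
}
```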
Internet Explorer provides the ability to display P3P privacy policies and to compare a site's P3P policy with the browser's settings to decide whether or not to allow cookies from that site. However, the P3P functionality in Internet Explorer extends only to cookie blocking and will not alert the user to an entire web site that violates active privacy preferences. Microsoft considers the feature deprecated in its browsers and removed P3P support on Windows 10. Mozilla supported some P3P features for a few years, but all P3P-related source code was removed by 2007. The Privacy Finder service was created by Carnegie Mellon's Usable Privacy and Security Laboratory. It is a publicly available "P3P-enabled search engine": a user can enter a search term along with their stated privacy preferences and is presented with a list of search results ordered by whether the sites comply with those preferences. This works by crawling the web and maintaining a P3P cache for every site that appears in a search query.