In computing, a web application or web app is a client–server computer program in which the client runs in a web browser. Common web applications include webmail, online retail sales, and online auctions; the general distinction between a dynamic web page of any kind and a "web application" is unclear. Web sites most likely to be referred to as "web applications" are those which have similar functionality to a desktop software application or to a mobile app. HTML5 introduced explicit language support for making applications that are loaded as web pages but can store data locally and continue to function while offline. Single-page applications are more application-like because they reject the more typical web paradigm of moving between distinct pages with different URLs. Single-page frameworks like Sencha Touch and AngularJS might be used to speed development of such a web app for a mobile platform. There are several ways of targeting mobile devices when making a web application: responsive web design can be used to make a web application, whether a conventional website or a single-page application, viewable on small screens and working well with touchscreens.
Progressive Web Apps are web applications that load like regular web pages or websites but can offer the user functionality such as working offline, push notifications, and device hardware access traditionally available only to native mobile applications. Native apps or "mobile apps" run directly on a mobile device, just as a conventional software application runs directly on a desktop computer, without a web browser. Frameworks like React Native, Flutter and FuseTools allow the development of native apps for all platforms using languages other than each platform's standard native language. Hybrid apps embed a mobile web site inside a native app using a hybrid framework like Apache Cordova, Ionic, or Appcelerator Titanium; this allows development using web technologies while retaining certain advantages of native apps. In earlier computing models like client–server, the processing load for the application was shared between code on the server and code installed on each client locally. In other words, an application had its own pre-compiled client program which served as its user interface and had to be separately installed on each user's personal computer.
Representational state transfer
Representational State Transfer (REST) is a software architectural style that defines a set of constraints to be used for creating Web services. Web services that conform to the REST architectural style, termed RESTful Web services, provide interoperability between computer systems on the Internet. RESTful Web services allow the requesting systems to access and manipulate textual representations of Web resources by using a uniform and predefined set of stateless operations. Other kinds of Web services, such as SOAP Web services, expose their own arbitrary sets of operations. "Web resources" were first defined on the World Wide Web as documents or files identified by their URLs. However, today they have a much more generic and abstract definition that encompasses every thing or entity that can be identified, addressed, or handled, in any way whatsoever, on the Web. In a RESTful Web service, requests made to a resource's URI will elicit a response with a payload formatted in HTML, XML, JSON, or some other format.
The response can confirm that some alteration has been made to the stored resource, and it can provide hypertext links to other related resources or collections of resources. When HTTP is used, as is most common, the operations available are GET, HEAD, POST, PUT, PATCH, DELETE, CONNECT, OPTIONS, and TRACE. By using a stateless protocol and standard operations, RESTful systems aim for fast performance and the ability to grow, by re-using components that can be managed and updated without affecting the system as a whole while it is running. The term representational state transfer was introduced and defined in 2000 by Roy Fielding in his doctoral dissertation. Fielding's dissertation explained the REST principles that had been known as the "HTTP object model" beginning in 1994 and were used in designing the HTTP 1.1 and Uniform Resource Identifiers standards. The term is intended to evoke an image of how a well-designed Web application behaves: it is a network of Web resources where the user progresses through the application by selecting resource identifiers such as http://www.example.com/articles/21 and resource operations such as GET or POST, resulting in the next resource's representation being transferred to the end user for their use.
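As a minimal sketch of these uniform operations, the following uses Python's third-party requests library against the article URI mentioned above; the endpoint is illustrative rather than a real API, and the payload field names are invented for the example.

import requests

# GET retrieves a representation of the resource (safe and idempotent).
response = requests.get("http://www.example.com/articles/21")
print(response.status_code)                  # e.g. 200
print(response.headers.get("Content-Type"))  # e.g. application/json

# PUT replaces the resource's state with a new representation (idempotent);
# the field names here are invented for illustration.
requests.put("http://www.example.com/articles/21",
             json={"title": "REST", "body": "Updated text"})

# DELETE removes the resource.
requests.delete("http://www.example.com/articles/21")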
Roy Fielding defined REST in his 2000 PhD dissertation "Architectural Styles and the Design of Network-based Software Architectures" at UC Irvine. He developed the REST architectural style in parallel with HTTP 1.1 of 1996–1999, based on the existing design of HTTP 1.0 of 1996. The constraints of the REST architectural style affect architectural properties such as performance in component interactions, which can be the dominant factor in user-perceived performance and network efficiency; Fielding also describes REST's effect on scalability. Six guiding constraints define a RESTful system; these constraints restrict the ways that the server can process and respond to client requests so that, by operating within them, the system gains desirable non-functional properties, such as performance, simplicity, visibility, and reliability. If a system violates any of the required constraints, it cannot be considered RESTful.
The formal REST constraints are as follows. The principle behind the client–server constraint is the separation of concerns: separating the user interface concerns from the data storage concerns improves the portability of the user interface across multiple platforms, and it improves scalability by simplifying the server components. Most significant to the Web, however, is that the separation allows the components to evolve independently, thus supporting the Internet-scale requirement of multiple organizational domains. Client–server communication is further constrained by no client context being stored on the server between requests: each request from any client contains all the information necessary to service the request, and session state is held in the client. The session state can be transferred by the server to another service, such as a database, to maintain a persistent state for a period and allow authentication. The client begins sending requests when it is ready to make the transition to a new state. While one or more requests are outstanding, the client is considered to be in transition.
The representation of each application state contains links that can be used the next time the client chooses to initiate a new state-transition. As on the World Wide Web, clients and intermediaries can cache responses. Responses must therefore, implicitly or explicitly, define themselves as cacheable or not, to prevent clients from getting stale or inappropriate data in response to further requests. Well-managed caching partially or completely eliminates some client–server interactions, further improving scalability and performance. A client cannot ordinarily tell whether it is connected directly to the end server or to an intermediary along the way. Intermediary servers can improve system scalability by enabling load balancing and by providing shared caches, and they can also enforce security policies. Servers can temporarily extend or customize the functionality of a client by transferring executable code, for example client-side scripts such as JavaScript.
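To make the statelessness and cacheability constraints concrete, here is a small sketch; the endpoint, token, and header values are invented for illustration.

import requests

# Statelessness: every request carries all the context the server needs
# (here an illustrative bearer token), so no session state lives on the server.
response = requests.get(
    "http://www.example.com/articles/21",
    headers={"Authorization": "Bearer <token>"},  # hypothetical credential
)

# Cacheability: the response labels itself cacheable (or not), so clients
# and intermediaries know whether they may reuse it for later requests.
print(response.headers.get("Cache-Control"))  # e.g. "public, max-age=3600"
print(response.headers.get("ETag"))           # validator for conditional requests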
Hypertext Transfer Protocol
The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed, collaborative, hypermedia information systems. HTTP is the foundation of data communication for the World Wide Web, where hypertext documents include hyperlinks to other resources that the user can access, for example by a mouse click or by tapping the screen in a web browser. HTTP was developed to facilitate the World Wide Web. Development of HTTP was initiated by Tim Berners-Lee at CERN in 1989. Development of HTTP standards was coordinated by the Internet Engineering Task Force and the World Wide Web Consortium, culminating in the publication of a series of Requests for Comments (RFCs). The first definition of HTTP/1.1, the version of HTTP in common use, occurred in RFC 2068 in 1997, although this was made obsolete by RFC 2616 in 1999 and again by the RFC 7230 family of RFCs in 2014. A later version, the successor HTTP/2, was standardized in 2015 and is now supported by major web servers and browsers over Transport Layer Security using the Application-Layer Protocol Negotiation extension, where TLS 1.2 or newer is required.
HTTP functions as a request–response protocol in the client–server computing model. A web browser, for example, may be the client, and an application running on a computer hosting a website may be the server. The client submits an HTTP request message to the server. The server, which provides resources such as HTML files and other content, or performs other functions on behalf of the client, returns a response message to the client; the response contains completion status information about the request and may contain requested content in its message body. A web browser is an example of a user agent. Other types of user agent include the indexing software used by search providers, voice browsers, mobile apps, and other software that accesses, consumes, or displays web content. HTTP is designed to permit intermediate network elements to improve or enable communications between clients and servers. High-traffic websites benefit from web cache servers that deliver content on behalf of upstream servers to improve response time.
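A minimal sketch of one such request–response exchange, using Python's standard http.client module; the host and path are illustrative.

import http.client

# Open a TCP connection to the origin server and issue a single request;
# http.client adds the mandatory Host header automatically.
conn = http.client.HTTPConnection("www.example.com")
conn.request("GET", "/index.html")

# The response carries a status line, headers, and the message body.
resp = conn.getresponse()
print(resp.status, resp.reason)        # e.g. 200 OK
print(resp.getheader("Content-Type"))  # e.g. text/html; charset=UTF-8
body = resp.read()                     # the requested representation
conn.close()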
Web browsers cache accessed web resources and reuse them, when possible, to reduce network traffic. HTTP proxy servers at private network boundaries can facilitate communication for clients without a globally routable address, by relaying messages with external servers. HTTP is an application layer protocol designed within the framework of the Internet protocol suite. Its definition presumes an underlying and reliable transport layer protocol, and the Transmission Control Protocol (TCP) is commonly used. However, HTTP can be adapted to use unreliable protocols such as the User Datagram Protocol (UDP), for example in HTTPU and the Simple Service Discovery Protocol. HTTP resources are identified and located on the network by Uniform Resource Locators (URLs), using the Uniform Resource Identifier (URI) schemes http and https. URIs and hyperlinks in HTML documents form interlinked hypertext documents. HTTP/1.1 is a revision of the original HTTP. In HTTP/1.0 a separate connection to the same server is made for every resource request. HTTP/1.1 can reuse a connection multiple times, to download images, stylesheets, and other resources after the page has been delivered.
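A sketch of this connection reuse, again with http.client: HTTP/1.1 connections are persistent by default, so one TCP connection can carry several requests in sequence, avoiding the overhead of a new TCP handshake for each one. The host and paths are illustrative.

import http.client

conn = http.client.HTTPConnection("www.example.com")

# Fetch several resources over the same persistent HTTP/1.1 connection.
for path in ("/index.html", "/style.css", "/logo.png"):
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read()  # drain the body so the connection can be reused
    print(path, resp.status)

conn.close()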
HTTP/1.1 communications therefore experience less latency, as the establishment of TCP connections presents considerable overhead. The term hypertext was coined by Ted Nelson in 1965 in the Xanadu Project, which was in turn inspired by Vannevar Bush's 1930s vision of the microfilm-based information retrieval and management "memex" system described in his 1945 essay "As We May Think". Tim Berners-Lee and his team at CERN are credited with inventing the original HTTP, along with HTML and the associated technology for a web server and a text-based web browser. Berners-Lee first proposed the "WorldWideWeb" project in 1989; it is now known as the World Wide Web. The first version of the protocol had only one method, namely GET, which would request a page from a server. The response from the server was always an HTML page. The first documented version of HTTP was HTTP V0.9. Dave Raggett led the HTTP Working Group (HTTP WG) in 1995 and wanted to expand the protocol with extended operations, extended negotiation, and richer meta-information, tied to a security protocol that became more efficient by adding additional methods and header fields.
RFC 1945 officially introduced and recognized HTTP V1.0 in 1996. The HTTP WG planned to publish new standards in December 1995, and support for pre-standard HTTP/1.1, based on the developing RFC 2068, was adopted by the major browser developers in early 1996. By March that year, pre-standard HTTP/1.1 was supported in Arena, Netscape 2.0, Netscape Navigator Gold 2.01, Mosaic 2.7, Lynx 2.5, and in Internet Explorer 2.0. End-user adoption of the new browsers was rapid. In March 1996, one web hosting company reported that over 40% of browsers in use on the Internet were HTTP/1.1 compliant. That same web hosting company reported that by June 1996, 65% of all browsers accessing their servers were HTTP/1.1 compliant. The HTTP/1.1 standard as defined in RFC 2068 was released in January 1997. Improvements and updates to the HTTP/1.1 standard were released under RFC 2616 in June 1999. In 2007, the HTTPbis Working Group was formed, in part, to revise and clarify the HTTP/1.1 specification. In June 2014, the WG released an updated six-part specification obsoleting RFC 2616:
RFC 7230, HTTP/1.1: Message Syntax and Routing
RFC 7231, HTTP/1.1: Semantics and Content
RFC 7232, HTTP/1.1: Conditional Requests
RFC 7233, HTTP/1.1: Range Requests
RFC 7234, HTTP/1.1: Caching
RFC 7235, HTTP/1.1: Authentication
A web server is server software, or hardware dedicated to running said software, that can satisfy World Wide Web client requests. A web server can, in general, contain one or more websites. A web server processes incoming network requests over HTTP and several other related protocols. The primary function of a web server is to store and deliver web pages to clients. The communication between client and server takes place using the Hypertext Transfer Protocol (HTTP). Pages delivered are most frequently HTML documents, which may include images, style sheets, and scripts in addition to the text content. A user agent, commonly a web browser or web crawler, initiates communication by making a request for a specific resource using HTTP, and the server responds with the content of that resource or an error message if unable to do so. The resource is typically a real file on the server's secondary storage, but this is not necessarily the case and depends on how the web server is implemented. While the primary function is to serve content, a full implementation of HTTP also includes ways of receiving content from clients.
This feature is used for submitting web forms, including the uploading of files. Many generic web servers also support server-side scripting using Active Server Pages (ASP), PHP, or other scripting languages; this means that the behaviour of the web server can be scripted in separate files, while the actual server software remains unchanged. This function is used to generate HTML documents dynamically, as opposed to returning static documents; the former is primarily used for retrieving or modifying information from databases. The latter is typically much faster and more easily cached, but cannot deliver dynamic content. Web servers can also be found embedded in devices such as printers and routers, serving only a local network; the web server may then be used as a part of a system for monitoring or administering the device in question. This means that no additional software has to be installed on the client computer, since only a web browser is required. In March 1989 Sir Tim Berners-Lee proposed a new project to his employer CERN, with the goal of easing the exchange of information between scientists by using a hypertext system.
The project resulted in Berners-Lee writing two programs in 1990: a Web browser called WorldWideWeb, and the world's first web server, known as CERN httpd, which ran on NeXTSTEP. Between 1991 and 1994, the simplicity and effectiveness of early technologies used to surf and exchange data through the World Wide Web helped to port them to many different operating systems and spread their use among scientific organizations and universities, and subsequently to the industry. In 1994 Berners-Lee decided to constitute the World Wide Web Consortium (W3C) to regulate the further development of the many technologies involved through a standardization process. Web servers are able to map the path component of a Uniform Resource Locator (URL) into a local file system resource (for static requests) or an internal or external program name (for dynamic requests). For a static request, the URL path specified by the client is relative to the web server's root directory. Consider the following URL as it would be requested by a client over HTTP:

http://www.example.com/path/file.html

The client's user agent will translate it into a connection to www.example.com with the following HTTP/1.1 request:

GET /path/file.html HTTP/1.1
Host: www.example.com

The web server on www.example.com will append the given path to the path of its root directory.
On an Apache server, this is commonly /home/www. The result is the local file system resource /home/www/path/file.html. The web server then reads the file, if it exists, and sends a response to the client's web browser. The response will describe the content of the file and contain the file itself, or an error message will be returned saying that the file does not exist or is unavailable. A web server can either be incorporated into the OS kernel or run in user space like other regular applications. Web servers that run in user mode have to ask the system for permission to use more memory or more CPU resources. Not only do these requests to the kernel take time, but they are not always satisfied because the system reserves resources for its own usage and has the responsibility to share hardware resources with all the other running applications. Executing in user mode can also mean useless buffer copies, which are another handicap for user-mode web servers. A web server has defined load limits, because it can handle only a limited number of concurrent client connections per IP address and it can serve only a certain maximum number of requests per second, depending on: its own settings, the HTTP request type, whether the content is static or dynamic, whether the content is cached, and the hardware and software limitations of the OS of the computer on which the web server runs.
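As a minimal sketch of this path mapping, Python's built-in http.server module resolves request paths against a chosen document root; the directory and port below are illustrative.

from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve files relative to an illustrative document root: a request for
# /path/file.html is mapped onto /home/www/path/file.html, and a missing
# file produces a 404 error response instead of the file's contents.
handler = partial(SimpleHTTPRequestHandler, directory="/home/www")
HTTPServer(("", 8080), handler).serve_forever()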
When a web server is near to or over its limit, it becomes unresponsive. At any time, web servers can be overloaded due to, for example: excess legitimate web traffic, with thousands or millions of clients connecting to the web site in a short interval (e.g. the Slashdot effect); or a denial-of-service attack or distributed denial-of-service attack, an attempt to make a computer or network resource unavailable to its intended users.
In computer science, inter-process communication or interprocess communication (IPC) refers to the mechanisms an operating system provides to allow processes to manage shared data. Applications can use IPC, categorized as clients and servers, where the client requests data and the server responds to client requests. Many applications are both clients and servers, as commonly seen in distributed computing. Methods for doing IPC are divided into categories which vary based on software requirements, such as performance and modularity requirements, and system circumstances, such as network bandwidth and latency. IPC is important to the design process for microkernels and nanokernels. Microkernels reduce the number of functionalities provided by the kernel; those functionalities are then obtained by communicating with servers via IPC, drastically increasing the amount of IPC compared with a regular monolithic kernel. Depending on the solution, an IPC mechanism may provide synchronization or leave it up to processes and threads to communicate amongst themselves.
While synchronization conveys some information, it is not an information-passing communication mechanism per se. Examples of synchronization primitives are the semaphore, spinlock, barrier, and mutual exclusion (mutex) lock. Remote procedure call mechanisms that build on IPC include Java's Remote Method Invocation, ONC RPC, XML-RPC or SOAP, JSON-RPC, Message Bus, and .NET Remoting.
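As a minimal sketch of one classic IPC mechanism, the following uses an anonymous pipe between a parent and a child process; it assumes a Unix-like system, since os.fork is not available on Windows.

import os

# Create a unidirectional pipe: data written to write_fd is read from read_fd.
read_fd, write_fd = os.pipe()
pid = os.fork()

if pid == 0:                          # child process: send a message and exit
    os.close(read_fd)
    os.write(write_fd, b"hello from the child process")
    os.close(write_fd)
    os._exit(0)
else:                                 # parent process: receive the message
    os.close(write_fd)
    message = os.read(read_fd, 1024)
    os.close(read_fd)
    os.waitpid(pid, 0)
    print(message.decode())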
Google LLC is an American multinational technology company that specializes in Internet-related services and products, which include online advertising technologies, a search engine, cloud computing, software, and hardware. It is considered one of the Big Four technology companies, alongside Amazon, Apple, and Facebook. Google was founded in 1998 by Larry Page and Sergey Brin while they were Ph.D. students at Stanford University in California. Together they own about 14 percent of its shares and control 56 percent of the stockholder voting power through supervoting stock. They incorporated Google as a privately held company on September 4, 1998. An initial public offering took place on August 19, 2004, and Google moved to its headquarters in Mountain View, California, nicknamed the Googleplex. In August 2015, Google announced plans to reorganize its various interests as a conglomerate called Alphabet Inc. Google is Alphabet's leading subsidiary and will continue to be the umbrella company for Alphabet's Internet interests. Sundar Pichai was appointed CEO of Google, replacing Larry Page, who became the CEO of Alphabet.
The company's rapid growth since incorporation has triggered a chain of products and partnerships beyond Google's core search engine. It offers services designed for work and productivity, email and time management, cloud storage, instant messaging and video chat, language translation and navigation, video sharing, note-taking, and photo organizing and editing; the company also leads the development of the Android mobile operating system, the Google Chrome web browser, and Chrome OS, a lightweight operating system based on the Chrome browser. Google has also moved into hardware and has experimented with becoming an Internet carrier. Google.com is the most visited website in the world. Several other Google services figure in the top 100 most visited websites, including YouTube and Blogger. Google was the most valuable brand in the world as of 2017, but has received significant criticism involving issues such as privacy concerns, tax avoidance, antitrust, and search neutrality. Google's mission statement is "to organize the world's information and make it universally accessible and useful".
The company's unofficial slogan "Don't be evil" was removed from the company's code of conduct around May 2018. Google began in January 1996 as a research project by Larry Page and Sergey Brin when they were both PhD students at Stanford University in Stanford, California. While conventional search engines ranked results by counting how many times the search terms appeared on the page, the two theorized about a better system that analyzed the relationships among websites; they called this new technology PageRank. Page and Brin originally nicknamed their new search engine "BackRub", because the system checked backlinks to estimate the importance of a site; they later changed the name to Google. The domain name for Google was registered on September 15, 1997, and the company was incorporated on September 4, 1998. It was based in the garage of a friend in Menlo Park, California. Craig Silverstein, a fellow PhD student at Stanford, was hired as the first employee. Google was funded by an August 1998 contribution of $100,000 from Andy Bechtolsheim, co-founder of Sun Microsystems.
Google received money from three other angel investors in 1998: Amazon.com founder Jeff Bezos, Stanford University computer science professor David Cheriton, and entrepreneur Ram Shriram. Between these initial investors and family, Google raised around $1 million, which allowed them to open their original shop in Menlo Park, California. After some additional small investments through the end of 1998 and into early 1999, a new $25 million round of funding was announced on June 7, 1999, with major investors including the venture capital firms Kleiner Perkins and Sequoia Capital. In March 1999, the company moved its offices to Palo Alto, home to several prominent Silicon Valley technology start-ups. The next year, Google began selling advertisements associated with search keywords, against Page and Brin's initial opposition toward an advertising-funded search engine. To maintain an uncluttered page design, advertisements were text-based. In June 2000, it was announced that Google would become the default search engine provider for Yahoo!, one of the most popular websites at the time, replacing Inktomi.
In 2003, after outgrowing two other locations, the company leased an office complex from Silicon Graphics (SGI), at 1600 Amphitheatre Parkway in Mountain View, California. The complex became known as the Googleplex, a play on the word googolplex, the number one followed by a googol of zeroes. Three years later, Google bought the property from SGI for $319 million. By that time, the name "Google" had found its way into everyday language, causing the verb "google" to be added to major English dictionaries.
Douglas Crockford first popularized the JSON format. The acronym originated at State Software, a company co-founded by Crockford and others in March 2001. The co-founders agreed to build a system that used standard browser capabilities and provided an abstraction layer for Web developers to create stateful Web applications that had a persistent duplex connection to a Web server, by holding two HTTP connections open and recycling them before standard browser time-outs if no further data were exchanged. The co-founders had a round-table discussion and voted on whether to call the data format JSML or JSON, as well as under what license type to make it available. Crockford, inspired by the words of President Bush, came up with the "evil-doers" JSON license in order to open-source the JSON libraries while forcing corporate lawyers, or those who are overly pedantic, to seek to pay for a license from State. Chip Morningstar developed the idea for the State Application Framework at State Software.
String: a sequence of zero or more Unicode characters. Strings are delimited with double quotation marks and support a backslash escaping syntax.
Boolean: either of the values true or false.
Array: an ordered list of zero or more values, each of which may be of any type. Arrays use square bracket notation and elements are comma-separated.
Object: an unordered collection of name–value pairs where the names (also called keys) are strings. Since objects are intended to represent associative arrays, it is recommended, though not required, that each key is unique within an object. Objects are delimited with curly brackets and use commas to separate each pair, while within each pair the colon ':' character separates the key from its value.
Null: an empty value, using the word null.
Limited whitespace is allowed and ignored around or between syntactic elements. Only four specific characters are considered whitespace for this purpose: space, horizontal tab, line feed, and carriage return. In particular, the byte order mark must not be generated by a conforming implementation, though it may be accepted when parsing JSON.
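A short illustrative JSON document exercising each of these types, parsed here with Python's standard json module; all field names and values are invented for the example.

import json

# An invented document containing one value of each JSON type:
# string, number, boolean, array, object, and null.
text = '''
{
  "name": "JSON example",
  "version": 1.0,
  "stable": true,
  "tags": ["data", "interchange"],
  "author": {"first": "Jane", "last": "Doe"},
  "deprecated": null
}
'''

data = json.loads(text)          # parse JSON text into Python objects
print(data["tags"][0])           # -> data
print(data["author"]["first"])   # -> Jane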