World Wide Web
The World Wide Web, commonly known as the Web, is an information space in which documents and other web resources are identified by Uniform Resource Locators (URLs), may be interlinked by hypertext, and are accessible over the Internet. The resources of the Web may be accessed by users through a software application called a web browser. The English scientist Tim Berners-Lee invented the World Wide Web in 1989 and wrote the first web browser in 1990 while employed at CERN near Geneva, Switzerland. The browser was released outside CERN in 1991, first to other research institutions starting in January 1991 and then to the general public in August 1991. The World Wide Web has been central to the development of the Information Age and is the primary tool billions of people use to interact on the Internet. Web resources may be any type of downloadable media, but web pages are hypertext documents formatted in Hypertext Markup Language (HTML); such formatting allows for embedded hyperlinks that contain URLs and permit users to navigate to other web resources.
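As a concrete illustration of the hyperlink mechanism just described, the sketch below extracts the URLs embedded in an HTML fragment. It uses only the Python standard library; the page content is illustrative.

    # A minimal sketch: pull the URLs out of HTML hyperlinks (<a href="...">).
    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":                    # an HTML hyperlink element
                for name, value in attrs:
                    if name == "href":        # the embedded URL
                        self.links.append(value)

    page = '<p>Visit the <a href="http://info.cern.ch/">first website</a>.</p>'
    parser = LinkExtractor()
    parser.feed(page)
    print(parser.links)                       # ['http://info.cern.ch/']

A browser performs essentially this step, among many others, to know where each link leads.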
In addition to text, web pages may contain images, video, and software components that are rendered in the user's web browser as coherent pages of multimedia content. Multiple web resources with a common theme, a common domain name, or both make up a website. Websites are stored on computers running a program called a web server, which responds to requests made over the Internet from web browsers running on users' computers. Website content can be provided by a publisher or contributed interactively, where users add content or the content depends upon the users or their actions. Websites may be provided for myriad informative, commercial, governmental, or non-governmental reasons. Tim Berners-Lee's vision of a global hyperlinked information system became a possibility by the second half of the 1980s. By 1985, the global Internet began to proliferate in Europe and the Domain Name System came into being. In 1988 the first direct IP connection between Europe and North America was made, and Berners-Lee began to discuss the possibility of a web-like system at CERN.
While working at CERN, Berners-Lee became frustrated with the inefficiencies and difficulties posed by finding information stored on different computers. On March 12, 1989, he submitted a memorandum titled "Information Management: A Proposal" to the management at CERN for a system called "Mesh". It referenced ENQUIRE, a database and software project he had built in 1980 that used the term "web", and described a more elaborate information management system based on links embedded in text: "Imagine the references in this document all being associated with the network address of the thing to which they referred, so that while reading this document, you could skip to them with a click of the mouse." Such a system, he explained, could be referred to using one of the existing meanings of the word hypertext, a term that he says was coined in the 1950s. There is no reason, the proposal continues, why such hypertext links could not encompass multimedia documents including graphics and video, and Berners-Lee goes on to use the term hypermedia.
With help from his colleague and fellow hypertext enthusiast Robert Cailliau, he published a more formal proposal on 12 November 1990 to build a "Hypertext project" called "WorldWideWeb" as a "web" of "hypertext documents" to be viewed by "browsers" using a client–server architecture. At this point HTML and HTTP had been in development for about two months, and the first web server was about a month from completing its first successful test. The proposal estimated that a read-only web would be developed within three months and that it would take six months to achieve "the creation of new links and new material by readers, authorship becomes universal" as well as "the automatic notification of a reader when new material of interest to him/her has become available". While the read-only goal was met, accessible authorship of web content took longer to mature, with the wiki concept, WebDAV, Web 2.0 and RSS/Atom. The proposal was modelled after the SGML reader DynaText by Electronic Book Technologies, a spin-off from the Institute for Research in Information and Scholarship at Brown University.
The DynaText system, licensed by CERN, was a key player in the extension of SGML ISO 8879:1986 to hypermedia within HyTime, but it was considered too expensive and had an inappropriate licensing policy for use in the general high energy physics community, namely a fee for each document and each document alteration. A NeXT Computer was used by Berners-Lee as the world's first web server and to write the first web browser, WorldWideWeb, in 1990. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the first web browser and the first web server. The first web site, which described the project itself, was published on 20 December 1990. The first web page may be lost, but Paul Jones of UNC-Chapel Hill in North Carolina announced in May 2013 that Berners-Lee had given him what he says is the oldest known web page during a 1991 visit to UNC; Jones stored it on his NeXT computer. On 6 August 1991, Berners-Lee published a short summary of the World Wide Web project on the newsgroup alt.hypertext.
This date is sometimes confused with the public availability of the first web servers, which had occurred months earlier. As another example of such confusion, several news media reported that the first photo on the Web was published by Berners-Lee in 1992, an image of the CERN house band Les Horribles Cernettes taken by Silvano de Gennaro.
Computer cluster
A computer cluster is a set of loosely or tightly connected computers that work together so that, in many respects, they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software. The components of a cluster are connected to each other through fast local area networks, with each node running its own instance of an operating system. In most circumstances, all of the nodes use the same hardware and the same operating system, although in some setups different operating systems or different hardware can be used on each computer. Clusters are deployed to improve performance and availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability. Computer clusters emerged as a result of the convergence of a number of computing trends, including the availability of low-cost microprocessors, high-speed networks, and software for high-performance distributed computing.
They have a wide range of applicability and deployment, ranging from small business clusters with a handful of nodes to some of the fastest supercomputers in the world, such as IBM's Sequoia. Prior to the advent of clusters, single-unit fault-tolerant mainframes with modular redundancy were employed. In contrast to high-reliability mainframes, clusters are cheaper to scale out, but have increased complexity in error handling, as in clusters error modes are not opaque to running programs. The desire to get more computing power and better reliability by orchestrating a number of low-cost commercial off-the-shelf computers has given rise to a variety of architectures and configurations. The computer clustering approach connects a number of readily available computing nodes via a fast local area network. The activities of the computing nodes are orchestrated by "clustering middleware", a software layer that sits atop the nodes and allows the users to treat the cluster as, by and large, one cohesive computing unit, e.g. via a single system image concept.
Computer clustering relies on a centralized management approach which makes the nodes available as orchestrated shared servers. It is distinct from other approaches, such as peer-to-peer or grid computing, which also use many nodes but with a far more distributed nature. A computer cluster may be a simple two-node system which just connects two personal computers, or may be a very fast supercomputer. A basic approach to building a cluster is that of a Beowulf cluster, which may be built with a few personal computers to produce a cost-effective alternative to traditional high-performance computing. An early project that showed the viability of the concept was the 133-node Stone Soupercomputer; the developers used Linux, the Parallel Virtual Machine toolkit and the Message Passing Interface library to achieve high performance at a low cost. Although a cluster may consist of just a few personal computers connected by a simple network, the cluster architecture may also be used to achieve very high levels of performance.
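To make the message-passing style concrete, here is a minimal sketch of an MPI program using the mpi4py bindings (assumed installed) to the Message Passing Interface library mentioned above; each process computes a partial sum and the results are combined on one node.

    # Run with e.g.: mpirun -n 4 python partial_sum.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD        # communicator spanning all launched processes
    rank = comm.Get_rank()       # this process's index within the job
    size = comm.Get_size()       # total number of cooperating processes

    # Each process sums a disjoint slice of the range 0..999.
    partial = sum(range(rank, 1000, size))

    # Combine the partial sums on the root process (rank 0).
    total = comm.reduce(partial, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"{size} processes computed total = {total}")

The same program runs unchanged whether the processes live on one machine or are spread across the nodes of a cluster; that transparency is what clustering middleware provides.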
The TOP500 organization's semiannual list of the 500 fastest supercomputers often includes many clusters; e.g. the world's fastest machine in 2011 was the K computer, which has a distributed-memory cluster architecture. Greg Pfister has stated that clusters were not invented by any specific vendor but by customers who could not fit all their work on one computer, or needed a backup; Pfister estimates the date as some time in the 1960s. The formal engineering basis of cluster computing as a means of doing parallel work of any sort was arguably invented by Gene Amdahl of IBM, who in 1967 published what has come to be regarded as the seminal paper on parallel processing: Amdahl's Law. The history of early computer clusters is more or less directly tied into the history of early networks, as one of the primary motivations for the development of a network was to link computing resources, creating a de facto computer cluster.
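Amdahl's Law, mentioned above, is usually stated in modern notation as follows: if p is the fraction of a workload that can be parallelized and N is the number of processors, the overall speedup S is bounded by

    S(N) = \frac{1}{(1 - p) + p/N},
    \qquad
    \lim_{N \to \infty} S(N) = \frac{1}{1 - p}

so, for example, a program that is 95% parallelizable can never run more than 20 times faster, no matter how many nodes a cluster has.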
The first production system designed as a cluster was the Burroughs B5700 in the mid-1960s. It allowed up to four computers, each with either one or two processors, to be coupled to a common disk storage subsystem in order to distribute the workload. Unlike standard multiprocessor systems, each computer could be restarted without disrupting overall operation. The first commercial loosely coupled clustering product was Datapoint Corporation's "Attached Resource Computer" (ARC) system, developed in 1977, which used ARCnet as the cluster interface. Clustering per se did not take off until Digital Equipment Corporation released their VAXcluster product in 1984 for the VAX/VMS operating system. The ARC and VAXcluster products not only supported parallel computing, but also shared file systems and peripheral devices. The idea was to provide the advantages of parallel processing while maintaining data reliability and uniqueness. Two other noteworthy early commercial clusters were the Tandem Himalayan and the IBM S/390 Parallel Sysplex. Within the same time frame, while computer clusters used parallelism outside the computer on a commodity network, supercomputers began to use parallelism within the same computer.
Following the success of the CDC 6600 in 1964, the Cray 1 was delivered in 1976 and introduced internal parallelism via vector processing. While early supercomputers excluded clusters and relied on shared memory, in time some of the fastest supercomputers (e.g. the K computer) relied on cluster architectures.
Web browser
A web browser is a software application for accessing information on the World Wide Web. Each individual web page, image, or video is identified by a distinct Uniform Resource Locator (URL), enabling browsers to retrieve these resources from a web server and display them on the user's device. A web browser is not the same thing as a search engine, though the two are often confused. For a user, a search engine is just a website, such as google.com, that stores searchable data about other websites. But to connect to a website's server and display its web pages, a user needs a web browser installed on their device; the most popular browsers are Chrome, Safari, Internet Explorer, and Edge. The first web browser, called WorldWideWeb, was invented in 1990 by Sir Tim Berners-Lee. He then recruited Nicola Pellow to write the Line Mode Browser, which displayed web pages on dumb terminals. 1993 was a landmark year with the release of Mosaic, credited as "the world's first popular browser". Its innovative graphical interface made the World Wide Web system easy to use and thus more accessible to the average person.
This, in turn, sparked the Internet boom of the 1990s, when the Web grew at a rapid rate. Marc Andreessen, the leader of the Mosaic team, soon started his own company, which released the Mosaic-influenced Netscape Navigator in 1994; Navigator became the most popular browser. Microsoft debuted Internet Explorer in 1995 and was able to gain a dominant position for two reasons: it bundled Internet Explorer with its popular Microsoft Windows operating system, and it did so as freeware with no restrictions on usage. The market share of Internet Explorer peaked at over 95% in 2002. In 1998, desperate to remain competitive, Netscape launched what would become the Mozilla Foundation to create a new browser using the open-source software model. This work evolved into Firefox, first released by Mozilla in 2004; Firefox reached a 28% market share in 2011. Apple released its Safari browser in 2003; it remains the dominant browser on Apple platforms. The last major entrant to the browser market was Google: its Chrome browser, which debuted in 2008, has been a huge success.
Once a web page has been retrieved, the browser's rendering engine displays it on the user's device; this includes video formats supported by the browser. Web pages contain hyperlinks to other pages and resources. Each link contains a URL, and when it is clicked, the browser navigates to the new resource; thus the process of bringing content to the user begins again. To implement all of this, modern browsers are a combination of numerous software components. Web browsers can be configured with a built-in menu. Depending on the browser, the menu may be named Options or Preferences, and it contains different types of settings. For example, users can change their home page and default search engine, and they can change default web page colors and fonts. Various network connectivity and privacy settings are also usually available. During the course of browsing, cookies received from various websites are stored by the browser. Some of them contain login credentials or site preferences. However, others are used for tracking user behavior over long periods of time, so browsers provide settings for removing cookies when exiting the browser.
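The sketch below shows the client side of this cookie mechanism using only the Python standard library; the URL is illustrative, and whether any cookies appear depends entirely on what the server sends.

    import urllib.request
    from http.cookiejar import CookieJar

    jar = CookieJar()                                 # in-memory cookie store
    opener = urllib.request.build_opener(
        urllib.request.HTTPCookieProcessor(jar))      # stores/attaches cookies

    opener.open("http://www.example.com/")            # server may set cookies
    for cookie in jar:                                # inspect what was stored
        print(cookie.name, cookie.value, cookie.expires)

Discarding the jar at the end of a session corresponds to the "remove cookies on exit" setting described above.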
Finer-grained management of cookies requires a browser extension. The most popular browsers have a number of features in common: they allow users to browse in a private mode, they can be customized with extensions, and some of them provide a sync service. Most browsers have these user interface features:
- The ability to open multiple pages at the same time, either in different browser windows or in different tabs of the same window.
- Back and forward buttons to go back to the previous page or forward to the next one.
- A refresh or reload button to reload the current page.
- A stop button to cancel loading the page.
- A home button to return to the user's home page.
- An address bar to input the URL of a page and display it.
- A search bar to input terms into a search engine.
There are also niche browsers with distinct features. One example is text-only browsers, which can benefit people with slow Internet connections or with visual impairments.
Database
A database is an organized collection of data, stored and accessed electronically from a computer system. Where databases are more complex, they are often developed using formal design and modeling techniques. The database management system (DBMS) is the software that interacts with end users and the database itself to capture and analyze the data. The DBMS software additionally encompasses the core facilities provided to administer the database. The sum total of the database, the DBMS and the associated applications can be referred to as a "database system". The term "database" is often used loosely to refer to any of the DBMS, the database system or an application associated with the database. Computer scientists may classify database-management systems according to the database models that they support. Relational databases became dominant in the 1980s; these model data as rows and columns in a series of tables, and the vast majority use SQL for writing and querying data. In the 2000s, non-relational databases became popular, collectively referred to as NoSQL because they use different query languages.
Formally, a "database" refers to the way it is organized. Access to this data is provided by a "database management system" consisting of an integrated set of computer software that allows users to interact with one or more databases and provides access to all of the data contained in the database; the DBMS provides various functions that allow entry and retrieval of large quantities of information and provides ways to manage how that information is organized. Because of the close relationship between them, the term "database" is used casually to refer to both a database and the DBMS used to manipulate it. Outside the world of professional information technology, the term database is used to refer to any collection of related data as size and usage requirements necessitate use of a database management system. Existing DBMSs provide various functions that allow management of a database and its data which can be classified into four main functional groups: Data definition – Creation and removal of definitions that define the organization of the data.
- Update – Insertion and deletion of the actual data.
- Retrieval – Providing information in a form directly usable or for further processing by other applications; the retrieved data may be made available in a form essentially the same as it is stored in the database or in a new form obtained by altering or combining existing data from the database.
- Administration – Registering and monitoring users, enforcing data security, monitoring performance, maintaining data integrity, dealing with concurrency control, and recovering information that has been corrupted by some event such as an unexpected system failure.
Both a database and its DBMS conform to the principles of a particular database model; "database system" refers collectively to the database model, database management system, and database. Physically, database servers are dedicated computers that hold the actual databases and run only the DBMS and related software. Database servers are usually multiprocessor computers, with generous memory and RAID disk arrays used for stable storage.
RAID is used for recovery of data if any of the disks fail. Hardware database accelerators, connected to one or more servers via a high-speed channel, are also used in large-volume transaction processing environments. DBMSs are found at the heart of most database applications. DBMSs may be built around a custom multitasking kernel with built-in networking support, but modern DBMSs typically rely on a standard operating system to provide these functions. Since DBMSs comprise a significant market, computer and storage vendors often take into account DBMS requirements in their own development plans. Databases and DBMSs can be categorized according to the database model that they support, the type of computer they run on, the query language used to access the database, and their internal engineering, which affects performance, scalability and security. The sizes and performance of databases and their respective DBMSs have grown by orders of magnitude. These performance increases were enabled by technology progress in the areas of processors, computer memory, computer storage and computer networks.
The development of database technology can be divided into three eras based on data model or structure: navigational, SQL/relational, and post-relational. The two main early navigational data models were the hierarchical model and the CODASYL model. The relational model, first proposed in 1970 by Edgar F. Codd, departed from this tradition by insisting that applications should search for data by content, rather than by following links. The relational model employs sets of ledger-style tables, each used for a different type of entity. Only in the mid-1980s did computing hardware become powerful enough to allow the wide deployment of relational systems. By the early 1990s, relational systems dominated in all large-scale data processing applications, and as of 2018 they remain dominant: IBM DB2, Oracle, MySQL and Microsoft SQL Server are the most searched DBMSs. The dominant database language, standardised SQL for the relational model, has influenced database languages for other data models. Object databases were developed in the 1980s to overcome the inconvenience of object-relational impedance mismatch, which led to the coining of the term "post-relational" and the development of hybrid object-relational databases.
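A minimal sketch of the relational, content-based style of access described above, using Python's built-in SQLite module; it also walks through the DBMS functional groups listed earlier (definition, update, retrieval). The table and column names are illustrative.

    import sqlite3

    con = sqlite3.connect(":memory:")      # a throwaway in-memory database
    cur = con.cursor()

    # Data definition: declare the organization of the data.
    cur.execute(
        "CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT, year INTEGER)")

    # Update: insert (and possibly delete) the actual data.
    cur.execute("INSERT INTO books (title, year) VALUES (?, ?)",
                ("Weaving the Web", 1999))
    con.commit()

    # Retrieval: search by content, not by following links.
    for row in cur.execute("SELECT title FROM books WHERE year < 2000"):
        print(row)

    con.close()

Note that the query names the data it wants (books published before 2000) rather than a storage location to follow, which is exactly the departure Codd's relational model introduced.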
Email client
An email client, email reader or, more formally, mail user agent is a computer program used to access and manage a user's email. A web application which provides message management and reception functions may also act as an email client, and "email client" may refer to a piece of computer hardware or software whose primary or most visible role is to work as an email client. Like most client programs, an email client is only active when a user runs it. The most common arrangement is for an email user to make an arrangement with a remote Mail Transfer Agent (MTA) server for the receipt and storage of the client's emails. The MTA, using a suitable mail delivery agent, adds email messages to a client's storage as they arrive. The remote mail storage is referred to as the user's mailbox. The default setting on many Unix systems is for the mail server to store formatted messages in mbox, within the user's HOME directory. Of course, users of the system can log in and run a mail client on the same computer that hosts their mailboxes. Emails are stored in the user's mailbox on the remote server until the user's email client requests them to be downloaded to the user's computer, or can otherwise access the user's mailbox on the remote server.
The email client can be set up to connect to multiple mailboxes at the same time and to request the download of emails either automatically, such as at pre-set intervals, or on manual request by the user. A user's mailbox can be accessed in two dedicated ways. The Post Office Protocol (POP) allows the user to download messages one at a time and only deletes them from the server after they have been saved on local storage. It is possible to leave messages on the server to permit another client to access them; however, there is no provision for flagging a specific message as seen, answered, or forwarded, so POP is not convenient for users who access the same mail from different machines. Alternatively, the Internet Message Access Protocol (IMAP) allows users to keep messages on the server, flagging them as appropriate. IMAP provides folders and sub-folders, which can be shared among different users with different access rights; the Sent and Trash folders are created by default. IMAP also features an idle extension for real-time updates, providing faster notification than polling, where long-lasting connections are feasible.
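The contrast between the two protocols can be sketched with the Python standard library; the host names, user name and password are placeholders, and both connections assume the servers accept implicit TLS.

    import poplib
    import imaplib

    # POP3: download messages one at a time; deletion is an explicit step.
    pop = poplib.POP3_SSL("pop.example.com")
    pop.user("alice")
    pop.pass_("secret")
    count, size = pop.stat()                  # message count and mailbox size
    if count:
        response, lines, octets = pop.retr(1) # fetch message 1, which stays
        # on the server until pop.dele(1) is issued
    pop.quit()

    # IMAP: messages stay on the server and carry flags such as \Seen.
    imap = imaplib.IMAP4_SSL("imap.example.com")
    imap.login("alice", "secret")
    imap.select("INBOX")                      # folders are a core IMAP concept
    imap.store("1", "+FLAGS", "\\Seen")       # flag message 1 as seen
    imap.logout()

The \Seen flag set here is visible to every other client that opens the same mailbox, which is precisely what POP cannot offer.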
In addition, the mailbox storage can be accessed directly by programs running on the server or via shared disks. Direct access is less portable as it depends on the mailbox format. Email clients contain user interfaces to display and edit text; some applications also permit the use of a program-external editor. The email clients will perform formatting according to RFC 5322 for headers and body, and MIME for non-textual content and attachments. Headers include the destination fields (To, Cc and Bcc) and the originator fields: From, which identifies the message's author; Sender, in case there are several authors; and Reply-To, in case responses should be addressed to a different mailbox. To better assist the user with destination fields, many clients maintain one or more address books and/or are able to connect to an LDAP directory server. For originator fields, clients may support different identities. Client settings require the user's real name and email address for each identity, and possibly a list of LDAP servers.
When a user wishes to create and send an email, the email client handles the task. The email client is usually set up automatically to connect to the user's mail server, typically either an MSA or an MTA, two variations of the SMTP protocol. The email client which uses the SMTP protocol creates an authentication extension, which the mail server uses to authenticate the sender. This method eases nomadic computing. The older method was for the mail server to recognize the client's IP address, e.g. because the client is on the same machine and uses the internal address 127.0.0.1, or because the client's IP address is controlled by the same Internet service provider that provides both Internet access and mail services. Client settings require the name or IP address of the preferred outgoing mail server, the port number (25 for an MTA, 587 for an MSA), and the user name and password for the authentication, if any. There is a non-standard port 465 for SSL-encrypted SMTP sessions, which many clients and servers support for backward compatibility. With no encryption, much like for postcards, email activity is plainly visible to any occasional eavesdropper.
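A minimal sketch of authenticated submission as described above, using Python's standard library; the server name, credentials and addresses are placeholders. It uses port 465 (implicit TLS); the standard MSA alternative is port 587 with STARTTLS.

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@example.com"
    msg["To"] = "bob@example.org"
    msg["Subject"] = "Hello"
    msg.set_content("Sent over an authenticated, encrypted SMTP session.")

    # SMTP_SSL opens the session encrypted from the first byte.
    with smtplib.SMTP_SSL("mail.example.com", 465) as smtp:
        smtp.login("alice", "secret")     # the authentication step
        smtp.send_message(msg)

Note that this encrypts only the first hop, from the client to its configured outgoing server, a limitation the next paragraph returns to.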
Email encryption enables privacy to be safeguarded by encrypting the mail sessions, the body of the message, or both. Without it, anyone with network access and the right tools can monitor email and obtain login passwords. Examples of concern include government censorship and surveillance, and fellow wireless network users, such as at an Internet cafe. All relevant email protocols have an option to encrypt the whole session, to prevent a user's name and password from being sniffed; these options are suggested for nomadic users and whenever the Internet access provider is not trusted. When sending mail, users can only control encryption at the first hop, from a client to its configured outgoing mail server. At any further hop, messages may be transmitted with or without encryption, depending on the general configuration of the transmitting server and the capabilities of the receiving one. Encrypted mail sessions deliver messages in their original format, i.e. plain text or encrypted body.
Web server
A web server is server software, or hardware dedicated to running said software, that can satisfy World Wide Web client requests. A web server can, in general, contain one or more websites, and it processes incoming network requests over HTTP and several other related protocols. The primary function of a web server is to store and deliver web pages to clients. The communication between client and server takes place using the Hypertext Transfer Protocol (HTTP). Pages delivered are most frequently HTML documents, which may include images, style sheets and scripts in addition to the text content. A user agent, commonly a web browser or web crawler, initiates communication by making a request for a specific resource using HTTP, and the server responds with the content of that resource or an error message if unable to do so. Typically the resource is a real file on the server's secondary storage, but this is not necessarily the case and depends on how the web server is implemented. While the primary function is to serve content, a full implementation of HTTP also includes ways of receiving content from clients.
This feature is used for submitting web forms, including the uploading of files. Many generic web servers also support server-side scripting using Active Server Pages, PHP, or other scripting languages. This means that the behaviour of the web server can be scripted in separate files, while the actual server software remains unchanged. This function is used to generate HTML documents dynamically, as opposed to returning static documents. Dynamic generation is primarily used for retrieving or modifying information from databases; returning static documents is much faster and more easily cached, but cannot deliver dynamic content. Web servers can also be found embedded in devices such as printers and routers, serving only a local network. The web server may then be used as a part of a system for monitoring or administering the device in question; this means that no additional software has to be installed on the client computer, since only a web browser is required. In March 1989 Sir Tim Berners-Lee proposed a new project to his employer CERN, with the goal of easing the exchange of information between scientists by using a hypertext system.
The project resulted in Berners-Lee writing two programs in 1990: a web browser called WorldWideWeb, and the world's first web server, known as CERN httpd, which ran on NeXTSTEP. Between 1991 and 1994, the simplicity and effectiveness of early technologies used to surf and exchange data through the World Wide Web helped to port them to many different operating systems and spread their use among scientific organizations and universities, and subsequently to industry. In 1994 Berners-Lee decided to constitute the World Wide Web Consortium to regulate the further development of the many technologies involved through a standardization process. Web servers are able to map the path component of a Uniform Resource Locator (URL) into a local file system resource (for static requests) or an internal or external program name (for dynamic requests). For a static request, the URL path specified by the client is relative to the web server's root directory. Consider the following URL as it would be requested by a client over HTTP:

    http://www.example.com/path/file.html

The client's user agent will translate it into a connection to www.example.com with the following HTTP 1.1 request:

    GET /path/file.html HTTP/1.1
    Host: www.example.com

The web server on www.example.com will append the given path to the path of its root directory.
On an Apache server, this is commonly /home/www. The result is the local file system resource /home/www/path/file.html. The web server then reads the file, if it exists, and sends a response to the client's web browser. The response will describe the content of the file and contain the file itself, or an error message will be returned saying that the file does not exist or is unavailable. A web server can be either incorporated into the operating system kernel or run in user space, like other regular applications. Web servers that run in user mode have to ask the system for permission to use more memory or more CPU resources. Not only do these requests to the kernel take time, but they are not always satisfied, because the system reserves resources for its own usage and has the responsibility to share hardware resources with all the other running applications. Executing in user mode can also mean useless buffer copies, which are another handicap for user-mode web servers. A web server has defined load limits, because it can handle only a limited number of concurrent client connections per IP address and it can serve only a certain maximum number of requests per second, depending on: its own settings, the HTTP request type, whether the content is static or dynamic, whether the content is cached, and the hardware and software limitations of the OS of the computer on which the web server runs.
When a web server is near to or over its limits, it becomes unresponsive. At any time, web servers can be overloaded due to excess legitimate web traffic, such as thousands or millions of clients connecting to the web site in a short interval (e.g. the Slashdot effect), or due to a denial-of-service or distributed denial-of-service attack, an attempt to make a computer or network resource unavailable to its intended users.
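Returning to the URL-to-path mapping described earlier, here is a minimal sketch in Python; the document root is illustrative, and the check against escaping the root directory is a standard precaution added for completeness, not taken from the source text.

    import os

    DOCUMENT_ROOT = "/home/www"

    def map_url_path(url_path: str) -> str:
        # Append the request path to the server's root directory...
        candidate = os.path.normpath(
            os.path.join(DOCUMENT_ROOT, url_path.lstrip("/")))
        # ...and refuse paths such as "/../etc/passwd" that would escape it.
        if candidate != DOCUMENT_ROOT and \
           not candidate.startswith(DOCUMENT_ROOT + os.sep):
            raise PermissionError(url_path)
        return candidate

    print(map_url_path("/path/file.html"))    # /home/www/path/file.html

A real server would go on to check that the file exists and is readable, and to construct the HTTP response around it.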
Wikimedia Foundation
The Wikimedia Foundation, Inc. is an American non-profit and charitable organization headquartered in San Francisco, California. It is best known for participating in the Wikimedia movement and for hosting sites like Wikipedia. The foundation was founded in 2003 by Jimmy Wales as a way to fund Wikipedia and its sibling projects through non-profit means. As of 2017, the foundation employs over 300 people, with annual revenues in excess of US$109.9 million. María Sefidari is chair of the board, and Katherine Maher has been the executive director since March 2016. The Wikimedia Foundation has the stated goal of developing and maintaining open-content, wiki-based projects and providing the full contents of those projects to the public free of charge. Another main objective of the Wikimedia Foundation is political advocacy. The Wikimedia Foundation was granted section 501(c)(3) status under the U.S. Internal Revenue Code as a public charity in 2005; its National Taxonomy of Exempt Entities code is B60. The foundation's by-laws declare a statement of purpose of collecting and developing educational content and disseminating it effectively and globally.
In 2001, Jimmy Wales, an Internet entrepreneur, and Larry Sanger, an online community organizer and philosophy professor, founded Wikipedia as an Internet encyclopedia to supplement Nupedia. The project was funded by Bomis, Jimmy Wales's for-profit business. As Wikipedia's popularity increased, revenues to fund the project stalled. Since Wikipedia was depleting Bomis's resources, Wales and Sanger thought of a charity model to fund the project. The Wikimedia Foundation was incorporated in Florida on June 20, 2003. It applied to the United States Patent and Trademark Office to trademark Wikipedia on September 14, 2004; the mark was granted registration status on January 10, 2006. Trademark protection was accorded by Japan on December 16, 2004, and, in the European Union, on January 20, 2005. There were plans to license the use of the Wikipedia trademark for some products, such as books or DVDs. The name "Wikimedia", a compound of wiki and media, was coined by American author Sheldon Rampton in a post to the English Wikipedia mailing list in March 2003, three months after Wiktionary became the second wiki-based project hosted on Wales's platform.
In April 2005, the U.S. Internal Revenue Service approved the foundation as an educational foundation in the category "Adult, Continuing education", meaning all contributions to the foundation are tax-deductible for U.S. federal income tax purposes. On December 11, 2006, the foundation's board noted that the corporation could not become the membership organization originally planned but never implemented, due to an inability to meet the registration requirements of Florida statutory law. Accordingly, the by-laws were amended to remove all reference to membership activities; the decision to change the by-laws was passed by the board unanimously. On September 25, 2007, the foundation's board gave notice that operations would be moving to the San Francisco Bay Area. Major considerations cited for choosing San Francisco were proximity to like-minded organizations and potential partners, a better talent pool, and cheaper and more convenient international travel than was available from St. Petersburg, Florida.
The move from Florida was completed by 31 January 2008, with the headquarters on Stillman Street in San Francisco. In 2009, the Wikimedia Foundation's headquarters moved to New Montgomery Street. Lila Tretikov was appointed executive director of the Wikimedia Foundation in May 2014; she resigned in March 2016. Former chief communications officer Katherine Maher was appointed interim executive director, a position made permanent in June 2016. In October 2017, the headquarters moved to One Montgomery Tower. Content on most Wikimedia Foundation websites is licensed for redistribution under version 3.0 of the Creative Commons Attribution-ShareAlike license. This content is sourced from contributing volunteers and from resources with few or no copyright restrictions, such as copyleft material and works in the public domain. In addition to Wikipedia, the foundation operates eleven other wikis that follow the free content model, with their main goal being the dissemination of knowledge. Several additional projects exist to provide infrastructure for, or coordination of, the free knowledge projects.
For instance, Outreach gives guidelines for best practices on encouraging the use of Wikimedia sites. Wikimedia movement affiliates are independent, but formally recognized, groups of people intended to work together to support and contribute to the Wikimedia movement. The Wikimedia Foundation's Board of Trustees has approved three active models for movement affiliates: chapters, thematic organizations and user groups. Movement affiliates are intended to organize and engage in activities that support and contribute to the Wikimedia movement, such as regional conferences, edit-a-thons, public relations, public policy advocacy, GLAM engagement and Wikimania. Recognition of a chapter or thematic organization is approved by the foundation's board, based on recommendations made by an Affiliations Committee composed of Wikimedia community volunteers; the Affiliations Committee itself approves the recognition of individual user groups.
While movement affiliates are formally recognized by the Wikimedia Foundation, they are independent of it, with no legal control of, nor responsibility for, the Wikimedia projects. The foundation began recognizing chapters in 2004. In 2010, development of additional affiliation models began, and in 2012 the foundation approved the two further models, thematic organizations and user groups.