The public domain consists of all the creative works to which no exclusive intellectual property rights apply. Those rights may have been forfeited, expressly waived, or may be inapplicable. The works of William Shakespeare and Beethoven, and most early silent films, are in the public domain either because they were created before copyright existed or because their copyright term has expired. Some works are not covered by copyright at all and are therefore in the public domain, among them the formulae of Newtonian physics, cooking recipes, and all computer software created prior to 1974. Other works are dedicated by their authors to the public domain. The term public domain is not applied to situations where the creator of a work retains residual rights, in which case use of the work is referred to as "under license" or "with permission". As rights vary by country and jurisdiction, a work may be subject to rights in one country and be in the public domain in another. Some rights depend on registrations on a country-by-country basis; the absence of registration in a particular country, if required, gives rise to public-domain status for a work in that country.
The term public domain may be interchangeably used with other imprecise or undefined terms such as the "public sphere" or "commons", including concepts such as the "commons of the mind", the "intellectual commons", and the "information commons". Although the term "domain" did not come into use until the mid-18th century, the concept "can be traced back to the ancient Roman Law, as a preset system included in the property right system." The Romans had a large proprietary rights system in which they defined "many things that cannot be owned" as res nullius, res communes, res publicae, and res universitatis. The term res nullius was defined as things not yet appropriated, while the term res communes was defined as "things that could be enjoyed by mankind, such as air and ocean." The term res publicae referred to things that were shared by all citizens, and the term res universitatis meant things that were owned by the municipalities of Rome. From a historical perspective, one could say the construction of the idea of "public domain" sprouted from the concepts of res communes, res publicae, and res universitatis in early Roman law.
When the first copyright law was established in Britain with the Statute of Anne in 1710, the term public domain did not appear. However, similar concepts were developed by French jurists in the 18th century. Instead of "public domain", they used terms such as publici juris or propriété publique to describe works that were not covered by copyright law. The phrase "fall in the public domain" can be traced to mid-19th century France, where it was used to describe the end of the copyright term. The French poet Alfred de Vigny equated the expiration of copyright with a work falling "into the sink hole of public domain", and if the public domain receives any attention from intellectual property lawyers, it is still treated as little more than that which is left when intellectual property rights, such as copyright and trademarks, expire or are abandoned. In this historical context Paul Torremans describes copyright as a "little coral reef of private right jutting up from the ocean of the public domain." Copyright law differs by country, and the American legal scholar Pamela Samuelson has described the public domain as being "different sizes at different times in different countries".
Definitions of the boundaries of the public domain in relation to copyright, or intellectual property more generally, regard the public domain as a negative space. According to James Boyle this definition underlines common usage of the term public domain and equates the public domain to public property and works in copyright to private property. However, the usage of the term public domain can be more granular, including for example uses of works in copyright permitted by copyright exceptions; such a definition regards work in copyright as private property subject to fair-use rights and limitations on ownership. A conceptual definition comes from Lange, who focused on what the public domain should be: "it should be a place of sanctuary for individual creative expression, a sanctuary conferring affirmative protection against the forces of private appropriation that threatened such expression". Patterson and Lindberg described the public domain not as a "territory", but rather as a concept: "there are certain materials – the air we breathe, rain, life, thoughts, ideas, numbers – not subject to private ownership. The materials that compose our cultural heritage must be free for all living to use no less than matter necessary for biological survival."
A public-domain book is a book with no copyright, a book created without a license, or a book whose copyright has expired or been forfeited. In most countries the term of protection of copyright lasts until the first of January, 70 years after the death of the latest living author. The longest copyright term is in Mexico, which has life plus 100 years for all deaths since July 1928. A notable exception is the United States, where every book and tale published prior to 1924 is in the public domain.
In artificial intelligence, an intelligent agent is an autonomous entity which acts upon an environment, directing its activity towards achieving goals, observing through sensors and acting through actuators. Intelligent agents may learn or use knowledge to achieve their goals; they may be simple or complex. A reflex machine, such as a thermostat, is considered an example of an intelligent agent. Intelligent agents are described schematically as an abstract functional system similar to a computer program. For this reason, intelligent agents are sometimes called abstract intelligent agents to distinguish them from their real-world implementations as computer systems, biological systems, or organizations. Some definitions of intelligent agents emphasize their autonomy, and so prefer the term autonomous intelligent agents. Still others consider goal-directed behavior as the essence of intelligence and so prefer a term borrowed from economics, "rational agent". Intelligent agents in artificial intelligence are related to agents in economics, and versions of the intelligent agent paradigm are studied in cognitive science and the philosophy of practical reason, as well as in many interdisciplinary socio-cognitive modeling and computer social simulations.
Intelligent agents are closely related to software agents. In computer science, the term intelligent agent may be used to refer to a software agent that has some intelligence, even if it is not a rational agent by Russell and Norvig's definition. For example, autonomous programs used for operator assistance or data mining are called "intelligent agents". Intelligent agents have been defined in many different ways. According to Nikola Kasabov, AI systems should exhibit the following characteristics:

- accommodate new problem-solving rules incrementally;
- adapt online and in real time;
- be able to analyze themselves in terms of behavior and success;
- learn and improve through interaction with the environment;
- learn from large amounts of data;
- have memory-based exemplar storage and retrieval capacities;
- have parameters to represent short- and long-term memory, forgetting, etc.

A simple agent program can be defined mathematically as a function f which maps every possible percept sequence to a possible action the agent can perform, or to a coefficient, feedback element, function, or constant that affects eventual actions:

f : P* → A

The agent function is an abstract concept, as it could incorporate various principles of decision making, such as calculation of the utility of individual options, deduction over logic rules, fuzzy logic, etc.
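As a minimal sketch of the mapping f : P* → A, the following Java snippet models the agent function as an interface from a percept sequence to an action. The Percept and Action types and the obstacle-avoidance rule are purely illustrative assumptions, not part of any standard API.

```java
import java.util.List;

// Hypothetical percept and action types, for illustration only.
enum Percept { OBSTACLE, CLEAR }
enum Action { TURN, FORWARD }

// The agent function f : P* -> A, mapping a percept sequence to an action.
interface AgentFunction {
    Action decide(List<Percept> perceptHistory);
}

public class AgentFunctionDemo {
    public static void main(String[] args) {
        // A trivial agent function that reacts only to the latest percept.
        AgentFunction f = history ->
                history.get(history.size() - 1) == Percept.OBSTACLE
                        ? Action.TURN
                        : Action.FORWARD;

        System.out.println(f.decide(List.of(Percept.CLEAR, Percept.OBSTACLE))); // TURN
    }
}
```

Because the interface abstracts over the whole percept history, the same signature accommodates agents that use utility calculations, logical deduction, or fuzzy rules internally.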
The agent program maps every possible percept to an action. We use the term percept to refer to the agent's perceptual inputs at any given instant. In what follows, an agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. Weiss defines four classes of agents:

- logic-based agents, in which the decision about what action to perform is made via logical deduction;
- reactive agents, in which decision making is implemented as a direct mapping from situation to action;
- belief–desire–intention agents, in which decision making depends upon the manipulation of data structures representing the agent's beliefs, desires, and intentions;
- layered architectures, in which decision making is realized via various software layers, each of which reasons about the environment at a different level of abstraction.

An agent can be constructed by separating the body into the sensors and actuators, so that it operates with a complex perception system that takes the description of the world as input for a controller and outputs commands to the actuator. However, a hierarchy of controller layers is necessary to balance the immediate reaction desired for low-level tasks against the slow reasoning about complex, high-level goals. Russell and Norvig group agents into five classes based on their degree of perceived intelligence and capability:

- simple reflex agents
- model-based reflex agents
- goal-based agents
- utility-based agents
- learning agents

Simple reflex agents act only on the basis of the current percept, ignoring the rest of the percept history.
The agent function is based on the condition-action rule: "if condition, then action". This agent function only succeeds when the environment is fully observable. Some reflex agents can also contain information on their current state, which allows them to disregard conditions whose actuators are already triggered. Infinite loops are often unavoidable for simple reflex agents operating in partially observable environments. Note: if the agent can randomize its actions, it may be possible to escape from infinite loops. A model-based agent can handle partially observable environments. Its current state is stored inside the agent, which maintains some kind of structure describing the part of the world which cannot be seen. This knowledge about "how the world works" is called a model of the world, hence the name "model-based agent". A model-based reflex agent should maintain some sort of internal model that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state. The percept history and the impact of actions on the environment can be determined by using the internal model.
It then chooses an action in the same way as a reflex agent. An agent may also use models to describe and predict the behavior of its environment.
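A rough sketch of the contrast between the two agent classes, using the thermostat mentioned earlier; the class names, thresholds, and the "cooling trend" model are illustrative assumptions, not a definitive design:

```java
// Hypothetical thermostat agents; names and thresholds are invented.
enum HeaterAction { ON, OFF }

// Simple reflex agent: a single condition-action rule over the current percept.
class SimpleReflexThermostat {
    HeaterAction decide(double currentTempC) {
        // "if temperature below 20, then turn heater on"
        return currentTempC < 20.0 ? HeaterAction.ON : HeaterAction.OFF;
    }
}

// Model-based reflex agent: keeps internal state derived from the percept
// history to stand in for an unobserved aspect of the world (the trend).
class ModelBasedThermostat {
    private double lastTempC = Double.NaN; // internal model of the world

    HeaterAction decide(double currentTempC) {
        boolean cooling = !Double.isNaN(lastTempC) && currentTempC < lastTempC;
        lastTempC = currentTempC; // update the model with the new percept
        // React earlier when the inferred (never directly observed) trend is downward.
        double threshold = cooling ? 20.5 : 20.0;
        return currentTempC < threshold ? HeaterAction.ON : HeaterAction.OFF;
    }
}

public class ThermostatDemo {
    public static void main(String[] args) {
        ModelBasedThermostat agent = new ModelBasedThermostat();
        System.out.println(agent.decide(21.0)); // OFF
        System.out.println(agent.decide(20.3)); // ON: cooling trend raises the threshold
    }
}
```

Both agents choose actions by condition-action rules; the model-based variant differs only in consulting stored state that reflects an unobserved aspect of the environment.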
Open-source software is a type of computer software in which source code is released under a license in which the copyright holder grants users the rights to study, change, and distribute the software to anyone and for any purpose. Open-source software may be developed in a collaborative public manner. Open-source software is a prominent example of open collaboration. Open-source software development can generate a more diverse scope of design perspectives than any single company is capable of developing and sustaining long term. A 2008 report by the Standish Group stated that adoption of open-source software models has resulted in savings of about $60 billion per year for consumers. In the early days of computing, programmers and developers shared software in order to learn from each other and evolve the field of computing. The open-source notion moved to the wayside with the commercialization of software in the years 1970-1980. However, academics still developed software collaboratively: for example, Donald Knuth in 1979 with the TeX typesetting system, or Richard Stallman in 1983 with the GNU operating system.
In 1997, Eric Raymond published The Cathedral and the Bazaar, a reflective analysis of the hacker community and free-software principles. The paper received significant attention in early 1998 and was one factor in motivating Netscape Communications Corporation to release their popular Netscape Communicator Internet suite as free software; this source code subsequently became the basis behind SeaMonkey, Mozilla Firefox, and KompoZer. Netscape's act prompted Raymond and others to look into how to bring the Free Software Foundation's free-software ideas and perceived benefits to the commercial software industry. They concluded that FSF's social activism was not appealing to companies like Netscape and looked for a way to rebrand the free-software movement to emphasize the business potential of sharing and collaborating on software source code. The new term they chose was "open source", soon adopted by Bruce Perens, publisher Tim O'Reilly, Linus Torvalds, and others. The Open Source Initiative was founded in February 1998 to encourage use of the new term and evangelize open-source principles.
While the Open Source Initiative sought to encourage the use of the new term and evangelize the principles it adhered to, commercial software vendors found themselves threatened by the concept of freely distributed software and universal access to an application's source code. A Microsoft executive publicly stated in 2001 that "open source is an intellectual property destroyer. I can't imagine something that could be worse than this for the software business and the intellectual-property business." However, while free and open-source software has historically played a role outside of the mainstream of private software development, companies as large as Microsoft have begun to develop official open-source presences on the Internet. IBM, Oracle, and State Farm are just a few of the companies with a serious public stake in today's competitive open-source market. There has been a significant shift in the corporate philosophy concerning the development of FOSS. The free-software movement was launched in 1983. In 1998, a group of individuals advocated that the term free software should be replaced by open-source software as an expression which is less ambiguous and more comfortable for the corporate world.
Software licenses grant rights to users which would otherwise be reserved by copyright law to the copyright holder. Several open-source software licenses have qualified within the boundaries of the Open Source Definition. The most prominent and popular example is the GNU General Public License, which "allows free distribution under the condition that further developments and applications are put under the same licence", so that they also remain free. The open source label came out of a strategy session held on April 7, 1998 in Palo Alto in reaction to Netscape's January 1998 announcement of a source code release for Navigator. The group of individuals at the session included Tim O'Reilly, Linus Torvalds, Tom Paquin, Jamie Zawinski, Larry Wall, Brian Behlendorf, Sameer Parekh, Eric Allman, Greg Olson, Paul Vixie, John Ousterhout, Guido van Rossum, Philip Zimmermann, John Gilmore, and Eric S. Raymond. They used the opportunity before the release of Navigator's source code to clarify a potential confusion caused by the ambiguity of the word "free" in English.
Many people claimed that the birth of the Internet in 1969 started the open-source movement, while others do not distinguish between the open-source and free-software movements. The Free Software Foundation, founded in 1985, intended the word "free" to mean freedom to distribute, not freedom from cost.
A Java servlet is a Java software component that extends the capabilities of a server. Although servlets can respond to many types of requests, they most commonly implement web containers for hosting web applications on web servers and thus qualify as a server-side servlet web API. Such web servlets are the Java counterpart to other dynamic web content technologies such as PHP and ASP.NET. A Java servlet is a Java class in Java EE that conforms to the Java Servlet API, a standard for implementing Java classes that respond to requests. Servlets could in principle communicate over any client–server protocol, but they are most often used with HTTP, and thus "servlet" is often used as shorthand for "HTTP servlet". Thus, a software developer may use a servlet to add dynamic content to a web server using the Java platform. The generated content is commonly HTML, but may be other data such as XML and, more commonly, JSON. Servlets can maintain state in session variables across many server transactions by using HTTP cookies or URL rewriting.
The Java servlet API has, to some extent, been superseded by two standard Java technologies for web services: the Java API for RESTful Web Services, useful for AJAX, JSON, and REST services, and the Java API for XML Web Services, useful for SOAP web services. To deploy and run a servlet, a web container must be used. A web container is the component of a web server that interacts with the servlets. The web container is responsible for managing the lifecycle of servlets, mapping a URL to a particular servlet, and ensuring that the URL requester has the correct access rights. The Servlet API, contained in the Java package hierarchy javax.servlet, defines the expected interactions of the web container and a servlet. A Servlet is an object that receives a request and generates a response based on that request. The basic Servlet package defines Java objects to represent servlet requests and responses, as well as objects to reflect the servlet's configuration parameters and execution environment. The package javax.servlet.http defines HTTP-specific subclasses of the generic servlet elements, including session management objects that track multiple requests and responses between the web server and a client.
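As an informal sketch of how these pieces fit together (the class name, URL pattern, and output below are illustrative), a servlet extends javax.servlet.http.HttpServlet, and the web container maps the URL pattern to the class and invokes doGet for each GET request:

```java
import java.io.IOException;
import java.io.PrintWriter;

import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A minimal HTTP servlet: the container routes requests for /hello here.
@WebServlet("/hello")
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html;charset=UTF-8");
        try (PrintWriter out = response.getWriter()) {
            out.println("<!DOCTYPE html><html><body>");
            out.println("<p>Hello from a servlet</p>");
            out.println("</body></html>");
        }
    }
}
```

The request and response objects here are the HTTP-specific subclasses from javax.servlet.http mentioned above; the servlet never deals with sockets or threads directly, as the container handles both.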
Servlets may be packaged in a WAR file as a web application. Servlets can be generated automatically from JavaServer Pages by the JavaServer Pages compiler. The difference between servlets and JSP is that servlets embed HTML inside Java code, while JSPs embed Java code in HTML. While the direct usage of servlets to generate HTML has become rare, the higher-level MVC web framework in Java EE still explicitly uses the servlet technology for the low-level request/response handling, via the FacesServlet. A somewhat older usage is to use servlets in conjunction with JSPs in a pattern called "Model 2", which is a flavor of the model–view–controller pattern. The current version of Servlet is 4.0. The Java servlets API was first publicly announced at the inaugural JavaOne conference in May 1996. About two months after the announcements at the conference, the first public implementation was made available on the JavaSoft website. This was the first alpha of the Java Web Server, which would be shipped as a product on June 5, 1997.
In his blog on java.net, Sun veteran and GlassFish lead Jim Driscoll details the history of servlet technology. James Gosling first thought of servlets in the early days of Java, but the concept did not become a product until December 1996, when Sun shipped JWS. The servlet specification was created by Pavni Diwanji while she worked at Sun Microsystems, with version 1.0 finalized in June 1997. Starting with version 2.2, the specification was developed under the Java Community Process. Three methods are central to the life cycle of a servlet: init, service, and destroy. They are invoked at specific times by the server. During the initialization stage of the servlet life cycle, the web container initializes the servlet instance by calling the init method, passing an object implementing the javax.servlet.ServletConfig interface. This configuration object allows the servlet to access name-value initialization parameters from the web application. After initialization, the servlet instance can service client requests.
Each request is serviced in its own separate thread. The web container calls the service method of the servlet for every request. The service method determines the kind of request being made and dispatches it to an appropriate method to handle the request. The developer of the servlet must provide an implementation for these methods. If a request is made for an HTTP method that is not implemented by the servlet, the method of the parent class is called, typically resulting in an error being returned to the requester. Finally, the web container calls the destroy method, which takes the servlet out of service. The destroy method, like init, is called only once in the lifecycle of a servlet. The following is a typical user scenario of these methods. Assume that a user requests to visit a URL. The browser generates an HTTP request for this URL. This request is sent to the appropriate server. The HTTP request is forwarded to the servlet container. The container maps this request to a particular servlet. The servlet is dynamically loaded into the address space of the container.
The container invokes the init method of the servlet. This method is invoked only once; it is possible to pass initialization parameters to the servlet so that it may configure itself.
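The following is an informal sketch of these lifecycle methods. The "greeting" initialization parameter is a hypothetical example that a real web application would declare in its deployment descriptor or via annotations:

```java
import java.io.IOException;
import java.io.PrintWriter;

import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LifecycleServlet extends HttpServlet {
    private String greeting;

    @Override
    public void init(ServletConfig config) throws ServletException {
        super.init(config); // let HttpServlet store the config object
        // Called once, before any request is serviced: read a name-value
        // initialization parameter from the web application.
        greeting = config.getInitParameter("greeting");
        if (greeting == null) {
            greeting = "Hello";
        }
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Dispatched from service once per GET request, each in its own thread.
        resp.setContentType("text/plain");
        PrintWriter out = resp.getWriter();
        out.println(greeting + " at " + req.getRequestURI());
    }

    @Override
    public void destroy() {
        // Called once, when the container takes the servlet out of service;
        // resources acquired in init would be released here.
    }
}
```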
A computing platform or digital platform is the environment in which a piece of software is executed. It may be the hardware or the operating system, a web browser and associated application programming interfaces, or other underlying software, as long as the program code is executed with it. Computing platforms have different abstraction levels, including a computer architecture, an OS, or runtime libraries. A computing platform is the stage on which computer programs can run. A platform can be seen both as a constraint on the software development process, in that different platforms provide different functionality and restrictions, and as an assistance to it, in that they provide ready-made functionality. For example, an OS may be a platform that abstracts the underlying differences in hardware and provides a generic command for saving files or accessing the network. Platforms may include:

- Hardware alone, in the case of small embedded systems. Embedded systems can access hardware directly, without an OS.
- A browser, in the case of web-based software. The browser itself runs on a hardware+OS platform, but this is not relevant to software running within the browser.
- An application, such as a spreadsheet or word processor, which hosts software written in an application-specific scripting language, such as an Excel macro. This can be extended to writing fully-fledged applications with the Microsoft Office suite as a platform.
- Software frameworks.
- Cloud computing and Platform as a Service. Extending the idea of a software framework, these allow application developers to build software out of components that are hosted not by the developer but by the provider, with internet communication linking them together. The social networking sites Twitter and Facebook are also considered development platforms.
- A virtual machine such as the Java virtual machine or the .NET CLR. Applications are compiled into a format similar to machine code, known as bytecode, which is then executed by the VM.
- A virtualized version of a complete system, including virtualized hardware, OS, and storage. These allow, for instance, a typical Windows program to run on a machine that is physically running a different system.

Some architectures have multiple layers, with each layer acting as a platform to the one above it.
In general, a component only has to be adapted to the layer beneath it. For instance, a Java program has to be written to use the Java virtual machine and associated libraries as a platform, but does not have to be adapted to run on Windows, Linux, or the Macintosh OS platforms. However, the JVM, the layer beneath the application, does have to be built separately for each OS.

Desktop and server operating systems include AmigaOS and AmigaOS 4, FreeBSD, NetBSD, OpenBSD, IBM i, Linux, Microsoft Windows, OpenVMS, Classic Mac OS, macOS, OS/2, Solaris, Tru64 UNIX, VM, QNX, and z/OS. Mobile operating systems include Android, Bada, BlackBerry OS, Firefox OS, iOS, Embedded Linux, Palm OS, Symbian, Tizen, WebOS, LuneOS, Windows Mobile, and Windows Phone.

Software frameworks include the Binary Runtime Environment for Wireless, Cocoa, Cocoa Touch, the Common Language Infrastructure, Mono, the .NET Framework, Silverlight, Flash, AIR, GNU, the Java platform (Java ME, Java SE, Java EE, JavaFX, and JavaFX Mobile), LiveCode, Microsoft XNA, Mozilla Prism, XUL and XULRunner, the Open Web Platform, Oracle Database, Qt, SAP NetWeaver, Shockwave, Smartface, the Universal Windows Platform, the Windows Runtime, and Vexi.

Hardware examples, ordered from more common types to less common types:

- Commodity computing platforms: Wintel, that is, Intel x86 or compatible personal computer hardware with the Windows operating system; Macintosh, custom Apple Inc. hardware with the Classic Mac OS and macOS operating systems, first 68k-based, then PowerPC-based, now migrated to x86; ARM-architecture-based mobile devices, such as iPhone smartphones and iPad tablet computers running iOS from Apple; Gumstix or Raspberry Pi full-function miniature computers with Linux; Newton devices running the Newton OS from Apple; x86 with Unix-like systems such as Linux or BSD variants; CP/M computers based on the S-100 bus, maybe the earliest microcomputer platform.
- Video game consoles, any variety: the 3DO Interactive Multiplayer, licensed to manufacturers; the Apple Pippin, a multimedia player platform for video game console development.
- RISC-processor-based machines running Unix variants: SPARC architecture computers running the Solaris or illumos operating systems; DEC Alpha clusters running OpenVMS or Tru64 UNIX.
- Midrange computers with their custom operating systems, such as IBM OS/400.
- Mainframe computers with their custom operating systems, such as IBM z/OS.
- Supercomputer architectures.
In artificial intelligence, an expert system is a computer system that emulates the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented as if–then rules rather than through conventional procedural code. The first expert systems were created in the 1970s and proliferated in the 1980s. Expert systems were among the first successful forms of artificial intelligence software. An expert system is divided into two subsystems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging abilities. Expert systems were introduced around 1965 by the Stanford Heuristic Programming Project led by Edward Feigenbaum, sometimes termed the "father of expert systems". The Stanford researchers tried to identify domains where expertise was valued and complex, such as diagnosing infectious diseases and identifying unknown organic molecules.
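As a rough illustration of this division into knowledge base and inference engine (not any particular historical system; all rule and fact names are invented), a toy forward-chaining inference engine in Java might look like this:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// A toy expert system: the knowledge base holds facts and if-then rules;
// the inference engine applies rules to known facts until no new facts
// can be deduced (forward chaining to a fixpoint).
public class TinyExpertSystem {
    record Rule(Set<String> conditions, String conclusion) {}

    public static void main(String[] args) {
        Set<String> facts = new HashSet<>(List.of("fever", "rash")); // known facts
        List<Rule> rules = List.of(                                  // knowledge base
                new Rule(Set.of("fever", "rash"), "suspect-measles"),
                new Rule(Set.of("suspect-measles"), "recommend-lab-test"));

        boolean changed = true;
        while (changed) { // inference engine: keep firing rules until nothing changes
            changed = false;
            for (Rule r : rules) {
                if (facts.containsAll(r.conditions()) && facts.add(r.conclusion())) {
                    changed = true; // a new fact was deduced; re-scan the rules
                }
            }
        }
        System.out.println(facts); // includes suspect-measles and recommend-lab-test
    }
}
```

The point of the architecture is visible even at this scale: the rules are data, separate from the fixed procedural loop that applies them, so domain experts can extend the knowledge base without touching the engine.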
The idea that "intelligent systems derive their power from the knowledge they possess rather than from the specific formalisms and inference schemes they use" – as Feigenbaum said – was at the time a significant step forward, since past research had been focused on heuristic computational methods, culminating in attempts to develop general-purpose problem solvers. Research on expert systems was also active in France. While in the US the focus tended to be on rule-based systems, first on systems hard-coded on top of LISP programming environments and then on expert system shells developed by vendors such as Intellicorp, in France research focused more on systems developed in Prolog. The advantage of expert system shells was that they were somewhat easier for non-programmers to use. The advantage of Prolog environments was that they were not focused only on if–then rules, but offered a fuller realization of a first-order logic environment. In the 1980s, expert systems proliferated. Universities offered expert system courses and two-thirds of the Fortune 500 companies applied the technology in daily business activities.
Interest was international, with the Fifth Generation Computer Systems project in Japan and increased research funding in Europe. In 1981, the first IBM PC, with the PC DOS operating system, was introduced. The imbalance between the high affordability of the powerful chips in the PC and the much more expensive cost of processing power in the mainframes that dominated the corporate IT world at the time created a new type of architecture for corporate computing, termed the client-server model. Calculations and reasoning could be performed at a fraction of the price of a mainframe using a PC. This model also enabled business units to bypass corporate IT departments and directly build their own applications. As a result, client-server computing had a tremendous impact on the expert systems market. Expert systems were already outliers in much of the business world, requiring new skills that many IT departments did not have and were not eager to develop. They were a natural fit for new PC-based shells that promised to put application development into the hands of end users and experts.
Until then, the main development environment for expert systems had been high-end Lisp machines from Xerox and Texas Instruments. With the rise of the PC and client-server computing, vendors such as Intellicorp and Inference Corporation shifted their priorities to developing PC-based tools. New vendors, financed by venture capital, started appearing regularly. The first expert system to be used in a design capacity for a large-scale product was the SID software program, developed in 1982. Written in LISP, SID generated 93% of the VAX 9000 CPU logic gates. Input to the software was a set of rules created by several expert logic designers. SID expanded the rules and generated software logic synthesis routines many times the size of the rules themselves. The combination of these rules resulted in an overall design that exceeded the capabilities of the experts themselves and in many cases out-performed the human counterparts. While some rules contradicted others, top-level control parameters for speed and area provided the tie-breaker.
The program was controversial, but was used nevertheless due to project budget constraints. It was terminated by logic designers after the VAX 9000 project's completion. In the 1990s and beyond, the term expert system, and the idea of a standalone AI system, mostly dropped from the IT lexicon. There are two interpretations of this. One is that "expert systems failed": the IT world moved on because expert systems did not deliver on their over-hyped promise. The other is the mirror opposite, that expert systems were victims of their own success: as IT professionals grasped concepts such as rule engines, such tools migrated from being standalone tools for developing special-purpose expert systems to being one of many standard tools. Many of the leading major business application suite vendors integrated expert system abilities into their suite of products as a way of specifying business logic; rule engines are no longer used only for defining the rules an expert would use, but for any type of complex and critical business logic.