GNU General Public License
The GNU General Public License (GPL) is a widely used free software license that guarantees end users the freedom to run, study, and modify the software. Written by Richard Stallman of the Free Software Foundation for the GNU Project, the license grants the recipients of a computer program the rights of the Free Software Definition. The GPL is a copyleft license, which means that derivative works can only be distributed under the same license terms. This is in distinction to permissive free software licenses, of which the BSD licenses and the MIT License are widely used examples. The GPL was the first copyleft license for general use, and the GPL family has been among the most popular software licenses in the free and open-source software domain. Prominent free-software programs licensed under the GPL include the Linux kernel and the GNU Compiler Collection. David A. Wheeler argues that the copyleft provided by the GPL was crucial to the success of Linux-based systems, giving the programmers who contributed to the kernel the assurance that their work would benefit the whole world and remain free, rather than being exploited by software companies that would not have to give anything back to the community.
In 2007, the third version of the license was released to address some perceived problems with the second version that had been discovered during its long period of use. To keep the license up to date, the GPL includes an optional "any later version" clause, allowing users to choose between the original terms or the terms in new versions as updated by the FSF; developers can also omit the clause. The GPL was written by Richard Stallman in 1989, for use with programs released as part of the GNU Project. The original GPL was based on a unification of similar licenses used for early versions of GNU Emacs, the GNU Debugger, and the GNU C Compiler. These licenses contained provisions similar to the modern GPL, but each was specific to one program, rendering them incompatible with one another despite containing essentially the same terms. Stallman's goal was to produce one license that could be used for any project, thus making it possible for many projects to share code. Version 2 of the license was released in 1991. Over the following 15 years, members of the free software community became concerned over problems in the GPLv2 license that could let someone exploit GPL-licensed software in ways contrary to the license's intent.
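In practice, the version choice appears in the per-file license notice. The notice suggested in the appendix of GPLv2 reads, in part:

```
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
```

Omitting the "or (at your option) any later version" phrase pins the work to a single version of the license, which is the choice the Linux kernel famously made with GPLv2.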
These problems included tivoization, compatibility issues such as those with the Affero General Public License, and patent deals between Microsoft and distributors of free and open-source software, which some viewed as an attempt to use patents as a weapon against the free software community. Version 3 was developed to attempt to address these concerns and was released on 29 June 2007. Version 1 of the GNU GPL, released on 25 February 1989, prevented the two main ways that software distributors restricted the freedoms that define free software. The first problem was that distributors might publish binary files only: executable, but not readable or modifiable by humans. To prevent this, GPLv1 stated that anyone copying and distributing copies or any portion of the program must also make the human-readable source code available under the same licensing terms. The second problem was that distributors might add restrictions, either to the license or by combining the software with other software that had other restrictions on distribution.
The union of the two sets of restrictions would apply to the combined work, thus adding unacceptable restrictions. To prevent this, GPLv1 stated that modified versions, as a whole, had to be distributed under the terms of GPLv1. Therefore, software distributed under the terms of GPLv1 could be combined with software under more permissive terms, as this would not change the terms under which the whole could be distributed. However, software distributed under GPLv1 could not be combined with software distributed under a more restrictive license, as this would conflict with the requirement that the whole be distributable under the terms of GPLv1. According to Richard Stallman, the major change in GPLv2 was the "Liberty or Death" clause, as he calls it: Section 7. This section says that licensees may distribute a GPL-covered work only if they can satisfy all of the license's obligations, despite any other legal obligations they might have. In other words, the obligations of the license may not be severed due to conflicting obligations.
This provision is intended to discourage any party from using a patent infringement claim or other litigation to impair users' freedom under the license. By 1990, it was becoming apparent that a less restrictive license would be strategically useful for the C library and for software libraries that did the job of existing proprietary ones; this led to the GNU Library General Public License, released alongside GPLv2. The version numbers diverged in 1999 when version 2.1 of the LGPL was released, renaming it the GNU Lesser General Public License to reflect its place in the GNU philosophy. Most users of the license state "GPLv2 or any later version", which allows upgrading to GPLv3. In late 2005, the Free Software Foundation announced work on version 3 of the GPL. On 16 January 2006, the first "discussion draft" of GPLv3 was published and the public consultation began; the public consultation was planned for ni
Vagrant is an open-source software product for building and maintaining portable virtual software development environments, e.g. for VirtualBox, KVM, Hyper-V, Docker containers, VMware, and AWS. It tries to simplify software configuration management of virtualizations in order to increase development productivity. Vagrant is written in the Ruby language. Vagrant began as a personal side project of Mitchell Hashimoto in January 2010, and the first version was released in March 2010. In October 2010, Engine Yard declared that it would sponsor the Vagrant project. The first stable version, Vagrant 1.0, was released in March 2012, two years after the original version. In November 2012, Mitchell formed an organization called HashiCorp to support the full-time development of Vagrant; HashiCorp now works on creating commercial editions and provides professional support and training for Vagrant. Vagrant was originally tied to VirtualBox, but version 1.1 added support for other virtualization software such as VMware and KVM, and for server environments like Amazon EC2.
"Box" is a format and an extension for Vagrant environments, copied to another machine in order to replicate the same environment. Official website List of Vagrant boxes
An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage, and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is executed directly by the hardware and makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer, from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows with a market share of around 82.74%; macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, Google's Android reached up to 70% usage in 2017; according to third-quarter 2016 data, Android on smartphones was dominant with 87.5 percent and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a yearly decrease in market share of 5.2 percent, while other operating systems amounted to just 0.3 percent.
Linux distributions are dominant in the supercomputing sector. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently. This is achieved by time-sharing, in which the available processor time is divided between multiple processes that are each interrupted in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized in preemptive and co-operative types. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux—as well as non-Unix-like ones, such as AmigaOS—support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to yield time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking.
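The cooperative model can be sketched in a few lines of Ruby, whose Fiber class provides exactly this kind of scheduling: a fiber runs until it explicitly yields control, so a task that never yields would starve all the others (the weakness that doomed cooperative multitasking on 16-bit Windows). The task names and step counts here are illustrative:

```ruby
# Cooperative multitasking sketch: each Fiber is a "task" that
# voluntarily hands back control with Fiber.yield.
log = []
tasks = [
  Fiber.new { 3.times { |i| log << "A#{i}"; Fiber.yield } },
  Fiber.new { 3.times { |i| log << "B#{i}"; Fiber.yield } }
]

# A trivial round-robin "scheduler": resume each still-running
# task in turn until every fiber has finished its block.
until tasks.none?(&:alive?)
  tasks.each { |f| f.resume if f.alive? }
end
# log now interleaves the two tasks: A0 B0 A1 B1 A2 B2
```

In preemptive multitasking, by contrast, the interruption points are chosen by the scheduler's clock interrupt rather than by the tasks themselves.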
32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify the processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage, and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer; the development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
In an OS, distributed, and cloud computing context, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and in cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems; they are designed to operate on small machines, like PDAs, with less autonomy. They are able to operate with a limited number of resources and are compact and efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single-address-space machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled use of runtime libraries and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them that were similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the pri
Rochester Institute of Technology
Rochester Institute of Technology (RIT) is a private doctoral university within the town of Henrietta in the Rochester, New York metropolitan area. RIT is composed of nine academic colleges, including the National Technical Institute for the Deaf. The Institute is one of only a few engineering institutes in the State of New York, others being New York Institute of Technology, SUNY Polytechnic Institute, and Rensselaer Polytechnic Institute. It is best known for its fine arts, computing, and imaging science programs. The university began as the result of an 1891 merger between the Rochester Athenæum, a literary society founded in 1829 by Colonel Nathaniel Rochester and associates, and the Mechanics Institute, a Rochester institute of practical technical training for local residents founded in 1885 by a consortium of local businessmen including Captain Henry Lomb, co-founder of Bausch & Lomb. The merged institution was at the time called the Rochester Athenæum and Mechanics Institute, despite the fact that the Mechanics Institute was considered the surviving school, having taken over the Rochester Athenæum's charter and celebrating its founding year of 1885.
In 1944, the school changed its name to Rochester Institute of Technology, re-established its founding charter of 1829, and became a full-fledged research university. The Institute originally resided within the city of Rochester, New York, proper, on a block bounded by the Erie Canal, South Plymouth Avenue, Spring Street, and South Washington Street; its art department was located in the Bevier Memorial Building. By the middle of the twentieth century, RIT began to outgrow its facilities, and surrounding land was scarce and expensive. In 1961, an unanticipated donation of $3.27 million from local resident Grace Watson, for whom RIT's dining hall was later named, allowed the Institute to purchase land for a new 1,300-acre campus several miles south along the east bank of the Genesee River in suburban Henrietta. Upon completion in 1968, the Institute moved to the new suburban campus. In 1966, RIT was selected by the Federal government to be the site of the newly founded National Technical Institute for the Deaf. NTID admitted its first students in 1968, concurrent with RIT's transition to the Henrietta campus.
In 1979, RIT took over Eisenhower College, a liberal arts college located in Seneca Falls, New York. Despite making a five-year commitment to keep Eisenhower open, RIT announced in July 1982 that the college would close immediately. One final year of operation by Eisenhower's academic program took place in the 1982–83 school year on the Henrietta campus; the final Eisenhower graduation took place in May 1983 back in Seneca Falls. In 1990, RIT started its first Ph.D. program, in Imaging Science, the first Ph.D. program of its kind in the U.S. RIT subsequently established Ph.D. programs in six other fields, including Astrophysical Sciences and Technology, Computing and Information Sciences, Color Science, Microsystems Engineering, and Engineering. In 1996, RIT became the first college in the U.S. to offer a Software Engineering degree at the undergraduate level. The current campus sits on a 1,300-acre property covered with woodland and fresh-water swamp, making it a diverse wetland that is home to a number of somewhat rare plant species.
The campus comprises 5.1 million square feet of building space. The nearly universal use of brick in the campus's construction — estimated at 14,673,565 bricks in late 2006 — prompted students to give it the semi-affectionate nickname "Brick City," reflected in the name of events such as the annual "Brick City Homecoming." Though the buildings erected in the first few decades of the campus's existence reflected the architectural style known as brutalism, the warm color of the bricks softened the impact somewhat. More recent additions to the campus have diversified the architecture while still incorporating the traditional brick colors. In October 2013, Travel+Leisure named it one of the ugliest college campuses in the United States, citing the monotone brick and the suburban setting, which leaves no youth activities within walking distance of the campus. In 2009, the campus was named a "Campus Sustainability Leader" by the Sustainable Endowments Institute. The residence halls and the academic side of campus are connected by a walkway called the "Quarter Mile."
Along the Quarter Mile, between the academic and residence hall sides, are various administration and support buildings. On the academic side of the walkway is a courtyard known as the Infinity Quad, due to a striking polished stainless steel sculpture of a continuous ribbon-like Möbius strip in the middle of it; these symbols represent time to infinity. The Quarter Mile is actually 0.41 miles long when measured between the Möbius sculpture and the sundial; the name comes from a student fundraiser in which quarters were lined up from the sundial to the Infinity Sculpture. Standing near the Administration Building and the Student Alumni Union is The Sentinel, a steel structure created by the acclaimed metal sculptor, Alb
Rapid application development
Rapid-application development (RAD), also called rapid-application building, is both a general term for adaptive software development approaches and the name of James Martin's specific approach to rapid development. In general, RAD approaches to software development put less emphasis on planning and more emphasis on an adaptive process. Prototypes are used in addition to, or sometimes in place of, design specifications. RAD is especially well suited for developing software that is driven by user interface requirements; graphical user interface builders are often called rapid application development tools. Other approaches to rapid development include the adaptive, agile, and unified models. Rapid application development was a response to the plan-driven waterfall processes developed in the 1970s and 1980s, such as the Structured Systems Analysis and Design Method. One of the problems with these methods is that they were based on a traditional engineering model used to design and build things like bridges and buildings; software is an inherently different kind of artifact.
Software can radically change the entire process used to solve a problem. As a result, knowledge gained from the development process itself can feed back to the requirements and design of the solution. Plan-driven approaches attempt to rigidly define the requirements, the solution, and the plan to implement it, and have a process that discourages changes. RAD approaches, on the other hand, recognize that software development is a knowledge-intensive process and provide flexible processes that help take advantage of knowledge gained during the project to improve or adapt the solution. The first such RAD alternative, developed by Barry Boehm, was known as the spiral model. Boehm and subsequent RAD approaches emphasized developing prototypes as well as, or instead of, rigorous design specifications. Prototypes had several advantages over traditional specifications. Risk reduction: a prototype could test some of the most difficult potential parts of the system early in the life-cycle; this can provide valuable information as to the feasibility of a design and can prevent the team from pursuing solutions that turn out to be too complex or time-consuming to implement.
This benefit of finding problems earlier rather than later in the life-cycle was a key benefit of the RAD approach: the earlier a problem is found, the cheaper it is to fix. Users are better at reacting than at creating specifications: in the waterfall model it was common for a user to sign off on a set of requirements, but then, when presented with an implemented system, to realize that a given design lacked some critical features or was too complex. In general, most users give much more useful feedback when they can experience a prototype of the running system than when they must abstractly define what that system should be. Prototypes can evolve into the completed product: one approach used in some RAD methods was to build the system as a series of prototypes that evolve from minimal functionality to moderately useful to the final completed system. The advantage of this, besides the two advantages above, was that users could get useful business functionality much earlier in the process. Starting with the ideas of Barry Boehm and others, James Martin developed the rapid application development approach during the 1980s at IBM and formalized it by publishing a book in 1991, Rapid Application Development.
This has resulted in some confusion over the term RAD among IT professionals; it is important to distinguish between RAD as a general alternative to the waterfall model and RAD as the specific method created by Martin. The Martin method was tailored toward knowledge-intensive and UI-intensive business systems. These ideas were further developed and improved upon by RAD pioneers like James Kerr and Richard Hunter, who together wrote the seminal book on the subject, Inside RAD, which followed the journey of a RAD project manager as he drove and refined the RAD methodology in real time on an actual RAD project. These practitioners, and others like them, helped RAD gain popularity as an alternative to traditional systems project life cycle approaches. The RAD approach matured during the period of peak interest in business re-engineering. The idea of business process re-engineering was to radically rethink core business processes such as sales and customer support with the new capabilities of information technology in mind.
RAD was often an essential part of larger business re-engineering programs. The rapid prototyping approach of RAD was a key tool to help users and analysts "think out of the box" about innovative ways that technology might radically reinvent a core business process. The James Martin approach to RAD divides the process into four distinct phases. Requirements planning phase – combines elements of the system planning and systems analysis phases of the Systems Development Life Cycle (SDLC). Users and IT staff members discuss and agree on business needs, project scope, and system requirements; the phase ends when the team agrees on the key issues and obtains management authorization to continue. User design phase – during this phase, users interact with systems analysts and develop models and prototypes that represent all system processes and outputs. The RAD groups or subgroups use a combination of Joint Application Development (JAD) techniques and CASE tools to translate user needs into working models. User design is a continuous interactive process that allows users to understand and approve a working model of the system that meets their needs.
Construction phase – focuses on program and application development tasks similar to those of the SDLC. In RAD, users c
Matz's Ruby Interpreter, or Ruby MRI, was the reference implementation of the Ruby programming language, named after Ruby creator Yukihiro Matsumoto. Until the specification of the Ruby language in 2011, the MRI implementation was considered the de facto reference, since an independent attempt to create a specification had failed. Starting with Ruby 1.9, and continuing with Ruby 2.x and above, the official Ruby interpreter has been YARV. The latest stable version is Ruby 2.5.0. Yukihiro Matsumoto started working on Ruby on February 24, 1993, and released it to the public in 1995. Ruby was named after the gemstone because of a joke within Matsumoto's circle of friends alluding to the name of the Perl programming language. The 1.8 branch was maintained until June 2013, with 1.8.7 releases issued since April 2008; these releases provide bug fixes but not many Ruby feature enhancements. The RubySpec project independently created a large test suite that captures 1.8.6/1.8.7/1.9 behavior as a reference conformance tool.
Ruby MRI 1.9.2 passed over 99% of RubySpec; MRI Ruby 2.2 crashed on one of the tests. As a result of limited uptake by the MRI developers, the RubySpec project was discontinued at the end of 2014. Prior to release 1.9.3, the Ruby interpreter and libraries were distributed as dual-licensed free and open-source software, under the GNU General Public License or the Ruby License. In release 1.9.3, Ruby's license was changed from a dual license with GPLv2 to a dual license with the 2-clause BSD license. Ruby MRI is available for many operating systems. Since version 2.2.1, Ruby MRI performance on PowerPC64 has been improved. Noted limitations include backward compatibility: versions 1.9 and 1.8 have slight semantic differences. The release of Ruby 2.0 sought to avoid such conflicts between different versions.
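One commonly cited example of the 1.8/1.9 semantic differences is single-character string indexing: Ruby 1.8's `String#[]` returned the character code as an integer, while 1.9 and later return a one-character string. A small sketch, runnable on 1.9+:

```ruby
s = "abc"

# Ruby 1.9+ returns a one-character string...
first = s[0]        # => "a"

# ...whereas Ruby 1.8 returned the integer character code (97).
# On 1.9+ the code point must be requested explicitly:
code = s[0].ord     # => 97
```

Code written against the 1.8 behavior (e.g. comparing `s[0] == ?a`, where `?a` was also an integer in 1.8) could silently change meaning under 1.9, which is why the difference mattered for migration.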
A software wizard or setup assistant is a user interface type that presents a user with a sequence of dialog boxes that lead the user through a series of well-defined steps. Tasks that are complex, infrequently performed, or unfamiliar may be easier to perform using a wizard. Before the 1990s, "wizard" was a common term for a technical expert, somewhat akin to "hacker." When developing the first version of its desktop publishing software, Microsoft Publisher, around 1991, Microsoft wanted to let users with no graphic design skill make documents that still looked good. Publisher was targeted at non-professionals, and Microsoft figured that, no matter what tools the program had, users wouldn't know what to do with them. Publisher's "Page Wizards" instead provided a set of forms that produced a complete document layout, based on a professionally designed template, which could then be manipulated with the standard tools. Wizards had been in development at Microsoft for several years before Publisher, notably for Microsoft Access, which wouldn't ship until November 1992.
Wizards were intended to learn from how someone used a program and anticipate what they might want to do next, guiding them through more complex sets of tasks by structuring and sequencing them. They also served to teach the product by example. As early as 1989, Microsoft discussed using voice and talking heads as guides, but multimedia-capable hardware was not yet widespread. The feature spread to other applications: in 1992, Excel 4.0 for Mac introduced wizards for tasks like building crosstab tables, and Windows used wizards for tasks like printer or Internet configuration. By 2001, wizards had become commonplace in most consumer-oriented operating systems, although not always under the name "wizard": in Mac OS X they are called "assistants", and GNOME likewise refers to its wizards as "assistants". Today, a wizard-like experience is often used to "onboard" users the first time they open an app, and many web applications, for instance online booking sites, make use of the wizard paradigm to complete lengthy interactive processes.
Oracle Designer also uses wizards extensively. The Microsoft Manual of Style for Technical Publications urges technical writers to refer to these assistants as "wizards" and to use lowercase letters. The following screenshots show the installation wizard for Kubuntu 12.04, a free and open-source operating system; the wizard consists of seven steps, and by the end of step seven the installation is complete.