1.
Leet
–
Leet, also known as eleet or leetspeak, is an alternative alphabet for many languages that is used primarily on the Internet. It substitutes some characters for others in ways that play on the similarity of their glyphs via reflection or other resemblance. For example, leet spellings of the word leet include 1337 and l33t; the term leet itself is derived from the word elite. The leet alphabet is a form of symbolic writing, and leet may also be considered a substitution cipher, although many dialects or linguistic varieties exist in different online communities. The term leet is also used as an adjective to describe formidable prowess or accomplishment, especially in the fields of online gaming and in its original usage, computer hacking. Leet originated within bulletin board systems in the 1980s, where having "elite" status on a BBS allowed a user access to file folders, games, and special chat rooms. The Cult of the Dead Cow hacker collective has been credited with coining the term. Creative misspellings and ASCII-art-derived words were also a way to indicate that one was knowledgeable about the culture of computer users. Once the preserve of hackers, crackers, and script kiddies, leet has since entered the mainstream, and it is now also used to mock newbies, or newcomers, on web sites. Some consider emoticons and ASCII art, like smiley faces, to be leet, while more obscure forms of leet, involving the use of symbol combinations and almost no letters or numbers, continue to be used for its original purpose of encrypted communication. It is also used as a script language. Leet symbols, especially the number 1337, are Internet memes that have spilled over into popular culture: signs that show the numbers 1337 are popular motifs for pictures and are shared widely across the Internet. One of the hallmarks of leet is its approach to orthography, using substitutions of other characters, letters or otherwise, to represent letters in a word. For more casual use of leet, the primary strategy is to use homoglyphs.
The choice of symbol is not fixed: anything that the reader can make sense of is valid. Another use for leet orthographic substitutions is the creation of paraphrased passwords; limitations imposed by websites on password length and the characters permitted require less extensive forms of leet when it is used in this application. Some examples of leet include B1ff and n00b, a term for the newbie; the l33t programming language; and the web-comic Megatokyo. Text rendered in leet is often characterized by distinctive, recurring forms, such as the -xor suffix. The meaning of this suffix is parallel with the English -er and -r suffixes, in that it derives agent nouns from a verb stem.
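The homoglyph substitution described above can be sketched as a simple character mapping. A minimal illustration in Python follows; the substitution table is one hypothetical dialect chosen for this example, since leet has no fixed, canonical mapping:

```python
# A minimal leet transliterator. The table below is one possible dialect;
# leet has no single authoritative substitution set.
LEET_TABLE = str.maketrans({
    "a": "4",  # 'A' resembles '4'
    "e": "3",  # 'E' mirrored resembles '3'
    "i": "1",
    "l": "1",
    "o": "0",
    "t": "7",
})

def to_leet(text: str) -> str:
    """Replace letters with look-alike digits, leaving other characters untouched."""
    return text.lower().translate(LEET_TABLE)

print(to_leet("leet"))   # -> 1337
print(to_leet("elite"))  # -> 31173
```

Note that because several letters map to the same digit (here both i and l become 1), the mapping is many-to-one and cannot be mechanically reversed, which is consistent with leet being a loose substitution cipher rather than a reversible encoding.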
2.
Hexadecimal
–
In mathematics and computing, hexadecimal is a positional numeral system with a radix, or base, of 16. It uses sixteen distinct symbols, most often the symbols 0–9 to represent values zero to nine and A–F (or a–f) to represent values ten to fifteen. Hexadecimal numerals are widely used by computer system designers and programmers. As each hexadecimal digit represents four binary digits, it allows a more human-friendly representation of binary-coded values; one hexadecimal digit represents a nibble, which is half of an octet or byte. For example, a byte can have values ranging from 00000000 to 11111111 in binary form. In a non-programming context, a subscript is typically used to give the radix. Several notations are used to support hexadecimal representation of constants in programming languages, usually involving a prefix or suffix; the prefix 0x is used in C and related languages, where a value such as 2AF3 would be denoted as 0x2AF3. In contexts where the base is not clear, hexadecimal numbers can be ambiguous and confused with numbers expressed in other bases. There are several conventions for expressing values unambiguously: a numerical subscript can give the base explicitly, so that 159₁₀ is decimal 159 and 159₁₆ is hexadecimal 159, which is equal to 345₁₀. Some authors prefer a text subscript, such as 159decimal and 159hex, or 159d and 159h. In URIs, character codes are written as hexadecimal pairs prefixed with %, as in example.com/name%20with%20spaces, where %20 is the code for the space character. In HTML and XML, a numeric character reference such as &#x2019; represents the right single quotation mark (’), Unicode code point number 2019 in hex (8217 in decimal). In the Unicode standard, a character value is represented with U+ followed by the hex value. Color references in HTML, CSS and X Window can be expressed with six hexadecimal digits prefixed with #, as in #FFFFFF for white; CSS also allows 3-hexdigit abbreviations with one hexdigit per component, so that #FA3 abbreviates #FFAA33. In *nix shells, AT&T assembly language and likewise the C programming language, to output an integer as hexadecimal with the printf function family, the format conversion code %X or %x is used.
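The byte/nibble relationship and the printf-style %X/%x conversions mentioned above can be demonstrated briefly. The sketch below uses Python, whose % string operator mirrors C's printf conversion codes:

```python
value = 0x2AF3              # hexadecimal literal with the C-style 0x prefix
print(value)                # -> 10995 (the same number in decimal)

# One hex digit encodes one nibble (4 bits), so a byte is two hex digits.
byte = 0b11111111           # binary 11111111, the largest byte value
print("%X" % byte)          # -> FF  (printf-style upper-case conversion)
print("%x" % byte)          # -> ff  (lower-case variant)
print(format(byte, "08b"))  # -> 11111111, converted back to binary

# CSS-style expansion of a 3-digit color: #FA3 abbreviates #FFAA33.
short = "FA3"
print("#" + "".join(ch * 2 for ch in short))  # -> #FFAA33
```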
In Intel-derived assembly languages and Modula-2, hexadecimal is denoted with a suffixed H or h; some assembly languages use the notation H'ABCD'. Ada and VHDL enclose hexadecimal numerals in based numeric quotes: 16#5A3#. For bit vector constants VHDL uses the notation x"5A3". Verilog represents hexadecimal constants in the form 8'hFF, where 8 is the number of bits in the value. The Smalltalk language uses the prefix 16r: 16r5A3. PostScript and the Bourne shell and its derivatives denote hex with the prefix 16#: 16#5A3. For PostScript, binary data can also be expressed as unprefixed consecutive hexadecimal pairs; in early systems, when a Macintosh crashed, one or two lines of hexadecimal code would be displayed under the Sad Mac to tell the user what went wrong. Common Lisp uses the prefixes #x and #16r; setting the variables *read-base* and *print-base* to 16 can also be used to switch the reader and printer of a Common Lisp system to hexadecimal number representation for reading and printing numbers, so that hexadecimal numbers can be represented without the #x or #16r prefix code. MSX BASIC, QuickBASIC, FreeBASIC and Visual Basic prefix hexadecimal numbers with &H: &H5A3. BBC BASIC and Locomotive BASIC use & for hex. The TI-89 and 92 series use a 0h prefix: 0h5A3. ALGOL 68 uses the prefix 16r to denote hexadecimal numbers; binary, quaternary and octal numbers can be specified similarly.
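The prefix and suffix conventions surveyed above all encode the same idea: a digit string plus an explicit radix. In Python, int() accepts the base directly, which makes it easy to check the equivalences cited in the text (the 5A3 constant is the same one used in the notation survey):

```python
# 159 read in different bases (cf. the 159 decimal vs. 159 hex example above):
print(int("159", 10))   # -> 159
print(int("159", 16))   # -> 345   (hexadecimal 159 equals decimal 345)

# The 5A3 constant that appears in most of the notations listed above:
print(int("5A3", 16))   # -> 1443

# Binary, octal and hexadecimal digit strings for the same value, as in
# ALGOL 68's radix prefixes:
print(int("11111111", 2), int("377", 8), int("FF", 16))  # -> 255 255 255
```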
3.
Rebus
–
A rebus is an allusional device that uses pictures to represent words or parts of words. It was a form of heraldic expression used in the Middle Ages to denote surnames. For example, in its basic form, three salmon are used to denote the surname Salmon. A more sophisticated example was the rebus of Bishop Walter Lyhart of Norwich. Rebuses are used extensively as a form of heraldic expression as a hint to the name of the bearer; they are not synonymous with canting arms. A man might have a rebus as an identification device entirely separate from his armorials. An example of canting arms proper are those of the Borough of Congleton in Cheshire, consisting of a conger eel, a lion (in Latin, leo) and a tun (a barrel). This word sequence, conger-leo-tun, enunciates the town's name. Similarly, the coat of arms of St. Ignatius Loyola contains wolves and a kettle, said by some to be a rebus for Loyola. The arms of Elizabeth Bowes-Lyon feature bows and lions. A modern example of the rebus used as a form of word play is: H + (picture of an ear) = Hear, or Here. By extension, the rebus also uses the positioning of words or parts of words in relation to each other to convey a hidden meaning, for example: p walk ark, read as "walk in the park". A rebus made up solely of letters is known as a gramogram or grammagram. The term rebus also refers to the use of a pictogram to represent a syllabic sound. As a precursor to the development of the alphabet, this process represents one of the most important developments of writing. Fully developed hieroglyphs read in rebus fashion were in use at Abydos in Egypt as early as 3400 BCE. The writing of correspondence in rebus form became popular in the 18th century and continued into the 19th century. Lewis Carroll wrote the children he befriended picture-puzzle rebus letters, nonsense letters, and looking-glass letters; rebus letters served either as a sort of code or simply as a pastime. In linguistics, the rebus principle is the use of existing symbols, such as pictograms, purely for their sounds regardless of their meaning.
Many ancient writing systems used the rebus principle to represent abstract words. An example that illustrates the principle is the representation of the sentence "I can see you" by using the pictographs of eye-can-sea-ewe. Some linguists believe that the Chinese developed their writing system according to the rebus principle, and Egyptian hieroglyphs sometimes used a similar system. A famous rebus statue of Ramses II uses three hieroglyphs to compose his name: Horus (for Ra), the child (mes), and the plant (su). Canada's 1980s children's game show Kidstreet featured a rebus during the bonus round, and in the United Kingdom, Catchphrase was a long-running game show which required contestants to decipher a rebus.
4.
Central processing unit
–
The computer industry has used the term central processing unit at least since the early 1960s. The form, design and implementation of CPUs have changed over the course of their history. Most modern CPUs are microprocessors, meaning they are contained on a single integrated circuit chip. An IC that contains a CPU may also contain memory, peripheral interfaces, and other components of a computer. Some computers employ a multi-core processor, which is a single chip containing two or more CPUs called cores; in that context, one can speak of such single chips as sockets. Array processors or vector processors have multiple processors that operate in parallel, and there also exists the concept of virtual CPUs, which are an abstraction of dynamically aggregated computational resources. Early computers such as the ENIAC had to be physically rewired to perform different tasks; since the term CPU is generally defined as a device for software execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer. The idea of a stored-program computer was already present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so that the machine could be finished sooner. On June 30, 1945, before ENIAC was made, mathematician John von Neumann distributed the paper entitled First Draft of a Report on the EDVAC; it was the outline of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a certain number of instructions of various types. Significantly, the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, which was the considerable time and effort required to reconfigure the computer for a new task; with von Neumann's design, the program that EDVAC ran could be changed simply by changing the contents of the memory. Early CPUs were custom designs used as part of a larger and sometimes distinctive computer; however, this method of designing custom CPUs for a particular application has largely given way to the development of multi-purpose processors produced in large quantities.
This standardization began in the era of discrete transistor mainframes and minicomputers and has accelerated with the popularization of the integrated circuit. The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines, and modern microprocessors appear in electronic devices ranging from automobiles to cellphones. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also utilized a stored-program design using punched paper tape rather than electronic memory. Relays and vacuum tubes were commonly used as switching elements, and a useful computer requires thousands or tens of thousands of switching devices. The overall speed of a system is dependent on the speed of the switches. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the Harvard Mark I failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs; clock signal frequencies ranging from 100 kHz to 4 MHz were very common at this time. The design complexity of CPUs increased as various technologies facilitated building smaller and more reliable electronic devices.
5.
Operating system
–
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. All computer programs, excluding firmware, require an operating system to function. Operating systems are found on many devices that contain a computer, from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows with a market share of around 83.3%; macOS by Apple Inc. is in second place, and the varieties of Linux are in third position. Linux distributions are dominant in the server and supercomputing sectors, and other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can run only one program at a time. Multi-tasking may be characterized in preemptive and co-operative types. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs; Unix-like operating systems, e.g. Solaris and Linux, support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner: 16-bit versions of Microsoft Windows used cooperative multi-tasking, while 32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked to and communicate with each other gave rise to distributed computing; distributed computations are carried out on more than one machine, and when computers in a group work in cooperation, they form a distributed system. Templating refers to creating a single virtual machine image as a guest operating system and then saving it as a tool for multiple running virtual machines; the technique is used both in virtualization and cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems.
They are designed to operate on small machines like PDAs with less autonomy, and they are able to operate with a limited number of resources. They are very compact and extremely efficient by design; Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing.
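The contrast this section draws between preemptive and cooperative multitasking can be sketched with Python generators: each "task" must explicitly yield control back to a scheduler, which is exactly the cooperative discipline described above. This is an illustration of the scheduling idea only, not of any real OS scheduler; the task bodies and round-robin policy are invented for the example:

```python
from collections import deque

def task(name, steps):
    """A cooperative task: it must yield to hand control back to the scheduler."""
    for i in range(steps):
        print(f"{name} step {i}")
        yield            # voluntary yield point; a preemptive OS would instead
                         # interrupt the task on a timer tick

def run(tasks):
    """A round-robin scheduler for cooperative tasks."""
    queue = deque(tasks)
    while queue:
        t = queue.popleft()
        try:
            next(t)          # resume the task until it yields again
            queue.append(t)  # still alive: reschedule at the back of the queue
        except StopIteration:
            pass             # task finished; drop it

run([task("A", 2), task("B", 2)])
# Interleaved output: A step 0, B step 0, A step 1, B step 1
```

A misbehaving task that never yields would starve every other task in this scheme, which is why cooperative systems like 16-bit Windows could be hung by a single program, and why preemptive kernels take the decision away from the task.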
6.
Debugger
–
A debugger or debugging tool is a computer program that is used to test and debug other programs. Running a program under full instruction-set simulation can be much slower than executing it directly, so some debuggers offer two modes of operation, full or partial simulation, to limit this impact. A trap occurs when the program cannot normally continue because of a programming bug or invalid data; for example, the program might have tried to use an instruction not available on the current version of the CPU or attempted to access unavailable or protected memory. When the program traps, a source-level or symbolic debugger shows the position in the original code, while a low-level debugger or machine-language debugger shows the line in the disassembly. Typically, debuggers offer a query processor, a symbol resolver, an expression interpreter, and a debug support interface at the top level. Some debuggers have the ability to modify program state while it is running, and it may also be possible to continue execution at a different location in the program to bypass a crash or logical error. The same functionality also makes a debugger useful as a general verification tool and for fault coverage and performance analysis. Most mainstream debugging engines, such as gdb and dbx, provide console-based command line interfaces; debugger front-ends are popular extensions to debugger engines that provide IDE integration, program animation, and visualization features. Some debuggers include a feature called reverse debugging, also known as historical debugging or backwards debugging; these debuggers make it possible to step a program's execution backwards in time. Microsoft Visual Studio offers IntelliTrace reverse debugging for C#, Visual Basic .NET, and some other languages; reverse debuggers also exist for C, C++, Java, Python, Perl, and other languages. Some are open source and some are commercial software. Some reverse debuggers slow down the target by orders of magnitude, so reverse debugging is very useful for certain types of problems but is still not commonly used. Some debuggers operate on a single specific language while others can handle multiple languages transparently.
Some debuggers also incorporate memory protection to avoid storage violations such as buffer overflow; this may be extremely important in transaction processing environments where memory is dynamically allocated from memory pools on a task-by-task basis. Most modern microprocessors have at least one of these features in their CPU design to make debugging easier: hardware support for single-stepping a program; in-system programming (ISP), which allows an external hardware debugger to reprogram a system under test (many systems with such ISP support also have other hardware debug support); hardware support for code and data breakpoints, such as address comparators and data value comparators or, with more work involved, page fault hardware; and JTAG access to hardware debug interfaces such as those on ARM architecture processors or using the Nexus command set. Processors used in embedded systems typically have extensive JTAG debug support. Microcontrollers with as few as six pins need to use low pin-count substitutes for JTAG, such as BDM, Spy-Bi-Wire, or debugWIRE; debugWIRE, for example, uses bidirectional signaling on the RESET pin.
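The single-stepping facility discussed in this section can be imitated in pure Python with the standard sys.settrace hook, which is the same mechanism the pdb debugger is built on. The sketch below is a toy line tracer, not a full debugger; the traced function buggy and the bookkeeping are invented for the example:

```python
import sys

executed = []  # relative line offsets visited inside the traced function

def trace(frame, event, arg):
    """Record each executed line of the traced code, like single-stepping."""
    if event == "line":
        executed.append(frame.f_lineno - frame.f_code.co_firstlineno)
    return trace  # returning the tracer keeps line events enabled

def buggy(n):
    total = 0
    for i in range(n):
        total += i
    return total

sys.settrace(trace)      # attach the tracer to this thread
result = buggy(3)
sys.settrace(None)       # detach the tracer

print(result)            # -> 3 (0 + 1 + 2)
print(executed)          # offsets show the loop body being re-entered each pass
```

A real debugger layers breakpoints on top of exactly this hook: it compares each reported line against a breakpoint table and drops into an interactive prompt on a match, instead of merely recording the line as done here.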
7.
Microsoft Office
–
Microsoft Office is an office suite of applications, servers, and services developed by Microsoft. It was first announced by Bill Gates on 1 August 1988. Initially a marketing term for a bundled set of applications, the first version of Office contained Microsoft Word, Microsoft Excel, and Microsoft PowerPoint. Over the years, Office applications have grown substantially closer with shared features such as a common spell checker and OLE data integration. Microsoft also positions Office as a development platform for software under the Office Business Applications brand. On 10 July 2012, Softpedia reported that Office is used by over a billion people worldwide. Office is produced in several versions targeted towards different end-users and computing environments. The original, and most widely used, version is the desktop version, available for PCs running the Windows and macOS operating systems. The most current desktop version is Office 2016 for Windows and macOS, released on 22 September 2015 and 9 July 2015, respectively. More recently, Microsoft developed Office Mobile, a set of free-to-use versions of Office applications for mobile devices. Microsoft also produces and runs Office Online, a version of the core Office apps that runs within a web browser. Microsoft Word is a word processor available for Windows and macOS; Word is also included in some editions of the now-discontinued Microsoft Works. The first version of Word, released in the autumn of 1983, was for the MS-DOS operating system and had the distinction of introducing the mouse to a broad population; Word 1.0 could be purchased with a bundled mouse, though none was required. Following the precedents of LisaWrite and MacWrite, Word for Macintosh attempted to add closer WYSIWYG features into its package; Word for Mac was released in 1985. Word for Mac was the first graphical version of Microsoft Word, and its proprietary DOC format is a de facto standard, although Word 2007 deprecated this format in favor of Office Open XML, which was later standardized by Ecma International as an open format.
Support for Portable Document Format (PDF) and OpenDocument was first introduced in Word for Windows with Service Pack 2 for Word 2007. Microsoft Excel is a spreadsheet program that originally competed with the dominant Lotus 1-2-3, and eventually outsold it. It is available for the Windows and macOS platforms; Microsoft released the first version of Excel for the Mac OS in 1985, and the first Windows version in November 1987. Microsoft PowerPoint is a presentation program for Windows and macOS. It is used to create slideshows composed of text, graphics, and other objects. Microsoft Access is a database management system for Windows that combines the relational Microsoft Jet Database Engine with a graphical user interface and software-development tools. Microsoft Access stores data in its own format based on the Access Jet Database Engine; it can also import or link directly to data stored in other applications and databases. Microsoft Outlook is a personal information manager.
8.
Apple Inc.
–
Apple is an American multinational technology company headquartered in Cupertino, California, that designs, develops, and sells consumer electronics, computer software, and online services. Apple's consumer software includes the macOS and iOS operating systems, the iTunes media player, and the Safari web browser. Its online services include the iTunes Store, the iOS App Store and Mac App Store, and Apple Music. Apple was founded by Steve Jobs, Steve Wozniak, and Ronald Wayne in April 1976 to develop and sell personal computers. It was incorporated as Apple Computer, Inc. in January 1977, and joined the Dow Jones Industrial Average in March 2015. In November 2014, Apple became the first U.S. company to be valued at over US$700 billion, in addition to being the largest publicly traded corporation in the world by market capitalization. The company employs 115,000 full-time employees as of July 2015, and it operates the online Apple Store and iTunes Store, the latter of which is the world's largest music retailer. Consumers use more than one billion Apple products worldwide as of March 2016. Apple's worldwide annual revenue totaled $233 billion for the fiscal year ending in September 2015; this revenue accounts for approximately 1.25% of the total United States GDP. The corporation receives significant criticism regarding the labor practices of its contractors and its environmental and business practices, including the origins of source materials. Apple was founded on April 1, 1976, by Steve Jobs, Steve Wozniak, and Ronald Wayne. The Apple I kits were computers single-handedly designed and hand-built by Wozniak and first shown to the public at the Homebrew Computer Club. The Apple I was sold as a motherboard, which was less than what is now considered a complete personal computer. The Apple I went on sale in July 1976 and was market-priced at $666.66. Apple was incorporated January 3, 1977, without Wayne, who sold his share of the company back to Jobs and Wozniak for $800.
Multimillionaire Mike Markkula provided essential business expertise and funding of $250,000 during the incorporation of Apple. During the first five years of operations, revenues grew exponentially, doubling about every four months; between September 1977 and September 1980, yearly sales grew from $775,000 to $118 million. The Apple II, also invented by Wozniak, was introduced on April 16, 1977, at the first West Coast Computer Faire. It differed from its rivals, the TRS-80 and Commodore PET, because of its character cell-based color graphics and open architecture. While early Apple II models used ordinary cassette tapes as storage devices, they were superseded by the introduction of a 5¼-inch floppy disk drive and interface called the Disk II. The Apple II was chosen to be the platform for the first killer app of the business world, VisiCalc, a spreadsheet program. VisiCalc created a business market for the Apple II and gave home users an additional reason to buy one. Before VisiCalc, Apple had been a distant third-place competitor to Commodore and Tandy; by the end of the 1970s, Apple had a staff of computer designers and a production line.
9.
Microsoft
–
Microsoft's best-known software products are the Microsoft Windows line of operating systems, the Microsoft Office office suite, and the Internet Explorer and Edge web browsers. Its flagship hardware products are the Xbox video game consoles and the Microsoft Surface tablet lineup. As of 2016, it was the world's largest software maker by revenue, and one of the world's most valuable companies. Microsoft was founded by Paul Allen and Bill Gates on April 4, 1975, to develop and sell BASIC interpreters for the Altair 8800. It rose to dominate the personal computer operating system market with MS-DOS in the mid-1980s, followed by Microsoft Windows. The company's 1986 initial public offering, and subsequent rise in its share price, created three billionaires and an estimated 12,000 millionaires among Microsoft employees. Since the 1990s, it has increasingly diversified from the operating system market and has made a number of corporate acquisitions. In May 2011, Microsoft acquired Skype Technologies for $8.5 billion, and in June 2012, Microsoft entered the personal computer production market for the first time with the launch of the Microsoft Surface, a line of tablet computers. The word "Microsoft" is a portmanteau of microcomputer and software. Paul Allen and Bill Gates, childhood friends with a passion for computer programming, sought to make a successful business utilizing their shared skills. In 1972 they founded their first company, named Traf-O-Data, which offered a rudimentary computer that tracked and analyzed automobile traffic data. Allen went on to pursue a degree in computer science at Washington State University. The January 1975 issue of Popular Electronics featured Micro Instrumentation and Telemetry Systems' (MITS) Altair 8800 microcomputer. Allen suggested that they could program a BASIC interpreter for the device; after a call from Gates claiming to have a working interpreter, MITS requested a demonstration. Since they didn't actually have one, Allen worked on a simulator for the Altair while Gates developed the interpreter, and they officially established Microsoft on April 4, 1975, with Gates as the CEO.
Allen came up with the original name of Micro-Soft, as recounted in a 1995 Fortune magazine article. In August 1977 the company formed an agreement with ASCII Magazine in Japan, resulting in its first international office, and the company moved to a new home in Bellevue, Washington, in January 1979. Microsoft entered the operating system business in 1980 with its own version of Unix, called Xenix; however, it was MS-DOS that solidified the company's dominance. IBM awarded a contract to Microsoft in November 1980 to provide a version of the CP/M operating system for the upcoming IBM Personal Computer. For this deal, Microsoft purchased a CP/M clone called 86-DOS from Seattle Computer Products, branding it as MS-DOS, and following the release of the IBM PC in August 1981, Microsoft retained ownership of MS-DOS. Since IBM copyrighted the IBM PC BIOS, other companies had to reverse engineer it in order for non-IBM hardware to run as IBM PC compatibles. Due to various factors, such as MS-DOS's available software selection, Microsoft eventually became the leading PC operating systems vendor. The company expanded into new markets with the release of the Microsoft Mouse in 1983, as well as with a publishing division named Microsoft Press. Paul Allen resigned from Microsoft in 1983 after developing Hodgkin's disease. While jointly developing a new operating system with IBM in 1984, OS/2, Microsoft released Microsoft Windows, a graphical extension for MS-DOS, on November 20, 1985. Once Microsoft informed IBM of Windows NT, the OS/2 partnership deteriorated, and in 1990, Microsoft introduced its office suite, Microsoft Office.
10.
Hyper-V
–
Microsoft Hyper-V, codenamed Viridian and formerly known as Windows Server Virtualization, is a native hypervisor; it can create virtual machines on x86-64 systems running Windows. Starting with Windows 8, Hyper-V supersedes Windows Virtual PC as the hardware virtualization component of the client editions of Windows NT. A server computer running Hyper-V can be configured to expose individual virtual machines to one or more networks. Hyper-V was first released alongside Windows Server 2008, and has been available without additional charge for all the Windows Server and some client operating systems since. Hyper-V is also available on the Xbox One, where it launches both the Xbox OS and Windows 10. A beta version of Hyper-V was shipped with certain x86-64 editions of Windows Server 2008; the finalized version was released on June 26, 2008 and was delivered through Windows Update. Hyper-V has since been released with every version of Windows Server. Microsoft provides Hyper-V through two channels. As part of Windows, Hyper-V is a component of Windows Server 2008 and later; it is also available in x64 SKUs of the Pro and Enterprise editions of Windows 8 and Windows 8.1. As Hyper-V Server, it is a freeware edition of Windows Server with limited functionality and the Hyper-V component. Hyper-V Server 2008 was released on October 1, 2008; it consists of Windows Server 2008 Server Core and the Hyper-V role, while other Windows Server 2008 roles are disabled and there are limited Windows services. Hyper-V Server 2008 is limited to a command-line interface used to configure the host OS, physical hardware, and software. A menu-driven CLI and some freely downloadable script files simplify configuration; in addition, Hyper-V Server supports remote access via Remote Desktop Connection. This allows much easier point-and-click configuration and monitoring of the Hyper-V Server. Hyper-V Server 2008 R2 was made available in September 2009 and includes Windows PowerShell v2 for greater CLI control.
Remote access to Hyper-V Server requires CLI configuration of network interfaces, and using a Windows Vista PC to administer Hyper-V Server 2008 R2 is not fully supported. Hyper-V implements isolation of virtual machines in terms of a partition. A partition is a logical unit of isolation, supported by the hypervisor, in which each guest operating system executes. A hypervisor instance has to have at least one parent partition. The virtualization stack runs in the parent partition and has direct access to the hardware devices. The parent partition then creates the child partitions which host the guest OSs; a parent partition creates child partitions using the hypercall API, which is the application programming interface exposed by Hyper-V. A child partition does not have access to the physical processor, nor does it handle its real interrupts. Instead, it has a virtual view of the processor and runs in Guest Virtual Address space. Depending on VM configuration, Hyper-V may expose only a subset of the processors to each partition. The hypervisor handles the interrupts to the processor, and redirects them to the respective partition using a logical Synthetic Interrupt Controller.
11.
IOS
–
iOS is a mobile operating system created and developed by Apple Inc. exclusively for its hardware. It is the operating system that presently powers many of the company's mobile devices, including the iPhone and iPad. It is the second most popular mobile operating system globally after Android, and iPad tablets are also the second most popular by sales. Originally unveiled in 2007 for the iPhone, iOS has been extended to support other Apple devices such as the iPod Touch and the iPad. As of January 2017, Apple's App Store contains more than 2.2 million iOS applications, 1 million of which are native for iPads, and these mobile apps have collectively been downloaded more than 130 billion times. The iOS user interface is based upon direct manipulation, using multi-touch gestures; interface control elements consist of sliders, switches, and buttons. Internal accelerometers are used by some applications to respond to shaking the device or rotating it in three dimensions. Apple has been praised for incorporating thorough accessibility functions into iOS, enabling users with vision and hearing disabilities to properly use its products. Major versions of iOS are released annually; the current version, iOS 10, was released on September 13, 2016. In iOS, there are four abstraction layers: the Core OS, Core Services, Media, and Cocoa Touch layers. In 2005, when Steve Jobs began planning the iPhone, he had a choice to either shrink the Mac or enlarge the iPod. Scott Forstall was responsible for creating a software development kit for programmers to build iPhone apps, as well as an App Store within iTunes. The operating system was unveiled with the iPhone at the Macworld Conference & Expo on January 9, 2007, and released in June of that year. At the time of its unveiling in January, Steve Jobs claimed that the iPhone runs OS X and runs desktop-class applications, but at the time of the iPhone's release, the operating system did not have an official name. Initially, third-party native applications were not supported; Steve Jobs' reasoning was that developers could build web applications through the Safari web browser that would behave like native apps on the iPhone.
In October 2007, Apple announced that a native Software Development Kit was under development, and on March 6, 2008, Apple held a press event announcing the iPhone SDK. The iOS App Store was opened on July 10, 2008 with an initial 500 applications available, a number that grew to 2.2 million by January 2017; as of March 2016, 1 million apps are natively compatible with the iPad tablet computer. These apps have collectively been downloaded more than 130 billion times. App intelligence firm Sensor Tower has estimated that the App Store will reach 5 million apps by the year 2020. On September 5, 2007, Apple released the iPod Touch, and Apple also sold more than one million iPhones during the 2007 holiday season
12.
IPv6 address
–
An Internet Protocol Version 6 address is a numerical label that is used to identify a network interface of a computer or other network node participating in an IPv6 computer network. An IP address serves the purpose of identifying an individual network interface of a host and locating it on the network. For routing, IP addresses are present in fields of the packet header, where they indicate the source and destination of the packet. IPv6 is the successor to the first addressing infrastructure of the Internet, Internet Protocol version 4. In contrast to IPv4, which defined an IP address as a 32-bit value, IPv6 addresses are 128 bits wide; therefore, IPv6 has a vastly enlarged address space compared to IPv4. IPv6 addresses are classified by the primary addressing and routing methodologies common in networking: unicast addressing, anycast addressing, and multicast addressing. A unicast address identifies a single network interface, and the Internet Protocol delivers packets sent to a unicast address to that specific interface. An anycast address is assigned to a group of interfaces, usually belonging to different nodes; a packet sent to an anycast address is delivered to just one of the interfaces, typically the nearest host. Anycast addresses cannot be identified easily, as they have the same format as unicast addresses; almost any unicast address can be employed as an anycast address. A multicast address is used by multiple hosts, which acquire the multicast address destination by participating in the multicast distribution protocol among the network routers. A packet that is sent to a multicast address is delivered to all interfaces that have joined the corresponding multicast group. IPv6 does not implement broadcast addressing; broadcast's traditional role is subsumed by multicast addressing to the all-nodes link-local multicast group ff02::1. However, the use of the all-nodes group is not recommended, and most IPv6 protocols use a dedicated link-local multicast group to avoid disturbing every interface in the network. 
An IPv6 address consists of 128 bits. Unicast and anycast addresses are typically composed of two logical parts: a 64-bit network prefix used for routing, and a 64-bit interface identifier used to identify a host's network interface. The network prefix is contained in the most significant 64 bits of the address. The size of the routing prefix may vary; a larger prefix size means a smaller subnet id size. The bits of the subnet id field are available to the network administrator to define subnets within the given network. A link-local address is based on the interface identifier: the prefix field contains the binary value 1111111010, and the 54 zeroes that follow make the total network prefix the same for all link-local addresses, rendering them non-routable. Multicast addresses are formed according to several specific formatting rules, depending on the application; the prefix holds the binary value 11111111 for any multicast address. Currently, 3 of the 4 flag bits in the flg field are defined, and the 4-bit scope field is used to indicate where the address is valid and unique
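The split between the 64-bit network prefix and the 64-bit interface identifier, and the special link-local and multicast prefixes, can be illustrated with Python's standard ipaddress module. The specific addresses below (2001:db8::1, fe80::1) are illustrative examples, not taken from the article:

```python
import ipaddress

# A unicast address: 64-bit routing prefix + 64-bit interface identifier.
addr = ipaddress.IPv6Address("2001:db8::1")
prefix = int(addr) >> 64              # most significant 64 bits: network prefix
iface_id = int(addr) & (2**64 - 1)    # least significant 64 bits: interface id

# Link-local addresses fall under fe80::/10 (binary prefix 1111111010).
link_local = ipaddress.IPv6Address("fe80::1")
print(link_local.is_link_local)       # True

# Multicast addresses begin with binary 11111111 (ff00::/8);
# ff02::1 is the all-nodes link-local multicast group.
all_nodes = ipaddress.IPv6Address("ff02::1")
print(all_nodes.is_multicast)         # True
```

The module treats the address purely as a 128-bit integer, so the prefix/identifier split is a simple shift and mask.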
13.
World IPv6 Day and World IPv6 Launch Day
–
World IPv6 Day was announced on January 12, 2011 with five anchoring companies: Facebook, Google, Yahoo, Akamai Technologies, and Limelight Networks. The event started at 00:00 UTC on June 8, 2011. The main motivation for the event was to evaluate the real-world effects of IPv6 brokenness, as seen by various synthetic tests; to this end, during World IPv6 Day major web companies enabled IPv6 on their main websites for 24 hours. Although Internet service providers were encouraged to participate, they were not expected to deploy anything active on that day, just to increase their readiness to handle support issues. The concept was discussed at the 2010 Google IPv6 Conference. Many companies and organizations participated in the experiment, including the largest search engines and social networking websites; there were more than 400 participants in the original World IPv6 Day. Major carriers measured the percentage of native IPv6 traffic among all Internet traffic as increasing from 0.024% to 0.041%. Most IPv6 traffic in consumer access networks was to Google sites, demonstrating the need for content sites to adopt IPv6 for success. Early results indicated that the day passed according to plan and without significant problems for the participants. Cisco and Google reported no significant issues during the test, and Facebook called the results encouraging and decided to leave their developer site IPv6-enabled as a result. But the consensus was that work needed to be done before IPv6 could consistently be applied. The participants will continue to perform detailed analyses of the data, and many participants find it worthwhile to continue to maintain dual stacks. The subsequent World IPv6 Launch event was billed as "this time, it's for real". According to Alain Fiocco of Cisco, content that currently receives roughly 30% of global World Wide Web IPv4 pageviews should now have become available via IPv6 after World IPv6 Launch Day. 
IPv6 traffic on AMS-IX rose by 50% on the launch day; IPv6 traffic on AMS-IX was measured by EtherType distribution as 0.4 percent, while IPv4 was measured as 99.6 percent on average in both daily and weekly graphs. See also: IPv6 deployment; IPv6 brokenness and DNS whitelisting; Internet Society – World IPv6 Day; Internet Society – World IPv6 Launch; After World IPv6 Day – engineers from Cisco, Google, Hurricane Electric, and Yahoo discuss the deployment work done for World IPv6 Day and share the experience learned
14.
Plan 9 from Bell Labs
–
Plan 9 from Bell Labs is a distributed operating system, originally developed by the Computing Sciences Research Center at Bell Labs between the mid-1980s and 2002. It takes some of the principles of Unix, which was developed in the same research group, and extends them to a networked environment. In Plan 9, virtually all computing resources, including files, network connections, and user interface devices, are represented as files, and a unified network protocol called 9P ties a network of computers running Plan 9 together, allowing them to share all resources so represented. The name Plan 9 from Bell Labs is a reference to the Ed Wood 1959 cult science fiction Z-movie Plan 9 from Outer Space; likewise, Glenda, the Plan 9 Bunny, is presumably a reference to Wood's film Glen or Glenda. The system continues to be used and developed by operating system researchers. Plan 9 was originally developed, starting in the mid-1980s, by members of the Computing Science Research Center at Bell Labs, the same group that originally developed Unix and C. The Plan 9 team was led by Rob Pike, Ken Thompson, Dave Presotto and Phil Winterbottom. Over the years, many developers have contributed to the project, including Brian Kernighan, Tom Duff, Doug McIlroy, and Bjarne Stroustrup. Plan 9 replaced Unix as Bell Labs's primary platform for operating systems research, and it explored several changes to the original Unix model that facilitate the use and programming of the system, notably in distributed multi-user environments. After several years of development and internal use, Bell Labs shipped the system to universities in 1992. Three years later, in 1995, Plan 9 was made available to third parties by AT&T via the book publisher Harcourt Brace. By early 1996, the Plan 9 project had been put on the back burner by AT&T in favor of Inferno. In the late 1990s, Bell Labs' new owner Lucent Technologies dropped commercial support for the project, and a fourth release under a new free software license occurred in 2002. 
A user and development community, including current and former Bell Labs personnel, continues to develop the system. The development source tree is accessible over the 9P and HTTP protocols and is used to update existing installations. In addition to the components of the OS included in the ISOs, Bell Labs also hosts a repository of externally developed applications. Plan 9 is a distributed operating system, designed to make a network of heterogeneous and geographically separated computers function as a single system. In a typical Plan 9 installation, users work at terminals running the window system rio, while permanent data storage is provided by additional network hosts acting as file servers. Its designers state that "the foundations of the system are built on two ideas: a per-process name space and a simple message-oriented file system protocol." The potential complexity of this setup is controlled by a set of standard locations for common resources
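The message-oriented file system protocol mentioned above, 9P, frames every message as size[4] type[1] tag[2] followed by type-specific fields, all little-endian, with strings carrying a 2-byte length prefix. A minimal sketch of that framing in Python, using the 9P2000 Tversion message type (constant 100) and the NOTAG tag value; this is an illustration of the wire format, not a working client:

```python
import struct

NOTAG = 0xFFFF       # tag used during version negotiation
TVERSION = 100       # 9P2000 message type for a Tversion request

def tversion(msize: int, version: bytes = b"9P2000") -> bytes:
    """Frame a Tversion request: size[4] type[1] tag[2] msize[4] version[s].

    All fields are little-endian; strings carry a 2-byte length prefix.
    The leading size field counts the whole message, itself included.
    """
    body = struct.pack("<BHI", TVERSION, NOTAG, msize)
    body += struct.pack("<H", len(version)) + version
    return struct.pack("<I", 4 + len(body)) + body

msg = tversion(8192)
size, = struct.unpack_from("<I", msg)
print(size == len(msg))  # True: the size field equals the total message length
```

A real client would write this message to a connection and then decode the server's Rversion reply using the same framing.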
15.
Mach-O
–
Mach-O, short for Mach object file format, is a file format for executables, object code, shared libraries, dynamically loaded code, and core dumps. A replacement for the a.out format, Mach-O offers more extensibility, and it is used by most systems based on the Mach kernel. NeXTSTEP, macOS, and iOS are examples of systems that have used this format for executables and libraries. Each Mach-O file is made up of one Mach-O header, followed by a series of load commands, followed by one or more segments. Mach-O uses the REL relocation format to handle references to symbols. When looking up symbols, Mach-O uses a two-level namespace that encodes each symbol into an object/symbol name pair, which is then linearly searched for, first by the object and then by the symbol name. The basic structure, a list of variable-length load commands that reference pages of data elsewhere in the file, was also used in the executable file format for Accent. The Accent file format was, in turn, based on an idea from Spice Lisp. Under NeXTSTEP, OPENSTEP, macOS, and iOS, multiple Mach-O files can be combined in a multi-architecture binary, which allows a single file to contain code to support multiple instruction set architectures. For example, a multi-architecture binary for iOS can have 6 instruction set architectures, namely ARMv6, ARMv7, ARMv7s, ARMv8, x86, and x86-64. The differences between binaries targeting different OS versions stem from load commands that the dynamic linker in previous Mac OS X versions does not understand. Another significant change to the Mach-O format is the change in how the Link Edit tables function. In Mac OS X 10.6 these new Link Edit tables are compressed by removing unused and unneeded bits of information; however, Mac OS X 10.5 and earlier cannot read them. To make backwards-compatible executables, the linker flag -mmacosx-version-min= can be used. Some versions of NetBSD have had Mach-O support added as part of an implementation of binary compatibility; for Linux, a Mach-O loader was written by Shinichiro Hamaji that can load 10.6 binaries. 
As a more extensive solution based on this loader, the Darling Project aims at providing a complete environment that allows OS X applications to run on Linux
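The fixed header that precedes the load commands can be decoded with a short Python sketch. The field layout (magic, cputype, cpusubtype, filetype, ncmds, sizeofcmds, flags, each a 32-bit word, with 64-bit files adding a reserved word) follows the Mach-O header structure; the header bytes built at the end are synthetic, for illustration only:

```python
import struct

MH_MAGIC    = 0xFEEDFACE  # 32-bit Mach-O magic
MH_MAGIC_64 = 0xFEEDFACF  # 64-bit Mach-O magic

def parse_macho_header(data: bytes) -> dict:
    """Decode the fixed Mach-O header that precedes the load commands."""
    magic, = struct.unpack_from("<I", data)
    if magic not in (MH_MAGIC, MH_MAGIC_64):
        raise ValueError("not a little-endian Mach-O file")
    names = ("magic", "cputype", "cpusubtype", "filetype",
             "ncmds", "sizeofcmds", "flags")
    return dict(zip(names, struct.unpack_from("<7I", data)))

# A synthetic 64-bit header declaring 3 load commands (filetype 2 = executable).
fake = struct.pack("<8I", MH_MAGIC_64, 0x0100000C, 0, 2, 3, 0x200, 0, 0)
print(parse_macho_header(fake)["ncmds"])  # 3
```

A real parser would then walk ncmds load commands, each of which declares its own size, mirroring the variable-length command list described above.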
16.
Universal binary
–
The same mechanism that is used to select between the PowerPC or Intel builds of an application is also used to select between the 32-bit or 64-bit builds of either PowerPC or Intel architectures. At the same time, Apple does not specify whether or not third-party software publishers must bundle separate builds for both the 32-bit and 64-bit variants of either architecture. Universal binaries typically include both PowerPC and x86 versions of a compiled application. The operating system detects a universal binary by its header, and this allows the application to run natively on any supported architecture, with no negative performance impact beyond an increase in the storage space taken up by the larger binary. Presently, fat binaries would only be necessary for software that is designed to have compatibility with older versions of Mac OS X running on older hardware. There are two general alternative solutions. The first is to simply provide two separate binaries, one compiled for the x86 architecture and one for the PowerPC architecture. However, this can be confusing to software users unfamiliar with the difference between the two, although the confusion can be remedied through improved documentation or the use of hybrid CDs. The other alternative is to rely on emulation of one architecture by a system running the other architecture. This approach results in lower performance, and is generally regarded as an interim solution to be used only until universal binaries or specifically compiled binaries are available. Universal binaries are larger than single-platform binaries, because multiple copies of the compiled code must be stored. However, because some resources are shared by the two architectures, the size of the resulting universal binary can be, and usually is, smaller than both binaries combined. They also do not require extra RAM, because only one of the two copies is loaded for execution. 
Apple previously used a similar technique during the transition from 68k processors to PowerPC in the mid-1990s; these dual-platform executables were called fat binaries, referring to their larger file size. The binary format underlying the universal binary, a Mach-O archive, is the same format used for the fat binary in NeXTSTEP. Apple's Xcode 2.1 supports the creation of these files; applications originally built using other development tools might require additional modification. These reasons have been given for the delay between the introduction of Intel-based Macintosh computers and the availability of third-party applications in universal binary format. Apple's delivery of Intel-based computers several months ahead of their previously announced schedule is another factor in this gap. Many software developers have provided universal binary updates for their products since the 2005 WWDC; as of December 2008, Apple's website lists more than 7,500 universal applications. On April 16, 2007, Adobe Systems announced the release of Adobe Creative Suite 3 in universal binary format. The Unix file command can identify Mach-O universal binaries and report which architectures they support, and Snow Leopard's System Profiler provides this information on the Applications tab. See also: Apple Developer Transition Resource Center; Apple Universal Binary Programming Guidelines
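The header by which the operating system detects a universal binary can be read with a few lines of Python. A fat binary starts with a big-endian magic number and an architecture count, followed by one record per embedded slice (cputype, cpusubtype, file offset, size, alignment); the two-slice header constructed below is synthetic, for illustration only:

```python
import struct

FAT_MAGIC = 0xCAFEBABE  # universal (fat) binary magic, stored big-endian

def parse_fat_arches(data: bytes) -> list:
    """List the (cputype, offset, size) of each slice in a universal binary.

    The fat header and arch table are big-endian regardless of the
    architectures of the embedded Mach-O slices.
    """
    magic, nfat_arch = struct.unpack_from(">2I", data)
    if magic != FAT_MAGIC:
        raise ValueError("not a universal binary")
    arches = []
    for i in range(nfat_arch):
        cputype, cpusubtype, offset, size, align = struct.unpack_from(
            ">5I", data, 8 + i * 20)   # each fat_arch record is 20 bytes
        arches.append((cputype, offset, size))
    return arches

# Synthetic two-slice header: PowerPC (cputype 18) and i386 (cputype 7).
fake = struct.pack(">2I", FAT_MAGIC, 2)
fake += struct.pack(">5I", 18, 0, 4096, 8192, 12)   # ppc slice
fake += struct.pack(">5I", 7, 3, 12288, 8192, 12)   # i386 slice
print(parse_fat_arches(fake))  # [(18, 4096, 8192), (7, 12288, 8192)]
```

The loader simply picks the slice whose cputype matches the running hardware and maps the Mach-O file found at that offset, which is why only one copy ever occupies RAM.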
17.
Java bytecode
–
Java bytecode is the instruction set of the Java virtual machine. Each bytecode instruction is composed of one, or in some cases two, bytes that represent the operation, along with zero or more bytes for passing parameters. Of the 256 possible byte-long opcodes, as of 2015, 198 are in use and 54 are reserved for future use. A Java programmer does not need to be aware of or understand Java bytecode at all. Many instructions have prefixes and/or suffixes referring to the types of operands they operate on; for example, iadd will add two integers, while dadd will add two doubles. The const, load, and store instructions may take a suffix of the form _n, where the maximum n for const differs by type. The const instructions push a value of the specified type onto the stack; for example, iconst_5 will push an integer 5, while dconst_1 will push a double 1. There is also an aconst_null, which pushes null. The n for the load and store instructions specifies the location in the local variable table to load from or store to: the aload_0 instruction pushes the object in variable 0 onto the stack, and istore_1 stores the integer on the top of the stack into variable 1. For variables with higher numbers the suffix is dropped and explicit operands must be used. Some projects provide Java assemblers to enable writing Java bytecode by hand; assembly code may also be generated by machine, for example by a compiler targeting the Java virtual machine. Java syntax may be used for class or interface definition, while method bodies are specified using bytecode instructions. Examples include Krakatau Bytecode Tools, which currently contains three tools (a decompiler and disassembler for Java classfiles and an assembler to create classfiles), and lilac, an assembler and disassembler for the Java virtual machine. Some processors can execute Java bytecode natively; such processors are termed Java processors. 
The Java virtual machine provides some support for dynamically typed languages. JSR 292 added a new invokedynamic instruction at the JVM level, to allow method invocation relying on dynamic type checking. The Da Vinci Machine is a virtual machine implementation that hosts JVM extensions aimed at supporting dynamic languages. All JVMs supporting JSE 7 also include the invokedynamic opcode. The bytecode can be viewed as text using F3
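The stack-based behavior of the iconst_<n> and iadd instructions described above can be illustrated with a toy interpreter in Python. The opcode values (iconst_0 through iconst_5 at 0x03-0x08, iadd at 0x60, ireturn at 0xAC) are the real JVM opcode numbers, but this handles only a handful of instructions and is a sketch, not a JVM:

```python
# Opcode values from the JVM instruction set.
ICONST_0 = 0x03   # iconst_<n> for n in 0..5 occupy opcodes 0x03..0x08
IADD     = 0x60
IRETURN  = 0xAC

def run(code: bytes) -> int:
    """Execute a toy subset of Java bytecode on an operand stack."""
    stack = []
    for op in code:
        if ICONST_0 <= op <= ICONST_0 + 5:
            stack.append(op - ICONST_0)       # push the constant n
        elif op == IADD:
            b, a = stack.pop(), stack.pop()   # pop two ints, push their sum
            stack.append(a + b)
        elif op == IRETURN:
            return stack.pop()
        else:
            raise ValueError(f"unsupported opcode {op:#x}")

# iconst_2, iconst_3, iadd, ireturn  ==  return 2 + 3
print(run(bytes([0x05, 0x06, IADD, IRETURN])))  # 5
```

A real JVM additionally verifies the bytecode, tracks a local variable table for the load and store instructions, and dispatches the full instruction set, but the operand-stack discipline is the same.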
18.
Peet's Coffee
–
Peet's Coffee & Tea is a San Francisco Bay Area based specialty coffee roaster and retailer. Alfred Peet grew up in the coffee business while living in the Netherlands as a child. Moving to San Francisco when he was 35, he began roasting coffee in the 1960s; Peet started Peet's Coffee, Tea & Spices as a single store in 1966 in Berkeley, California. Peet's original outlet is located on the corner of Walnut and Vine in the Gourmet Ghetto of North Berkeley. That location now contains a museum displaying memorabilia and historical coffee equipment. Peet's predates Starbucks and served as a model for that enterprise. Peet sold his business in 1979 to Sal Bonavita, staying on as a consultant until 1984. In 1984, Jerry Baldwin, one of the founders of Starbucks, along with co-owner Jim Reynolds, became the buyer. In 1987, Baldwin and Peet's owners sold the Starbucks chain to focus on Peet's. The company went public in January 2001; after a successful IPO, shares struggled through the first year. In 2003, the first full-service Peet's store on a university campus was opened within the Clark Center building at Stanford University, and Peet's coffee is currently served at all Stanford Dining locations. In 2005, UC Berkeley opened its own Peet's franchise on campus in Dwinelle Hall; similarly, in 2009, locations opened at the UW–Madison's Memorial Union, Villanova University, and UC San Diego. In 2007, Peet's opened a plant in Alameda, which replaced the former operations in Emeryville, California. Peet's operated 193 retail locations as of the first quarter of 2010; most are in California, with further locations in Washington, Oregon, Colorado, Illinois, Ohio, Pennsylvania and Massachusetts. In May 2011, Peet's had a market capitalization of roughly $636 million. 
In August 2014, Peet's acquired Mighty Leaf Tea, a specialty tea brand based in the Bay Area, and it was announced on October 30, 2015 that Peet's Coffee & Tea was acquiring a majority stake in Chicago-based Intelligentsia Coffee & Tea, making this the second such purchase in a month, following Portland, Oregon's Stumptown Coffee. By 2010, Peet's was offering health, dental and vision plans to part-time workers who had worked at least 500 hours and were averaging over 21-hour workweeks. Peet's became a target for union organizing as early as 2002; at that time, workers at the Santa Cruz, California branch of Peet's sought affiliation with the United Food and Commercial Workers International Union Local 839
19.
Java (software platform)
–
Java is used in a wide variety of computing platforms, from embedded devices and mobile phones to enterprise servers and supercomputers. While they are less common than standalone Java applications, Java applets run in secure, sandboxed environments to provide many features of native applications. In addition, several languages have been designed to run natively on the JVM, including Scala, Clojure and Apache Groovy. Java syntax borrows heavily from C and C++, but object-oriented features are modeled after Smalltalk. Java eschews certain low-level constructs such as pointers and has a very simple memory model where every object is allocated on the heap and all variables of object types are references. Memory management is handled through integrated automatic garbage collection performed by the JVM. On November 13, 2006, Sun Microsystems made the bulk of its implementation of Java available under the GNU General Public License. The latest version is Java 8, the only supported version as of 2016; Oracle has announced that using older versions of their JVM implementation presents serious risks due to unresolved security issues. The Java platform is a suite of programs that facilitate developing and running programs written in the Java programming language. A Java platform will include an execution engine, a compiler, and a set of libraries. Java ME specifies several different sets of libraries for devices with limited storage and display, and it is often used to develop applications for mobile devices, PDAs, TV set-top boxes, and printers. Java SE is for general-purpose use on desktop PCs and servers. Java EE is Java SE plus various APIs which are useful for multi-tier client–server enterprise applications. The Java platform consists of several programs, each of which provides a portion of its overall capabilities. 
For example, the Java compiler, which converts Java source code into Java bytecode, is provided as part of the Java Development Kit, while the Java Runtime Environment, complementing the JVM with a just-in-time compiler, converts intermediate bytecode into native machine code on the fly. The Java platform also includes a set of libraries. The heart of the Java platform is the concept of a virtual machine that executes Java bytecode programs; this bytecode is the same no matter what hardware or operating system the program is running under. There is a JIT compiler within the Java Virtual Machine, or JVM, which translates the Java bytecode into native processor instructions at run-time. The use of bytecode as an intermediate language permits Java programs to run on any platform that has a virtual machine available. Since JRE version 1.2, Sun's JVM implementation has included a just-in-time compiler instead of an interpreter. Although Java programs are cross-platform or platform independent, the code of the Java Virtual Machines that execute these programs is not: every supported operating platform has its own JVM. In most modern operating systems, a large body of reusable code is provided to simplify the programmer's job; this code is provided as a set of dynamically loadable libraries that applications can call at runtime
20.
Endianness
–
Endianness refers to the sequential order used to numerically interpret a range of bytes in computer memory as a larger, composed word value. It also describes the order of byte transmission over a digital link. In big-endian format, the most significant byte of a word is stored at the smallest memory address; little-endian format reverses this order and stores the least significant byte at the first location, with the most significant byte being stored last. The order of bits within a byte can also have endianness; however, both big and little forms of endianness are widely used in digital electronics. As examples, the IBM z/Architecture mainframes use big-endian while the Intel x86 processors use little-endian; their designers chose these endiannesses in the 1960s and 1970s respectively. Big-endian is the most common format in data networking; fields in the protocols of the Internet protocol suite, such as IPv4, IPv6, TCP, and UDP, are transmitted in big-endian order. For this reason, big-endian byte order is also referred to as network byte order. Little-endian storage is popular for microprocessors, in part due to significant influence on microprocessor designs by Intel Corporation. Mixed forms also exist; for instance, the ordering of bytes in a 16-bit word may differ from the ordering of 16-bit words within a 32-bit word. Such cases are sometimes referred to as mixed-endian or middle-endian, and there are also some bi-endian processors that operate in either little-endian or big-endian mode. Big-endianness may be demonstrated by writing a decimal number, say one hundred twenty-three, on paper in the usual positional notation understood by a numerate reader: 123. The digits are written starting from the left and proceeding to the right, with the most significant digit, 1, written first; this is analogous to the lowest address of memory being used first. This is an example of a big-endian convention taken from daily life. The little-endian way of writing the same number, one hundred twenty-three, would place the hundreds digit 1 in the right-most position: 321. 
A person following conventional big-endian place-value order, who is not aware of this special ordering, would read a different number. Endianness in computing is similar, but it applies to the ordering of bytes rather than digits. Danny Cohen introduced the terms Little-Endian and Big-Endian for byte ordering in an article from 1980. Computer memory consists of a sequence of storage cells; each cell is identified in hardware and software by its memory address. If the total number of cells in memory is n, addresses are enumerated from 0 to n−1. Computer programs often use data structures or fields that consist of more data than is stored in one memory cell. For the purpose of this article, such a field is considered where its use as an operand of an instruction is relevant; in addition, it has to be of numeric type in some positional number system
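The byte-ordering analogy above can be made concrete with Python's struct module, which packs the same 32-bit word in either convention:

```python
import struct

value = 0x0A0B0C0D  # a 32-bit word

big    = struct.pack(">I", value)   # big-endian: most significant byte first
little = struct.pack("<I", value)   # little-endian: least significant byte first

print(big.hex())     # 0a0b0c0d -> byte 0x0A sits at the lowest address
print(little.hex())  # 0d0c0b0a -> byte 0x0D sits at the lowest address

# Reading bytes back with the wrong convention yields a different number,
# just as a big-endian reader misreads the little-endian "321" above.
wrong = int.from_bytes(little, "big")
print(hex(wrong))    # 0xd0c0b0a
```

The ">" and "<" format prefixes correspond to network (big-endian) and x86-style (little-endian) byte order respectively.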
21.
GameCube
–
The GameCube is a home video game console released by Nintendo in Japan on September 14, 2001, in North America on November 18, 2001, in Europe on May 3, 2002, and in Australia on May 17, 2002. The sixth-generation console is the successor to the Nintendo 64 and competed with Sony Computer Entertainment's PlayStation 2 and Microsoft's Xbox. The GameCube is the first Nintendo console to use optical discs as its primary storage medium. The discs are similar to the miniDVD format; as a result of their smaller size and the console's small disc compartment, the system was not designed to play standard DVDs or audio CDs. Contemporary reception of the GameCube was generally positive; the console was praised for its controller, extensive software library and high-quality games, but was criticized for its exterior design and lack of features. Nintendo sold 21.74 million GameCube units worldwide before it was discontinued in 2007. Its successor, the Wii, which is backwards-compatible with most GameCube software, was released in November 2006. In 1997, a hardware design company called ArtX was launched, staffed by twenty engineers who had previously worked at SGI on the design of the Nintendo 64's graphics hardware. The team was led by Dr. Wei Yen, who had been SGI's head of Nintendo Operations. At Nintendo's press conference in May 1999, the console was first publicly announced as Project Dolphin, the successor to the Nintendo 64. At the conference, Nintendo's Howard Lincoln said of ArtX, "This company is headed up by Dr. Wei Yen. Dr. Yen has assembled at ArtX one of the best teams of 3D graphics engineers on the planet." Subsequently, Nintendo began providing development kits to game developers, and Nintendo also formed a strategic partnership with IBM for the production of Dolphin's CPU, code-named Gekko. ArtX was acquired by ATI in April 2000, by which point the Flipper graphics processor design had already been mostly completed by ArtX and so was not overtly influenced by ATI. 
In total, ArtX team cofounder Greg Buchner recalled that their portion of the hardware design timeline had arced from inception in 1998 to completion in 2000. Of ATI's acquisition of ArtX, an ATI spokesperson said, "ATI now becomes a major supplier to the console market via Nintendo. The Dolphin platform is reputed to be king of the hill in terms of graphics." The console was announced as the Nintendo GameCube at a press conference in Japan on August 24, 2000, abbreviated as NGC in Japan and GCN in North America. Nintendo unveiled its software lineup for the console at E3 2001, focusing on fifteen launch titles, including Luigi's Mansion and Star Wars Rogue Squadron II. Several titles that were scheduled to launch with the console were delayed, and it is also the first Nintendo console not to be accompanied by a Mario platform title at launch. Long prior to the launch, Nintendo had developed and patented an early prototype of motion controls for the GameCube, though these motion control concepts would not be deployed to consumers for several years. Prior to the Nintendo GameCube's release, Nintendo focused resources on the launch of the Game Boy Advance, a handheld game console and successor to the original Game Boy and Game Boy Color
22.
Wii
–
The Wii is a home video game console released by Nintendo on November 19, 2006. As a seventh-generation console, the Wii competed with Microsoft's Xbox 360 and Sony's PlayStation 3. Nintendo states that its console targets a broader demographic than that of the two others. The Wii introduced the Wii Remote controller, which can be used as a handheld pointing device and which detects movement in three dimensions. Another notable feature of the console is the now-defunct WiiConnect24 service, which enabled it to receive messages and updates over the Internet while in standby mode. Like other seventh-generation consoles, it features a game download service, called Virtual Console, which features emulated games from past systems. It succeeded the GameCube, and early models are fully backward-compatible with all GameCube games. Nintendo first spoke of the console at the E3 2004 press conference and later unveiled it at E3 2005. Nintendo CEO Satoru Iwata revealed a prototype of the controller at the September 2005 Tokyo Game Show, and at E3 2006, the console won the first of several awards. By December 8, 2006, it had completed its launch in the four key markets. In late 2011, Nintendo released a reconfigured model, the Wii Family Edition, which lacks Nintendo GameCube compatibility; this model was not released in Japan. The Wii Mini, Nintendo's first major console redesign since the compact SNES, followed in 2012. The Wii Mini can only play Wii optical discs, as it omits GameCube compatibility and all networking capabilities. The Wii's successor, the Wii U, was released on November 18, 2012. On October 20, 2013, Nintendo confirmed it had discontinued production of the Wii in Japan and Europe, although the Wii Mini was still in production. The console was conceived in 2001, as the Nintendo GameCube was first released. According to an interview with Nintendo game designer Shigeru Miyamoto, the concept involved focusing on a new form of player interaction. The consensus was that power isn't everything for a console; too many powerful consoles can't coexist, and it's like having only ferocious dinosaurs. 
They might fight and hasten their own extinction. In 2003, game engineers and designers were brought together to develop the concept further. By 2005 the controller interface had taken form, but a showing at that year's Electronic Entertainment Expo was canceled. Miyamoto stated that the company "had some troubleshooting to do, so we decided not to reveal the controller and instead we displayed just the console." Nintendo president Satoru Iwata later unveiled and demonstrated the Wii Remote at the September Tokyo Game Show. The Nintendo DS is said to have influenced the Wii's design. Designer Kenichiro Ashida noted, "We had the DS on our minds as we worked on the Wii. We thought about copying the DS's touch-panel interface and even came up with a prototype." The idea was rejected because of the notion that the two gaming systems would be identical
23.
Jazz Jackrabbit 2
–
Jazz Jackrabbit 2 is a platform game produced by Epic MegaGames, now known as Epic Games. It was accidentally confirmed by Arjan Brussee in 1994 and released in 1998 for PCs running Windows. Like its prequel, Jazz Jackrabbit, Jazz Jackrabbit 2 is a side-scrolling platform game, but it additionally features multiplayer options, including the ability to play over a LAN or the Internet. Jazz chases his nemesis Devan Shell through time, in order to retrieve the ring with which he planned to wed Eva. Jazz's brother Spaz and Jazz's sister Lori were introduced as new playable characters. Just like its predecessor, Jazz Jackrabbit 2 is a 2D side-scroller that incorporates elements of shooting and platforming; the player must venture through a series of levels populated with enemies and environmental hazards that may hinder the player's progress. The player is given a selection of characters to choose from, namely Jazz, Spaz, and Lori. Each character has certain traits exclusive to that character; for example, Jazz can launch himself vertically higher than the others, while Spaz can double jump. Each character is equipped with a gun that can fire an inexhaustible supply of projectiles in a straight line. During the course of the game, however, the player can encounter additional ammunition that can provide greater fire-power and range; these ammo types can result in different effects, from flamethrowing to firing ice shards. In addition to ammo, players usually come across certain items. Some of these may include a 1-up, a variety of food, a variety of diamonds, a carrot, or a bird in a cage. Scattered throughout the levels are coins that the player can pick up; these are used as currency when the player encounters the merchant. Players can also participate in multiplayer: the game's splitscreen mode supports up to 4 players, whereas the online mode can support up to 32. 
The game also has local TCP/IP and IPX network support. In multiplayer there are five game types that players can participate in, namely Cooperative, Battle, Race, Treasure Hunt, and Capture the Flag. Jazz Jackrabbit 2 was produced by Epic MegaGames, now known as Epic Games; it was accidentally confirmed on August 24, 1994 by Arjan Brussee and released on February 1, 1998 for PCs running Windows, and later for Macintosh computers. Jazz Jackrabbit 2 has a level editor called Jazz Creation Station; the level editor was not included in the Mac versions or shareware editions. There were several variants and releases of Jazz Jackrabbit 2. Jazz Jackrabbit 2 - Shareware Edition: released on April 9, 1998, it featured three single-player levels and two multiplayer levels, and was released to promote the game. Jazz Jackrabbit 2 - Holiday Hare 98: this Christmas edition was released on November 6, 1998 for the PC, but only in North America. Unlike the previous editions, this game is commercial rather than shareware
24.
Xen
–
Xen Project is a hypervisor using a microkernel design, providing services that allow multiple computer operating systems to execute on the same computer hardware concurrently. It is developed by the Linux Foundation and is supported by Intel; the University of Cambridge Computer Laboratory developed the first versions of Xen. The Xen Project community develops and maintains Xen Project as free and open-source software, subject to the requirements of the GNU General Public License. Xen Project is currently available for the IA-32, x86-64 and ARM instruction sets. Xen Project runs in a more privileged CPU state than any other software on the machine. From the dom0 the hypervisor can be managed and unprivileged domains (domUs) can be launched. The dom0 domain is typically a version of Linux or BSD. Xen Project boots from a bootloader such as GNU GRUB, and then usually loads a paravirtualized host operating system into the host domain. The first public release of Xen was made in 2003. Xen Project was supported originally by XenSource Inc., and, since the acquisition of XenSource by Citrix in October 2007, by Citrix; this organisation supports the development of the software project and also sells enterprise versions of the software. On October 22, 2007, Citrix Systems completed its acquisition of XenSource. The Xen Advisory Board advises the Xen Project leader and is responsible for the Xen trademark, which Citrix has freely licensed to all vendors and projects that implement the Xen hypervisor. Citrix has also used the Xen brand itself for some proprietary products unrelated to Xen, including at least XenApp. On April 15, 2013, it was announced that the Xen Project was moved under the auspices of the Linux Foundation as a Collaborative Project. The Linux Foundation launched a new trademark for Xen Project to differentiate the project from any use of the older Xen trademark. A new community website was launched at xenproject. 
org as part of the transfer. Project members at the time of the announcement included Amazon, AMD, Bromium, CA Technologies, Calxeda, Cisco, Citrix, Google, Intel, Oracle, Samsung, and Verizon. The Xen project itself is self-governing. Since version 3.0 of the Linux kernel, Xen support for dom0 and domU exists in the mainline kernel. Internet hosting service companies use hypervisors to provide virtual private servers; Amazon EC2, IBM SoftLayer, Liquid Web, Fujitsu Global Cloud Platform, Linode, OrionVM and Rackspace Cloud use Xen as the primary VM hypervisor for their product offerings. Virtual machine monitors also often operate on mainframes and large servers running IBM and HP hardware. Virtualization also has benefits when working on development: running the new system as a guest avoids the need to reboot the physical computer whenever a bug occurs. Sandboxed guest systems can help in computer-security research, allowing study of the effects of some virus or worm without the possibility of compromising the host system. Finally, hardware vendors may decide to ship their appliance running several guest systems. Xen supports five different approaches to running the guest operating system: HVM, HVM with PV drivers, PVHVM, PVH, and PV (paravirtualization). Xen supports a form of virtualization known as paravirtualization, in which guests run a modified operating system. The guests are modified to use a special hypercall ABI instead of certain architectural features. Through paravirtualization, Xen can achieve high performance even on its host architecture (x86), which has a reputation for non-cooperation with traditional virtualization techniques
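The dom0/domU split described above is typically exercised through Xen's xl toolstack, which launches unprivileged guests from a small configuration file. A minimal paravirtualized (PV) guest definition might look like the following sketch; all names, paths and sizes here are illustrative assumptions, not taken from the article:

```
# guest.cfg - minimal PV domU definition for the xl toolstack (illustrative).
name    = "pv-guest"                             # domain name shown by `xl list`
kernel  = "/var/lib/xen/images/vmlinuz-guest"    # PV guests boot a kernel directly
memory  = 512                                    # RAM assigned to the domU, in MiB
vcpus   = 1                                      # number of virtual CPUs
disk    = ['phy:/dev/vg0/guest-root,xvda,w']     # block device exposed as xvda
vif     = ['bridge=xenbr0']                      # virtual NIC attached to a dom0 bridge
```

From dom0, such a guest would be started with `xl create guest.cfg` and inspected with `xl list`; an HVM guest would instead set `type = "hvm"` and boot from firmware rather than a kernel path.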
25.
RS/6000
–
RISC System/6000, or RS/6000 for short, is a family of RISC-based UNIX servers, workstations and supercomputers made by IBM in the 1990s. The RS/6000 family replaced the IBM RT computer platform in February 1990 and was the first computer line to use IBM's POWER architecture. RS/6000 was renamed eServer pSeries in October 2000. The first generations of RS/6000 systems used the Micro Channel bus technology; some later models used the standardized PReP and CHRP platforms, co-developed with Apple and Motorola, with Open Firmware. Linux is widely used on CHRP-based RS/6000s, but support was added only after the RS/6000 name was changed to eServer pSeries in 2000. RS/6000 types of computers are the POWERserver servers, POWERstation workstations and the Scalable POWERparallel supercomputer platform. While most machines were desktops, desksides, or rack mounted, there was a laptop model too; some models were marketed under the RS/6000 POWERstation and POWERserver names. The Model N40 was a PowerPC-based notebook developed and manufactured by Tadpole Technology for IBM; it became available on 25 March 1994, priced at US$12,000. The internal batteries could power the system for only 45 minutes. These workstations were marketed under the POWERstation name. This type was for Xstations, IBM's line of X terminals. The 380, 390, and 39H servers correspond to the 3AT, 3BT, and 3CT workstations. Many RS/6000 and subsequent pSeries machines came with a service processor; the service processor could call a phone number in case of serious failure with the machine. Late in the RS/6000 cycle, the service processor was converged with the one used on the AS/400 machines.
26.
Macintosh operating systems
–
The family of Macintosh operating systems was developed by Apple Inc. In 1984, Apple debuted the operating system that is now known as the Classic Mac OS with its release of the original Macintosh System Software. The system, rebranded Mac OS in 1996, was preinstalled on every Macintosh until 2002. Noted for its ease of use, it was also criticized for its lack of modern technologies compared to its competitors. The current Mac operating system is macOS, originally named Mac OS X until 2012. The current macOS is preinstalled with every Mac and is updated annually. It is the basis of Apple's current system software for its other devices, iOS and watchOS. Apple's effort to expand upon and develop a replacement for its classic Mac OS in the 1990s led to a few cancelled projects, code-named Star Trek and Taligent among them. The Macintosh is credited with having popularized the graphical user interface concept. The classic Mac OS is the original Macintosh operating system that was introduced in 1984 alongside the first Macintosh and remained in primary use on Macs through 2001. Apple released the original Macintosh on January 24, 1984; its system software was partially based on the Lisa OS and the Xerox PARC Alto computer. It was originally named System Software, or simply System; Apple rebranded it as Mac OS in 1996, due in part to its Macintosh clone program that ended a year later. Mac OS is characterized by its monolithic design. Nine major versions of the classic Mac OS were released. The name "Classic" that now signifies the system as a whole is a reference to a compatibility layer that helped ease the transition to Mac OS X. The system was marketed as simply version 10 of Mac OS, although its history is largely independent of the classic Mac OS. Precursors to the release of Mac OS X include OpenStep and Apple's Rhapsody project. MacOS makes use of the BSD codebase and the XNU kernel. The first desktop version of the system was released on March 24, 2001, supporting the Aqua user interface. 
Since then, several more versions adding newer features and technologies have been released; since 2011, new releases have been offered on an annual basis. It was followed by several more official server-based releases; server functionality has instead been offered as an add-on for the desktop system since 2011. The first version of the system was ready for use in February 1988. In 1988, Apple released its first Unix-based OS, A/UX, which was a Unix operating system with the Mac OS look and feel. It was not very competitive for its time, due in part to the crowded Unix market; A/UX had most of its success in sales to the U.S. government, where POSIX compliance was a requirement that Mac OS could not meet. The Macintosh Application Environment was a software package introduced by Apple in 1994 that allowed users of certain Unix-based computer workstations to run Apple Macintosh application software. MAE used the X Window System to emulate a Macintosh Finder-style graphical user interface
27.
PowerPC
–
PowerPC is a RISC instruction set architecture created by the 1991 Apple–IBM–Motorola alliance, known as AIM. PowerPC was the cornerstone of AIM's PReP and Common Hardware Reference Platform initiatives in the 1990s; it has since become niche in personal computers, but remains popular for embedded and high-performance processors. Its use in game consoles and embedded applications provided an array of uses. In addition, PowerPC CPUs are still used in AmigaOne and third-party AmigaOS 4 personal computers. The history of RISC began with IBM's 801 research project, on which John Cocke was the lead developer and where he developed the concepts of RISC in 1975–78. 801-based microprocessors were used in a number of IBM embedded products, and the RT was a rapid design implementing the RISC architecture. The result was the POWER instruction set architecture, introduced with the RISC System/6000 in early 1990. The original POWER microprocessor, one of the first superscalar RISC implementations, was a high-performance, multi-chip design. IBM soon realized that a single-chip microprocessor was needed in order to scale its RS/6000 line from lower-end to high-end machines, and work on a one-chip POWER microprocessor, designated the RSC, began. In early 1991, IBM realized its design could potentially become a high-volume microprocessor used across the industry. IBM approached Apple with the goal of collaborating on the development of a family of single-chip microprocessors based on the POWER architecture; this three-way collaboration became known as the AIM alliance, for Apple, IBM, Motorola. In 1991, the PowerPC was just one facet of an alliance among these three companies. The PowerPC chip was one of several joint ventures involving the three, in their efforts to counter the growing Microsoft-Intel dominance of personal computing. For Motorola, POWER looked like an unbelievable deal: it allowed them to sell a widely tested and powerful RISC CPU for little design cash of their own. 
It also maintained ties with an important customer, Apple, and seemed to offer the possibility of adding IBM too. At this point Motorola already had its own RISC design in the form of the 88000, which was doing poorly in the market. Motorola was doing well with its 68000 family, and the majority of the funding was focused on this; the 88000 effort was somewhat starved for resources. However, the 88000 was already in production: Data General was shipping 88000 machines, and the 88000 had also achieved a number of embedded design wins in telecom applications. The result of the various requirements was the PowerPC specification. The differences between the earlier POWER instruction set and PowerPC are outlined in Appendix E of the manual for PowerPC ISA v.2.02. When the first PowerPC products reached the market, they were met with enthusiasm. In addition to Apple, both IBM and the Motorola Computer Group offered systems built around the processors; Microsoft released Windows NT 3.51 for the architecture, which was used in Motorola's PowerPC servers, and Sun Microsystems offered a version of its Solaris OS
28.
Commodore International
–
Commodore International was a North American home computer and electronics manufacturer. Commodore International, along with its subsidiary Commodore Business Machines, participated in the development of the computer industry in the 1970s and 1980s. The company developed and marketed one of the world's best-selling desktop computers, the Commodore 64. The company that would become Commodore Business Machines, Inc. was founded in 1954 in Toronto as the Commodore Portable Typewriter Company by Polish immigrant and Auschwitz survivor Jack Tramiel. He moved to Toronto to start production. By the late 1950s a wave of Japanese machines forced most North American typewriter companies to cease business, but Tramiel instead turned to adding machines. In 1955, the company was incorporated as Commodore Business Machines. In 1962, Commodore went public on the New York Stock Exchange. In the late 1960s, history repeated itself when Japanese firms started producing and exporting adding machines. The company's main investor and chairman, Irving Gould, suggested that Tramiel travel to Japan to understand how to compete; instead, he returned with the new idea to produce electronic calculators, which were just coming on the market. Commodore soon had a profitable calculator line and was one of the more popular brands in the early 1970s. However, in 1975, Texas Instruments, the supplier of calculator parts, entered the market directly. Commodore obtained an infusion of cash from Gould, which Tramiel used beginning in 1976 to purchase several second-source chip suppliers, including MOS Technology, Inc., in order to assure his supply. He agreed to buy MOS, which was having troubles of its own. Through the 1970s, Commodore also produced numerous peripherals and consumer electronic products, such as the Chessmate, a chess computer based around a MOS 6504 chip, released in 1978. 
In December 2007, Tramiel was visiting the Computer History Museum in Mountain View, California, for the 25th anniversary of the Commodore 64. He said, "I wanted to call my company General, but there's so many Generals in the U.S. Then I went to Admiral, but that was taken. So I wind up in Berlin, Germany, with my wife, and we were in a cab, and the cab made a short stop, and in front of us was an Opel Commodore." Tramiel gave this account in interviews, but Opel's Commodore didn't debut until 1967. Once Chuck Peddle had taken over engineering at Commodore, he convinced Jack Tramiel that calculators were already a dead end; from the PET's 1977 debut, Commodore would be a computer company. The operational headquarters, where research and development of new products occurred, retained the name Commodore Business Machines. In 1980 Commodore launched production for the European market in Braunschweig. By 1980 Commodore was one of the three largest microcomputer companies, and the largest in the Common Market. This was addressed with the introduction of the VIC-20 in 1981, which was introduced at a cost of US$299 and sold in retail stores. Commodore took out ads featuring William Shatner asking consumers, "Why buy just a video game?"
29.
Amiga
–
The Amiga is a family of personal computers sold by Commodore in the 1980s and 1990s. The Amiga provided a significant upgrade from earlier 8-bit home computers. The Amiga 1000 was officially released in July 1985, but a series of production problems meant it did not become widely available until early 1986. The best-selling model, the Amiga 500, was introduced in 1987 and became one of the leading home computers of the late 1980s. The A3000, introduced in 1990, started the second generation of Amiga systems, followed by the A500+. Finally, as the third generation, the A1200 and the A4000 were released in late 1992. The platform became particularly popular for gaming and programming demos. It also found a prominent role in the desktop video, video production, and show control business, leading to video editing systems such as the Video Toaster. The Amiga's native ability to play back multiple digital sound samples made it a popular platform for early tracker music software. It was also a less expensive alternative to the Apple Macintosh. Initially, the Amiga was developed alongside various Commodore PC clones. Commodore ultimately went bankrupt in April 1994 after the Amiga CD32 model failed in the marketplace. Since the demise of Commodore, various groups have marketed successors to the original Amiga line, including Genesi, Eyetech and ACube Systems Srl; likewise, AmigaOS has influenced replacements, clones and compatible systems such as MorphOS, AmigaOS 4 and AROS. The Amiga was so far ahead of its time that almost nobody, including Commodore's marketing department, could fully articulate what it was all about. Today, it's obvious the Amiga was the first multimedia computer, but in those days it was derided as a game machine because few people grasped the importance of advanced graphics and sound. 
Nine years later, vendors are still struggling to make systems that work like 1985 Amigas. Jay Miner joined Atari in the 1970s to develop custom integrated circuits, and led development of the Atari 2600's TIA. Almost as soon as its development was complete, the team began developing a much more sophisticated set of chips: CTIA, ANTIC and POKEY. With the 8-bit line's launch in 1979, Miner again started looking at a next-generation chipset. Miner wanted to start work with the new Motorola 68000, but management was only interested in another MOS 6502-based system. Miner left the company, and the industry, shortly thereafter. In 1982, Larry Kaplan was approached by a number of investors who wanted to develop a new game platform. Kaplan hired Miner to run the hardware side of the newly formed company. The system was code-named Lorraine, in keeping with Miner's policy of giving systems female names, in this case the company president's wife. When Kaplan left the company late in 1982 to rejoin Atari, Miner was promoted to head engineer
30.
Sun Microsystems
–
Sun contributed significantly to the evolution of several key computing technologies, among them Unix, RISC processors, thin client computing, and virtualized computing. Sun was founded on February 24, 1982. At its height, the Sun headquarters were in Santa Clara, California, on the former west campus of the Agnews Developmental Center. On January 27, 2010, Oracle Corporation acquired Sun for US$7.4 billion. Other technologies include the Java platform, MySQL, and NFS. Sun was a proponent of open systems in general and Unix in particular. The initial design for what became Sun's first Unix workstation, the Sun-1, was conceived by Andy Bechtolsheim when he was a graduate student at Stanford University in Palo Alto, California. Bechtolsheim originally designed the SUN workstation for the Stanford University Network communications project as a personal CAD workstation. It was designed around the Motorola 68000 processor with an advanced memory management unit to support the Unix operating system with virtual memory support. He built the first ones from spare parts obtained from Stanford's Department of Computer Science. On February 24, 1982, Vinod Khosla, Andy Bechtolsheim, and Scott McNealy, all Stanford graduate students, founded Sun Microsystems. Bill Joy of Berkeley, a developer of the Berkeley Software Distribution, joined soon after. The Sun name is derived from the initials of the Stanford University Network. Sun was profitable from its first quarter in July 1982. By 1983 Sun was known for producing 68000-based systems with high-quality graphics that were the only computers other than DEC's VAX to run 4.2BSD. It licensed the computer design to other manufacturers, which typically used it to build Multibus-based systems running Unix from UniSoft. Sun's initial public offering was in 1986 under the stock symbol SUNW; the symbol was changed in 2007 to JAVA, as Sun stated that the brand awareness associated with its Java platform better represented the company's current strategy. 
Sun's logo, which features four interleaved copies of the word "sun" in the form of a rotationally symmetric ambigram, was designed by professor Vaughan Pratt. The initial version of the logo was orange and had the sides oriented horizontally and vertically, but it was rotated to stand on one corner and re-colored purple. In the dot-com bubble, Sun began making more money. It also began spending more, hiring workers and building itself out. Some of this was because of demand, but much was from web start-up companies anticipating business that would never happen. Sales in Sun's important hardware division went into free-fall as customers closed shop. Several quarters of steep losses led to executive departures, rounds of layoffs, and other cost cutting. In December 2001, the stock fell to the 1998, pre-bubble level of about $100. But it kept falling, faster than many other tech companies; a year later it had dipped below $10, but bounced back to $20
31.
Solaris (operating system)
–
Solaris is a Unix operating system originally developed by Sun Microsystems. It superseded their earlier SunOS in 1993. Oracle Solaris, so named as of 2010, has been owned by Oracle Corporation since the Sun acquisition by Oracle in January 2010. Solaris is known for its scalability, especially on SPARC systems. Solaris supports SPARC-based and x86-based workstations and servers from Oracle and other vendors, with efforts underway to port to additional platforms. Solaris is registered as compliant with the Single UNIX Specification. Historically, Solaris was developed as proprietary software. In June 2005, Sun Microsystems released most of the codebase under the CDDL license; with OpenSolaris, Sun wanted to build a developer and user community around the software. After the acquisition of Sun Microsystems in January 2010, Oracle decided to discontinue the OpenSolaris distribution. In August 2010, Oracle discontinued providing public updates to the source code of the Solaris kernel, effectively turning Solaris 11 back into a closed-source proprietary operating system. Following that, in 2011 the Solaris 11 kernel source code leaked to BitTorrent; however, through the Oracle Technology Network, industry partners can still gain access to the in-development Solaris source code. Source code for the open-source components of Solaris 11 is available for download from Oracle. In 1987, AT&T Corporation and Sun announced that they were collaborating on a project to merge the most popular Unix variants on the market at that time: BSD and System V. This became Unix System V Release 4. On September 4, 1991, Sun announced that it would replace its existing BSD-derived Unix, SunOS 4. This was identified internally as SunOS 5, but a new marketing name was introduced at the same time: Solaris 2. The justification for this new overbrand was that it encompassed not only SunOS, but also the OpenWindows graphical user interface and Open Network Computing functionality. Although SunOS 4.1.x micro releases were retroactively named Solaris 1 by Sun, for releases based on SunOS 5, the SunOS minor version is included in the Solaris release number. For example, Solaris 2.4 incorporates SunOS 5.4. After Solaris 2.6, the "2." was dropped from the name, so Solaris 7 incorporates SunOS 5.7. Although SunSoft stated in its initial Solaris 2 press release its intent to support both SPARC and x86 systems, the first two Solaris 2 releases, 2.0 and 2.1, were SPARC-only. An x86 version of Solaris 2.1 was released in June 1993, about 6 months after the SPARC version, as a desktop system; it included the Wabi emulator to support Windows applications. At the time, Sun also offered the Interactive Unix system that it had acquired from Interactive Systems Corporation. In 1994, Sun released Solaris 2.4, supporting both SPARC and x86 systems from a unified source code base. Solaris uses a common code base for the platforms it supports: SPARC and x86
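The naming rule described above (Solaris 2.x corresponds to SunOS 5.x, and from Solaris 7 onward the "2." prefix was dropped, so Solaris N is SunOS 5.N) can be sketched as a small helper. The function name is my own illustration, not a Sun or Oracle tool:

```python
def sunos_version(solaris_release: str) -> str:
    """Map a Solaris release string to its underlying SunOS version."""
    major, _, minor = solaris_release.partition(".")
    if int(major) == 2:
        # Solaris 2.0 through 2.6: the SunOS minor version follows the "2."
        return f"SunOS 5.{minor}"
    # Solaris 7 and later: the release number itself is the SunOS minor version
    return f"SunOS 5.{major}"

print(sunos_version("2.4"))  # SunOS 5.4
print(sunos_version("7"))    # SunOS 5.7
print(sunos_version("10"))   # SunOS 5.10
```

On a live system the same information is reported by `uname -r`, which prints the SunOS version (e.g. 5.10 on Solaris 10).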
32.
OpenVMS
–
OpenVMS is a computer operating system for use in general-purpose computing. It is the successor to the VMS operating system that was produced by Digital Equipment Corporation; in the 1990s, it was used for the successor series of DEC Alpha systems. OpenVMS also runs on the HP Itanium-based families of computers; as of 2015, a port to the x86-64 architecture is underway. The name VMS is derived from "virtual memory system", after one of its architectural features. OpenVMS is a proprietary operating system, but source code listings are available for purchase. OpenVMS is a multi-user, multiprocessing virtual memory-based operating system designed for use in time sharing and batch processing. When process priorities are suitably adjusted, it may approach real-time operating system characteristics. The system offers high availability through clustering and the ability to distribute the system over multiple physical machines; this allows the system to be tolerant against disasters that may disable individual data-processing facilities. OpenVMS contains a graphical user interface, a feature that was not available on the original VAX-11/VMS system. Versions of VMS running on DEC Alpha workstations in the 1990s supported OpenGL. Customers using OpenVMS include banks and financial services, hospitals and healthcare, network information services, and large-scale industrial manufacturers of various products. As of mid-2014, Hewlett-Packard licensed the development of OpenVMS exclusively to VMS Software Inc. VMS Software will be responsible for developing OpenVMS, supporting existing hardware and providing a roadmap to clients. The company has a team of developers that originally developed the software during DEC's ownership. In April 1975, Digital Equipment Corporation embarked on a hardware project, code-named Star. A companion software project, code-named Starlet, was started in June 1975 to develop a new operating system based on RSX-11M. These two projects were integrated from the beginning. 
Gordon Bell was the VP lead on the VAX hardware and its architecture. The Star and Starlet projects culminated in the VAX-11/780 computer and the VAX-11/VMS operating system. The Starlet name survived in VMS as the name of several of the system libraries, including STARLET.OLB. Over the years the name of the product has changed: in 1980 it was renamed, with the Version 2.0 release, to VAX/VMS. The smallest MicroVAX 2000 had a 40 MB RD32 hard disk and a maximum of 6 MB of RAM; MicroVMS kits were released for VAX/VMS 4.4 to 4.7 on TK50 tapes and RX50 floppy disks, but discontinued with VAX/VMS 5.0
33.
OpenWrt
–
OpenWrt is an embedded operating system based on Linux, primarily used on embedded devices to route network traffic. The main components are the Linux kernel, util-linux, uClibc or musl, and BusyBox. All components have been optimized for size, to be small enough to fit into the limited storage and memory available in home routers. OpenWrt is configured using a command-line interface or a web interface. There are about 3500 optional software packages available for installation via the package management system. OpenWrt can run on various types of devices, including CPE routers, residential gateways, smartphones, and pocket computers. It is also possible to run OpenWrt on personal computers, which are most commonly based on the x86 architecture. The project came into being because Linksys built the firmware for their WRT54G series of wireless routers from publicly available code licensed under the GPL. Support was originally limited to the WRT54G series, but has since expanded to include many other chipsets, manufacturers and device types, including plug computers. Using this code as a base, and later as a reference, developers re-implemented features that formerly required proprietary software. With the Attitude Adjustment release of OpenWrt, all devices with 16 MB or less RAM are no longer supported, as they can run out of memory easily. The older Backfire release is recommended instead for bcm47xx devices, as issues for those devices came from dropping support for the legacy Broadcom target brcm-2.4. OpenWrt follows the bazaar philosophy and is known for an abundance of options. Features include a writable root file system, enabling users to add, remove and modify files; this is accomplished by using overlayfs to overlay a read-only compressed SquashFS file system with a writable JFFS2 file system in a copy-on-write fashion. The package manager opkg, similar to dpkg, enables users to install and remove packages; the package repository contains about 3500 packages. 
This contrasts with Linux-based firmwares built on read-only file systems, which offer no possibility of modifying the installed software without rebuilding and flashing a complete firmware image. OpenWrt also offers exhaustive possibilities to configure network-related features, such as: IPv4 support; a native IPv6 stack, prefix handling, native IPv6 configuration, IPv6 transitioning technologies and downstream IPv6 configuration; routing through iproute2, Quagga, BIRD, Babel, etc.; wireless security; packet injection, e.g. Airpwn and lorcon; a stateful firewall, NAT and port forwarding through netfilter (additionally PeerGuardian is available); the dynamically configured port forwarding protocols UPnP and NAT-PMP through upnpd, etc.; port knocking via knockd and knock; a TR-069 client; IPS via Snort; active queue management through the network scheduler of the Linux kernel (CoDel has been backported to kernel 3.3); and Domain Name System and DHCP through Dnsmasq, MaraDNS, etc. In OpenWrt releases 8.09 and newer, a more capable web interface is included
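Network features such as the port forwarding mentioned above are typically expressed in OpenWrt's UCI configuration files under /etc/config. The following stanza is an illustrative sketch of a firewall redirect (all names, addresses and ports are made-up examples, not from the article):

```
# /etc/config/firewall - forward external TCP port 2222 to an internal host (illustrative)
config redirect
	option name      'ssh-forward'
	option src       'wan'            # traffic arriving on the WAN zone
	option src_dport '2222'           # external port
	option dest      'lan'            # destination zone
	option dest_ip   '192.168.1.10'   # internal host
	option dest_port '22'             # internal port
	option proto     'tcp'
```

After editing, the rule would be applied with `/etc/init.d/firewall reload`; the same settings can also be managed through the web interface.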
34.
Blue Screen of Death
–
BSoDs have been present in Windows NT 3.1 and all Windows operating systems released afterwards. In the Windows 9x era, incompatible DLLs or bugs in the system kernel could also cause BSoDs; because of the instability and lack of memory protection in Windows 9x, BSoDs were much more common. The article was about the creation of the first rudimentary task manager in Windows 3.x; Engadget later updated its article to correct the mistake. Until Windows Server 2012, BSoDs showed silver text on a blue background with information about current memory values. Windows Server 2012, Windows 8 and Windows 10 use a cerulean background instead. Windows 95, 98 and ME BSoDs use 80×25 text mode; BSoDs in the Windows NT family use 80×50 text mode on a 720×400 screen. Windows XP BSoDs use the Lucida Console font, while the Windows Vista and 7 BSoD uses the Consolas font. Windows 8 and Windows Server 2012 use Segoe UI and attempt to render the BSoD at native resolution, otherwise defaulting to 640×480. Windows 10 uses the same format as Windows 8 and up. In the Windows NT family of operating systems, the blue screen of death occurs when the kernel or a driver running in kernel mode encounters an error from which it cannot recover. This is usually caused by an illegal operation being performed; the only safe action the operating system can take in this situation is to restart the computer. As a result, data may be lost, as users are not given an opportunity to save data that has not yet been saved to the hard drive. Depending on the error code, it may display the address where the problem occurred. Under Windows NT, the second and third sections of the screen may contain information on all loaded drivers and a stack dump, respectively. The driver information is in three columns: the first lists the base address of the driver, the second lists the driver's creation date, and the third lists the name of the driver. By default, Windows will create a memory dump file when a stop error occurs. 
Depending on the OS version, there may be several formats this can be saved in. The resulting memory dump file may be debugged later, using a kernel debugger; for Windows, the WinDbg or KD debuggers from Debugging Tools for Windows are used. By default, Windows XP is configured to save only a 64 KB minidump when it encounters a stop error, and to then automatically reboot the computer
35.
Android (operating system)
–
Android is a mobile operating system developed by Google, based on the Linux kernel and designed primarily for touchscreen mobile devices such as smartphones and tablets. In addition to touchscreen devices, Google has further developed Android TV for televisions and Android Auto for cars. Variants of Android are also used on notebooks, game consoles, and digital cameras. Beginning with the first commercial Android device in September 2008, the operating system has gone through multiple major releases, with the current version being 7.0 Nougat, released in August 2016. Android applications can be downloaded from the Google Play store, which features over 2.7 million apps as of February 2017. Android has been the best-selling OS on tablets since 2013, and runs on the vast majority of smartphones. In September 2015, Android had 1.4 billion monthly active users. Android is popular with technology companies that require a ready-made, low-cost, and customizable operating system for high-tech devices. The success of Android has made it a target for patent and copyright litigation. Android Inc. was founded in Palo Alto, California in October 2003 by Andy Rubin, Rich Miner, Nick Sears, and Chris White. Rubin described the Android project as having "tremendous potential in developing smarter mobile devices that are more aware of its owner's location". The early intentions of the company were to develop an operating system for digital cameras. Despite the past accomplishments of the founders and early employees, Android Inc. operated in secrecy, and that same year Rubin ran out of money. Steve Perlman, a friend of Rubin, brought him $10,000 in cash in an envelope. In July 2005, Google acquired Android Inc. for at least $50 million, and its key employees, including Rubin, Miner, and White, joined Google as part of the acquisition.
Not much was known about Android at the time, with Rubin having only stated that they were making software for mobile phones. At Google, the team led by Rubin developed a mobile device platform powered by the Linux kernel. Google marketed the platform to handset makers and carriers on the promise of providing a flexible, upgradeable system; Google had lined up a series of hardware components and software partners and signaled to carriers that it was open to various degrees of cooperation. Speculation about Google's intention to enter the mobile communications market continued to build through December 2006. In September 2007, InformationWeek covered an Evalueserve study reporting that Google had filed several patent applications in the area of mobile telephony. The first commercially available smartphone running Android was the HTC Dream, also known as the T-Mobile G1, announced on September 23, 2008. Since 2008, Android has seen numerous updates which have improved the operating system, adding new features. Each major release is named in alphabetical order after a dessert or sugary treat, with the first few Android versions being called Cupcake, Donut, and Eclair. In 2010, Google launched its Nexus series of devices, a lineup in which Google partnered with different device manufacturers to produce new devices and introduce new Android versions.
36.
Dalvik (software)
–
Dalvik is a discontinued process virtual machine in Google's Android operating system that executes applications written for Android. Dalvik is open-source software, originally written by Dan Bornstein, who named it after the village of Dalvík in Eyjafjörður. The compact Dalvik Executable format is designed for systems that are constrained in terms of memory and processor speed. The successor of Dalvik is Android Runtime (ART), which uses the same bytecode and .dex files, with the succession aiming at performance improvements transparent to the end users. Unlike Java VMs, which are stack machines, the Dalvik VM uses a register-based architecture that requires fewer, typically more complex, virtual machine instructions. Dalvik programs are written in Java using the Android application programming interface and compiled to Java bytecode; a tool called dx is used to convert Java .class files into the .dex format. Multiple classes are included in a single .dex file; duplicate strings and other constants used in multiple class files are included only once in the .dex output to conserve space. Java bytecode is also converted into an alternative instruction set used by the Dalvik VM. An uncompressed .dex file is typically a few percent smaller in size than a compressed Java archive derived from the same .class files, and the Dalvik executables may be modified again when installed onto a mobile device. Being optimized for low memory requirements, Dalvik has some characteristics that differentiate it from other standard VMs. The constant pool has been modified to use only 32-bit indices to simplify the interpreter. Standard Java bytecode executes 8-bit stack instructions, and local variables must be copied to or from the operand stack by separate instructions. Dalvik instead uses its own 16-bit instruction set that works directly on local variables, with the local variable commonly picked by a 4-bit virtual register field.
This lowers Dalvik's instruction count and raises its interpreter speed. According to Google, the design of Dalvik permits a device to run multiple instances of the VM efficiently. Since Android 2.2 "Froyo", a just-in-time compiler provides native execution of frequently run short bytecode segments ("traces"), while Dalvik interprets the rest of the application's bytecode. The relative merits of stack machines versus register-based approaches are a subject of ongoing debate; the difference is of importance to VM interpreters, for which opcode dispatch tends to be expensive. In 2012, academic benchmarks confirmed the factor of 3 between HotSpot and Dalvik on the same Android board, also noting that Dalvik code was not smaller than HotSpot code. Furthermore, as of March 2014, benchmarks performed on an Android device still show up to a factor of 100 between native applications and a Dalvik application on the same device. Upon running benchmarks using the early interpreter of 2009, both the Java Native Interface and native code showed an order-of-magnitude speedup. Dalvik is published under the terms of the Apache License 2.0. Google says that Dalvik is a clean-room implementation rather than a development on top of a standard Java runtime; Oracle and some reviewers dispute this.
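Why a shorter instruction stream helps an interpreter can be made concrete with a toy Python sketch: computing c = a + b takes four instructions on a stack design (two loads, an add, a store) but a single three-address instruction on a register design, and each executed instruction costs one opcode dispatch. This is a simplified illustration, not Dalvik's or the JVM's actual instruction encoding:

```python
# Toy contrast of a stack machine vs. a register machine on c = a + b.
# Each loop iteration is one opcode dispatch, the per-instruction
# overhead that dominates in a bytecode interpreter. Instruction names
# are illustrative, not real JVM or Dalvik opcodes.

def run_stack(a, b):
    """Stack machine: operands must be pushed before use and the result
    popped afterwards, so the same computation needs four dispatches."""
    stack, dispatches = [], 0
    for op, arg in [("push", a), ("push", b), ("add", None), ("pop", None)]:
        dispatches += 1
        if op == "push":
            stack.append(arg)
        elif op == "add":
            y, x = stack.pop(), stack.pop()
            stack.append(x + y)
        elif op == "pop":
            result = stack.pop()
    return result, dispatches

def run_register(a, b):
    """Register machine: one three-address instruction names both source
    registers and the destination, so a single dispatch suffices."""
    regs, dispatches = {0: a, 1: b, 2: None}, 0
    for op, dst, s1, s2 in [("add", 2, 0, 1)]:
        dispatches += 1
        if op == "add":
            regs[dst] = regs[s1] + regs[s2]
    return regs[2], dispatches

print(run_stack(2, 3))     # (5, 4): same result, four dispatches
print(run_register(2, 3))  # (5, 1): same result, one dispatch
```

The trade-off the debate centers on is visible even here: the register instruction is individually "wider" (it encodes three operands rather than zero), but there are fewer of them to fetch and dispatch.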