Apple Inc. is an American multinational technology company headquartered in Cupertino, California, that designs and sells consumer electronics, computer software, and online services. It is considered one of the Big Four of technology, along with Amazon, Google, and Facebook. The company's hardware products include the iPhone smartphone, the iPad tablet computer, the Mac personal computer, the iPod portable media player, the Apple Watch smartwatch, the Apple TV digital media player, and the HomePod smart speaker. Apple's software includes the macOS and iOS operating systems, the iTunes media player, the Safari web browser, the iLife and iWork creativity and productivity suites, and professional applications such as Final Cut Pro, Logic Pro, and Xcode. Its online services include the iTunes Store, the iOS App Store, the Mac App Store, Apple Music, Apple TV+, iMessage, and iCloud. Other services include the Apple Store, Genius Bar, AppleCare, Apple Pay, Apple Pay Cash, and Apple Card. Apple was founded by Steve Jobs, Steve Wozniak, and Ronald Wayne in April 1976 to develop and sell Wozniak's Apple I personal computer, though Wayne sold his share back within 12 days.
It was incorporated as Apple Computer, Inc. in January 1977, and sales of its computers, including the Apple II, grew quickly. Within a few years, Jobs and Wozniak had hired a staff of computer designers and had a production line. Apple went public in 1980 to instant financial success. Over the next few years, Apple shipped new computers featuring innovative graphical user interfaces, such as the original Macintosh in 1984, and Apple's marketing advertisements for its products received widespread critical acclaim. However, the high price of its products and a limited application library caused problems, as did power struggles between executives. In 1985, Wozniak departed Apple amicably and remained an honorary employee, while Jobs and others resigned to found NeXT. As the market for personal computers expanded and evolved through the 1990s, Apple lost market share to the lower-priced duopoly of Microsoft Windows on Intel PC clones. The board recruited CEO Gil Amelio for what would be a 500-day effort to rehabilitate the financially troubled company, reshaping it with layoffs, executive restructuring, and product focus.
In 1997, Amelio led Apple to buy NeXT, solving the company's failed operating-system strategy and bringing Jobs back. Jobs regained leadership status, becoming CEO in 2000. Apple swiftly returned to profitability under the revitalizing Think different campaign, as Jobs rebuilt Apple's status by launching the iMac in 1998, opening the retail chain of Apple Stores in 2001, and acquiring numerous companies to broaden the software portfolio. In January 2007, Jobs renamed the company Apple Inc., reflecting its shifted focus toward consumer electronics, and launched the iPhone to great critical acclaim and financial success. In August 2011, Jobs resigned as CEO due to health complications, and Tim Cook became the new CEO. Two months later, Jobs died, marking the end of an era for the company. Apple is well known for its size and revenues; its worldwide annual revenue totaled $265 billion for the 2018 fiscal year. Apple is the world's largest information technology company by revenue and the world's third-largest mobile phone manufacturer after Samsung and Huawei.
In August 2018, Apple became the first public U.S. company to be valued at over $1 trillion. As of 2018, the company employs 123,000 full-time employees and maintains 504 retail stores in 24 countries; it operates the iTunes Store, the world's largest music retailer. As of January 2018, more than 1.3 billion Apple products are in use worldwide. The company enjoys a high level of brand loyalty and is ranked as the world's most valuable brand. However, Apple receives significant criticism regarding the labor practices of its contractors, its environmental practices, and unethical business practices, including anti-competitive behavior, as well as the origins of source materials. Apple Computer Company was founded on April 1, 1976, by Steve Jobs, Steve Wozniak, and Ronald Wayne. The company's first product was the Apple I, a computer designed and hand-built by Wozniak and first shown to the public at the Homebrew Computer Club. The Apple I was sold as a motherboard only, a base-kit concept that would not yet be marketed as a complete personal computer.
The Apple I went on sale in July 1976 and was market-priced at $666.66. Apple Computer, Inc. was incorporated on January 3, 1977, without Wayne, who had left and sold his share of the company back to Jobs and Wozniak for $800 only twelve days after co-founding Apple. Multimillionaire Mike Markkula provided essential business expertise and funding of $250,000 during the incorporation of Apple. During the first five years of operations, revenues grew exponentially, doubling about every four months. Between September 1977 and September 1980, yearly sales grew from $775,000 to $118 million, an average annual growth rate of 533%. The Apple II, invented by Wozniak, was introduced on April 16, 1977, at the first West Coast Computer Faire. It differed from its major rivals, the TRS-80 and Commodore PET, because of its character-cell-based color graphics and open architecture. While early Apple II models used ordinary cassette tapes as storage devices, these were superseded by the introduction of a 5 1⁄4-inch floppy disk drive and interface called the Disk II.
The Apple II was chosen to be the desktop platform for the first "killer app" of the business world: VisiCalc, a spreadsheet program. VisiCalc created a business market for the Apple II and gave home users an additional reason to buy one: compatibility with the office. Before VisiCalc, Apple had been a distant third-place competitor.
Time Machine (macOS)
Time Machine is a backup software application distributed as part of macOS, the desktop operating system developed by Apple. The software is designed to work with AirPort Time Capsule, the Wi-Fi router with a built-in hard disk, as well as other internal and external disk drives. It was introduced in Mac OS X Leopard. Time Machine creates incremental backups of files that can be restored at a later date. It allows the user to restore the whole system or specific files from the Recovery HD or the macOS install DVD. It works within Mail, iWork, iLife, and several other compatible programs, making it possible to restore individual objects without leaving the application. According to an Apple support statement: “Time Machine is a backup utility, not an archival utility; it is not intended as offline storage. Time Machine captures the most recent state of your data on your disk. As snapshots age, they are prioritized progressively lower compared to your more recent ones.” For backups to a network drive, Time Machine allows the user to back up Mac computers over the network and supports backing up to certain network-attached storage devices or servers, depending on the version of Time Machine.
Earlier versions worked with a wide variety of NAS servers, but later versions require the server to support a recent version of Apple's Apple Filing Protocol (AFP); Time Machine no longer works with servers using the Server Message Block (SMB) protocol typical of Windows servers. Some of the legacy support can be re-enabled by using hand-tuned configuration options accessed through the Terminal. Apple's Time Capsule acts as a network storage device for Time Machine backups, allowing both wired and wireless backups to the Time Capsule's internal hard drive. Time Machine may also be used with any external or internal volume. Time Machine saves hourly backups for the past 24 hours, daily backups for the past month, and weekly backups for everything older than a month, until the volume runs out of space. At that point, Time Machine deletes the oldest weekly backup. Time Machine's user interface for retrieving a file uses Apple's Core Animation API. Upon its launch, Time Machine "floats" the active Finder or application window from the user's desktop to a backdrop depicting the user's blurred desktop wallpaper.
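The retention schedule described above (hourly for a day, daily for a month, weekly thereafter) can be sketched as a small thinning policy. The function below is an illustration of the general idea only, not Apple's actual algorithm: the "past month" window is simplified to 30 days, and day and week boundaries are taken from the calendar.

```python
from datetime import datetime, timedelta

def thin_backups(snapshots, now):
    """Return the snapshots a Time Machine-style policy would keep:
    every snapshot from the last 24 hours, one per day for the last
    month (simplified here to 30 days), and one per week beyond that."""
    keep = set()
    daily_seen, weekly_seen = set(), set()
    for snap in sorted(snapshots, reverse=True):      # newest first
        age = now - snap
        if age <= timedelta(hours=24):
            keep.add(snap)                            # hourly tier: keep all
        elif age <= timedelta(days=30):
            day = snap.date()
            if day not in daily_seen:                 # one per calendar day
                daily_seen.add(day)
                keep.add(snap)
        else:
            week = snap.isocalendar()[:2]             # (ISO year, ISO week)
            if week not in weekly_seen:               # one per calendar week
                weekly_seen.add(week)
                keep.add(snap)
    return keep

now = datetime(2019, 6, 1, 12, 0)
snaps = [now - timedelta(hours=h) for h in range(72)]          # 3 days of hourlies
snaps += [now - timedelta(days=38), now - timedelta(days=39)]  # same ISO week
kept = thin_backups(snaps, now)
```

When the backup volume fills up, the oldest weekly snapshot would be the first candidate for deletion.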
Behind the current active window are stacked windows, each representing a snapshot of how that folder or application looked at a given date and time in the past. When toggling through the previous snapshots, the stacked windows extend backwards, giving the impression of flying through a "time tunnel." While paging through these "windows from the past," a previous version of the data may be retrieved. Time Machine works with locally connected storage disks, which must be formatted in the HFS+ volume format, as well as with remote storage media shared from other systems, including Time Capsule, via the network. When using remote storage, Time Machine uses sparse bundles. The sparse bundle acts as an isolation layer, which makes the storage neutral to the actual file system used by the network server and permits the replication of the backup from one storage medium to another. Sparse bundles are mounted by macOS like any other device, presenting their content as an HFS+-formatted volume, functionally similar to local storage.
Time Machine places strict requirements on the backup storage medium. The only supported configurations are:
- a hard drive or partition connected directly to the computer, either internally or by a bus such as USB or FireWire, formatted as journaled HFS+;
- a folder on a journaled HFS+ file system shared by another Mac on the same network running at least Leopard;
- a drive shared by an Apple Time Capsule on the same network;
- a drive connected to an Apple AirPort Extreme 802.11ac model on the same network;
- local network volumes connected using the Apple Filing Protocol, or via an SMB3 share that advertises a number of required capabilities.
On a Time Capsule, the backup data is stored in an HFS+ disk image and accessed via the Apple Filing Protocol. Although not officially supported, manufacturers have configured FreeBSD and Linux servers and network-attached storage systems to serve Time Machine-enabled Macs. Time Machine creates a folder on the designated Time Machine volume into which it copies the directory tree of all locally attached disk drives, except for files and directories that the user has specified to omit, including the Time Machine volume itself.
Every hour thereafter, it creates a new subordinate folder, copies only files that have changed since the last backup, and creates hard links to files that already exist on the backup drive. A user can browse the directory hierarchy of these copies as if browsing the primary disk. Some other backup utilities save deltas for file changes, much like version control systems; such an approach permits more frequent backups of minor changes, but can complicate interaction with the backup volume. By contrast, it is possible to manually browse a Time Machine backup volume without using the Time Machine interface. Time Machine appears to create multiple hard links to unmodified directories. Multiple linking of directories differs from conventional UNIX operating systems; as a result, tools like rsync cannot be used to replicate a Time Machine volume. Apple's FSEvents service records file-system changes as they happen; this means that instead of examining every file's modification date when it is activated, Time Machine only needs to scan the directories that have changed since the last backup.
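The copy-and-hard-link scheme can be sketched in a few lines of Python. This is a simplified illustration, not Time Machine's implementation: it hard-links individual unchanged files only (Time Machine also appears to link whole unmodified directories) and detects change by comparing size and modification time rather than consulting an event log. All paths and names below are made up for the demo.

```python
import os
import shutil
import tempfile

def incremental_backup(source, backup_root, prev, current):
    """Copy 'source' into backup_root/current; hard-link any file that
    is unchanged since the snapshot in backup_root/prev."""
    prev_dir = os.path.join(backup_root, prev) if prev else None
    cur_dir = os.path.join(backup_root, current)
    for dirpath, _dirnames, filenames in os.walk(source):
        rel = os.path.relpath(dirpath, source)
        os.makedirs(os.path.join(cur_dir, rel), exist_ok=True)
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(cur_dir, rel, name)
            old = os.path.join(prev_dir, rel, name) if prev_dir else None
            if old and os.path.exists(old):
                s, o = os.stat(src), os.stat(old)
                if s.st_size == o.st_size and s.st_mtime == o.st_mtime:
                    os.link(old, dst)     # unchanged: share one inode
                    continue
            shutil.copy2(src, dst)        # new or modified: full copy

# Demo: two consecutive snapshots of an unchanged file share storage.
work = tempfile.mkdtemp()
src_dir = os.path.join(work, "data")
backups = os.path.join(work, "backups")
os.makedirs(src_dir)
os.makedirs(backups)
with open(os.path.join(src_dir, "a.txt"), "w") as f:
    f.write("hello")
incremental_backup(src_dir, backups, None, "snap1")
incremental_backup(src_dir, backups, "snap1", "snap2")
same_inode = os.path.samefile(os.path.join(backups, "snap1", "a.txt"),
                              os.path.join(backups, "snap2", "a.txt"))
```

Because each snapshot folder contains either a fresh copy or a hard link for every file, any snapshot can be browsed, or deleted, independently of the others.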
An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage, and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is executed directly by the hardware and makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer, from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows, with a market share of around 82.74%; macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, Android's share of use in 2017 was up to 70%. According to third-quarter 2016 data, Android on smartphones is dominant with 87.5 percent and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a decrease in market share of 5.2 percent per year, while other operating systems amount to just 0.3 percent.
Linux distributions are dominant in the supercomputing sector. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently. This is achieved by time-sharing, in which the available processor time is divided between multiple processes. These processes are each interrupted repeatedly in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized in preemptive and co-operative types. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux, as well as non-Unix-like systems, such as AmigaOS, support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking.
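The difference is easy to model with coroutines: under cooperation, a task runs until it voluntarily yields the processor. A minimal round-robin sketch using Python generators (the task and scheduler here are illustrative, not how any real OS is written):

```python
from collections import deque

def scheduler(tasks):
    """Round-robin cooperative scheduler: each task runs until it
    voluntarily yields. A task that never yielded would starve all
    the others, which is the classic weakness of this model."""
    ready = deque(tasks)
    trace = []
    while ready:
        task = ready.popleft()
        try:
            trace.append(next(task))   # run the task until it yields
            ready.append(task)         # then requeue it for another turn
        except StopIteration:
            pass                       # task finished; drop it
    return trace

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"            # yield = voluntarily give up the CPU

trace = scheduler([worker("A", 2), worker("B", 3)])
# trace interleaves the two tasks: A:0, B:0, A:1, B:1, B:2
```

A preemptive scheduler differs in that the interruption comes from a timer interrupt rather than from the task's own cooperation, so a misbehaving task cannot monopolize the CPU.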
32-bit versions of Windows, both Windows NT and Win9x, used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify the processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage, and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer; the development of networked computers that could be linked and made to communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
In the context of an OS and of distributed and cloud computing, templating refers to creating a single virtual machine image as a guest operating system and saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and in cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems; they are designed to operate on small machines, such as PDAs, with less autonomy. They are able to operate with a limited number of resources and are compact and efficient by design. Windows CE and Minix 3 are examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or on external events, while time-sharing operating systems switch tasks based on clock interrupts.
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single-address-space machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled use of runtime libraries and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them that were similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the printing of payroll checks.
In cryptography, encryption is the process of encoding a message or information in such a way that only authorized parties can access it and those who are not authorized cannot. Encryption does not itself prevent interference, but denies the intelligible content to a would-be interceptor. In an encryption scheme, the intended information or message, referred to as plaintext, is encrypted using an encryption algorithm (a cipher), generating ciphertext that can be read only if decrypted. For technical reasons, an encryption scheme usually uses a pseudo-random encryption key generated by an algorithm. It is in principle possible to decrypt the message without possessing the key but, for a well-designed encryption scheme, considerable computational resources and skills are required. An authorized recipient can easily decrypt the message with the key provided by the originator to recipients but not to unauthorized users. In symmetric-key schemes, the encryption and decryption keys are the same; communicating parties must have the same key in order to communicate securely.
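The defining property of a symmetric scheme, that one shared key both encrypts and decrypts, can be shown with a deliberately toy repeating-key XOR cipher. This is for illustration only and is trivially breakable; real systems use vetted ciphers such as AES.

```python
def xor_cipher(key, data):
    """Toy symmetric cipher: XOR each byte with a repeating key.
    Applying it twice with the same key returns the original data,
    so one function serves as both 'encrypt' and 'decrypt'."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = bytes(range(1, 17))                 # the shared secret both parties hold
plaintext = b"attack at dawn"
ciphertext = xor_cipher(key, plaintext)
recovered = xor_cipher(key, ciphertext)   # the same key reverses the operation
```

Anyone holding the key can both read and forge messages, which is why symmetric schemes require a secure way to distribute the key in the first place.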
An example of a symmetric-key scheme is the one used by the German Enigma machine, which sent information from a central location to troops in various other locations in secret. When the Allies captured one of these machines and figured out how it worked, they were able to decipher the information encoded within the messages as soon as they could discover the encryption key for a given day's transmissions. In public-key encryption schemes, the encryption key is published for anyone to use to encrypt messages; however, only the receiving party has access to the decryption key. Public-key encryption was first described in a secret document in 1973. The work of Diffie and Hellman, published subsequently in a journal with a large readership, explicitly described the value of the methodology, and the method became known as Diffie-Hellman key exchange. A publicly available public-key encryption application called Pretty Good Privacy (PGP) was written in 1991 by Phil Zimmermann and distributed free of charge with source code.
Encryption has long been used by governments to facilitate secret communication. It is now used in protecting information within many kinds of civilian systems. For example, the Computer Security Institute reported that in 2007, 71% of companies surveyed utilized encryption for some of their data in transit, and 53% utilized encryption for some of their data in storage. Encryption can be used to protect data "at rest", such as information stored on computers and storage devices. In recent years, there have been numerous reports of confidential data, such as customers' personal records, being exposed through loss or theft of laptops or backup drives. Digital rights management systems, which prevent unauthorized use or reproduction of copyrighted material and protect software against reverse engineering, are another somewhat different example of using encryption on data at rest. In response to encryption of data at rest, cyber-adversaries have developed new types of attacks; these more recent threats include cryptographic attacks, stolen-ciphertext attacks, attacks on encryption keys, insider attacks, data corruption or integrity attacks, data destruction attacks, and ransomware attacks.
Data fragmentation and active defense data protection technologies attempt to counter some of these attacks by distributing, moving, or mutating ciphertext so it is more difficult to identify, corrupt, or destroy. Encryption is also used to protect data in transit, for example data being transferred via networks, mobile telephones, wireless microphones, wireless intercom systems, Bluetooth devices, and bank automatic teller machines. There have been numerous reports of data in transit being intercepted in recent years. Data should be encrypted when transmitted across networks in order to protect against eavesdropping of network traffic by unauthorized users. Encryption, by itself, can protect the confidentiality of messages, but other techniques are still needed to protect the integrity and authenticity of a message. Standards for cryptographic software and hardware to perform encryption are widely available, but successfully using encryption to ensure security may be a challenging problem. A single error in system design or execution can allow successful attacks.
Sometimes an adversary can obtain unencrypted information without directly undoing the encryption; see, for example, traffic analysis, TEMPEST, or Trojan horses. Digital signatures and encryption must be applied to the ciphertext when it is created to avoid tampering; encrypting at the time of creation is only secure if the encryption device itself has not been tampered with. Conventional methods for deleting data permanently from a storage device involve overwriting its whole content with zeros, ones, or other patterns, a process which can take a significant amount of time depending on the capacity and the type of the medium. Cryptography offers a way of making the erasure almost instantaneous; this method is called crypto-shredding. An example implementation of this method can be found on iOS devices, where the cryptographic key is kept in a dedicated "Effaceable Storage".
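Crypto-shredding can be sketched as follows: if the medium only ever holds ciphertext, destroying the small key is equivalent to erasing the large data, with no need to overwrite the medium itself. The stream construction below (SHA-256 with a counter) is a toy illustration, not a vetted cipher; the key value and data are made up for the demo.

```python
import hashlib

def keystream(key, n):
    """Derive n pseudo-random bytes from the key by hashing the key
    with a counter (illustrative only; not a vetted construction)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_with_keystream(key, data):
    """XOR data with the keystream; the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"\x01" * 32                          # small secret, e.g. in effaceable storage
stored = xor_with_keystream(key, b"customer records")  # medium holds only ciphertext
recovered = xor_with_keystream(key, stored)

# Crypto-shredding: erasing just the 32-byte key makes the stored
# ciphertext permanently unreadable, however large it is.
key = None
```

The asymmetry in size is the point: securely erasing 32 bytes of key material is fast and verifiable, while overwriting terabytes of ciphertext is neither.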
In computing, a file system or filesystem controls how data is stored and retrieved. Without a file system, information placed in a storage medium would be one large body of data with no way to tell where one piece of information stops and the next begins. By separating the data into pieces and giving each piece a name, the information is isolated and identified. Taking its name from the way paper-based information systems are named, each group of data is called a "file", and the structure and logic rules used to manage the groups of information and their names is called a "file system". There are many different kinds of file systems; each has a different structure and logic, and different properties of speed, security, and more. Some file systems have been designed to be used for specific applications. For example, the ISO 9660 file system is designed for optical discs. File systems can be used on numerous different types of storage devices that use different kinds of media; as of 2019, hard disk drives have been key storage devices and are projected to remain so for the foreseeable future.
Other kinds of media that are used include SSDs, magnetic tapes, and optical discs. In some cases, such as with tmpfs, the computer's main memory is used to create a temporary file system for short-term use. Some file systems are used on local data storage devices; some file systems are "virtual", meaning that the supplied "files" are computed on request or are a mapping into a different file system used as a backing store. The file system manages access to the metadata about those files and is responsible for arranging storage space. Before the advent of computers, the term file system was used to describe a method of storing and retrieving paper documents. By 1961 the term was being applied to computerized filing alongside the original meaning, and by 1964 it was in general use. A file system consists of three layers; sometimes the layers are explicitly separated, and sometimes the functions are combined. The logical file system is responsible for interaction with the user application. It provides the application program interface for file operations (OPEN, CLOSE, READ, and so on) and passes the requested operation to the layer below it for processing.
The logical file system "manages open file table entries and per-process file descriptors"; this layer provides "file access, directory operations and protection". The second, optional layer is the virtual file system: "This interface allows support for multiple concurrent instances of physical file systems, each of which is called a file system implementation." The third layer is the physical file system. This layer is concerned with the physical operation of the storage device; it processes the physical blocks being read or written. It handles buffering and memory management and is responsible for the physical placement of blocks in specific locations on the storage medium. The physical file system interacts with the device drivers or with the channel to drive the storage device. Note that this applies only to file systems used in storage devices. File systems allocate space in a granular manner, usually as multiple physical units on the device. The file system is responsible for organizing files and directories and for keeping track of which areas of the media belong to which file and which are not being used.
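The layering described above can be made concrete with a toy two-layer file system: a physical layer that only knows fixed-size blocks on a "medium" (here just a Python list), and a logical layer that maps filenames to lists of block numbers. Every name and size here is illustrative, not taken from any real file system.

```python
BLOCK = 16   # tiny block size for illustration

class PhysicalFS:
    """Physical layer: reads and writes fixed-size blocks on the
    medium, with no notion of files or names."""
    def __init__(self, nblocks):
        self.blocks = [bytes(BLOCK)] * nblocks
        self.free = list(range(nblocks))     # free-block tracking
    def alloc(self):
        return self.free.pop()
    def write_block(self, i, data):
        self.blocks[i] = data.ljust(BLOCK, b"\x00")
    def read_block(self, i):
        return self.blocks[i]

class LogicalFS:
    """Logical layer: maps filenames to block lists (the metadata)
    and offers file-level operations on top of the physical layer."""
    def __init__(self, phys):
        self.phys = phys
        self.table = {}          # filename -> (block numbers, exact size)
    def write(self, name, data):
        blocks = []
        for off in range(0, len(data), BLOCK):
            i = self.phys.alloc()
            self.phys.write_block(i, data[off:off + BLOCK])
            blocks.append(i)
        self.table[name] = (blocks, len(data))
    def read(self, name):
        blocks, size = self.table[name]
        return b"".join(self.phys.read_block(i) for i in blocks)[:size]

fs = LogicalFS(PhysicalFS(64))
fs.write("notes.txt", b"a file is just a named group of blocks")
content = fs.read("notes.txt")
```

A real logical layer would also handle directories, permissions, and open-file state, and a virtual file system layer could sit between the two classes to let several PhysicalFS implementations coexist behind one interface.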
For example, Apple DOS of the early 1980s used 256-byte sectors on 140-kilobyte floppy disks, with a track/sector map. Granular allocation results in unused space when a file is not an exact multiple of the allocation unit; this is sometimes referred to as slack space. For a 512-byte allocation unit, the average unused space is 256 bytes; for 64 KB clusters, the average unused space is 32 KB. The size of the allocation unit is chosen when the file system is created. Choosing the allocation size based on the average size of the files expected to be in the file system can minimize the amount of unusable space, and the default allocation size may provide reasonable usage. Choosing an allocation size that is too small results in excessive overhead if the file system will contain very large files. File system fragmentation occurs as a file system is used and files are created and deleted. When a file is created, the file system allocates space for the data; some file systems permit or require specifying an initial space allocation and subsequent incremental allocations as the file grows.
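The slack-space figures quoted above follow from simple arithmetic: a file's final cluster is, on average, about half empty. A quick check over uniformly distributed file sizes (the exact mean is (cluster - 1)/2, marginally below half a cluster):

```python
def expected_slack(file_sizes, cluster):
    """Average bytes wasted per file when space is allocated in whole
    clusters: the unused tail of each file's final cluster."""
    waste = [(-size) % cluster for size in file_sizes]
    return sum(waste) / len(waste)

# Every file size from 1 byte up to 64 KiB, uniformly represented.
sizes = range(1, 65536 + 1)
avg_512 = expected_slack(sizes, 512)      # ~256 bytes per file
avg_64k = expected_slack(sizes, 65536)    # ~32 KiB per file
```

Since the waste is roughly half a cluster per file regardless of file size, large clusters are only cheap when the files themselves are large.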
As files are deleted, the space they were allocated is considered available for use by other files. This creates alternating used and unused areas of various sizes; this is free-space fragmentation. When a file is created and there is not an area of contiguous space available for its initial allocation, the space must be assigned in fragments. When a file is modified such that it becomes larger, it may exceed the space initially allocated to it; another allocation must be assigned elsewhere and the file becomes fragmented. A filename is used to identify a storage location in the file system. Most file systems have restrictions on the length of filenames. In some file systems, filenames are not case-sensitive. Most modern file systems allow filenames to contain a wide range of characters from the Unicode character set. However, they may have restrictions on the use of certain special characters, disallowing them within filenames.
A solid-state drive (SSD) is a solid-state storage device that uses integrated circuit assemblies as memory to store data persistently. It is sometimes called a solid-state device or a solid-state disk, although SSDs do not have physical disks. SSDs can use traditional hard disk drive (HDD) interfaces and form factors, or newer form factors and interfaces that have been developed to address specific advantages of the flash memory technology used in SSDs. Traditional interfaces and standard HDD form factors allow such SSDs to be used as drop-in replacements for HDDs in computers and other devices. Newer form factors such as mSATA, M.2, U.2, and Ruler SSD, and higher-speed interfaces such as NVMe over PCI Express, can increase performance over HDD performance. SSDs have no moving mechanical components; this distinguishes them from conventional electromechanical drives such as hard disk drives or floppy disks, which contain spinning disks and movable read/write heads. Compared with electromechanical drives, SSDs are more resistant to physical shock, run silently, and have quicker access time and lower latency.
While the price of SSDs has continued to decline over time, SSDs are still more expensive per unit of storage than HDDs and are expected to remain so into the next decade. As of 2017, most SSDs use 3D TLC NAND-based flash memory. NAND is non-volatile memory, which retains data when power is removed. For applications requiring fast access but not data persistence after power loss, SSDs may be constructed from random-access memory; such devices may employ batteries as integrated power sources to retain data for a certain amount of time after external power is lost. Since 2018, some SSDs have 3D QLC NAND, which increases capacity and lowers costs, but at the expense of a lower endurance rating. For example, a 1 TB QLC NAND SSD has about the same endurance rating as a 500 GB TLC NAND SSD. High-performance SSDs may use SLC or MLC NAND, which can be much faster than TLC or QLC NAND, but have lower capacity and are more expensive, making them better suited for caches or other applications that require high performance.
However, all SSDs still store data in electrical charges, which slowly leak over time if left without power. This causes worn-out drives to start losing data, typically after one to two years in storage; therefore, SSDs are not suitable for archival storage. The only exception to this rule is SSDs based on 3D XPoint memory, which store data not as electrical charges in cells but by changing the electrical resistance of the cells. 3D XPoint, however, is a new technology with unknown data-retention characteristics and may not be suitable for archival purposes either. Hybrid drives, or solid-state hybrid drives (SSHDs), such as Apple's Fusion Drive, combine features of SSDs and HDDs in the same unit, containing a large hard disk drive and an SSD cache to improve performance of frequently accessed data. An early, if not the first, semiconductor storage device compatible with a hard drive interface was the 1978 StorageTek STC 4305. The STC 4305, a plug-compatible replacement for the IBM 2305 fixed-head disk drive, used charge-coupled devices for storage and was reported to be seven times faster than the IBM product at about half the price.
It later switched to DRAM. Prior to the StorageTek SSD, there were many DRAM and core products sold as alternatives to HDDs, but these products had memory interfaces and were not SSDs as defined. In the late 1980s, Zitel, Inc. offered a family of DRAM-based SSD products under the trade name "RAMDisk" for use on systems by UNIVAC and Perkin-Elmer, among others. In 1991, SanDisk Corporation shipped the first flash-based SSD; it was used by IBM in a ThinkPad laptop. In 1998, SanDisk introduced SSDs in 2.5-inch and 3.5-inch form factors with PATA interfaces. In 1995, STEC, Inc. entered the flash memory business for consumer electronic devices. Also in 1995, M-Systems introduced flash-based solid-state drives as HDD replacements for the military and aerospace industries, as well as for other mission-critical applications; these applications require the SSD's ability to withstand extreme shock and temperature ranges. In 1999, BiTMICRO made a number of introductions and announcements about flash-based SSDs, including an 18 GB 3.5-inch SSD.
In 2007, Fusion-io announced a PCIe-based solid-state drive with 100,000 input/output operations per second (IOPS) of performance in a single card and capacities up to 320 gigabytes. At CeBIT 2009, OCZ Technology demonstrated an SSD that achieved a maximum write speed of 654 megabytes per second and a maximum read speed of 712 MB/s. In December 2009, Micron Technology announced an SSD using a 6 gigabits-per-second SATA interface. In 2016, Seagate demonstrated 10 GB/s transfer speeds from a 16-lane PCIe SSD and demonstrated a 60 TB SSD in a 3.5-inch form factor. The same year, Samsung launched a 15.36 TB SSD with a price tag of US$10,000, using a SAS interface and a 2.5-inch form factor but with the thickness of 3.5-inch drives; this was the first time a commercially available SSD had more capacity than the largest available HDD. In 2017, the first products with 3D XPoint memory were released; 3D XPoint differs from NAND flash and stores data using different principles. In 2018, both Samsung and Toshiba introduced 30.72 TB SSDs to market.
Apple Worldwide Developers Conference
The Apple Worldwide Developers Conference (WWDC) is a conference held annually by Apple Inc. in San Jose, California. Apple uses the event to showcase its new software and technologies for software developers. Attendees can participate in hands-on labs with Apple engineers and attend in-depth sessions covering a wide variety of topics. WWDC began in 1987 in Santa Clara. After 15 years in nearby San Jose, the conference moved to San Francisco, where it became Apple's primary media event of the year and regularly sold out. WWDC returned to San Jose 13 years later. A $1,599 ticket is required to enter the conference, and tickets are obtained through an online lottery. Scholarships are available for members of STEM organizations. Attendees must be 13 years or older and must be members of an Apple Developer program. Until 2007, the number of attendees varied between 2,000 and 4,200. The WWDC events held from 2008 to 2015 were capped at 5,000 attendees and sold out. WWDC 2018 had 6,000 attendees from 77 countries, including 350 scholarship recipients.
WWDC is held annually from Monday through Friday during one week in June. The conference consists of a keynote address, presentation sessions, one-on-one "lab" consultations, and special get-togethers and events. The conference begins with a Monday morning keynote address delivered by Apple executives. It is attended by both conference attendees and the media, since Apple makes product announcements at the event. Hardware announced during the address is sometimes exhibited in the conference hall afterwards. The keynote address is followed in the afternoon by a Platforms State of the Union address, which highlights and demonstrates changes in Apple's software developer platforms that are detailed in sessions later in the week. The Apple Design Awards are announced on the first day of the conference. Several session tracks run from Tuesday through Friday; the presentations cover programming and other topics and range from introductory to advanced. All scheduled presentations are delivered by Apple employees. These presentations are streamed live, and recordings can be viewed on demand on the Apple Developer website and in the conference's iOS and tvOS applications.
Lunchtime sessions are given by a variety of guest speakers who are industry experts in technology and science. In the past, some sessions included question-and-answer time, and a popular Stump the Experts session featured interaction between Apple employees and attendees. At the labs, which run throughout the week, Apple engineers are available for one-on-one consultations with developers in attendance. Experts in user interface design and accessibility are available for consultations by appointment. Apple organizes social get-togethers during the conference for various groups, such as women in technology or developers interested in internationalization or machine learning. The Thursday evening Bash at a nearby park features live music and drinks for all attendees 21 years or older. In 1989, System 7 was announced. In 1991, WWDC saw the first public demonstration of QuickTime. WWDC '95 focused fully on the Copland project, which by this time could be demonstrated to some degree; Gil Amelio stated that the system was on schedule to ship in beta form in the summer, with an initial commercial release in the late fall.
However, few live demos were offered, and no beta of the operating system was distributed. WWDC '96's primary emphasis was a new software component technology called OpenDoc, which allowed end users to compile an application from components offering the features they desired most; the OpenDoc consortium included Adobe, Lotus, and Apple. Apple touted OpenDoc as the future foundation for application structure under Mac OS. As proof of concept, Apple demonstrated a new end-user product called Cyberdog, a comprehensive Internet application suite offering users an integrated browser, email, FTP, telnet, and other services built of user-exchangeable OpenDoc components. ClarisWorks, a principal product of Apple's wholly owned subsidiary Claris Corporation, was demonstrated as an example of a pre-OpenDoc component-architecture application modified to contain functional OpenDoc components. WWDC '97 marked the return of Steve Jobs as a consultant; it was the first show after the purchase of NeXT and focused on the efforts to use OpenStep as the foundation of the next Mac OS.
The plan at that time was to introduce a new system named Rhapsody, which would consist of a version of OpenStep modified with a more Mac-like look and feel, the Yellow Box, along with a Blue Box that allowed extant Mac applications to run under OS emulation. The show focused on the work in progress, including a short history of development efforts since the two development teams had been merged on February 4. Several new additions to the system were demonstrated, including tabbed and outline views and a new object-based graphics layer. In 1998, in response to developer comments about the new operating system, the big announcement at WWDC '98 was the introduction of Carbon, a version of the classic Mac OS API implemented on OpenStep. Under the original Rhapsody plans, classic applications would run in a sandboxed installation of the classic Mac OS and have no access to the new Mac OS X features. To receive new features, such a